Author manuscript; available in PMC 2022 Oct 1.
Published in final edited form as: West J Nurs Res. 2021 Apr 24;43(10):949–961. doi: 10.1177/01939459211004274

Evaluating Stakeholder Engagement: Stakeholder-Centric Instrumentation Process (SCIP)

Jenny Martinez a,*, Catherine Verrier Piersol a, Sherrie Holloway b, Lauren Terhorst c, Natalie E Leland c
PMCID: PMC8429065  NIHMSID: NIHMS1688860  PMID: 33896283

Abstract

Evaluating engagement in a research partnership can capture the success and impact of research team-stakeholder collaboration. This paper describes the Stakeholder-Centric Instrumentation Process (SCIP), an iterative method for developing an evaluation that reflects the collective values, language, and priorities of the research team and its stakeholders. We describe our implementation of the SCIP and provide the resulting Stakeholder-Centric Engagement Evaluation, a tool developed in collaboration with our advisory committee. Mean scores across three administrations of the tool remained stable. We monitored responses received from our advisory committee during each administration for changes in scores that guided refinements to our stakeholder engagement strategy. Face validity and acceptability questions showed high satisfaction with the tool’s time required to complete (M=4.50, SD=0.86), clarity (M=4.56, SD=0.78), and relevance (M=4.67, SD=0.49) (maximum score=5). The SCIP methodology and the Stakeholder-Centric Engagement Evaluation can be used during study planning and data collection to capture research team-stakeholder collaborations in a way that reflects stakeholder priorities.

Keywords: Comparative Effectiveness Research, Stakeholder Participation, Stakeholder Engagement, Research Design


Engaging patients, caregivers, practitioners, and other healthcare stakeholders in research is essential for actionable studies that directly inform patient choices, treatment options, and healthcare policies (Forsythe et al., 2019; Martínez et al., 2019). The shift from traditional research models to stakeholder-centric methodologies is intended to produce scientific findings that align with consumer-prioritized outcomes and the realities of everyday practice (Pandi-Perumal et al., 2019). Such studies have the potential to improve patient care and to inform research and education about healthcare delivery across a variety of fields. Further benefits of engagement include a democratic and accountable research process that empowers stakeholders (Harrison et al., 2019) as members of communities with lived or professional experiences that are impacted by study findings; such constituencies can include health systems, industry, purchasers, payers, policy makers, researchers, practitioners, patients, or caregivers (Forsythe et al., 2016, 2018). Practicing stakeholder engagement in research, however, requires a collaborative approach that champions authenticity, equity, inclusion, and sustainability.

One commonly used framework for engaging stakeholders identifies reciprocal relationships, partnerships, co-learning, and transparency-honesty-trust as key practices for collaborative research (Sheridan et al., 2017). Developed by the Patient-Centered Outcomes Research Institute (PCORI), a leading funder of such work, these engagement principles have guided the development of research that partners with stakeholders throughout the planning, conduct, and dissemination of studies (Sheridan et al., 2017). For PCORI awardees, stakeholder involvement must be embedded in all phases of the study and documented through ongoing reports (Forsythe et al., 2018).

Despite widespread support for stakeholder-engaged research and a growing requirement among funders to embed stakeholder engagement into funded projects, there is a lack of consensus on best practices for its use (Harrison et al., 2019). There is also limited research on valid and inclusive strategies for engaging vulnerable groups that (a) are likely to experience healthcare disparities or distrust and (b) have little to no experience engaging in scientific research (DeCamp et al., 2015; Martínez et al., 2019b, 2019c; Poger et al., 2020). Gaps in evidence and limited investigator experience may lead to low-quality stakeholder engagement, thereby exacerbating disparities in research participation and health inequities. Stakeholder engagement is an important component of conversations about equity, inclusion, and power sharing in research.

There is a small but growing body of evidence available to guide investigators in engaging stakeholders through the design, conduct, and dissemination stages of research (Forsythe et al., 2019). For example, past research has found wide variability in how stakeholder engagement occurs, as most collaboration has been documented during participant recruitment or dissemination of findings (Goodman et al., 2020). Past studies have also identified predominantly qualitative data-collection processes for assessing and reporting stakeholder engagement (Goodman et al., 2017, 2019; Martínez et al., 2019b, 2019c) and have found that little is known about systematic or data-driven best practices (Goodman et al., 2020). To this end, future evaluations of stakeholder engagement must be reliable and valid, scalable and applicable to project characteristics, easily implemented by researchers, and equitable for all stakeholders. Information gathered through such evaluations can provide clear results about the ways stakeholders have participated in current studies, inform robust guidelines for collaboration in research, and enhance the accountability of investigators to their partners.

Purpose

This paper describes our development and implementation of the Stakeholder-Centric Engagement Evaluation. We developed and implemented this stakeholder-informed assessment within our PCORI-funded large pragmatic trial, which compared the effectiveness of two non-pharmacological approaches to enhancing the quality of life of nursing home residents with dementia. Guided by the aforementioned calls for equitable and inclusive research practices, the Stakeholder-Centric Engagement Evaluation was developed through an iterative process that engaged all of our stakeholder partners regardless of their background or prior research expertise. We present the methodology, which emerged as the Stakeholder-Centric Instrumentation Process (SCIP). We propose that investigators can use the SCIP to customize an evaluation tool in collaboration with their stakeholders to best meet their study’s needs.

Methods

The research team in this study included researchers and members from the clinical community, co-led by an academically trained investigator and a health professional (study principal- and co-investigators). Our research team convened a diverse and comprehensive advisory committee to guide the development of the proposal submitted to PCORI. The advisory committee included 18 members who represented the various stakeholder communities identified by PCORI and those most closely impacted by the research topic (Forsythe et al., 2016, 2018). Advisory committee members were identified and recruited through longstanding professional networks that members of the research team had established with post-acute care providers, advocacy organizations, and practitioners. We describe our advisory committee’s membership in Table 1. Together, the advisory committee and the research team constituted the study’s leadership.

Table 1.

Stakeholder Communities Represented in the Study Advisory Committee

PCORI Stakeholder Community | Examples of Study Advisory Committee Members (N=18) from our Study^a

Patients (n=1): To represent patients with cognitive limitations, we engaged a patient advocate from the Alzheimer’s Association to ensure the perspective of nursing home residents with dementia was included.

Caregivers (n=2): To represent the spectrum of caregivers for the patient population, we engaged a professional caregiver (i.e., certified nursing assistant) and a non-professional caregiver (e.g., family member).

Clinicians (n=7): To represent the range of healthcare practitioners and staff members who engage with nursing home residents with dementia, we engaged a dementia educator, lead nurse, crisis management nurse, social worker, speech language pathologist, physical therapist, recreational therapist, and an ancillary staff stakeholder with expertise in dining services. Occupational therapy was represented by research team members.

Researchers^b (n=1): To represent the topics in our research, we engaged a researcher with expertise in dementia care and geriatric mental health.

Hospitals and Health Systems (n=6): To represent the perspective of organizations providing care to the patient population, we engaged stakeholders with expertise in nursing home administration, nursing home regulations, and clinical operations. We further engaged a director of rehabilitation, a nursing home medical director, and a speech therapy and clinical operations stakeholder.

Policy Makers (n=1): To represent policy, we engaged a national healthcare policy stakeholder.

^a We refer to the study advisory committee members as study stakeholders in this paper.

^b The researchers in this group did not include the study principal- and co-investigators. We refer to the study principal- and co-investigators as the research team in this paper.

Our application was successfully funded by PCORI in August 2017. Upon receiving the funds to initiate the project, the research team convened an in-person study kickoff meeting. This event brought the advisory committee and the research team together to develop clear working guidelines for stakeholder engagement, including procedures for evaluating our collaboration throughout the project. Our search for an evaluation tool that reflected our advisory committee’s needs and priorities led to the Stakeholder-Centric Instrumentation Process (SCIP), a seven-step process that emerged from the advisory committee-research team collaboration. We used the SCIP twice in our study. During the first round (Cycle 1), the advisory committee and research team located and tailored an Initial Evaluation to assess stakeholder engagement. Upon the conclusion of Cycle 1, the advisory committee provided feedback indicating that the Initial Evaluation did not meet their needs. Thus, the SCIP was repeated (Cycle 2), yielding a new tool, the Stakeholder-Centric Engagement Evaluation.

Both cycles of the SCIP occurred soon after our large pragmatic trial was funded, which meant our project was in progress during this time. Our research team was not only working to meet study milestones, but also to develop and implement best practices for evaluating stakeholder engagement in time for our first biannual report to our funder. Thus, our work to develop engagement evaluation procedures also represents the unique perspectives of stakeholders actively engaged in a research collaboration. Next, we describe the SCIP Cycles 1 and 2 along with the development of the Stakeholder-Centric Engagement Evaluation. We provide the Stakeholder-Centric Engagement Evaluation in Figure 1. Figure 2 depicts the SCIP.

Figure 1. Stakeholder-Centric Engagement Evaluation.

For each question, respondents answer:

Please rate how OFTEN you think the research team did each of the following. (1-Never; 2-Rarely; 3-Sometimes; 4-Most of the time; 5-Always)

Please rate how WELL you think the research team did each of the following. (1-Poor; 2-Fair; 3-Good; 4-Very good; 5-Excellent)

Figure 2. The Stakeholder-Centric Instrumentation Process (SCIP).

Cycle 1 – Tailoring of an Existing Evaluation Tool for Stakeholder Engagement

Cycle 1, Step 1: Determine best practices for evaluating stakeholder engagement

We convened an in-person study kickoff meeting early in our large pragmatic trial. Our goals for the kickoff meeting (held in March 2018) were to build advisory committee-research team camaraderie and to identify clear procedures for stakeholder engagement throughout the study. In recognition of the value of advisory committee members’ time and to ensure equitable access to resources across individuals, committee members received compensation for attending the in-person meeting.

During the kick-off meeting, the advisory committee and research team reviewed the proposed research strategy and the concept of stakeholder engagement, and clarified the group’s priorities for engagement in our project. We reviewed PCORI’s perspective on research partnerships, including the PCORI engagement principles (i.e., reciprocal relationships, partnerships, co-learning, and transparency-honesty-trust) and the biannual reporting requirements for awardees. We also held an open brainstorming session to identify ways to meet these expectations within our study. During our discussion, the advisory committee and research team agreed that gathering data on stakeholder engagement via a regular and systematic evaluation process was needed to meet our funder’s reporting requirements. The advisory committee also agreed that ongoing evaluation would help the research team gather data on committee satisfaction and subsequently modify our approach to engagement. However, the advisory committee and research team were unable to identify a tool or method for evaluation during this meeting.

Additional outcomes of the kick-off meeting included the creation of a charter outlining engagement procedures (e.g., communication strategies, establishing a monthly meeting schedule, creating attendance policies, clarifying methods for conflict resolution). Monthly meetings were established to discuss study progress and research related topics. In recognition of the value of stakeholders’ time, all advisory committee members also received compensation for their participation in each monthly meeting.

Next, in response to an action item from the kick-off meeting, the research team conducted a scoping review of recent approaches for evaluating stakeholder engagement in research (Martínez et al., 2019b, 2019c). The research team engaged two advisory committee members (i.e., the patient advocate and a caregiver) in this scoping review as co-authors. The results indicated a need for ongoing, regular assessment of engagement, a lack of consensus on specific strategies for evaluating research team-stakeholder partnerships, a lack of published tools for this purpose, and common themes in stakeholder engagement evaluations (Martínez et al., 2019b, 2019c). Our scoping review also uncovered a single available tool for capturing engagement in research: a quantitative measure of stakeholder engagement created by Goodman and colleagues (Goodman et al., 2017). Advisory committee members were apprised of the scoping review’s process and findings during our April, May, and June 2018 meetings.

Cycle 1, Step 2: Complete targeted review of the evaluation tool by the research team

Before applying Goodman and colleagues’ quantitative measure (Goodman et al., 2017) in our study, the research team and advisory committee engaged in an iterative review process to enhance its applicability to our project without making major modifications to the tool. First, the research team conducted an initial review of the measure to better align its language with wording already familiar to the advisory committee. Changes included updating “academic team” to “research team” and “community” to “advisory committee”. No further changes were made. This review created a streamlined Initial Evaluation, which was presented to our stakeholders in the next step of the SCIP. Conducting the initial review within the research team also expedited the process, as our funded study was in the planning/implementation phases and we were nearing our first biannual report to PCORI.

Cycle 1, Step 3: Invite targeted review of the evaluation tool by stakeholders

An open invitation for in-depth review of the Initial Evaluation was extended to advisory committee members with little experience in research as well as those representing the patient and caregiver perspectives. This step was intentional and was introduced in response to prior research highlighting inequities in research partnerships and the need to involve patients and caregivers in the development of study procedures (Boivin et al., 2018; DeCamp et al., 2015; Martínez et al., 2019b, 2019c; Poger et al., 2020). As an additional step toward equity, we compensated reviewers for the time spent reviewing this tool.

We recruited our advisory committee’s patient advocate, caregiver, certified nursing assistant, and ancillary staff member with expertise in dining services for this review. All reviewers completed their reviews individually and via online communication. Reviews centered on general impressions of the Initial Evaluation, satisfaction with the time requirement and administration procedures, accessibility to non-researchers, and how well the assessment captured stakeholder engagement. Reviewer comments reflected agreement that the changes in terminology made by the research team in Step 2 were appropriate; reviewers recommended no further changes.

Cycle 1, Step 4: Present the evaluation tool to stakeholders for approval

The research team and advisory committee reviewers presented the Initial Evaluation, based on the seminal work of Goodman et al. (2017), during our July 2018 meeting. After an open discussion period, the remaining advisory committee members expressed approval of the changes made and recommended no further tailoring. Advisory committee members endorsed and approved the following strategies to reduce assessment burden:

1. Biannual Administration.

The advisory committee completed the evaluation tool biannually in accordance with published guidance (Goodman et al., 2017, 2019; Martínez et al., 2019b). To decrease burden, monthly meetings were cancelled each August and December so that the advisory committee could instead use this time to complete the evaluation.

2. Recognition and Valuation of Advisory Committee Members’ Time.

The advisory committee requested recognition and valuation of the time spent completing the lengthy Initial Evaluation. To this end, the research team provided a stipend for each administration of the evaluation, equal to that for attendance at a monthly meeting. We also provided advisory committee members with the option to complete the evaluation tool anonymously; however, no advisory committee members selected this option. Rather, they expressed comfort providing feedback linked to their names and appreciation for the opportunity to contribute to subsequent action items.

3. Online Distribution.

The advisory committee requested that the evaluation be administered online and that reminders be provided to facilitate access and completion. To this end, we used an electronic web application, REDCap (Harris et al., 2009), which generated a personalized URL allowing respondents to complete the assessment in as many sessions as needed within a pre-set 14-day response period. The REDCap system also allowed us to send reminder emails every four days, up to a maximum of three emails.
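As an illustration of the administration schedule described above, the short sketch below computes the reminder dates for one response window. It is a minimal example of the logic we configured rather than REDCap code; the function name and the example date are our own.

```python
from datetime import date, timedelta

def reminder_schedule(window_opens: date,
                      window_days: int = 14,
                      interval_days: int = 4,
                      max_reminders: int = 3) -> list[date]:
    """Return reminder dates for one administration: one reminder every
    interval_days within the response window, capped at max_reminders,
    mirroring the REDCap settings described above."""
    reminders = []
    next_reminder = window_opens + timedelta(days=interval_days)
    while (len(reminders) < max_reminders
           and (next_reminder - window_opens).days < window_days):
        reminders.append(next_reminder)
        next_reminder += timedelta(days=interval_days)
    return reminders

# Example: a window opening December 1 yields reminders on December 5, 9,
# and 13, all within the 14-day response period.
print(reminder_schedule(date(2018, 12, 1)))
```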

Cycle 1, Step 5: Administer evaluation tool and capture data on face validity and acceptability

The Initial Evaluation was administered in August 2018 in accordance with the strategies endorsed by the advisory committee in Cycle 1, Step 4. Although we had already received feedback and approval from our advisory committee, we wanted to learn more about participants’ thoughts on the evaluation after they had completed it. With this goal in mind, the research team developed and appended a 5-item survey to learn more about the Initial Evaluation’s face validity and acceptability. The 5-item survey is presented in Table 2. Questions addressed satisfaction with the tool, alignment of questions with stakeholder engagement concepts, and suggestions for improvement. Three items used a 5-point response scale, with higher scores indicating greater satisfaction with burden (time for completion), clarity of items, and relevance of items. Two items were open-ended.

Table 2.

Final Results of the 5-Item Face Validity and Acceptability Questions for the Initial Evaluation and the Stakeholder-Centric Engagement Evaluation

Quantitative items, Mean (SD): Initial Evaluation (N=16), August 2018 | Stakeholder-Centric Engagement Evaluation (N=18), December 2018

1. Please rate your satisfaction with the time it took to complete the evaluation^1: 4.13 (0.89) | 4.50 (0.86)
2. Rate your satisfaction with the clarity of the questions^1: 3.94 (0.57) | 4.56 (0.78)
3. Rate your satisfaction with how well the evaluation captured the areas of stakeholder engagement that you believe are important for the research team to evaluate^1: 3.94 (0.77) | 4.67 (0.49)

Qualitative items, themes:

4. What did you like or dislike about the evaluation? What suggestions do you have for improving it?
Initial Evaluation (N=11), August 2018:
● Relevance of questions: many of the questions did not apply and there was no way to indicate this
● Structure of evaluation: the difference between the frequency and quality ratings was not well defined; many questions seemed abstract or similar to one another and were hard to differentiate
● Experience completing the evaluation: unsure that questions were interpreted as intended
Stakeholder-Centric Engagement Evaluation (N=12), December 2018:
● Relevance of questions: survey comprehensively addressed the work and responsibilities of the committee
● Structure of evaluation: clear and easy to complete; covered the basic areas of engagement
● Experience completing the evaluation: well written and easy to understand

5. Is there anything else that you would like to share with the research team regarding your experience taking this evaluation?
Initial Evaluation (n=4), August 2018:
● Successes of this evaluation: reflecting on the experience as an advisory committee member is enjoyable; appreciate the opportunity to provide feedback
Stakeholder-Centric Engagement Evaluation (n=9), December 2018:
● Successes of this evaluation: survey was comprehensive yet concise; survey is well done; feel good about working with a research team that cares about stakeholder opinions

^1 Response scale: 1-Very dissatisfied, 2-Dissatisfied, 3-Neutral, 4-Satisfied, 5-Very satisfied

Cycle 1, Step 6: Share evaluation tool results with stakeholders

To promote power sharing, partnership, and transparency, the research team presented the results from the Initial Evaluation and the 5-item survey during the September 2018 monthly meeting. More specifically, we presented to the advisory committee the quantitative results across all engagement areas, a summary of open-ended responses, and an action plan for addressing questions or concerns. We also hosted a period of open discussion in which the advisory committee provided feedback on the research team’s interpretation of responses and action plan. Advisory committee members were encouraged to share any additional feedback via email.

The results of our 5-item survey on face validity and acceptability are presented in Table 2. Overall, the results showed that the advisory committee found that many questions on the Initial Evaluation did not apply to our study, appeared to overlap, and were confusing. During our September 2018 meeting, advisory committee members agreed that the Initial Evaluation did not fully capture their experience as research stakeholders. They also shared concerns that the Initial Evaluation’s scores were lower than intended and would misrepresent the success of the advisory committee-research team partnership to our funder, and they requested that we consider an alternate assessment for future evaluations.

Table 3.

Results of the Stakeholder-Centric Engagement Evaluation (December 2018, August 2019, December 2019) by Engagement Area

Scores are Mean (SD), N=18 per administration. For each engagement area, the first three values are “How Often”^1 ratings (Dec 2018, Aug 2019, Dec 2019) and the last three are “How Well”^2 ratings (Dec 2018, Aug 2019, Dec 2019).

Engagement Area 1: Focus on issues important to the nursing home community from advisory committee perspectives (4 items): 4.51 (0.58), 4.39 (0.68), 4.50 (0.61) | 4.54 (0.65), 4.36 (0.86), 4.47 (0.63)
Engagement Area 2: Respect and value advisory committee perspectives (8 items): 4.97 (0.17), 4.85 (0.36), 4.96 (0.20) | 4.85 (0.36), 4.81 (0.41), 4.93 (0.26)
Engagement Area 3: Seek advisory committee input (5 items): 4.74 (0.49), 4.47 (0.74), 4.67 (0.54) | 4.69 (0.57), 4.43 (0.77), 4.64 (0.59)
Engagement Area 4: Act on advisory committee input (4 items): 4.71 (0.49), 4.42 (0.77), 4.63 (0.52) | 4.71 (0.52), 4.35 (0.91), 4.60 (0.60)
Engagement Area 5: The advisory committee and the research team learn from one another’s expertise (7 items): 4.55 (0.63), 4.46 (0.72), 4.73 (0.50) | 4.55 (0.64), 4.42 (0.77), 4.70 (0.53)
Engagement Area 6: Deal with conflict and disagreements effectively^3 (6 items): 4.90 (0.34), 4.70 (0.59), 4.91 (0.38) | 4.73 (0.47), 4.69 (0.56), 4.82 (0.39)
Engagement Area 7: Communicate with advisory committee members using effective methods (7 items): 4.67 (0.50), 4.58 (0.53), 4.74 (0.46) | 4.66 (0.48), 4.56 (0.59), 4.79 (0.41)
Engagement Area 8: Use a clear organizational structure (4 items): 4.78 (0.42), 4.50 (0.61), 4.78 (0.45) | 4.67 (0.48), 4.50 (0.69), 4.74 (0.44)
Engagement Area 9: Be transparent and informative (5 items): 4.78 (0.44), 4.61 (0.59), 4.78 (0.44) | 4.72 (0.48), 4.53 (0.71), 4.73 (0.52)
Engagement Area 10: Involve the advisory committee in dissemination activities (4 items): 4.63 (0.59), 4.54 (0.63), 4.78 (0.42) | 4.60 (0.60), 4.53 (0.63), 4.75 (0.58)

^1 Average scores for the question: Rate how often you think the research team did each of the following (Response scale: 1-Never, 2-Rarely, 3-Sometimes, 4-Most of the time, 5-Always).

^2 Average scores for the question: Rate how well you think the research team did each of the following (Response scale: 1-Poor, 2-Fair, 3-Good, 4-Very good, 5-Excellent).

^3 Items in this area also included an option of “N/A; not applicable”. Answers of “N/A” were not analyzed.

Cycle 1, Step 7: Determine need for further tailoring of evaluation tool

Cycle 1 determined that the Initial Evaluation did not meet our advisory committee’s needs, given their feedback in Cycle 1, Step 6. Throughout Cycle 1 it became clear that further tailoring of the Initial Evaluation would not be sufficient to meet stakeholder recommendations. Thus, instead of repeating a second cycle of the SCIP to further tailor the Initial Evaluation, we utilized the SCIP methods to develop a new tool. As in Cycle 1, the process in Cycle 2 occurred during an ongoing, active trial, thereby reflecting stakeholder priorities in real time. Next, we describe Cycle 2 of the SCIP methodology, which yielded the Stakeholder-Centric Engagement Evaluation presented in Figure 1.

Cycle 2 – Developing a New Evaluation Tool for Stakeholder Engagement

Cycle 2, Step 1: Determine best practices for evaluating stakeholder engagement

As a first step in Cycle 2 of the SCIP, the research team dedicated the October 2018 meeting to reviewing the foundational information about stakeholder engagement first presented at our study kick-off meeting. For example, we reviewed stakeholder engagement concepts and PCORI priorities for research partnerships. To further ground Cycle 2 of the SCIP, we reviewed the results of our scoping review on ways to evaluate stakeholder engagement (Martínez et al., 2019b, 2019c; Sheridan et al., 2017) and discussed the work we completed as part of SCIP Cycle 1. These conversations allowed our advisory committee and research team to establish a common understanding of our work and of the next steps for developing an evaluation of stakeholder engagement that met our collective needs.

Cycle 2, Step 2: Complete targeted review of the evaluation tool by the research team

The research team reviewed several sources of data to develop a tool that better reflected our study and advisory committee priorities. Data sources included the PCORI engagement principles, feedback received from advisory committee members during monthly meetings, results of the Initial Evaluation’s 5-item survey, and findings from our scoping review on measuring engagement (Boivin et al., 2018; Goodman et al., 2017). Using the Initial Evaluation as a guide, research team members developed new assessment domains (e.g., transparency, organization, dissemination), created new items, reorganized existing questions, and removed overlapping items. The research team met weekly during this process and discussed all changes.

Once a draft of the Stakeholder-Centric Engagement Evaluation was created, targeted research team members conducted a more in-depth review. For example, the research team psychometrician reviewed the tool’s content, doctoral and master’s students read for clarity, and a cultural and engagement expert reviewed for strategies to enhance inclusivity and accessibility. The research team again met weekly to discuss all recommendations and finalize a draft for advisory committee review.

Cycle 2, Step 3: Invite targeted review of the evaluation tool by key stakeholders

An open invitation to review the Stakeholder-Centric Engagement Evaluation was again extended to advisory committee members with little experience in research as well as those representing the patient and caregiver perspectives. To this end, we engaged our advisory committee’s patient advocate, caregiver, certified nursing assistant, and ancillary staff member with expertise in dining services. The research team provided opportunities for online feedback and engaged in one-on-one discussions with these stakeholder reviewers. Reviewers were compensated for the additional time spent on this task.

Similar to Cycle 1, reviews centered on general impressions of the Stakeholder-Centric Engagement Evaluation, satisfaction with the time requirement and administration procedures, accessibility to non-researchers, and how well the tool captured stakeholder engagement. Discussions occurred via email, phone call, or videoconference, as preferred by the reviewer. Advisory committee member recommendations at this stage included: (a) simplifying existing language to enhance accessibility to non-researchers and non-clinicians, (b) removing additional duplicate questions, and (c) grouping like items together.

Cycle 2, Step 4: Present the evaluation tool to stakeholders for approval

The research team and advisory committee reviewers presented the Stakeholder-Centric Engagement Evaluation during our November 2018 meeting. After an open discussion period, advisory committee members unanimously endorsed the Stakeholder-Centric Engagement Evaluation. They recommended labeling the question groups as “Engagement Areas” to reflect engagement values and adding a “Not Applicable” answer choice for items about disagreements, given the lack of conflict in our partnership. The advisory committee also reiterated the value of our administration strategies (i.e., scheduling, online distribution, compensation for time spent) and requested that these remain unchanged to reduce burden on stakeholders. Further, the research team noted that our administration strategies had led to 100% completion (N=18) of the tool during each administration.

Cycle 2, Step 5: Administer evaluation tool and capture data on face validity and acceptability

The Stakeholder-Centric Engagement Evaluation was administered for the first time in December 2018 using the strategies endorsed by the advisory committee in Cycle 1, Step 4 and reinforced in Cycle 2, Step 4. To explore advisory committee members’ perspectives on this new evaluation, we once again appended the research team-developed 5-item survey, which we present in Table 2.

Cycle 2, Step 6: Share evaluation tool results with stakeholders

The research team analyzed all responses to the Stakeholder-Centric Engagement Evaluation and the additional 5-item survey questions about face validity and acceptability. In line with the SCIP, the research team presented all results to the advisory committee during the January 2019 meeting.

We again held an open discussion to elicit the advisory committee’s reactions and thoughts. During this time, the advisory committee expressed enthusiasm about the Stakeholder-Centric Engagement Evaluation and recommended no further modifications. More specifically, committee members shared that the Stakeholder-Centric Engagement Evaluation more fully captured their experience as study partners and expressed gratitude for the time invested by the research team and advisory committee members to develop the tool. Advisory committee members were again encouraged to share any additional feedback that emerged via email.

The results of our 5-item survey on the Stakeholder-Centric Engagement Evaluation’s face validity and acceptability are presented in Table 2. Overall, the results showed that the advisory committee found the Stakeholder-Centric Engagement Evaluation to be concise, comprehensive, and clear. Given this information, the research team and advisory committee unanimously agreed to adopt this tool for the remainder of the study.

Cycle 2, Step 7: Determine need for further tailoring of evaluation tool

Based on the results from Step 6, no further tailoring was necessary. Thus, the Stakeholder-Centric Engagement Evaluation was integrated into our study as the method for evaluating stakeholder engagement. Given our stakeholders’ recommendation of no further changes, we concluded the SCIP process and removed the 5-item survey containing the face validity and acceptability questions. The Stakeholder-Centric Engagement Evaluation is presented in Figure 1. The results of each administration are presented in Table 3 and discussed in more detail later in this paper.

Descriptive Analysis of Data

Quantitative and qualitative methods were utilized to analyze responses to each evaluation and the 5-item surveys used to capture data on face validity and acceptability for both the Initial Evaluation and the Stakeholder-Centric Engagement Evaluation.

The Stakeholder-Centric Engagement Evaluation

Quantitative data was obtained from the Stakeholder-Centric Engagement Evaluation’s (Figure 1) 54 items across 10 Engagement Areas. These items were rated on a 5-point response scale addressing frequency (how often) and a second 5-point response scale addressing quality (how well). Qualitative data was gathered from two open-ended items at the end of the Stakeholder-Centric Engagement Evaluation that were intended to capture advisory committee members’ thoughts and priorities for co-learning.
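To make the tool’s structure concrete, the sketch below shows one way a single completed evaluation could be represented in code. The field names and types are illustrative assumptions on our part and are not taken from the published instrument.

```python
from dataclasses import dataclass, field

@dataclass
class ItemResponse:
    engagement_area: int   # 1-10, per the areas listed in Table 3
    item_number: int       # position of the item within its area
    how_often: int | None  # 1-Never ... 5-Always; None for "N/A" (Area 6 only)
    how_well: int | None   # 1-Poor ... 5-Excellent; None for "N/A" (Area 6 only)

@dataclass
class EvaluationResponse:
    respondent: str                                           # name or anonymous code
    administration: str                                       # e.g., "December 2018"
    items: list[ItemResponse] = field(default_factory=list)   # 54 rated items
    open_ended: list[str] = field(default_factory=list)       # two free-text items
```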

The 5-Item Face Validity and Acceptability Survey

Quantitative data was collected from three items on the 5-item face validity and acceptability surveys. Qualitative data was obtained via two open-ended items that solicited reflections on the tool and its administration.

Data Analysis

All analyses were conducted by the first author and reviewed by the entire research team during weekly project meetings. Quantitative data was analyzed using descriptive statistics (i.e., means, standard deviations) obtained via IBM SPSS Statistics Version 26. The results are presented in Tables 2 and 3. For the qualitative data, we used Excel to organize participant responses and generate emergent themes via a broad surface-structure content analysis (Bengtsson, 2016). Next, the first author presented emergent themes to the research team during weekly project management meetings and led a short member-checking session. Conflicts were resolved via discussion within the research team. Given the small amount of narrative data collected from the 18-member advisory committee, more in-depth analysis methods were not warranted (Bengtsson, 2016).
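The study ran these descriptive statistics in SPSS. Purely as an illustration, the same per-area means and standard deviations could be reproduced in a few lines of Python; the pandas workflow, column names, and ratings below are our assumptions, not the study’s data.

```python
import pandas as pd

# One row per advisory committee member, per item, per rating scale.
# The ratings here are invented for illustration only.
responses = pd.DataFrame({
    "respondent":      ["r01", "r01", "r02", "r02", "r03", "r03"],
    "engagement_area": [1, 2, 1, 2, 1, 2],
    "scale":           ["how_often"] * 6,
    "rating":          [5, 5, 4, 5, 4, 5],
})

# Mean and standard deviation per engagement area and scale, the same
# summary statistics reported in each cell of Table 3.
summary = (responses
           .groupby(["engagement_area", "scale"])["rating"]
           .agg(["mean", "std"])
           .round(2))
print(summary)
```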

Results

Face Validity and Acceptability

Face validity and acceptability questions were administered through a research team-developed 5-item survey, appended in August 2018 to the Initial Evaluation and in December 2018 to the Stakeholder-Centric Engagement Evaluation. Responses to the three quantitative questions of the 5-item survey for the Stakeholder-Centric Engagement Evaluation showed high satisfaction with the tool’s time required to complete (M=4.50, SD=0.86), clarity (M=4.56, SD=0.78), and relevance (M=4.67, SD=0.49) (maximum score=5). Content analysis of the two open-ended items yielded themes addressing the relevance of questions, the structure of the evaluation, the general experience of the respondent, and areas of success of the evaluation. The quantitative and qualitative results for each administration of the 5-item survey are presented in Table 2.

Stakeholder-Centric Engagement Evaluation

The Stakeholder-Centric Engagement Evaluation was administered at three time points with a 100% response rate (N=18): December 2018, August 2019, and December 2019. Results for each administration are shown in Table 3. Overall, mean scores across the Stakeholder-Centric Engagement Evaluation’s areas remained consistently high across administrations. We monitored responses during each administration for fluctuations that could alert us to areas to strengthen or retain in our stakeholder engagement strategy.
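The paper does not define a numeric rule for what counted as a fluctuation; changes were reviewed in discussion with the advisory committee. Purely to illustrate this monitoring step, the sketch below flags engagement areas whose mean dropped between administrations, using a cutoff we chose arbitrarily and “how often” means for Engagement Area 4 taken from Table 3.

```python
def flag_score_drops(previous: dict[str, float],
                     current: dict[str, float],
                     threshold: float = 0.25) -> list[str]:
    """Return engagement areas whose mean fell by at least `threshold`
    between two administrations. The cutoff is an illustrative assumption;
    the study reviewed all score changes with its stakeholders."""
    return [area for area, prior_mean in previous.items()
            if area in current and prior_mean - current[area] >= threshold]

# "How often" means for Engagement Area 4 (Table 3).
dec_2018 = {"Area 4: Act on advisory committee input": 4.71}
aug_2019 = {"Area 4: Act on advisory committee input": 4.42}
print(flag_score_drops(dec_2018, aug_2019))
# ['Area 4: Act on advisory committee input']
```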

The mean engagement scores across each area ranged from M=4.35 (SD=0.91) to M=4.97 (SD=0.17) (maximum score=5). Among the lowest rated scores in August 2019 (M=4.36, SD=0.86) was “Engagement Area 1: Focus on issues important to the nursing home community from advisory committee perspectives”. In December 2019, this mean increased to M=4.47 (SD=0.63), which corresponded to the research team’s strategic effort to enhance communication, meeting structure, and timeliness. For example, we made multiple modifications to the monthly meeting structure, including situating each topic within the overarching study. We did so in response to advisory committee feedback that the goal and details of the project were difficult to follow given its complex structure. We also made major adaptations to our study procedures based on stakeholder feedback gathered through monthly meetings and follow-up conversations. For example, we adapted the recruitment of caregivers for qualitative interviews to minimize burden on caregivers and nursing home staff. In comparison, the highest rated engagement area (M=4.97, SD=0.17) was “Respect and value advisory committee perspectives”; these high scores remained consistent across all evaluation time points. Qualitative analysis of the two open-ended items at the end of the Stakeholder-Centric Engagement Evaluation, intended to capture additional stakeholder thoughts and priorities for co-learning, yielded the themes of communication and meeting organization, which are described below.

Communication

Communication encompassed methods of sharing information between the research team and advisory committee (e.g., videoconferencing, use of an online platform). Advisory committee members reported liking the online platform but had questions about how to use the services. For example, one member requested help in personalizing the number and frequency of email notifications sent by the online platform. In response, the research team developed “how to” guides on the use of project technology and led workshops during multiple advisory committee meetings.

Advisory committee members expressed a desire for more ways to communicate, share information, and network with one another. In response, we created forums on our online platform and encouraged networking among all members during calls. Feedback obtained via subsequent assessments and in-call discussions showed that advisory committee members were satisfied with these measures and found them responsive.

Advisory committee members with little to no previous research experience also shared that participating in a study initially seemed very intimidating. However, these fears were eased after participating in our monthly advisory committee meetings. Stakeholders explained that our communication styles and strategies promoted a collegial and respectful atmosphere, which helped them feel at ease and feel that their opinions were wanted, respected, and valued.

Meeting Organization

As established during the in-person kickoff meeting, we held monthly meetings with the advisory committee throughout our study. Each virtual meeting was offered three times per month in response to advisory committee members’ busy schedules and to reduce barriers to participation. Specifically, the three sessions were offered on different days and at varied times suggested by stakeholders to accommodate time zone differences and varied responsibilities; we also incorporated an online project management platform to communicate and share documents (e.g., meeting minutes, presentation slides) between calls.

Our organization of each meeting maximized stakeholders’ attendance and participation in our regular advisory committee meetings. For example, we created and shared a calendar with all call times and dates for the entire year. We sent electronic calendar invitations and meeting reminders with an agenda to all advisory committee members ahead of each meeting. We also adapted how our monthly meetings were organized. During the early months of our study, the research team presented on the progress made toward all project milestones; advisory committee members were overwhelmed by this format. In response, we shifted our approach to use visual aids liberally (e.g., PowerPoint slides, diagrams) and to highlight only one or two salient topics each month. We began each discussion by anchoring it in the context of the research question and design so that its relationship to the study was clear; an in-depth review of the broader study structure was also provided periodically. Lastly, we concluded each monthly meeting with a brief update on all major parts of the study to give our advisory committee a view of our progress. Our advisory committee communicated their satisfaction with these changes via their responses to the December 2019 evaluation.

Discussion

Engaging stakeholders in research is an opportunity to create meaningful collaborations where individuals participate as co-researchers, advisors, and collaborators (Boivin et al., 2018; Harrison et al., 2019). Effective engagement leverages stakeholder expertise and encourages partnership, power sharing, transparency, and mutual benefit. Because stakeholder partners represent the diversity of their communities and may have little or no research experience, a culturally sensitive approach that empowers vulnerable stakeholders is essential.

As a growing number of funders embrace stakeholder-engaged research, investigators must be prepared to report on their collaborations with non-researcher partners using a common framework and to systematically evaluate the quality and impact of stakeholder contributions (Forsythe et al., 2016; Sheridan et al., 2017). Ongoing evaluation of engagement at regular intervals within an ongoing study is important for understanding how stakeholders are engaged and the effectiveness of such strategies (Martínez et al., 2019a, 2019b, 2019c). Promisingly, there is a growing body of literature supporting the need for assessment of stakeholder engagement, recommendations for its implementation, and the emergence of models to capture stakeholder engagement (Boivin et al., 2018; Concannon et al., 2014; Forsythe et al., 2016, 2018, 2019; Goodman et al., 2019, 2020; Hamilton et al., 2018; Maccarthy et al., 2019; Ray & Miller, 2017). Taken together, engaging stakeholders in decision-making and utilizing regular, systematic evaluations of engagement can support studies that champion authenticity, equity, inclusion, and sustainability in their partnerships.

Our study integrated ongoing evaluation of the research team-advisory committee collaboration, building on the pioneering work of Goodman et al. (2017). We developed and implemented the Stakeholder-Centric Instrumentation Process (see Figure 2), a method to establish a tool that reflected our research team’s and advisory committee’s collective values, language, and priorities and that allowed for monitoring over time. The result was the Stakeholder-Centric Engagement Evaluation, a tool that assesses engagement in a way that reflects our stakeholders’ preferences (see Figure 1). Other researchers can use the SCIP to customize an evaluation tool for a given stakeholder group.

Obtaining detailed and thorough feedback from stakeholders is a time-intensive endeavor but is possible within an ongoing trial. In our case, strategies such as electronic administration of the Stakeholder-Centric Engagement Evaluation, electronic reminders for its completion, incorporating time for the advisory committee to complete the assessment, and providing a stipend resulted in high-quality feedback and a 100% response rate. In turn, the Stakeholder-Centric Engagement Evaluation has given us timely information for rapidly addressing our stakeholders’ concerns. Feedback gathered through the Stakeholder-Centric Engagement Evaluation had a significant impact on our engagement approach. For example, our advisory committee requested that we address questions about technology, create ways to communicate with one another, and balance the information presented during monthly meetings.

Limitations center on the external validity of the SCIP and the Stakeholder-Centric Engagement Evaluation. Although the SCIP has not yet been implemented by others, we propose that this systematic process can be replicated by other research teams to establish a tool that meets the needs of their stakeholders. We also developed the Stakeholder-Centric Engagement Evaluation in a short amount of time with our advisory committee, so generalization of the tool to other groups of stakeholders is limited. Future research directions include investigating the Stakeholder-Centric Engagement Evaluation’s psychometric properties, enhancing its generalizability to various study topics and populations, and implementing the SCIP in other studies.

Although the importance of partnering with stakeholders in the scientific process is widely supported, best practices for engagement and approaches for evaluating or reporting such processes vary (Martínez et al., 2019a, 2019b, 2019c). Goodman and colleagues’ pioneering work laid the foundation for the development of our tool to capture stakeholder engagement in research (Goodman et al., 2017, 2019).

We believe that the SCIP methodology and the Stakeholder-Centric Engagement Evaluation can be implemented with ease within existing trials to capture research team-stakeholder collaborations and to ensure that evaluation strategies represent study partner priorities. To date, there is a dearth of resources for evaluating stakeholder engagement (Harrison et al., 2019). Here, we offer a tailored tool and instrumentation process as a first step toward addressing this gap in knowledge and advancing the science of stakeholder engagement.

Acknowledgements:

Financial disclosure: Research reported in this publication was funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (IHS-1608-35732). The statements in this publication are solely the responsibility of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute (PCORI), its Board of Governors, or its Methodology Committee.

References

  1. Bengtsson M (2016). How to plan and perform a qualitative study using content analysis. NursingPlus Open, 2, 8–14. 10.1016/j.npls.2016.01.001 [DOI] [Google Scholar]
  2. Boivin A, L’Espérance A, Gauvin F-P, Dumez V, Macaulay AC, Lehoux P, & Abelson J (2018). Patient and public engagement in research and health system decision making: A systematic review of evaluation tools. Health Expectations, 21(6), 1075–1084. 10.1111/hex.12804 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Concannon TW, Fuster M, Saunders T, Patel K, Wong JB, Leslie LK, & Lau J (2014). A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. Journal of General Internal Medicine, 29(12), 1692–1701. 10.1007/s11606-014-2878-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. DeCamp LR, Polk S, Chrismer MC, Giusti F, Thompson DA, & Sibinga E (2015). Health care engagement of limited English proficient Latino families: Lessons learned from advisory board development. Progress in Community Health Partnerships, 9(4), 521–530. 10.1353/cpr.2015.0068 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Forsythe L, Heckert A, Margolis MK, Schrandt S, & Frank L (2018). Methods and impact of engagement in research, from theory to practice and back again: Early findings from the Patient-Centered Outcomes Research Institute. Quality of Life Research, 27(1), 17–31. 10.1007/s11136-017-1581-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Forsythe LP, Carman KL, Szydlowski V, Fayish L, Davidson L, Hickham DH, Hall C, Bhat G, Neu D, Stewart L, Jalowsky M, Aronson N, & Anyanwu CU (2019). Patient engagement in research: Early findings from the Patient-Centered Outcomes Research Institute. Health Affairs, 38(3), 359–367. 10.1377/hlthaff.2018.05067 [DOI] [PubMed] [Google Scholar]
  7. Forsythe LP, Ellis LE, Edmundson L, Sabharwal R, Rein A, Konopka K, & Frank L (2016). Patient and stakeholder engagement in the PCORI pilot projects: Description and lessons learned. Journal of General Internal Medicine, 31(13). 10.1007/s11606-015-3450-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Goodman MS, Ackermann N, Bowen DJ, Delphi Panel, & Thompson VS (2020). Reaching consensus on principles of stakeholder engagement in research. Progress in Community Health Partnerships, 14(1), 117–127. 10.1353/cpr.2020.0014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Goodman MS, Ackermann N, Bowen DJ, & Thompson V (2019). Content validation of a quantitative stakeholder engagement measure. Journal of Community Psychology, 47(8), 1937–1951. 10.1002/jcop.22239 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Goodman MS, Sanders Thompson VL, Johnson CA, Gennarelli R, Drake BF, Bajwa P, Witherspoon M, & Bowen D (2017). Evaluating community engagement in research: Quantitative measure development. Journal of Community Psychology, 45(1), 17–32. 10.1002/jcop.21828 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Hamilton CB, Hoens AM, Backman CL, McKinnon AM, McQuitty S, English K, & Li LC (2017). An empirically based conceptual framework for fostering meaningful patient engagement in research. Health Expectations, 21(1), 396–406. 10.1111/hex.12635 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Hamilton CB, Hoens AM, McQuitty S, McKinnon AM, English K, Backman CL, Azimi T, Khodarahmi N, & Li LC (2018). Development and pre-testing of the Patient Engagement In Research Scale (PEIRS) to assess the quality of engagement from a patient perspective. PLoS One, 13(11), e0206588. 10.1371/journal.pone.0206588 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, & Conde JG (2009). Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377–381. 10.1016/j.jbi.2008.08.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Harrison JD, Auerbach AD, Anderson W, Fagan M, Carnie M, Hanson C, Banta J, Symczak G, Robinson E, Schnipper J, Wong C, & Weiss R (2019). Patient stakeholder engagement in research: A narrative review to describe foundational principles and best practice activities. Health Expectations, 1–10. 10.1111/hex.12873 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Maccarthy J, Guerin S, Wilson AG, & Dorris ER (2019). Facilitating public and patient involvement in basic and preclinical health research. PLoS One, 14(5), e0216600. 10.1371/journal.pone.0216600 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Martínez J, Piersol CV, Terhorst L, Wong C, Bickling M, Day CE, & Leland NE (2019a). Stakeholder engagement in research: Enhancing quality of life for nursing home residents with dementia. Alzheimer’s and Dementia, 15(7), P1156. 10.1016/j.jalz.2019.06.3525 [DOI] [Google Scholar]
  17. Martínez J, Wong C, Piersol CV, Bieber DC, Perry BL, & Leland NE (2019b). Stakeholder engagement in research: A scoping review of current evaluation methods. Journal of Comparative Effectiveness Research, 8(15), 1327–1341. 10.2217/cer-2019-0047 [DOI] [PubMed] [Google Scholar]
  18. Martínez J, Wong C, Saric K, Clayton Bieber D, Perry B, & Leland NE (2019c). Measuring stakeholder engagement in research: A review of the evidence. The American Journal of Occupational Therapy, 73(4). 10.5014/ajot.2019.73S1-PO3009 [DOI] [Google Scholar]
  19. Pandi-Perumal SR, Zeller JL, Parthasarathy S, Edward Freeman R, & Narasimhan M (2019). Herding cats and other epic challenges: Creating meaningful stakeholder engagement in community mental health research. Asian Journal of Psychiatry, 42, 19–21. 10.1016/j.ajp.2019.03.019 [DOI] [PubMed] [Google Scholar]
  20. Poger JM, Yeh H-C, Bryce CL, Carroll JK, Kong L, Francis EB, & Kraschnewski JL (2020). PaTH to partnership in stakeholder-engaged research: A framework for stakeholder engagement in the PaTH to Health Diabetes study. Healthcare, 8(10), 1–6. 10.1016/j.hjdsi.2019.05.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Ray KN, & Miller E (2017). Strengthening stakeholder-engaged research and research on stakeholder engagement. Journal of Comparative Effectiveness Research, 6(4), 375–389. 10.2217/cer-2016-0096 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Sheridan S, Schrandt S, Forsythe L, Advisory Panel on Patient Engagement, Hilliard TS, & Paez KA (2017). The PCORI engagement rubric: Promising practices for partnering in research. Annals of Family Medicine, 15(2), 165–170. 10.1370/afm.2042 [DOI] [PMC free article] [PubMed] [Google Scholar]
