J Am Med Inform Assoc. 2016 Dec 19;24(3):537–543. doi: 10.1093/jamia/ocw157

Comparative analysis of stakeholder experiences with an online approach to prioritizing patient-centered research topics

Dmitry Khodyakov 1, Sean Grant 1, Daniella Meeker 1,2, Marika Booth 1, Nathaly Pacheco-Santivanez 1, Katherine K Kim 3
PMCID: PMC7651951  PMID: 28011596

Abstract

Objective: Little evidence exists about effective and scalable methods for meaningful stakeholder engagement in research. We explored patient/caregiver experiences with a high-tech online engagement approach for patient-centered research prioritization, compared their experiences with those of professional stakeholders, and identified factors associated with favorable participant experiences.

Methods: We conducted 8 online modified-Delphi (OMD) panels. Panelists participated in 2 rating rounds with a statistical feedback/online discussion round in between. Panels focused on weight management/obesity, heart failure, and Kawasaki disease. We recruited a convenience sample of adults with any of the 3 conditions (or parents/guardians of Kawasaki disease patients), clinicians, and researchers. Measures included self-reported willingness to use OMD again, the panelists’ study participation and online discussion experiences, the system’s perceived ease of use, and active engagement metrics.

Results: Out of 349 panelists, 292 (84%) completed the study. Of those, 46% were patients, 36% were clinicians, and 19% were researchers. In multivariate models, patients were not significantly more actively engaged (Odds ratio (OR) = 1.69, 95% confidence interval (CI), 0.94–3.05) but had more favorable study participation (β = 0.49; P ≤ .05) and online discussion (β = 0.18; P ≤ .05) experiences and were more willing to use OMD again (β = 0.36; P ≤ .05), compared to professional stakeholders. Positive perceptions of the OMD system’s ease of use (β = 0.16; P ≤ .05) and favorable study participation (β = 0.26; P ≤ .05) and online discussion (β = 0.57; P ≤ .05) experiences were also associated with increased willingness to use OMD in the future. Active engagement was not associated with online experience indices or willingness to use OMD again.

Conclusion: Online approaches to engaging large numbers of stakeholders are a promising and efficient adjunct to in-person meetings.

Keywords: ExpertLens, online modified-Delphi, patient-centered outcomes research, patient engagement, pSCANNER, stakeholder engagement


Engaging stakeholders in prioritization and resource allocation exercises is a critical component of research. To ensure relevance and value to health care stakeholders, the Patient-Centered Outcomes Research Institute (PCORI) has a mandate for stakeholder participation in all stages of research. Clinicians, patients, and caregivers engage in preparation (agenda setting, topic prioritization and selection); execution (study design, participant recruitment, data collection and analysis); and translation (results dissemination and implementation) of research.1,2 Patient and other stakeholder participation can help researchers (1) ensure relevance of research questions to patient and stakeholder needs; (2) secure funding, design data collection protocols, and choose appropriate outcomes; (3) facilitate study recruitment efforts; and (4) facilitate translation and dissemination of results.1,3–5

Despite widespread support, little is known about best methods of stakeholder engagement.1 Focus groups, in-depth interviews, surveys, e-mail communication, conference calls, patient home visits, patient advisory boards, deliberative sessions, and consensus-building techniques have been used for engagement purposes.1,3,6–8 Many of these methods require face-to-face interaction and are considered to be “high-touch.”9 High-touch approaches involve direct contact with stakeholders and therefore are time-consuming, logistically challenging, prone to cognitive bias, expensive to implement, and difficult to scale up.9 Because of these limitations, only a relatively small number of stakeholders, especially patients or their caregivers, are typically engaged in any given research study.

Moreover, patients’ priorities are often elicited separately from those of other stakeholders, which complicates the decision-making process.10 It is not clear whether it is better to convene a multistakeholder engagement process where patients and health care professionals (ie, clinicians and health researchers) interact with one another directly or to combine their separately collected input. While patients may be more comfortable sharing their perspectives when professionals are not present, direct multistakeholder interaction can have synergistic effects and can lead to consensus.

As an alternative to high-touch methods, “high-tech” engagement approaches conducted online are becoming more popular.11,12 Online approaches are scalable and can facilitate the engagement of large numbers of stakeholders at lower cost, allowing for post hoc stratification of responses. Stakeholders can also contribute at their convenience without needing to travel, a particular concern for patients with chronic and potentially disabling conditions.9 Online approaches may also promote collaboration and transparency, because the responses of all stakeholders are readily available to all participants.

Online modified-Delphi (OMD) is one high-tech approach that provides a methodological basis for online collaborative platforms. Such platforms were identified as a priority area in a recent report on innovative methods for stakeholder engagement.11 OMD facilitates structured engagement of and deliberation in large and diverse groups of stakeholders, who participate using their own Internet-connected devices.13,14 OMD helps explore the existence of consensus among stakeholders, who answer structured questions and can revise their responses based on group responses and new information generated during discussions. Finally, the online nature of engagement also makes it easier to solicit input from patients and professional stakeholders at the same time, because participation can be anonymous.15

Different versions of OMD have been implemented using REDCap™ (a Web-based data capture and management tool),16 SurveyMonkey™ (an online survey tool),17 and custom websites with integrated data collection and analysis components,18 as well as online discussion functionalities.15 OMD platforms have been used to engage researchers, health care providers, agency administrators, policy-makers, and community members on such topics as developing national suicide prevention research goals,13,14 identifying definitional features of continuous quality improvement in health care,19,20 developing quality and performance indicators/measures for arthritis patients,21–24 identifying ethical principles that guide translational science research,25 and exploring ways Veterans could be involved in the design of Veterans Administration care.26

While previous research shows that health care professionals generally report positive experiences with OMD,20,24 little is known about patient experiences and how they compare to those of professional stakeholders. In this article, we report the results of an exploratory study that compares patients’ and professionals’ experiences with OMD conducted to identify research priorities for patient-centered, comparative effectiveness research on 3 health conditions: weight management/obesity, heart failure, and Kawasaki disease (KD). We analyzed whether and how stakeholder type (patient vs professional); panel composition (patient-only vs mixed); and stakeholder level of engagement, perception of the OMD system, and experiences with the online process affect stakeholders’ willingness to participate in future OMD panels. Knowing what factors may affect their willingness to be engaged using online approaches will contribute to our understanding of best practices for patient and stakeholder engagement. In the Discussion section, we offer lessons learned about engaging stakeholders using high-tech approaches and discuss the benefits of integrating OMD processes with information in clinical trial management systems and disease registries.

METHODS

pSCANNER

We use stakeholder engagement data from the Patient-Centered SCAlable National Network for Effectiveness Research (pSCANNER),27 a stakeholder-driven network and a member of PCORI’s National Patient-Centered Clinical Research Network, PCORnet.28 As part of pSCANNER stakeholder engagement activities, in winter-spring 2015, we conducted 8 online stakeholder panels to explore consensus on research priorities for future pSCANNER studies.

Participants

Because pSCANNER focuses on 3 health conditions, we convened separate panels for each condition. For weight management/obesity and heart failure, we convened 1 patient-only panel, 1 clinician-only panel, and 1 mixed panel that included patients, clinicians, and researchers. For Kawasaki disease, a rare childhood disease that affects the blood vessels, we convened only 2 panels: a patient/caregiver-only panel and a mixed panel that included patients/caregivers, clinicians, and researchers. This study was reviewed and determined not to be human subjects research by the Institutional Review Boards at RAND and the University of California, Davis.

Participants were recruited via e-mail and messages to members-only social media communities, and in person by members of the pSCANNER advisory board, investigators, and clinicians. Eligible participants had to be 18 years of age or older, be able to read and write in English, and have access to a computer or similar device to access the online system. Eligible patients were overweight (BMI ≥ 25 kg/m2) or diagnosed with heart failure or KD. Eligible caregivers were parents/guardians of children diagnosed with KD. Eligible researchers and clinicians conducted research on or provided care for patients with these conditions. During recruitment, participants provided information about their gender, race/ethnicity, education, and prior participation in online surveys and expert panels. Participating researchers, clinicians, and patients/caregivers received a $300 gift card as compensation for the ∼4 hours of time required to complete the study. All participants were paid the same amount to ensure equity, as suggested in the PCORI Compensation Framework.

Data collection

OMD enables iterative engagement of noncollocated individuals without requiring them to travel to a centralized location. Participants anonymously answer questions, interact with others, and revise their original responses based on feedback and discussion using their own computer at a time convenient to them.15

To ensure consistency across panels within the same condition, we conducted them at the same time using identical 3-round protocols (ie, participants prioritized the same research topics using the same rating criteria). (Substantive panel findings will be reported separately.) While research topics varied across medical conditions, the rating questions remained the same. In Round One (R1), participants rated a series of research topics for their medical condition on 5 criteria and explained their responses using open text boxes. In Round Two (R2), they saw their own and copanelists’ responses and discussed them using a moderated, asynchronous, anonymous discussion board. The same facilitator moderated all discussions. In Round Three (R3), participants could revise their original responses and share their experiences with OMD. Each round was open for 7–10 days.

The ExpertLens™ OMD platform was used throughout.15,20 In comparison to other online Delphi platforms,17 ExpertLens offers an innovative engagement approach: it allows participants not only to answer questions and explain their responses but also to discuss results using an asynchronous, anonymous, moderated online discussion board; it eliminates the between-round time needed to generate individualized reports detailing group responses; and it displays participants’ original R1 responses in R3 to help them make their final decisions.
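
To make the between-round feedback step concrete, here is a minimal sketch (not the actual ExpertLens implementation) of how individualized Round Two feedback could be generated from Round One ratings, assuming ratings are stored as one row per participant, topic, and criterion; all column names and the 9-point scale are illustrative assumptions.

```python
import pandas as pd

# Hypothetical Round One ratings: one row per participant x topic x criterion
# (column names and the 1-9 scale are illustrative, not the ExpertLens schema).
r1 = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3, 3],
    "topic":          ["Topic A", "Topic B"] * 3,
    "criterion":      ["importance"] * 6,
    "rating":         [7, 4, 8, 5, 6, 9],
})

# Group-level summary for each topic/criterion: median and interquartile range.
group = (
    r1.groupby(["topic", "criterion"])["rating"]
      .agg(group_median="median",
           q1=lambda x: x.quantile(0.25),
           q3=lambda x: x.quantile(0.75))
      .reset_index()
)

# Individualized Round Two feedback: each participant sees their own rating
# next to the group statistics for the same topic and criterion.
feedback = r1.merge(group, on=["topic", "criterion"])
print(feedback)
```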

Measures

In accordance with best practices of stakeholder engagement, we evaluated OMD panelists’ experiences.29,30 At the end of R3, we administered a participant experience survey. Panelists used a 7-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = neutral, 5 = slightly agree, 6 = agree, 7 = strongly agree) to rate 14 statements (see Table 2). These statements are based on research on computer-mediated communication and factors that may affect participant online experiences.31–33 Although not formally validated, these statements have been used in previous OMD studies to measure professional stakeholder experiences.20,24 For analysis, Likert response scales for negatively worded statements were recoded so that 7 corresponded to the most favorable rating and 1 to the least favorable.

Table 2.

Descriptive results by stakeholder type

Study variables Total (N = 292), M (SD) Patients/caregivers (N = 133), M (SD) Professionals (N = 159), M (SD) P-value
Willingness to use OMD again 5.31 (1.31) 5.73 (1.13) 4.95 (1.34) <.001
Active participant engagement (yes) 0.51 (0.50) 0.58 (0.50) 0.46 (0.50) .04
 Answered at least 90% of rating questions in Rounds One and Three 0.77 (0.42) 0.76 (0.43) 0.77 (0.42) .78
 Explained at least 90% of ratings in Round One or Three 0.60 (0.49) 0.66 (0.58) 0.55 (0.48) .06
 Posted at least 2 comments in Round Two discussions 0.90 (0.30) 0.90 (0.30) 0.89 (0.31) .80
OMD system’s ease of use 5.39 (1.44) 5.58 (1.39) 5.21 (1.47) .04
Experience with the online process (1 = strongly disagree to 7 = strongly agree)
Study Participation Experience Index 4.33 (1.13) 4.67 (1.08) 4.03 (1.10) <.001
 This study was too longa 3.88 (1.60) 3.43 (1.44) 4.28 (1.63) <.001
 Participation in this study was frustratinga 3.04 (1.58) 2.65 (1.49) 3.39 (1.58) <.001
 Participation in this study took a lot of efforta 4.23 (1.72) 4.06 (1.80) 4.37 (1.63) .16
 The right set of questions was asked in this study 4.46 (1.49) 4.83 (1.46) 4.14 (1.43) <.001
Online Discussion Experience Index 4.92 (0.67) 5.06 (0.63) 4.79 (0.68) <.001
 The discussions gave me a better understanding of the issues 5.37 (1.26) 5.63 (1.17) 5.14 (1.30) .002
 I had trouble following the discussiona 3.76 (1.74) 3.24 (1.60) 4.21 (1.72) <.001
 Participants debated each other’s viewpoints during the discussions 4.94 (1.15) 5.07 (1.11) 4.83 (1.17) .11
 The discussions brought out views I had not considered 5.25 (1.32) 5.46 (1.36) 5.06 (1.27) .02
 The discussions brought out divergent views 5.31 (1.08) 5.27 (1.15) 5.34 (1.03) .62
 Participants sometimes misinterpreted each other’s comments during the discussion 4.40 (1.35) 4.34 (1.33) 4.46 (1.37) .47
 The discussion round caused me to revise my original answers 4.80 (0.88) 4.83 (1.40) 4.77 (1.38) .73
 I was comfortable expressing my views in the discussion round 5.91 (0.88) 5.94 (0.94) 5.89 (0.82) .63

Note: aThese items were reverse-coded before being included in an index so that 7 corresponded to the most favorable rating and 1 to the least favorable.

Our main outcome was participants’ willingness to use OMD again, which was based on responses to 1 statement: “I would like to use ExpertLens in the future.”

To measure participants’ perception of the OMD system’s ease of use, we used the responses to 1 statement: “The ExpertLens system was easy to use.”

We created 2 indices measuring participants’ online experiences by averaging the responses to the relevant statements describing study participation and online discussion experiences. The study participation experience index included 4 items: “participation in this study was frustrating,” “participation in this study took a lot of effort,” “this study was too long,” and “the right set of questions was asked in this study” (α = 0.67). This index captured participant experiences with the study as a whole. The online discussion experience index included 8 items: “the discussions gave me a better understanding of the issues,” “I had trouble following the discussion,” “participants debated each other's viewpoints during the discussions,” “the discussions brought out views I hadn't considered,” “the discussions brought out divergent views,” “participants sometimes misinterpreted each other’s comments during the discussion,” “the discussion round caused me to revise my original answers,” and “I was comfortable expressing my views in the discussion round” (α = 0.60). The online discussion experience is a crucial component of the OMD process, because it helps explore whether the online process meets Delphi goals of encouraging participants to learn from others and revise their original responses based on new information.
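
To illustrate how these indices were constructed, the sketch below recodes negatively worded items on the 7-point scale (new score = 8 − original score), averages the recoded items into the study participation experience index, and computes Cronbach’s alpha using its standard formula. The responses and column names are hypothetical and purely illustrative.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, cols = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses on the 7-point scale (1 = strongly disagree ... 7 = strongly agree).
df = pd.DataFrame({
    "too_long":        [2, 5, 3, 6],   # negatively worded
    "frustrating":     [1, 4, 2, 5],   # negatively worded
    "a_lot_of_effort": [3, 6, 4, 5],   # negatively worded
    "right_questions": [6, 4, 5, 3],   # positively worded
})

# Reverse-code negatively worded items so that 7 is always the most favorable response.
for col in ["too_long", "frustrating", "a_lot_of_effort"]:
    df[col] = 8 - df[col]

# Study participation experience index: the mean of the four (recoded) items.
df["study_participation_index"] = df.mean(axis=1)

print(df)
print("Cronbach's alpha:", round(cronbach_alpha(df.drop(columns="study_participation_index")), 2))
```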

As in previous studies, we considered a mean of ≥5 on indices and positively worded statements measuring experiences (and ≤3 on negatively worded statements) to be an indicator of a “positive” or “favorable” experience.20,24

We are not aware of any formally validated instruments that measure participant engagement in OMD. Therefore, we developed a dichotomous measure of active participant engagement, which accounts for the number of ratings provided, the number of ratings explained, and the number of comments posted during discussion. None of the rating questions were required, and participants were only encouraged to explain their responses and post comments during R2 discussions. We used a conservative approach and defined participants to be actively engaged if they answered at least 90% of the ratings questions in both rating rounds, explained at least 90% of their ratings in either R1 or R3, and commented at least twice during the discussion round.
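
The active engagement definition reduces to a simple rule that can be applied to per-participant activity counts. The sketch below assumes hypothetical counts of rating questions answered and explained in Rounds One and Three and of comments posted in Round Two; all variable names and numbers are illustrative.

```python
import pandas as pd

# Hypothetical per-participant activity counts (names and numbers are illustrative).
activity = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "r1_answered":    [50, 48, 30],
    "r3_answered":    [50, 44, 28],
    "r1_explained":   [46, 20, 10],
    "r3_explained":   [48, 22, 12],
    "r2_comments":    [3, 2, 1],
    "n_rating_items": [50, 50, 50],   # total rating questions per round
})

def actively_engaged(row) -> bool:
    """Apply the study's conservative definition of active engagement."""
    n = row["n_rating_items"]
    answered_both_rounds = (row["r1_answered"] >= 0.9 * n) and (row["r3_answered"] >= 0.9 * n)
    explained_either_round = (row["r1_explained"] >= 0.9 * n) or (row["r3_explained"] >= 0.9 * n)
    commented_twice = row["r2_comments"] >= 2
    return answered_both_rounds and explained_either_round and commented_twice

activity["actively_engaged"] = activity.apply(actively_engaged, axis=1)
print(activity[["participant_id", "actively_engaged"]])
```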

Analyses

To identify the factors associated with participants’ OMD experiences and willingness to use OMD in the future, we conducted a series of multivariate regressions that controlled for gender, participant status (patient/caregiver vs professional), panel composition (homogeneous (ie, patient/caregiver-only or clinician-only) vs mixed), and perceived ease of use of the OMD system. In the models identifying factors associated with a dichotomous measure of active participant engagement, we used logistic regression. In the models identifying factors associated with study participation and online discussion experience indices, as well as willingness to use OMD again (our main outcome of interest), we used ordinary least squares regression and also controlled for active participant engagement. We also conducted a sensitivity analysis by individually including the 3 components of the active participant engagement variable.
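
These model specifications map directly onto standard regression calls. As a minimal sketch using the statsmodels formula interface, the snippet below fits the logistic model for active engagement and the ordinary least squares model for willingness to use OMD again with the controls named above; the simulated data frame and its column names are hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per participant (simulated purely for illustration).
rng = np.random.default_rng(0)
n = 292
df = pd.DataFrame({
    "female":        rng.integers(0, 2, n),
    "patient":       rng.integers(0, 2, n),   # patient/caregiver vs professional
    "mixed_panel":   rng.integers(0, 2, n),
    "ease_of_use":   rng.integers(1, 8, n),   # 7-point scale
    "engaged":       rng.integers(0, 2, n),
    "participation": rng.uniform(1, 7, n),    # study participation experience index
    "discussion":    rng.uniform(1, 7, n),    # online discussion experience index
    "willing_again": rng.uniform(1, 7, n),
})

# Logistic regression: active engagement as a function of gender, stakeholder type,
# panel composition, and perceived ease of use.
logit = smf.logit("engaged ~ female + patient + mixed_panel + ease_of_use", data=df).fit(disp=False)
print(logit.summary())

# OLS: willingness to use OMD again, additionally controlling for active engagement
# and the two experience indices.
ols = smf.ols(
    "willing_again ~ engaged + participation + discussion + female + patient + mixed_panel + ease_of_use",
    data=df,
).fit()
print(ols.params)
```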

RESULTS

Of the 349 panelists, 292 (84%) completed the participant experience surveys. Of these 292 participants, 133 (46%) were patients/caregivers, 104 (36%) were clinicians, and 55 (19%) were researchers (Table 1). The majority of participants were female (60%). The distribution of participants across the 3 conditions was fairly even (weight management/obesity, 35%; heart failure, 29%; and KD, 36%), as was the composition of the panel (homogeneous, 52%; mixed, 48%). Most participants had never participated in a prior expert panel (65%) but had participated in an online survey (84%). (These variables were not associated with any dependent variables and were excluded from our models.) Patients and professionals differed significantly by gender, race/ethnicity, level of education, and condition type.

Table 1.

Characteristics of study participants by stakeholder type

Participant characteristics Total (N = 292)a, N (%) Patients/caregivers (N = 133), N (%) Professionals (N = 159), N (%) P-value
Gender
 Female 160 (60) 83 (68) 77 (54) .02
Race
 White 164 (66) 88 (78) 76 (57) .02
 Black 13 (5) 8 (7) 5 (4)
 Asian 56 (23) 11 (10) 45 (34)
 Other 14 (5) 6 (6) 8 (6)
Hispanic origin
 Yes 30 (10) 18 (14) 12 (8) .09
Highest level of education
 Up to high school 5 (2) 4 (3) 1 (1) <.001
 High school/technical school graduate 4 (2) 4 (3) 0
 Some college or 2-year degree 33 (12) 33 (26) 0
 4-year college degree 42 (16) 35 (28) 7 (5)
 Graduate or professional degree 182 (68) 48 (38) 134 (94)
 Prefer not to answer 2 (1) 2 (2) 0
Panel type
 Mixed 141 (48) 41 (31) 100 (63) <.001
Condition
 Weight management/obesity 101 (35) 37 (28) 64 (40) <.001
 Heart failure 86 (29) 30 (23) 56 (35)
 Kawasaki disease 105 (36) 66 (50) 39 (25)
Participated in prior expert panel
 Yes 93 (35) 22 (17) 71 (50) <.001
Participated in prior online survey
 Yes 226 (84) 87 (70) 139 (97) <.001

Note: aSome variables contain missing values, so the total across categories may not add up to 292.

Results in Table 2 show that participants were willing to use OMD in the future (M = 5.31, SD = 1.31). Roughly half (51%) of participants were actively engaged in the OMD process. Most participants who did not meet our active engagement criteria provided explanations for <90% of their rating responses. Participants reported positive online discussion experiences (M = 4.92 on the online discussion index; SD = 0.67) and neutral study participation experiences (M = 4.33 on the study participation index; SD = 1.13). Moreover, 8 of the 13 experience statements displayed favorable results. “I was comfortable expressing my views,” “The ExpertLens system was easy to use,” and “The discussions gave me a better understanding of the issues” had the most favorable mean responses (means of 5.91, 5.39, and 5.37, respectively). Most participants disagreed with the statement “Participation in this study was frustrating” (M = 3.04).

Patients/caregivers were not significantly more actively engaged than professionals (OR = 1.69; 95% CI, 0.94-3.05), after controlling for gender, panel composition, and the OMD system’s ease of use (Table 3). Female participants were less likely and mixed-stakeholder panel participants were more likely to be actively engaged, although these differences were not statistically significant (P = .11 and P = .80, respectively).

Table 3.

Results from logistic multivariate model of active participant engagement

Model variables β Coefficient Odds ratio (95% CI) P-value
Intercept 0.02
Gender: female –0.46 0.63 (0.36, 1.11) .11
Patient/caregiver: yes 0.53 1.69 (0.94, 3.05) .08
Panel type: mixed 0.08 1.08 (0.61, 1.92) .80
OMD system’s ease of use 0.06 1.06 (0.88, 1.29) .52

Patients/caregivers reported somewhat better experiences with study participation and online discussion than clinicians and researchers combined, controlling for gender, panel composition, active engagement, and the OMD system’s ease of use (Table 4). Although statistically significant, these differences were small. For instance, patients’/caregivers’ ratings of their study participation experiences were approximately half a point higher (on a 7-point scale) than those of professional stakeholders. Perception of the OMD system’s ease of use was positively associated with better study participation and online discussion experiences. Active engagement had no significant impact on participant experiences with the online process. Our sensitivity analysis of adding the individual engagement components did not produce different results from our main analysis.

Table 4.

Results from linear multivariate models predicting 2 indices measuring experiences with the online process

Model variables Study participation (β coefficient, P-value) Online discussion (β coefficient, P-value)
Intercept 2.31, <.001 3.90, <.001
Active engagement −0.04, .79 0.04, .65
Gender: female 0.15, .29 0.11, .17
Patient/caregiver: yes 0.49, <.001 0.18, .04
Panel type: mixed 0.11, .43 0.07, .42
OMD system’s ease of use 0.31, <.001 0.15, <.001

Finally, participants with more favorable online discussion and study participation experiences and those who felt that the OMD system was easy to use were significantly more willing to participate in future panels (Table 5). Compared to researchers and clinicians, patients/caregivers were also more willing to participate in OMD again (P = .02). Active participant engagement was not a significant predictor of the willingness to use OMD in the future.

Table 5.

Results from linear multivariate model predicting willingness to use online modified-Delphi again

Model variables β coefficient P-value
Intercept 0.18
Active engagement 0.11 .43
Study participation 0.26 <.001
Online discussion 0.57 <.001
Gender: female 0.15 .29
Patient/caregiver: yes 0.36 .02
Panel type: mixed −0.10 .49
OMD system’s ease of use 0.16 .002

DISCUSSION

OMD approaches have the potential to facilitate engagement of large and diverse groups of stakeholders in identifying research priorities. In addition to professionals, stakeholders can include patients and their caregivers, who can better ensure that research priorities are patient-centered. Using data from 8 panels that prioritized patient-centered research topics, we explored patients’/caregivers’, clinicians’, and researchers’ experiences and engagement with OMD. Participants were willing to use OMD in the future, felt that the OMD system was easy to use, had positive online discussion experiences, and had neutral opinions about their study participation. However, only half of participants were actively engaged in the OMD process. Although patients/caregivers were not more actively engaged than professional stakeholders (ie, clinicians and researchers), they had better experiences and were more willing to use OMD again. Positive perceptions of the OMD system’s ease of use, as well as favorable study participation and online discussion experiences, were associated with participants’ willingness to use OMD in the future. We note, however, that the effect sizes in our regression models were generally modest.

Despite the modest effect sizes, many of our findings signal the potential of high-tech engagement approaches. First, the ability to engage large numbers of patients/caregivers and professional stakeholders makes OMD a promising and scalable engagement approach for research prioritization purposes. While the better experiences among patients/caregivers may reflect the novelty of research participation and/or of the online system, future research should identify which aspects of high-tech and high-touch approaches best promote engagement.

Second, participants generally reported positive online discussion experiences. This result is particularly important, because online discussions have replaced in-person meetings, a core component of interactive and deliberative high-touch engagement approaches.8 The Delphi method is based on the premise that quantitative feedback about consensus and discussion, whether in person or online, can help participants learn about the perspectives of other participants, clarify their own position, and revise original responses in light of new information.15 Both patients/caregivers and professionals agreed that they were comfortable expressing their views in the discussion round, which suggests that the online nature of discussions did not prevent them from engaging with other stakeholders.

Third, panel composition was not associated with participant engagement, experiences, or willingness to use OMD again, which could be explained by participant anonymity. Stakeholders did not know who the other participants in their panels were. This result suggests that (anonymous) mixed-stakeholder panels do not negatively affect the patient/caregiver or clinician/researcher experiences. While we still do not know empirically whether it is better to convene homogeneous or mixed panels, these results are promising for those who wish to include patients/caregivers and professional stakeholders in the same panel to avoid the need for developing approaches for combining the input of independently administered homogeneous panels.

Finally, while perception of the OMD system’s ease of use as well as study participation and online discussion experiences were associated with willingness to use OMD in the future, active participant engagement in the process was not a significant factor. This result highlights the need to ensure positive participant experiences with online engagement approaches. Those wishing to use online panels should balance the time and effort needed to fully engage panelists against keeping the participation burden manageable, in order to maintain positive experiences with the OMD system, the study itself, and the online discussion board. As iterative Delphi approaches require more time and effort from participants than their noniterative counterparts, particular attention should be paid to the length of data collection protocols, the wording of questions, and the facilitation of online discussions. Discussion moderators should create an environment that encourages free exchange of ideas, expression of diverse perspectives, and productive debate.34

We note several limitations of our study. First, our results may not be generalizable to the experiences of stakeholders dealing with other health conditions, performing other engagement tasks, or using other online platforms. Second, although large, our convenience sample is not representative of all patients, caregivers, researchers, and clinicians. Third, our measures of participant experiences and active engagement were not formally validated. Additional research should use factor analysis to identify the dimensions underlying participant experiences in online panels and develop indices with better alpha levels. Finally, we do not have similar data from in-person panels; future studies should directly compare experiences and engagement in online and in-person panels.

Notwithstanding these limitations, future work, especially in the context of PCORnet, could focus on integrating OMD processes with information in clinical trial management systems and disease registries. Doing so would enable efficient integration of OMD with multisite prep-to-research processes, helping to ensure that selected instruments and outcomes accurately reflect stakeholder preferences. It would also enable patient populations, providers, and researchers specializing in those populations to rapidly assemble large panels early and often. Eligible patients could access OMD systems to participate in data-collection efforts initiated by their provider organizations through electronic medical records. Furthermore, the most engaged participants, as identified by our metrics, could be consulted for more direct involvement in other aspects of the research process. More direct integration of OMD with existing research information technology platforms has the potential to improve the practice of patient-centered outcomes research.

In summary, this study illustrates the promise of using high-tech engagement approaches, such as OMD, for prioritizing patient-centered research topics and compares patients’/caregivers’ and professional stakeholders’ engagement experiences. Online approaches can allow a large number of diverse stakeholders located in different parts of the country to engage at a time convenient to them, and patients appear to have more positive experiences with this approach than professionals. Those involved in developing patient-centered research priorities should consider interactive online approaches as an adjunct to high-touch approaches.

ACKNOWLEDGMENTS

The authors would like to thank all study participants for their time and effort.

FUNDING

This work was supported through a PCORI Program Award (CDRN-1306-04819, principal investigator Lucila Ohno-Machado). All statements in this article, including its findings and conclusions, are solely those of the authors and do not necessarily represent the views of PCORI, its Board of Governors, or its Methodology Committee.

COMPETING INTERESTS

DK, SG, MB, and NP-S are members of the ExpertLens team.

SG’s wife is a salaried employee of Eli Lilly and Company, and owns stock. SG has accompanied his wife on company-sponsored travel.

All other authors declare that they have no competing interests.

CONTRIBUTORS

DK led the writing process, codesigned the study, and supervised data analysis. SG helped design the study, was responsible for administering the panels, and commented on the manuscript. DM helped design the study and contributed to the Discussion section of the manuscript. MB conducted statistical analyses and contributed to the Methods and Results sections. NPS helped administer the panels, helped clean the data, and commented on the manuscript. KKK codesigned the study, helped write the Methods section, and commented on the manuscript. All co-authors reviewed and approved the manuscript.

REFERENCES

1. Domecq J, Prutsky G, Elraiyah T, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14(1):89.
2. Concannon TW, Meissner P, Grunbaum JA, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27(8):985–91.
3. Boote J, Baird W, Beecroft C. Public involvement at the design stage of primary health research: a narrative review of case examples. Health Policy. 2010;95(1):10–23.
4. Forsythe LP, Ellis LE, Edmundson L, et al. Patient and stakeholder engagement in the PCORI pilot projects: description and lessons learned. J Gen Intern Med. 2016;31(1):13–21.
5. Esmail L, Moore E, Rein A. Evaluating patient and stakeholder engagement in research: moving from theory to practice. J Comp Eff Res. 2015;4(2):133–45.
6. O’Haire C, McPheeters M, Nakamoto EK, LaBrant L, Most C, Lee K, Graham E, Cottrell E, Guise J-M. Methods for Engaging Stakeholders To Identify and Prioritize Future Research Needs. Methods Future Research Needs Report No. 4. (Prepared by the Oregon Evidence-based Practice Center and the Vanderbilt Evidence-based Practice Center under Contract No. 290-2007-10057-I.) AHRQ Publication No. 11-EHC044-EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2011. www.effectivehealthcare.ahrq.gov/reports/final.cfm.
7. Carman KL, Maurer M, Mangrum R, et al. Understanding an informed public’s views on the role of evidence in making health care decisions. Health Affairs. 2016;35(4):566–74.
8. Deverka PA, Lavallee DC, Desai PJ, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res. 2012;1(2):181–94.
9. Lavallee DC, Wicks P, Alfonso Cristancho R, Mullins CD. Stakeholder engagement in patient-centered outcomes research: high-touch or high-tech? Expert Rev Pharmacoecon Outcomes Res. 2014;14(3):335–44.
10. Stewart RJ, Caird J, Oliver K, Oliver S. Patients’ and clinicians’ research priorities. Health Expect. 2011;14(4):439–48.
11. Mallery C, Ganachari D, Fernandez J, et al. Innovative Methods in Stakeholder Engagement: An Environmental Scan. Rockville, MD: Agency for Healthcare Research and Quality; 2012.
12. Oostendorp LJ, Durand M-A, Lloyd A, Elwyn G. Measuring organisational readiness for patient engagement (MORE): an international online Delphi consensus study. BMC Health Serv Res. 2015;15(1):1–13.
13. Claassen CA, Pearson JL, Khodyakov D, et al. Reducing the burden of suicide in the U.S.: the aspirational research goals of the National Action Alliance for Suicide Prevention Research Prioritization Task Force. Am J Prev Med. 2014;47(3):309–14.
14. Khodyakov D, Savitsky TD, Dalal S. Collaborative learning framework for online stakeholder engagement. Health Expect. 2016;19(4):868–82.
15. Dalal SR, Khodyakov D, Srinivasan R, Straus SG, Adams J. ExpertLens: a system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge. Technol Forecast Soc Change. 2011;78(8):1426–44.
16. Campbell KA, Olson LM, Keenan HT. Critical elements in the medical evaluation of suspected child physical abuse. Pediatrics. 2015;136(1):35–43.
17. Culley JM. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model. Comput Inform Nurs. 2011;29(5):272.
18. Cam KM, McKnight PE, Doctor JN. The Delphi method online: medical expert consensus via the Internet. Proceedings of the AMIA Symposium. Washington, DC: AMIA; 2002.
19. Rubenstein L, Khodyakov D, Hempel S, et al. How can we recognize continuous quality improvement? Int J Qual Health Care. 2014;26(1):6–15.
20. Khodyakov D, Hempel S, Rubenstein L, et al. Conducting online expert panels: a feasibility and experimental replicability study. BMC Med Res Methodol. 2011;11(1):174.
21. Barber C, Marshall D, Alvarez N, et al. Development of cardiovascular quality indicators for rheumatoid arthritis: results from an international expert panel using a novel online process. J Rheumatol. 2015;42(9):1548–55.
22. Barber C, Marshall D, Mosher D, et al. Development of system-level performance measures for evaluation of models of care for inflammatory arthritis in Canada. J Rheumatol. 2016;43(3):530–40.
23. Barber C, Patel JN, Woodhouse L, et al. Development of key performance indicators to evaluate centralized intake for patients with osteoarthritis and rheumatoid arthritis. Arthritis Res Ther. 2015;17(322):1–12.
24. Khodyakov D, Grant S, Barber CEH, Marshall D, Esdaile JM, Lacaille D. Acceptability of an online modified-Delphi panel approach for developing health services performance measures: results from three panels on arthritis research. J Eval Clin Pract. 2016. doi:10.1111/jep.12623.
25. Khodyakov D, Mikesell L, Schraiber R, Booth M, Bromley E. On using ethical principles of community-engaged research in translational science. Transl Res. 2016;171:52–62.
26. Khodyakov D, Stockdale S, Smith N, Booth M, Altman L, Rubenstein L. Patient engagement in the process of planning and designing outpatient care improvements at the Veterans Administration healthcare system: findings from an online expert panel. Health Expect. 2016. doi:10.1111/hex.12444.
27. Ohno-Machado L, Agha Z, Bell DS, et al. pSCANNER: patient-centered Scalable National Network for Effectiveness Research. J Am Med Inform Assoc. 2014;21(4):621–26.
28. Timbie JW, Rudin RS, Towe V, et al. National Patient-Centered Clinical Research Network (PCORnet) Phase I. Santa Monica, CA: RAND; 2015.
29. Deverka PA, Lavallee DC, Desai PJ, et al. Facilitating comparative effectiveness research in cancer genomics: evaluating stakeholder perceptions of the engagement process. J Comp Eff Res. 2012;1(4):359–70.
30. Lavallee DC, Williams CJ, Tambor ES, Deverka PA. Stakeholder engagement in comparative effectiveness research: how will we measure success? J Comp Eff Res. 2012;1(5):397–407.
31. Olaniran BA. A model of group satisfaction in computer-mediated communication and face-to-face meetings. Behav Inf Technol. 1996;15(1):24–36.
32. Bailey JE, Pearson SW. Development of a tool for measuring and analyzing computer user satisfaction. Manag Sci. 1983;29(5):530–45.
33. Hiltz SR, Johnson K. User satisfaction with computer-mediated communication systems. Manag Sci. 1990;36(6):739–64.
34. Wright S, Street J. Democracy, deliberation and design: the case of online discussion forums. New Media Soc. 2007;9(5):849–69.
