Journal of Clinical and Translational Science. 2024 Feb 6;8(1):e40. doi: 10.1017/cts.2024.19

Empowering the Participant Voice (EPV): Design and implementation of collaborative infrastructure to collect research participant experience feedback at scale

Rhonda G Kost 1, Alex Cheng 2, Joseph Andrews 3, Ranee Chatterjee 4, Ann Dozier 5, Daniel Ford 6, Natalie Schlesinger 1, Carrie Dykes 7, Issis Kelly-Pumarol 3, Nan Kennedy 8, Cassie Lewis-Land 6, Sierra Lindo 9, Liz Martinez 6, Michael Musty 9, Jamie Roberts 10, Roger Vaughan 1, Lynne Wagenknecht 3, Scott Carey 6, Cameron Coffran 1, James Goodrich 11, Pavithra Panjala 7, Sameer Cheema 11, Adam Qureshi 1, Ellis Thomas 2, Lindsay O’Neill 2, Eva Bascompte-Moragas 2, Paul Harris 2
PMCID: PMC10928700  PMID: 38476242

Abstract

Empowering the Participant Voice (EPV) is an NCATS-funded six-CTSA collaboration to develop, demonstrate, and disseminate a low-cost infrastructure for collecting timely feedback from research participants, fostering trust, and providing data for improving clinical translational research. EPV leverages the validated Research Participant Perception Survey (RPPS) and the popular REDCap electronic data-capture platform. This report describes the development of infrastructure designed to overcome identified institutional barriers to routinely collecting participant feedback using RPPS and demonstration use cases. Sites engaged local stakeholders iteratively, incorporating feedback about anticipated value and potential concerns into project design. The team defined common standards and operations, developed software, and produced a detailed planning and implementation Guide. By May 2023, 2,575 participants diverse in age, race, ethnicity, and sex had responded to approximately 13,850 survey invitations (18.6%); 29% of responses included free-text comments. EPV infrastructure enabled sites to routinely access local and multi-site research participant experience data on an interactive analytics dashboard. The EPV learning collaborative continues to test initiatives to improve survey reach and optimize infrastructure and process. Broad uptake of EPV will expand the evidence base, enable hypothesis generation, and drive research-on-research locally and nationally to enhance the clinical research enterprise.

Keywords: Clinical translational science, evaluation, participant feedback, participant experience, participant-centered, patient experience

Introduction

Understanding the perceptions and experiences of study participants can help research teams improve recruitment, informed consent, diversity, retention, and other challenging aspects of clinical translational research and drive meaningful improvements for participants [1–3]. Participant input is essential in assessing whether the informed consent process is effective, whether communications are respectful and culturally sensitive, whether unaddressed language barriers exist, and what factors drive participants to leave studies prematurely or decline to join future studies. Whether participants feel valued is measurable and is highly correlated with their views of their research experiences [4,5].

The Association for Accreditation of Human Research Protection Programs (AAHRPP) [6] requires that organizations have policies to measure and improve the quality and effectiveness of their Human Research Protection Program. A recent study of accredited institutions found that few employed measures of participant-centered outcomes, e.g., the effectiveness of consent, in assessing the quality of their programs [7]. The Consortium to Advance Ethics Review Oversight recently issued recommendations, including prioritizing assessments directly related to participant protection outcomes (e.g., quality of the informed consent process and understanding,…and overall participant experience in research) [7]. However, few institutions regularly collect research participant experience data in ways that can be compiled or compared [8], thereby neglecting a valuable opportunity to engage participants at scale as partners in the research process.

Applying a scientific approach to measuring and responding to participant experiences requires robust tools, an evidence base, engagement of stakeholders, hypothesis testing, representative sampling, and evaluation of measurable impact. Participant feedback, collected with appropriate standards and privacy protections, can be studied longitudinally and used for comparisons across studies, departments, and institutions to identify better practices. The Research Participant Perception Survey (RPPS), designed with extensive participant input, asks participants about aspects of their research experience, including respect, partnership, informed consent, trust, feeling valued, overall experience, and others. The RPPS measures are participant-centered and statistically reliable, as demonstrated through psychometric analyses and multiple fieldings [3–5,9–11].

Over the past decade, Rockefeller University, the NIH Clinical Research Center, and Johns Hopkins University have used the RPPS to collect participant feedback to enhance research conduct. RPPS response rates range from 20% to 65% [5,9,11]. Requests to use the RPPS indicate robust interest, but institutional barriers have limited the survey’s uptake. Of attendees polled at a 2019 Trial Innovation Network webinar [12], 70% said having real-time feedback from participants about research participation would be valuable, and 70% reported no program at their institution to collect participant feedback. Attendees felt uncertain about selecting the right survey (35%) and worried that the effort or cost would be too high (51%) or results would not be timely (21%). Fewer than 10% wanted to invent a survey, and most agreed that access to a short, validated survey (65%), integrated analysis tools (55%), mobile-friendly app (65%), and low-cost/free infrastructure (60%) would facilitate collecting timely feedback. Sites asked how RPPS users could benchmark with peer institutions [12].

To address these challenges, in 2020, the Rockefeller University led the creation of a consortium with Duke University, Wake Forest Health Sciences University, Johns Hopkins University, University of Rochester, and Vanderbilt University Medical Center (VUMC) to obtain NCATS funding for the Empowering the Participant Voice (EPV) project. The EPV project leverages the validated Research Participant Perception Survey (RPPS) [5] as its core instrument and REDCap [13], a widely-used data-capture platform designed for clinical and translational research, to support a collaborative survey and data management system. This report describes methods and progress in fulfilling the first two EPV-specific aims: the development of effective, low-cost infrastructure to collect participant experience data using RPPS and REDCap and its implementation in demonstration projects at collaborating sites.

Developing the infrastructure

Design principles and values

The EPV initiative embraced explicit principles and values: engaging institutional and community stakeholders throughout the project, building a learning collaborative, respecting institutional autonomy and priorities, minimizing selection bias, aiming for actionability, designing for ease of use, evaluation, and broad dissemination. The sites agreed to use a common core of RPPS questions to maintain survey validity and comparability. Each site had autonomy over other details of local survey implementation (use case), custom questions and variables, local findings, and action plans.

Interdisciplinary EPV RPPS/REDCap team

The EPV principal investigator (PI) and site PIs formed the RPPS Steering Committee (RSC) to define the core questions, variables, common data elements to be collected, processes, standards, and dashboard design requirements to achieve the project aims. Each site secured leadership buy-in and assembled an interdisciplinary project team, including investigators and staff with expertise in translational research, participant engagement, research informatics, REDCap software deployment, and the RPPS tool. Project managers, the technical lead, software developers, and others joined the RSC. The results of the RSC’s operational and administrative planning and of technical setup and implementation planning formed the first draft of the Implementation Guide.

Input from key stakeholders

EPV directed sites to engage stakeholders throughout the project, to understand expectations and concerns, and to develop the local use case and implementation plan. The engagement of institutional stakeholders was vital to overcoming technical, regulatory, resource, and social challenges within the institutions. Site teams used town halls, grand rounds, and small groups to engage investigators, research leadership, regulatory professionals, patients/participants/advocates, community members, and others. Standing community advisory boards and patient/faculty advisory committees were engaged to leverage existing institutional support and integrate participant feedback into ongoing operations [3–5,9].

During early planning, sites reported that stakeholders said RPPS feedback would be valuable to build participant trust, evaluate the effectiveness of informed consent, tailor approaches for specific groups or protocols, and improve the experiences of underrepresented groups. The data would enable the identification of high and low-performing teams and best practices, establish benchmarks, and build a participant-centered evidence base for improving research processes.

The RSC developed a Longitudinal Stakeholder Tracking survey to capture the themes of discussions with local stakeholders (Supplemental Appendix A). Through May of 2023, sites reported 96 meetings attended by various combinations of local stakeholders identified by their roles. Personal demographics were not collected. Meetings included institutional leadership (43% of meetings), IRB/Privacy professionals (30%), Investigators (56%), research coordinators/managers (47%), community members (15%), research participants/patients (8%), community partners/liaisons (3%), and others (26%). One or more community/participant/patient stakeholders were present at 23% of meetings. Stakeholder meetings ranged in size: 1–10 stakeholders (65%), 11–15 (18%), 26–50 (12%), and more than 50 (4%). A summary of the themes of discussions with stakeholders and related actions and impacts is provided in Table 1. Feedback from stakeholders is ongoing.

Table 1.

Feedback was provided by stakeholders engaged locally at participating sites throughout the design and implementation phases of the project

1. During Project Design Phase: Early Engagement*
Stakeholder themes | Action | Impact
Anticipated Value:
Assess current research participation experience overall and for underrepresented groups | Design At-a-Glance Dashboard, built-in scoring, filter by participant characteristics | Able to analyze/act on data for groups, including those affected by disparities; opportunity for transparency and trust building
Benchmark internally and with other CTSAs | Aggregate data to EPV Consortium Dashboard | Evidence-based benchmarks, public-facing
Identify and sustain high scorers | Local analysis/actions | Evidence for best practices
Identify opportunities for enterprise-wide innovations | Consortium Dashboard and Learning Collaborative | Shared evidence base and use cases, opportunities to conduct multi-site clinical translational science
Measure pre/post innovation to assess the impact | Dashboard views of data over time, custom reports | Clinical translational science, accountability to stakeholders
Participants feel that their concerns matter C | Communicate before and after the survey fielding | Charge sites to return meaningful results
Participants can compare their experience to others’ C | Return results publicly | Charge sites to return meaningful results
Concerns:
Will groups engage? | Engage early, manage fears, leverage community engagement and outreach expertise at site | Imperfect sampling still provides valuable information
How to prioritize findings? | Develop performance improvement workflow with stakeholders | Local autonomy
Will benchmarks compare apples to apples? | Standards optimize comparability | Validated tools, adherence to standards, filters
Risk of negative scores, reputational harm to the investigator or to the institution | Local governance & data-sharing decisions; Data Use Agreement | Experiences are real even if unmeasured; better to know
Teams might perceive scores as punitive | Constructive performance improvement models | Be able to share use cases, best practices
Are the questions relevant to participants? C | Core questions from validated participant-centered research; free-text fields for additional input | Sites may add custom questions; free-text fields retained; some sites include links to a formal complaint workflow
The response might damage the relationship with the research team C | Communicate privacy protections early and often | Dashboard design suppresses results in any cell with <5 responses that could risk re-identification of an individual
Lack of transparency and accountability for results and actions taken C | Communicate plan to return results; share results and actions; engage stakeholders in analysis and action | Sites develop public-facing websites for return of results and workflows for performance improvement
Potential for tokenism C | Engage community and trusted proxies; be accountable | Public return-of-results pages; aim for transparency

Stakeholders included institutional and community members, such as institutional and research leadership, investigators and faculty, privacy/IRB staff, research coordinators and nurses, patients, research participants, community representatives and liaisons, and others. Comments and themes were not linked to specific individuals during reporting.

C Themes raised at an engagement meeting that included one or more community members/advocates/research participants/patients/patient representatives. Community meetings ranged in attendance from 5 to >50.

* Attendee roles and affiliations were not tabulated at stakeholder meetings held in the first 6 months of the project.

Developing use cases

In planning for project implementation, sites weighed critical operational choices: whether surveys would be fielded across all or most studies at the institution (enterprise-wide fielding) or study-by-study (study-level fielding); frequency of surveying; selection of the sample as a census of all eligible participants, a random sample, or a targeted group; the timing of the survey relative to an individual’s study participation, e.g., shortly after signing consent (post-consent), after completing participation (end-of-study), more than once for long studies (annual), or at an undefined time (unspecified); the survey distribution platform (email, portal, or text [SMS]); project team membership; how to optimize data extraction from local systems; which additional local variables to track, e.g., department codes or study identifiers for analyzing study-level data; whether to return results to investigators; and whether to add custom questions. RSC members discussed the pros and cons and distilled their conclusions into Key Considerations for sites adopting EPV infrastructure in the EPV Implementation Guide. Each site formulated its site-specific use case reflecting those considerations, in alignment with regulatory and institutional policies and local initiatives.
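These operational choices amount to a site-level configuration. The sketch below is a hypothetical illustration only (the class and field names are ours, not part of the EPV data dictionary) of how a site team might record its use case for planning purposes; the example values loosely mirror Site B in Table 2.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical planning record for a site use case; not the EPV data dictionary.
@dataclass
class SiteUseCase:
    scope: str                  # "enterprise-wide" or "study-level"
    sampling: str               # "census", "random", or "targeted"
    timing: List[str]           # e.g., ["post-consent", "end-of-study", "annual", "unspecified"]
    frequency: str              # how often surveys are fielded
    platforms: List[str]        # e.g., ["email", "portal", "SMS"]
    local_variables: List[str] = field(default_factory=list)   # e.g., department codes
    custom_questions: List[str] = field(default_factory=list)
    return_results_to_investigators: bool = False

# Example resembling Site B in Table 2 (enterprise fielding of RCTs, random sample of 500).
site_b = SiteUseCase(
    scope="enterprise-wide (RCTs only)",
    sampling="random (500 per fielding)",
    timing=["end-of-study"],
    frequency="every 6 months",
    platforms=["email", "SMS"],
)
print(site_b)
```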

Designing infrastructure and process

An overall schematic for implementing the EPV infrastructure is shown in Supplemental Figure S1.

Data and standards

The EPV/RPPS infrastructure, hosted on the REDCap platform, was developed through close collaboration between the RSC and technical team (VUMC), mindful that the capacity to benchmark would require the ability to compare “apples to apples.” English and Spanish versions of the RPPS-Short [11] survey, with updates to gender, ethnicity, and remote consent questions, formed the core survey (Supplemental Appendix B). The team assigned project variables to the survey questions, encoded definitions for describing survey scope, cohort sampling, and the timing of the survey during study participation, and defined participant and study descriptors. Participant descriptors include the stage of study participation and email address (used to determine eligibility and send a survey), research study code, and demographics (age, sex, gender, race, and ethnicity). Study descriptors include disease domain (MeSH code) and optional locally defined variables (e.g., department) for local tracking. These descriptors are linked to the participant’s anonymized survey record, and provide filters for the data analysis, including characterizing non-responders. Formulas for response and completion rates were defined: surveys with at least one question answered were classified as either complete (>80% of core questions answered), partial (50%–80% answered), or break-off (<50% of core questions answered) responses [14]. These decisions were encoded into data collection tools, software, and dashboards as described in the Implementation Guide.
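As a minimal sketch of the completion definitions above (assuming, since the text does not specify, that boundary cases at exactly 50% and 80% count as partial), a classification function might look like this:

```python
def classify_response(answered: int, total_core: int) -> str:
    """Classify a returned RPPS survey using the completion definitions above:
    complete (>80% of core questions answered), partial (50%-80%),
    break-off (<50%). Assumes exact 50% and 80% boundaries count as partial."""
    if answered == 0:
        return "non-response"          # no questions answered
    fraction = answered / total_core
    if fraction > 0.80:
        return "complete"
    if fraction >= 0.50:
        return "partial"
    return "break-off"

# Example: 12 of 14 core questions answered -> "complete"
print(classify_response(12, 14))
```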

Flow of data

Participant and study descriptors are extracted by site programmers from institutional databases and uploaded into the local REDCap project. Local data practices govern how identifiers are removed before sharing data with teams. Using the REDCap survey function, sites send invitations and personalized survey links to participants via email, patient portals, or SMS accounts with a locally customized message. Survey responses populate the site’s local REDCap project database in real time. De-identified local project data syncs nightly to the Data Coordinating Center (DCC) and is aggregated in the EPV Consortium database and dashboard. A Reciprocal Data Use Agreement, developed using the Federal Demonstration Partnership Collaborative Data Transfer and Use Agreement template [15], governs data transfer and use between the sites and DCC. Locally defined variables and participant comments are not aggregated. The flow of data from site data sources into the local EPV/REDCap project is illustrated in Supplemental Figure S2. The technical installation details of the software are found in the EPV Implementation Guide.
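For sites that want to pull their local survey data for ad hoc analysis outside the nightly sync, the standard REDCap “Export Records” API can be used. This is a minimal sketch, not the EPV sync module itself; the URL and token are placeholders, and local data practices determine which fields may be exported.

```python
import requests

# Placeholders: substitute your institution's REDCap API URL and project API token.
REDCAP_URL = "https://redcap.example.edu/api/"
API_TOKEN = "YOUR_PROJECT_API_TOKEN"

payload = {
    "token": API_TOKEN,
    "content": "record",   # export records from the local EPV/RPPS REDCap project
    "format": "json",
    "type": "flat",
}

response = requests.post(REDCAP_URL, data=payload, timeout=60)
response.raise_for_status()
records = response.json()
print(f"Exported {len(records)} records from the local REDCap project")
```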

Developing the At-a-Glance Dashboard external module

The EPV collaborators and technical team designed the dashboard to facilitate rapid analysis and visualization of data, including response and completion rates, and calculated scores for survey results. Dashboard design was iteratively influenced by feedback from stakeholders regarding ease of use, clarity, and analytics.

Survey responses are analyzed using Top-Box scores (percent with the optimal answer) [9] and displayed in total or filtered data columns in the dashboard. A difference of 10 percentage points or more between a filtered column and the total score can be used informally as the minimum important difference generally worthy of attention or action (Supplemental Appendix C). Formal statistical analyses of local dashboard data can be pursued by downloading de-identified data and any local variables from the REDCap database and using third-party software (e.g., SAS, STATA, R) to conduct analyses of interest. Complete descriptive data can be viewed using standard REDCap report views. Sites have access to their own data on their local dashboard and to the EPV Consortium dashboard displaying results from all contributing sites in aggregate or filtered by site (blinded) for benchmarking. The technical team at VUMC built and maintained the Dashboard, releasing enhanced versions in collaboration with the RSC. The dashboard has many features to streamline analyses (Fig. 1). A Dashboard demonstration video showcases the analytic and filtering features of the Dashboard [16]; a hands-on test dashboard is available to the public.
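To illustrate the scoring and filtering logic, the sketch below (our own illustration, not the dashboard’s code) computes Top-Box scores for a question, suppresses any filtered cell with fewer than 5 responses (the privacy rule noted in Table 1), and flags groups that differ from the total by 10 or more percentage points; the question name and sample data are hypothetical.

```python
import pandas as pd

def top_box(df, question, optimal, group_col=None, min_cell=5, flag_points=10):
    """Top-Box score = percent of respondents giving the optimal answer."""
    total = 100 * (df[question] == optimal).mean()
    rows = [{"group": "Total", "n": len(df), "score": round(total, 1), "flag": ""}]
    if group_col:
        for name, grp in df.groupby(group_col):
            if len(grp) < min_cell:   # suppress small cells (<5 responses)
                rows.append({"group": name, "n": len(grp), "score": None, "flag": "suppressed"})
                continue
            score = 100 * (grp[question] == optimal).mean()
            flag = "review" if abs(score - total) >= flag_points else ""
            rows.append({"group": name, "n": len(grp), "score": round(score, 1), "flag": flag})
    return pd.DataFrame(rows)

# Hypothetical responses to a "felt valued" question, filtered by age group.
df = pd.DataFrame({
    "felt_valued": ["Always"] * 5 + ["Usually"] + ["Always"] * 3 + ["Sometimes"] * 3,
    "age_group":   ["65-74"] * 6 + ["18-34"] * 6,
})
print(top_box(df, "felt_valued", "Always", group_col="age_group"))
```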

Figure 1.


At-a-Glance Dashboard features – visual analytics and filters for RPPS data. Dropdown menus display choices among the survey perception questions (shown) or response and completion rates. The middle menu filters the survey results (e.g., age, sex, race, etc.). Blue “i” icons display definitions and scoring information. Response data are displayed as Top Box scores with conditional formatting from high (green) to low (red) scores. The “Total” column contains aggregate scores; filtered results populate the columns to the right.

Demonstration use cases

The EPV project survey and data aggregation activities were reviewed and approved or deemed Exempt by the Institutional Review Boards at each site before retrieving participant data or surveying.

Use case implementation: administration of the survey

The demonstration goal of the project was the implementation of the local use cases. Measures of success included the number of surveys fielded, response rates, respondent demographics, ongoing stakeholder engagement, and a revised Dashboard and Implementation Guide.

In November 2021, sites began small-scale test fieldings of the survey; by May 2022, all sites were surveying 200–6,000 participants per fielding at bimonthly, quarterly, or semiannual intervals. Several sites piloted initiatives during early fielding to increase response rates. Use case configurations and early optimization efforts are shown in Table 2.

Table 2.

Empowering the Participant Voice infrastructure and use case implementation at five participating sites

Site | Scope of fielding | Selection of sample | Timing of survey delivery | Frequency of survey fielding (months) | Survey platform | Response rate | Early efforts to increase response rate or representativeness of response
Column definitions: Scope = breadth of survey participation; Sample = census of all eligible participants, random sample, or other; Timing = post-consent (0–2 months), end-of-study participation, annual, or other/unspecified; Frequency = how often surveys are sent; Platform = how survey invitations are sent; Response rate = survey response rate for the site, May 2023; Early efforts = sites met regularly with stakeholders and implemented their recommendations aimed at increasing the reach of and response to the survey.
A | Study level, with study principal investigator agreement, invited by the site team | Census (100–200 per fielding) | Post-consent, End-of-study, Unspecified | Rolling | Email, Telephone (pilot) | 31.4%
Early efforts (Site A): Motivation question pilot: tested surveys with or without optional questions to check for negative impact on response rate; questions slightly increased the response rate from 23% to 28%. EFFECTIVE (conclusive). Awareness campaign: distributed flyers. NOT EFFECTIVE. Partnered with community satellite: UNDERWAY. Telephone outreach to Latino study: response rate to email invitation 16%; response to telephone call 31%. EFFECTIVE (expensive).
B | Enterprise, leadership decision; RCTs only | Random (500 per fielding) | End of study | 6 | Email, SMS | 18.4%
Early efforts (Site B): SMS: tested SMS survey invitations to increase participation of younger participants and people of color (POC); EFFECTIVE overall for younger participants; NOT EFFECTIVE for POC. Expanded cohort: piloted sending surveys to participants beyond RCTs. NEUTRAL.
C | Enterprise, only studies listed in central CTMS | Census (1,000 per fielding) | Post-consent, End of study | 2 | Email, Paper follow-up | 20.3%
Early efforts (Site C): Raffle incentivization: participants who return the survey have a 1:25 chance to win a $50 gift card; increased response rate from 18% to 30%. EFFECTIVE. Paper surveys: community advisors recommended sending paper surveys to Black participants to increase response rate (8%): NOT EFFECTIVE (expensive).
D | Enterprise, all studies across the institution | Census (100–400) | Post-consent, End of study, Annual | 2 | Email | 22.4%
Early efforts (Site D): Brand recognition: inserted branded graphics from brochures into the email survey invitation to increase response rate: NOT EFFECTIVE. Study team ambassadors: targeted return of results to cultivate team members as ambassadors who encourage survey response. UNDERWAY. Public return-of-results page: positive response from community advisors. Results page listed on study business cards (for participants). UNDERWAY.
E | Enterprise, all studies across the institution | Census (3,000–6,000) | End of study | 6 | Portal, SMS | 15.4%
Early efforts (Site E): Expand platforms: current response rate using the portal (15%) is lower than in the pre-project pilot test (30%); use of other platforms requires institutional policy change. UNDERWAY. Enhance representativeness: the response cohort was 84% White and 13% Black, compared to the population sent the survey (74% White, 19% Black); instituted an Institutional Equity in Research Experience Committee to address how to reach underrepresented communities better. UNDERWAY.

Results of survey implementation

From November 2021 to May 1, 2023, five EPV sites sent 13,850 surveys to participants and received 2,575 responses, of which 99% were complete (>80% of questions answered), for an overall response rate of 18.6%. Survey response rates differed among sites (15%–31%) (Table 2). Site A, which sent surveys at the study level, had the highest response rate. Pilots using compensation and telephone outreach were effective at increasing response rates; the use of SMS was not effective.
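A quick check of the arithmetic reported above:

```python
# Verify the overall response rate and approximate count of complete responses.
invitations = 13_850
responses = 2_575
print(f"Overall response rate: {100 * responses / invitations:.1f}%")   # 18.6%
print(f"Complete responses (99% of responses): about {round(0.99 * responses)}")  # ~2,549
```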

Respondents were diverse in age, race, gender, and ethnicity (Table 3), though minority populations were underrepresented overall; e.g., 11% of respondents were Black compared to 14% Black individuals in the US population [17]. However, the representativeness of minority populations varied across sites. At the highest end of representativeness, Black participants made up 22% of the respondents at two sites, and Latino/a individuals comprised 19% of respondents at another site. The characteristics collected from all survey recipients add context: the racial/ethnic diversity of the participants who were eligible to receive a survey (of which the respondents are a subset) also varied considerably across sites, to some extent limiting the possible number of responses from minority groups (Supplemental Table S1). Efforts to increase engagement and representativeness are underway. Two sites have disseminated local results on public-facing websites (Table 2).

Table 3.

Characteristics of individuals returning the research participant perception survey, total and range across sites, February 2022–April 2023

Characteristic | All survey respondents, % (N = 2,575) | Range across sites, % (site N* = 204–1,016)
Age
18–34 5.5 2.5–21.1
35–44 6.9 1.1–18.1
45–54 12.9 9.7–22.2
55–64 21.7 17.1–33.1
65–74 34.5 11.8–41.2
>75 18.5 7.4–25.7
Sex
Female 62.9 50.5–92.6
Male 36.9 5.6–49.0
Intersex 0.1 0.0–0.7
Prefer not to Say 0.0 0.0–0.5
Gender
Woman 61.0 52.4–89.8
Man 33.6 5.6–46.1
Non–binary 1.2 0.5–1.4
None of these describe me/Prefer not to say 4.1 0.0–4.4
Race
Asian 1.4 0.4–7.4
American Indian/Alaska Native 0.4 0.0–0.5
Black/AA 11.2 3.2–22.5
Native Hawaiian/Other Pacific Islander 0.2 0.0–1.0
White 84.5 65.2–93.0
More than one race 1.2 0.7–2.5
Decline to answer/unknown 1.3 0.4–2.0
Ethnicity
Hispanic/Latina/o/x 3.0 0.8–19.2
Highest level of educational attainment
8th grade or less 0.3 0.0–1.0
Some high school but did not graduate 1.0 0.0–2.0
High school graduate 9.2 6.9–10.7
Some college or graduate 2-year college 27.3 20.1–32.2
Graduated 4-year college 24.0 20.7–30.9
Beyond 4-year college 38.2 34.5–42.3
The study required a diagnosis of a disease or disorder.
yes 55.3 16.2–87.2
no 44.7 11.3–83.1
Drug, device, procedure, or behavioral/lifestyle intervention
yes 39.3 17.3–60.2
no 54.0 30.5–76.1
unsure 6.8 4.4–8.9
Demands of the study
simple 65.6 54.8–91.2
moderate 29.0 6.7–34.1
Intense 5.5 1.4–9.8
* Individual site data are not shown to prevent inadvertent site identification.

Open text comments

Respondents engaged with the survey. Sites received comments from 15% to 33% (mean 29%) of respondents and discussed comment themes at RSC meetings. Themes identified at more than one site included: (1) gratitude and praise for the research team or study-specific issues; (2) dissatisfaction with unexpected out-of-pocket costs from participating in research; (3) unacceptable delays in receiving compensation; and (4) offense taken at the gender question response options (“Male and transgender male,” “Female and transgender female,” “Prefer not to answer”). In response, the RSC revised the options to: “Man,” “Woman,” “None of these describe me,” or “Prefer not to answer.” Sites also received informative positive and negative comments about study-specific issues or interactions and determined any local responses.

The goal of this project was to deliver a working infrastructure that could help sites collect RPPS feedback from their participants. Analyzing survey findings, acting on them, and evaluating the impact are the next stage of conducting clinical translational science using participant experience data. Those performance improvement activities require additional institutional buy-in, participant engagement, infrastructure, and process, and are the subject of ongoing research.

Deliverables and dissemination

Infrastructure for adoption

The infrastructure for EPV/RPPS can be downloaded from the EPV website after contacting project leadership. Components include (1) the data dictionary for the RPPS-Short survey and data collection forms (.XML file); (2) external modules for the At-a-Glance Dashboard and Cross-Project Piping (REDCap external module repository [18]); and (3) a comprehensive EPV Implementation Guide. Designed for leadership, project managers, and technical staff, the Guide discusses considerations with which all new sites grapple, estimates of effort, and clear recommendations. The technical section provides step-by-step instructions for installing the software components, importing the data needed to field the survey, and details regarding data analytics, scoring, and analysis. The infrastructure is compatible with sending multilingual surveys using REDCap Multilingual Management functionality (REDCap version ≥ 12.0). Programming scripts for fielding RPPS in Spanish can be downloaded. The EPV team continues to evaluate and implement ways to streamline the infrastructure and enhance its value. The website links to the current technical change log.

Discussion

The EPV team designed and tested new EPV/RPPS/REDCap infrastructure that enabled five sites to collect, analyze, and benchmark participant feedback at scale, with standards assuring that data are compilable and comparable. The inclusion of participant characteristics and dashboard filters enables subgroup analyses responsive to recent federal guidance for increasing health equity by disaggregating data to understand the experiences of different groups [19]. The infrastructure and instructions are disseminated through a public website, free of charge, for adoption by a wider community of users; the EPV Learning Collaborative welcomes new members. The RPPS measures aspects of participation that are meaningful to participants, providing an evidence base to drive iterative improvements to the clinical research enterprise.

Sites continue to work with stakeholders to test initiatives to increase responses. Financial incentives to return the surveys were successful, but are expensive to sustain. Minority populations were underrepresented among respondents overall, but not at all sites, and outliers deserve study. Sites implemented community partners’ suggestions, testing ways to increase the diversity of responses, although the approaches tested so far have not proven effective. Trust may be an issue. Individuals who are unpersuaded of the trustworthiness of an institution tend to be wary of surveys [20]. Engagement requires the integration of multiple approaches to capture a broad population, and sites continue to explore ways to leverage engagement resources effectively. Stakeholders counseled that even limited feedback from underrepresented groups should be analyzed and solutions pursued while exploring ways to increase response rates in parallel.

Survey data serves as a valuable complement to interviews and other qualitative story-telling [21] and offers a measure of whether improvements defined by a small group translate to benefits for a larger participant population. All sites have planned and/or initiated the return of survey results to the public through presentations and websites. One could envision a virtuous cycle where transparent and accountable return of results to investigators and the participant communities fosters trust over time, and increases participants’ willingness to answer the survey.

The EPV project fulfills many NCATS values: engaging stakeholders in all phases of research, maintaining a participant-centered focus, and creating and disseminating tools for others to adopt. It has helped sites generate evidence and incorporates analytics that will be instrumental in identifying and addressing disparities in research. The infrastructure sets the stage for sites to act on participant data, conduct research-on-research to solve problems and accelerate research, engage in CTSA-CTSA collaborations, and leverage common infrastructure to overcome barriers and advance science.

With EPV infrastructure working and RPPS data in hand, some teams still found it challenging to activate the resources (including CTSA-supported cores) to act on findings from participant feedback, despite leadership support for the project. The clinical research enterprise lacks the centralized quality improvement infrastructure and expertise that parallel what hospitals use to measure and improve the patient care experience [22]. Guidance from AAHRPP [6,7] to measure the effectiveness of human research protections, and from the FDA [23] to elicit participant preferences, has gained increased attention. Further, NCATS has called on its awardees to conduct clinical translational science [24] as a platform for quality improvement in research. These complementary charges from multiple agencies could incentivize clinical research organizations to create an infrastructure for quality improvement in research, which could unleash the power of participant feedback. RPPS measures are tools for evaluation, but cannot, in isolation, change institutional culture or practice. Overcoming the multi-step barriers to conducting Clinical and Translational Science, using RPPS data and EPV/REDCap infrastructure, will enable institutions to realize the power of the participant voice to enhance the clinical research enterprise.

Dissemination and the learning collaborative

EPV infrastructure is being disseminated broadly through poster presentations [25,26], webinars [27], return-of-results webpages [28,29], and the EPV project website [30]. As of August 2023, two additional CTSA hubs have implemented the full EPV/RPPS infrastructure (early adopters), and others are exploring adoption. Aggregate responses have doubled. The EPV learning collaborative has welcomed early adopters to project team and technical calls and provided guidance on implementing their use cases. Dissemination and broad adoption of EPV infrastructure will grow the RPPS evidence base, enhancing opportunities to learn from increasingly representative participant feedback.

Limitations

The average response rate (19%) is lower than optimal. Sites have more work to do socializing RPPS with teams and participants. Sites and practices that produced higher response rates are worthy of study. Sharing practices, testing hypotheses, and deepening engagement may increase response rates over time. Groups underrepresented in research were underrepresented among RPPS respondents. The diversity of respondents differed across sites. High-performing outliers merit more study. As a quantitative measure, the RPPS captures whether, but not why, a research experience was good or bad. The RPPS is a tool to score and benchmark important dimensions of the research experience. Measuring is the first step in evidence-driven quality improvement. Organizations can use the data, leveraging other institutional resources, to prioritize and effect change.

Summary and Conclusion

The EPV/RPPS/REDCap infrastructure proved effective at enabling sites to collect, analyze, and visualize participant feedback, and to benchmark with and across institutions. The RPPS measures are meaningful to participants, responsive to AAHRPP standards [6], and provide an evidence base to drive iterative improvements to the clinical research enterprise. The infrastructure and instructions are disseminated on a public website, free of charge, for adoption by a wider community of users. Institutional implementation of the EPV/RPPS is worthy of consideration, even with limited resources. EPV activities may be most effective when embedded with initiatives related to outreach, community engagement, human research protection programs, research resource cores, and/or any local organizational structure that has the agency to lead, implement change, and harvest the impact.

Supporting information

Kost et al. supplementary material

Acknowledgments

The authors would like to thank the following individuals for their thoughtful input and support during the project: Barry S. Coller MD, James Krueger MD Ph.D., and Maija Neville Williams MPH.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2024.19.

Author contributions

R. Kost conceived the project; designed, led, conducted, and analyzed the multi-site research project; and wrote the first draft of the manuscript. A. Cheng led the development and implementation of the technical infrastructure and wrote technical sections of the manuscript. P. Harris contributed to technical design, strategy, and writing. J. Andrews, R. Chatterjee, A. Dozier, and D. Ford each led the local configuration of conduct and data collection at their respective sites, contributed to project design, data collection, and analysis, and contributed to writing. N. Schlesinger, C. Dykes, I. Kelly-Pumarol, C. Lewis-Land, S. Lindo, L. Martinez, M. Musty, J. Roberts, L. Wagenknecht, and L. O’Neill contributed to project design, local implementation, data collection, and analysis. N. Kennedy contributed to the development of key deliverables, implementation analysis, and manuscript drafting. A. Qureshi and R. Vaughan provided statistical support and analysis throughout the project and contributed to writing. C. Coffran, S. Carey, J. Goodrich, P. Panjala, and S. Cheema provided site-based technical expertise throughout implementation and data collection and contributed to writing. E. Thomas and E. Bascompte-Moragas provided programming, software development, and technical expertise throughout the project and contributed to technical refinement and writing.

Funding statement

This work was supported in part by a Collaborative Innovation Award from the National Center for Advancing Translational Sciences (#U01TR003206) to the Rockefeller University, and by Clinical and Translational Science Awards UL1TR001866 (Rockefeller University), UL1TR002553 (Duke University), UL1TR003098 (Johns Hopkins University), UL1TR002001 (University of Rochester), UL1TR002243 (Vanderbilt University), and UL1TR001420 (Wake Forest University Health Sciences). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References
