Abstract
This descriptive case study covers the development of a survey to assess satisfaction among research subjects participating in clinical research studies at an academic medical center (AMC). The purpose was twofold: to gauge the effectiveness of the survey and to determine the level of satisfaction of the research participants.
The authors developed and implemented an electronic research participant satisfaction survey. It was created to provide research teams at the authors’ AMC with a common instrument to capture research participant experiences in order to improve upon the quality of research operations. The instrument captured participant responses in a standardized format.
Ultimately, the results are to serve as a means to improve the research experience of participants for single studies, studies conducted within a division or department of the AMC, or across the entire research enterprise at the institution.
For ease of use, the survey was created within REDCap, an electronic data capture system available through the Clinical and Translational Science Awards (CTSA) program of the National Institutes of Health (NIH) and used by a consortium of more than 1,800 institutional partners.
Participants in the survey described in this article were 18 years of age or older and participating in an institutional review board (IRB)-approved study. Results showed that the vast majority of participants surveyed had a positive experience engaging in research at the authors’ AMC. Further, the tool was found to be effective in making that determination.
The authors hope to expand the use of the survey as a means to increase research satisfaction and quality at their university.
Background
In a competitive market, the presentation, efficiency, and delivery of care can influence which healthcare facilities patients choose. These same characteristics are also critical to the success of clinical research operations.
Many factors contribute to research subject recruitment and retention, and it is imperative that research programs engage in behaviors that contribute to a positive experience for participants, in a manner similar to healthcare facilities. Hospitals and clinics utilize tools, such as Press Ganey surveys, that are sent to patients after use of healthcare services to gauge patient satisfaction and to identify areas for quality improvement. Currently, more than 50% of hospitals in the United States use Press Ganey surveys to enhance quality care, patient satisfaction, and institutional success.{1}
Moreover, in recent years patient satisfaction surveys have been linked to reimbursement. In the 2015 fiscal year, Medicare implemented a hospital Value-Based Purchasing Program, whereby hospital payments are adjusted based on performance across four quality domains: clinical process of care, patient experience of care, outcomes, and efficiency.{2}
Satisfaction is at the core of the patient experience, and feedback provided on healthcare services is not only valuable for quality improvement, but also critical for sustained fiscal solvency. In this same spirit, leaders of clinical research programs carefully watch the financial bottom line of their operations, making knowledge of research participant satisfaction an important contributing factor for financial success.
Patient satisfaction is arguably even more applicable to clinical research than to routine care, because patients and healthy subjects are not required to participate in medical studies. Nor are they required to continue participation once started, because volunteerism is key to human subject protections.
Knowledge of the participant experience is critical for those entities invested in the clinical research enterprise. Importantly, satisfaction of research participants may serve as a proxy for study integrity. It may be conjectured that a participant who is having a positive experience is more likely to complete a study than one who is not. Moreover, an experience that is not positive may dissuade that person, and in turn others, from participating in future studies.
Clinical researchers need participants to complete studies to ensure that trial milestones are captured. Not only does this ensure study completeness, it is also imperative for the patient population at the heart of the investigational product: participation generates the evidence that brings new drugs and devices to market. For these reasons, teams invested in clinical research should consider this assessment strategy and platform during the conduct of clinical research studies.
Literature Review
The available literature on clinical research participant satisfaction assessment supports the use of this measure in research programs, and has yielded a broad array of findings.
Verheggen and colleagues{3} conducted personal interviews and a telephone questionnaire with patients participating in clinical trials at the time of consent and one month into the study. They found that although overall patient satisfaction was high, areas of dissatisfaction were revealed after consent. They concluded that patient expectations prior to study entry ultimately shape the experience of subsequent clinical research participation. This study supports the notion that surveying participants at a single point in time is not enough to gauge their feelings on participation.
A study by Reider and colleagues{4} found that, of 155 participants in a General Clinical Research Center, 90.9% of respondents said that the purpose of the study had been explained to them and 94.9% said that the risks of the study had been explained. These informed consent elements are crucial to participant satisfaction as a foundation for the research experience. The interpersonal relationship between participant and research staff is also key to the participant experience, and the authors found that 99.4% of respondents were pleased with the care from research nurses. The authors believed that participant feedback provides valuable input for the implementation and delivery of research.
In 2015, the Center for Information and Study on Clinical Research Participation published its latest report on patient experiences in research studies.{5} According to the report, the most common reasons people participate in studies were to help advance science and the treatment of a particular condition/disease, and to help others. The report also found that 95% of those surveyed would recommend clinical research participation to their friends and family. Approximately one-third of the respondents indicated that there was nothing they disliked about their clinical trial experience. Among those who disliked something, the possibility of receiving a placebo and the physical location of the study center were the top two dislikes. Almost half of the participants (46%) reported that their research participation experience exceeded their expectations.
Methods
Patient Research Satisfaction Survey Development
In 2011, a task force was formed consisting of researchers from select clinical departments (e.g., family medicine, dermatology, gynecology, gynecology-oncology, neurology, and the inpatient Clinical Research Center) across the authors’ medical center. Prior to this time, a few departments had their own satisfaction tools in a paper format, but the task force collectively sought to standardize this process in an electronic format. An electronic format would allow groups to access the survey easily and to analyze the results by study, by principal investigator (PI), or across the enterprise.
The task force met on a routine basis to discuss and develop items that would apply to the variety of participants enrolling in any of the many types of research studies at the institution (e.g., observational studies, clinical trials).
To effectively execute this initiative, the support of the institution’s Center for Clinical and Translational Science (CCTS) was solicited. The CCTS was funded by a multiyear CTSA grant from the NIH, as a collaborative effort of The Ohio State University, the university’s Wexner Medical Center, and Nationwide Children’s Hospital.
The tool described earlier as being available through the CTSA program, REDCap (Research Electronic Data Capture), was determined by the authors to be the best platform for capturing the details of research participants’ experiences across the university, inclusive of its medical center. REDCap provides a secure, web-based application that is flexible enough to be used for a variety of types of research. It offers easy data analysis with audit trails, along with the ability to export into common statistical packages, and is compliant with 21 CFR Part 11.{6}
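While the authors do not detail their export workflow, the following is a minimal sketch, assuming a project-level API token, of how survey records could be pulled from a REDCap project into Python for downstream analysis. The endpoint URL, token, and the choice to export labels are placeholders and assumptions for illustration, not details of the authors’ project.

```python
# Minimal sketch of exporting survey records via the REDCap API.
# The endpoint URL, token, and settings below are placeholders,
# not the authors' actual project configuration.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"   # institution-specific endpoint (placeholder)
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"             # project-level token issued by REDCap


def export_records():
    """Export all records from the survey project as a list of dicts (JSON)."""
    payload = {
        "token": API_TOKEN,
        "content": "record",    # export record data
        "format": "json",       # return JSON rather than CSV/XML
        "type": "flat",         # one row per record
        "rawOrLabel": "label",  # return choice labels instead of raw codes
    }
    response = requests.post(REDCAP_API_URL, data=payload, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    records = export_records()
    print(f"Exported {len(records)} survey responses")
```

Exported records can then be loaded into a statistical package or a data frame for the kind of descriptive reporting described in the Results section.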
A protocol for implementation was developed and then approved by the university’s IRB before the survey was implemented. Once approved, task force researchers with IRB-approved studies conducted a pilot of the Research Study Participant Survey in their respective research programs.
Patient Research Satisfaction: Data Collection
Pilot
The survey was launched in late October 2013, and data were collected through April 2014. The pilot took place at the research sites of the task force members and lasted for the first six months of implementation.
The survey was administered to research participants ages 18 years and older who were participating in an IRB-approved clinical research study at the university. After six months, the survey was analyzed by task force members and determined to be working appropriately. This was evidenced by evaluating the request system, the URLs provided, and the data pulled by site and in aggregate, all without issue.
Enterprise Launch
After the pilot was deemed successful, the survey was made available to all researchers at the university in May 2014. Interested researchers were able to access the survey via a request for use through the CCTS. Researchers were then provided with a URL to the survey that was customized to each relevant PI, whose study teams then invited their participants to take the survey.
This arrangement allows PIs to extract participant responses from REDCap for their own studies. Results may also be pulled by the CCTS at any time and presented to the respective department or other stakeholders.
The survey was designed to be utilized at any point in a participant’s research experience: at the first visit, annually for multiyear studies, or at the final visit, regardless of length of participation. The participant is offered a link that can be accessed onsite or at home to complete the survey; the survey link also includes a QR code so that it can be opened from a smartphone (see the sketch below).
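As an illustration of how a survey link can be turned into a scannable code, the sketch below uses the third-party Python packages `qrcode` and `Pillow` to render a QR image from a placeholder survey URL; this is not necessarily how the authors generated their codes.

```python
# Sketch: generate a QR code image for a survey link so it can be scanned
# from a smartphone. Requires the third-party packages `qrcode` and `Pillow`
# (pip install "qrcode[pil]"). The URL below is a placeholder, not the
# authors' actual survey link.
import qrcode

SURVEY_URL = "https://redcap.example.edu/surveys/?s=EXAMPLEHASH"  # placeholder

img = qrcode.make(SURVEY_URL)           # build the QR code as an image
img.save("participant_survey_qr.png")   # save for flyers or visit packets
print("QR code written to participant_survey_qr.png")
```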
Personal identifying information is not collected, so responses are not traceable back to the respondents. The survey consists of 25 multiple-choice questions. Branching logic is used in certain areas, based on the participant’s responses, as illustrated in the sketch below, and an open-ended text field at the end captures free-form feedback.
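As a hypothetical illustration of such branching logic (the field names and labels below are invented for this example, not items from the authors’ instrument), a follow-up free-text field can be configured in REDCap to appear only when a particular response is given. The string in `branching_logic` follows REDCap’s show-field-only-if convention.

```python
# Hypothetical illustration of REDCap-style branching logic. The follow-up
# text field is shown only when the respondent answers "No" (coded 0) to
# the preceding yes/no item. Field names and labels are invented.
survey_items = [
    {
        "field_name": "visits_smooth",                # invented field name
        "field_type": "yesno",
        "field_label": "Did your research visits go smoothly?",
        "branching_logic": "",                        # always shown
    },
    {
        "field_name": "visits_problem_detail",        # invented field name
        "field_type": "notes",
        "field_label": "Please describe what did not go smoothly.",
        "branching_logic": "[visits_smooth] = '0'",   # REDCap show-field-only-if syntax
    },
]

# A quick consistency check a study team might run before importing a
# data dictionary: every field referenced in branching logic must exist.
defined = {item["field_name"] for item in survey_items}
for item in survey_items:
    logic = item["branching_logic"]
    if logic:
        referenced = logic.split("[", 1)[1].split("]", 1)[0]
        assert referenced in defined, f"Unknown field in branching logic: {referenced}"
```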
The survey takes approximately 10 minutes to complete. When participants complete the survey, the anonymous data are entered directly into REDCap. The overall approach is intended to give research teams a viable mechanism for improving processes and providing a more effective clinical research experience for their participants.
Results
A total of 341 completed surveys from multiple research departments were collected from October 2013 to April 2015. Data were analyzed using descriptive statistics within the REDCap database.
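As a minimal sketch of that kind of descriptive analysis, assuming responses have been exported from REDCap with choice labels, the snippet below tabulates counts and percentages for a single hypothetical Likert item using pandas; the variable name and sample rows are fabricated solely so the example runs on its own.

```python
# Minimal sketch of the descriptive statistics used for reporting:
# frequency counts and percentages per response option. The field name
# "overall_experience" and the rows below are placeholders, not study data.
import pandas as pd

# In practice `records` would come from a REDCap export (CSV or API);
# a few fabricated rows stand in here so the example is self-contained.
records = pd.DataFrame({
    "overall_experience": [
        "Strongly Agree", "Strongly Agree", "Agree",
        "Strongly Agree", "Neutral", "Agree",
    ]
})

counts = records["overall_experience"].value_counts()
percentages = (counts / counts.sum() * 100).round(1)

summary = pd.DataFrame({"n": counts, "percent": percentages})
print(summary)
```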
The Clinical Research Center was the location of 81.2% of those surveyed; gynecology-oncology accounted for 6.7%, dermatology for 2.6%, and several other departments collectively accounted for the remaining share of less than 10%.
Mirroring the research study pool, the majority of surveys (76%) were completed by females, and the age groups were 18–25 years (18.8%), 26–35 years (44%), 36–55 years (19.4%), 56–64 years (12.8%), and 65 and older (5.3%).
The predominant race was white/Caucasian (77.1%), followed by black or African American (15.5%), Asian (2.6%), multiracial (2.3%), American Indian/Alaska native (0.6%), Native Hawaiian or Pacific Islander (0.3%), and other categories not listed (1.5%).
A variety of reasons were listed for how participants learned about their clinical research study (see Table 1). Respondents were able to select all of the options that applied. A flyer and family or friends were the most common recruitment tools by which respondents became aware of a study.
Table 1.
Ways participants learned of a study
Response | Percentage of Respondents
---|---
I saw a flyer about it. | 21.3%
I was approached about it during a healthcare appointment. | 18%
I was contacted by a study coordinator or physician. | 14.8%
I saw it on Study Search. | 9.8%
I saw it on a social networking site. | 10.3%
Contacted via ResearchMatch.org. | 17.5%
I called the Hero line. | 0.6%
I saw/heard an advertisement. | 15.1%
A friend or family member told me. | 20.7%
I don’t remember. | 1.5%
Table 2 shows what motivated these respondents to choose to participate in a study. Those who completed the survey were allowed to choose up to three reasons that most influenced their decision to participate.
Table 2.
Motivating reasons for participation
Reason | Percentage of Respondents
---|---
To help others. | 70.5%
Because my caregiver encouraged me to do so. | 5.6%
Because of a positive experience in another study. | 17.4%
To find out more about my condition. | 13.3%
To gain access to new treatment/therapy. | 19.5%
Because of the good reputation of this AMC. | 35.1%
To earn study payment. | 49.6%
Because there were no other options available to treat my condition. | 6.2%
Several questions related to the elements of the informed consent process were asked in a Likert scale format (see Table 3). Other questions related to the dynamic between research staff and the participant. Data were also collected to determine the respondents’ potential future research participation and their likelihood of promoting research participation at the sites where their studies were based (see Table 4).
Table 3.
Research study site experiences
Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
---|---|---|---|---|---
I understood the study procedures before providing my informed consent to participate. | 0.9% | 0% | 0.6% | 18.8% | 79.8%
The research staff took the necessary amount of time to answer all of my questions. | 0.6% | 0% | 0% | 9.7% | 89.7% |
I understood that participation was voluntary. | 0.6% | 0% | 0% | 7.9% | 91.5% |
I understood that I could withdraw from the study anytime. | 0.9% | 0% | 0.3% | 8.2% | 90.6% |
I understood the risk(s) involved with participating in the study. | 0.6% | 0% | 1.4% | 15.2% | 83% |
I understood the possible benefit(s) involved with participating in the study. | 0.6% | 0% | 2.6% | 16.4% | 80.4% |
I felt the research staff were approachable when I had questions or concerns. | 0.9% | 0% | 0% | 10.9% | 88.3% |
I felt the research staff were easy to contact. | 0.9% | 0.6% | 1.5% | 13.5% | 83.6% |
I felt the research staff were professional. | 0.9% | 0% | 0% | 9.4% | 89.7% |
I felt the research staff were knowledgeable. | 0.9% | 0% | 0.6% | 11.7% | 86.8% |
I felt the research staff were courteous. | 0.9% | 0% | 0% | 8.5% | 90.6% |
I felt the research staff were sensitive to my needs. | 1.2% | 0.3% | 0.3% | 8.5% | 89.7% |
My research visits went smoothly. | 0.6% | 0% | 0% | 16.4% | 83% |
I was able to schedule my appointments at a time that worked for me. | 0.6% | 0.3% | 2.3% | 12.9% | 83.9% |
My overall experience was positive. | 0.6% | 0% | 0% | 11.5% | 87.9% |
Table 4.
Future participation and study promotion
Statement | Very Likely | Likely | Unlikely | Very Unlikely |
---|---|---|---|---|
I would be ______ to recommend to others that they consider participation in a research study at Ohio State. | 88.3% | 10.9% | 0% | 0.9% |
If I was aware of another research study at Ohio State for which I was eligible and I had time to volunteer, I would be _________ to participate. | 77.7% | 20.5% | 0.9% | 0.9%
The final series of questions related to the time frame of participation: 47.8% of respondents were part of an active study, 51.3% had completed all visits related to the protocol, and 0.9% had withdrawn early from a study. The length of study enrollment varied: 2.9% of respondents had only a one-time visit, while 66.9% were enrolled for up to six months, 11.1% for more than six and up to 12 months, 6.2% for more than one year and up to three years, and 12.9% for more than three years.
The respondents were at various time points over the course of enrollment when they completed the survey. The majority of those surveyed (72%) had only been in the study for up to three months; others had been in the study for more than three and up to six months (11.3%), more than six and up to 12 months (5.1%), or more than a year (11.6%).
Respondents could give up to three reasons that influenced any decision to discontinue participation early. For those who had discontinued, the most common reason was family/work issues unrelated to the study, followed by too much pain and discomfort related to study procedures and by unexpected test results/procedures/side effects.
Discussion
The data demonstrate that the participant experience at the authors’ institution was largely satisfactory when analyzed collectively for all groups. Project leaders were able to pull the responses by department and by PI to provide to the respective groups. They found the instrument to be effective in eliciting the information sought from those who completed it.
The structural organization of the survey allows results to be parsed by study, by division, by department, and collectively across all studies and groups at the university. The authors’ line of questioning and methodology were similar to those used by others in the field,{3-5} and yielded similar responses from research participants.
Despite the overall satisfaction reported by study participants, a small minority were unhappy with their trial experience. An honest range of feedback can help research teams identify and improve appropriate areas of their research programs. Of note, issues that may have affected participants’ experiences but were beyond the control of the research team(s), such as institutional parking, were not included in the survey. The goal of the survey was to ask questions about areas that could be identified and improved upon within any research team’s scope of influence.
There are some significant limitations to be aware of in terms of the results. For example, the authors found that adult females were more likely than males to provide open-ended feedback in the survey. It was also noteworthy that the data came largely from the university’s Clinical Research Center, which is explained by the fact that the center sees more research participants than any other single area. Further, because the design of the survey allows participants to complete it at multiple time points, some surveys may have been completed by the same person longitudinally over the course of a study.
The CCTS and the authors intend for this survey tool and its underlying process to continue to be marketed and offered as a CCTS service to existing and additional research teams across the university. The desire is that researchers utilize the survey to assess the quality of clinical trial execution from the participant’s perspective. This feedback can help grow programs in a positive direction.
Indeed, the hope is that incorporating patient feedback into clinical research operations can positively contribute to research recruitment and retention of participants. The authors anticipate that the end result will be high-quality data from participants who are happily engaged in studies that successfully bring new drugs and devices to market.
Acknowledgments
The authors wish to acknowledge Leena Hiremath, Sharon Chelnick, Cindy Overholts, Stuart Hobbs, and Holly Bookless for their contributions to this project as task force members.
The project described has been supported by Award Number UL1TR001070 from the National Center for Advancing Translational Sciences. The content is solely the responsibility of the authors, and does not necessarily represent the official views of the National Center for Advancing Translational Sciences or of the National Institutes of Health.
Contributor Information
Paula Smailes, Email: paula.smailes@osumc.edu, The Ohio State University Wexner Medical Center and visiting professor at the Chamberlain College of Nursing.
Carson Reider, Neuroscience Research Institute at The Ohio State University.
Rose Kegler Hallarn, Center for Clinical and Translational Science at The Ohio State University.
Lisa Hafer, Clinical Trials Management Office at The Ohio State University.
Lorraine Wallace, Undergraduate Research Office at The Ohio State University.
William F. Miser, The Ohio State University.
References
- 1. Who we serve. Press Ganey website. www.pressganey.com/solutions/who-we-serve.aspx (accessed August 26, 2015).
- 2. Hospital Compare. Medicare.gov. https://www.medicare.gov/hospitalcompare/data/hospital-vbp.html (accessed August 26, 2015).
- 3. Verheggen F, Niemen F, Reerink E, Kok G. Patient Satisfaction with Clinical Trial Participation. Int J Qual Health Care. 1998;10(4):319–30. doi:10.1093/intqhc/10.4.319.
- 4. Reider C, Malarkey W, Nagaraja H. GCRC Subject Satisfaction Survey. J Clin Res Best Pract. 2007;1(3):1–4.
- 5. Center for Information and Study on Clinical Research Participation. Public and Patient Perceptions & Insights Study. 2015. https://www.ciscrp.org/our-programs/research-services/perceptions-insights-studies/
- 6. REDCap. http://project-redcap.org/ (accessed October 12, 2015).