Author manuscript; available in PMC 2013 Sep 1. Published in final edited form as: Acad Med. 2012 Sep;87(9):1228–1236. doi: 10.1097/ACM.0b013e3182628afa

Research Subject Advocacy: Program Implementation and Evaluation at Clinical and Translational Science Award Centers

Rhonda G Kost 1, Carson Reider 2, Julie Stephens 3, Kathryn G Schuff 4, on behalf of the Clinical and Translational Science Award Research Subject Advocacy Survey Taskforce
PMCID: PMC3529179  NIHMSID: NIHMS417607  PMID: 22836849

Abstract

Purpose

In 2000, the National Center for Research Resources mandated that General Clinical Research Centers create a Research Subject Advocate (RSA) position. In 2008, the Clinical and Translational Science Award (CTSA) consortium endorsed a new advocacy model based on four RSA Best Practice Functions. The authors surveyed CTSA centers to learn about their implementation of programs to fulfill the RSA functions.

Method

In 2010, the RSA taskforce developed a two-part online survey to examine leadership, organizational structure, governance, scope, collaboration and integration, and funding and evaluation of RSA activities implemented at CTSA centers.

Results

Respondents from 45 RSA programs at 43 CTSA centers completed the survey. Senior university or CTSA officials led all programs. Ninety-six percent (43/45) of programs were funded by a CTSA core. Eighty percent (36/45) designated an individual “RSA.” Ninety-eight percent (44/45) provided diverse services either in collaboration with or complementary to other departments, including development of Data and Safety Monitoring Plans (16/45, 36%), informed consent observation (10/45, 22%), training responsive to audit findings (12/45, 27%), and direct advocacy services to participants (11/45, 24%). Eighty-six percent (24/28) reported qualitative evaluation methods for these activities.

Conclusions

RSA programs conduct both collaborative and unique research protection activities. This survey, an initial step in developing a more robust mechanism for evaluating RSA programs, collected valuable feedback. The authors recommend defining and developing outcome-based evaluation measures that take the heterogeneity of the individual RSA programs into account while advancing their value and effectiveness in protecting human research participants.


In 2000, the National Institutes of Health (NIH) National Center for Research Resources (NCRR) introduced the requirement for institutions to create a Research Subject Advocate (RSA) position at NCRR-funded General Clinical Research Centers (GCRCs) to enhance human research subject protections. The primary function of RSAs would be to ensure that human research studies were “designed and conducted safely and ethically with protection of human subjects accorded the highest priority.”1 The RSA program was initially deployed in 78 NCRR-funded GCRCs; each center was responsible for defining the specific activities of its RSA,2,3 maximizing the flexibility of the position and preserving local discretion in filling institutional and center-specific needs according to the general guidelines.1 While some members of the Clinical and Translational Science Award (CTSA) RSA Taskforce4 and of the national Society of Research Subject Advocates5 have broad experience with the roles fulfilled by RSAs in the GCRCs, no one has compiled a systematic inventory of how GCRCs incorporated the position. Historically, only a few RSA programs have published evidence about their activities.6–11

The implementation of the CTSA program presented an opportunity for the NCRR to extend the platform for research subject advocacy, while preserving local control of the funding, design, and scope of RSA roles and responsibilities. Without specific guidance, some CTSA institutions dismantled their RSA programs, either distributing research subject advocacy-specific functions across other institutional entities or abolishing them entirely. Other CTSA centers expanded their RSA programs, adding responsibilities, complementary oversight, and resources. In 2007, to clarify the role of RSAs at CTSA centers, the Consortium Executive Committee (CEC) asked the Regulatory Knowledge and Support (RKS) Key Function Committee (KFC) to convene a taskforce to recommend best practice functions for RSA programs. Through a collaborative, iterative process, the taskforce proposed and the executive committee endorsed four RSA Best Practice Functions in January 2008 (see List 1).4

List 1. Research Subject Advocacy (RSA) Best Practice Functions within the Clinical and Translational Science Award Consortium, 2008.4

  1. The RSA functions should include a reporting pathway to institutional officials of appropriate authority and should be free of conflict of interest.

  2. The RSA functions should be complementary to and integrative with existing entities at the institution to promote and facilitate safe and ethical conduct of human research.

  3. The RSA functions should have, or have direct access to, an authority that can temporarily suspend a research activity based on ethical and safety concerns so that problems can be explored or resolved through proper procedures. This capacity enables preliminary intervention into problems that might not necessarily invoke an institutional review board suspension.

  4. The RSA functions should be a resource to the research community and to participants; have a voice in policy regarding research ethics, participants' rights, and research safety; and play a role in the institution's educational programs on the protection of human subjects and the responsible conduct of research.

Subsequently, the CEC asked the RSA taskforce to explore models of RSA Best Practice Function implementation and to make recommendations for the evaluation of these models. Currently, there are no CTSA-endorsed evaluation metrics for assessing the implementation of the RSA Best Practice Functions.12 Anecdotes of RSA program heterogeneity, and the absence of standards or a recent inventory of RSA program implementation strategies, led the taskforce to develop a multi-step initiative to: (1) describe the models of RSA program organization, associated activities, and current modes of evaluation; (2) define meaningful outcome measures for those activities; and (3) develop the methods to assess these outcomes. In this report, we describe the method and results of a survey designed to address the first step of this initiative. Based on the results, we then present recommendations for the next step of the process: defining outcome measures to enable CTSA centers to evaluate whether they have fulfilled the RSA Best Practice Functions.

Method

The taskforce created a two-part web-based survey that they deployed from May to October 2010. Part I contained 24 questions regarding leadership, organizational structure, governance, scope, degree of collaboration and integration, and funding of RSA activities implemented to fulfill the Best Practice Functions. Part I also asked respondents to identify any particularly valuable RSA practices and to describe the methods for their evaluation. Part II presented an extensive list of activities that could potentially fulfill the four RSA Best Practice Functions.

Respondents were asked to attribute the conduct of each activity to the office at their institution that performed that activity, e.g., “Education,” “RSA,” “Quality assurance/compliance,” “institutional review board (IRB),” and to describe “any methods used to measure the value of these activities” (see Supplemental Digital Appendix 1).

The questions in Parts I and II of the survey align with the RSA Best Practice Functions (see List 1). For example, the questions on governance and organization asked about the reporting structure of each RSA program (Function 1). Other questions assessed the integrative and complementary nature of RSA activities (Function 2). Questions on RSAs’ representation on committees and their voting authority indirectly assessed their ability to temporarily suspend a study for safety or ethical reasons (Function 3). The detailed questions about specific RSA activities in Part II considered programs’ provision of resources to the research community and to research participants (Function 4).

In 2010, the members of the RSA taskforce who direct or implement RSA programs conducted alpha and beta testing of the survey. They refined the overall approach through an initial face-to-face meeting. Taskforce leadership then revised the survey and iteratively tested its face and content validity with taskforce members to optimize readability, content, and clarity. They piloted the survey with 10 members, who field-tested it with colleagues familiar with RSA activities and provided feedback to the leadership. Based on testers' feedback about the difficulty of localizing RSA authority among multiple delegated or highly integrated programs, the taskforce chose to use surrogate questions for Function 3. The CTSA RKS KFC and its principal investigator liaison endorsed the final survey and deployed it through the RKS voting representatives.

Each CTSA's voting representative to the RKS KFC received an email explanation of the project and a link to the online survey. One of us (CR) provided telephone and email support for questions. We distinguished whether a survey response reflected an RSA program spanning multiple entities within a CTSA center or only a single institution, and whether RSA services were provided to CTSA and non-CTSA researchers. Non-responding institutions received email and telephone reminders to encourage survey completion. We did not collect any personal information other than contact information for the survey respondents. In consultation with the Rockefeller University IRB chair, we determined that the survey was exempt from IRB review. We analyzed our data using descriptive statistics (SAS 9.2; SAS Institute, Cary, North Carolina).
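To illustrate the analysis, a minimal sketch of the kind of “No. (%)” tabulation reported in Tables 1 through 4 follows. The actual analysis was performed in SAS 9.2; the Python/pandas version below, including its data frame and variable names, is an assumed equivalent rather than our analysis code.

```python
# Minimal sketch of the descriptive statistics behind Tables 1-4.
# The published analysis used SAS 9.2; this pandas version, and the
# hypothetical variable names below, are illustrative assumptions.
import pandas as pd

# Hypothetical extract: one row per responding RSA program (n = 45).
responses = pd.DataFrame({
    "designates_rsa": [True] * 36 + [False] * 9,
    "scope": ["spans CTSA center"] * 29 + ["single institution"] * 16,
})

def count_pct(series: pd.Series) -> pd.DataFrame:
    """Tabulate 'No. (%)' for one categorical survey item."""
    counts = series.value_counts()
    pct = (100 * counts / len(series)).round().astype(int)
    return pd.DataFrame({"No.": counts, "%": pct})

print(count_pct(responses["designates_rsa"]))  # 36/45 (80%) designate an RSA
print(count_pct(responses["scope"]))           # 29/45 (64%) span a CTSA center
```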

Results

Survey Part I

Demographics

Between May and October 2010, we received responses from 45 RSA programs to Part I, and from 42 to Part II. Overall, 43 of the 46 (93%) CTSA centers funded at the time participated. Included in Table 1 are characteristics of the respondents and the RSA programs.

Table 1.

Results of a Survey to Document Leadership, Structure, Scope, and Funding of Research Subject Advocacy Programs across the Clinical and Translational Science Award (CTSA) Consortium, 2010

Category No. (%)
Individual respondents
 Research subject advocate (RSA) 14/41 (34)
 Unspecified university administrator 13/41 (32)
 CTSA core director 12/41 (29)
 Faculty member 2/41 (5)
Program characteristics
 RSA for a single institution 16/45 (36)
 RSA spanning a CTSA center 29/45 (64)
 Site accredited by AAHRPP* 32/45 (71)
Highest authority for program oversight
 Senior university official (dean, vice president, chair) 24/43 (56)
 CTSA principal investigator (PI) 12/43 (28)
 Research center director 2/43 (5)
 University official also serving as CTSA PI 5/43 (12)
Program structure
 Designates an individual(s) “RSA” 36/45 (80)
 Designated program office organizes RSA activities 32/45 (71)
 RSA or core director organizes RSA activities 15/45 (33)
 RSA activities distributed across multiple programs without a locus of responsibility 9/45 (20)
 RSA reports to designated organizing office 15/45 (33)
 RSA reports directly to highest authority 15/45 (33)
 RSA reports to authority other than the highest authority 15/45 (33)
Program scope
 Only CTSA-supported research provided with RSA services 24/45 (53)
 CTSA and non-CTSA-supported research provided with RSA services 21/45 (47)
Program funding
 Any CTSA core 43/45 (96)
 Regulatory Knowledge core 29/45 (64)
 Participant and Clinical Interactions Resources core 6/45 (13)
 Research ethics 6/45 (13)
 No CTSA support 2/45 (4)
* AAHRPP indicates Association for the Accreditation of Human Research Protection Programs.

Organization and scope

A majority of respondents reported that the highest oversight authority for their RSA program was either a senior university official or the CTSA center leadership. Most respondents reported a designated RSA at their institution as well as a university-recognized RSA program. However, some institutions indicated that RSA activities were distributed across multiple programs without a primary locus of responsibility (see Table 1).

All but two RSA programs were funded with CTSA resources. Of the institutions with one or more persons designated as an RSA (36/45, 80%), all reported CTSA support for their program, and 16 of the 36 (44%) reported support from their Regulatory Knowledge and Support Core. Approximately two thirds (24/36, 67%) of programs with a designated RSA provided services across their entire CTSA center; the remainder (12/36, 33%) served only a single institutional entity, such as a research center. Of the institutions with an office designated for RSA functions, half provided services to both CTSA- and non-CTSA-supported projects (16/32, 50%). Respondents also reported that RSA services were not provided for some elements of CTSA research (11/45, 24%) and non-CTSA research (19/45, 42%) at their centers (see Table 1).

Service, collaboration, and integration

The categories of activities provided by RSA programs were broad, including educational, oversight, and policy development activities, and the provision of services to investigators and participants. Respondents viewed most of the activities performed by the RSA programs as collaborative, complementary, and/or integrated with other institutional services. The most commonly reported services uniquely provided by the RSA program included Data and Safety Monitoring Plan (DSMP) development (16/45, 36%), research subject rights/advocacy (11/45, 24%), and informed consent oversight (10/45, 22%) (see Table 2).

Table 2.

Comparison of Research Subject Advocate (RSA) Activities Provided in Conjunction with Other Departments Versus Those Provided Uniquely by the RSA Program, across the Clinical and Translational Science Award (CTSA) Consortium, 2010

Activity Provided in conjunction with other departments Provided uniquely by RSA program
No. (% of 45) No. (% of 45)
Human subjects protection (HSP)/Good Clinical Practice (GCP) training for investigators 28 (62) 3 (7)
HSP/GCP training for coordinators 31 (69) 6 (13)
HSP/GCP for other research staff/students 26 (58) 4 (9)
Institutional review board liaison 29 (64) 7 (16)
Protocol development/navigation 25 (56) 2 (4)
Data and Safety Monitoring Plan development 32 (71) 16 (36)
Safety review of protocol design 30 (67) 5 (11)
Safety review of protocol conduct 28 (62) 7 (16)
Research ethics education 29 (64) 2 (4)
Research ethics consultation 27 (60) 8 (18)
Auditing/monitoring 25 (56) 7 (16)
Adverse event reporting 26 (58) 4 (9)
Informed consent oversight 28 (62) 10 (22)
Human research subjects’ rights/advocacy 32 (71) 11 (24)
Compliance 25 (56) 1 (2)
Policy development/harmonization 22 (49) 3 (7)
Clinical research management process mapping/improvement 20 (44) 3 (7)
Research ethics research 17 (38) 5 (11)
Clinical research management research 8 (18) 1 (2)
Education/compliance research 11 (24) 1 (2)
Other* 5 (11) 9 (20)
None 2 (4) 9 (20)
* Respondents were asked to list “other” activities.

Forty-three of the 45 (96%) RSA programs reported RSA representation on at least one relevant institutional committee. In 33 of the 45 (73%) programs, the RSA representative was given voting rights on at least one committee. RSAs were most often included on the IRB, scientific steering/review committee, and/or CTSA governance committee. Two institutions reported no RSA appointments to any of the committees listed (see Table 3).

Table 3.

Appointment of Research Subject Advocates (RSAs) to Key Institutional Committees, Across the Clinical and Translational Science Award (CTSA) Consortium, 2010

Committee Institutions appointing RSAs to committee RSAs appointed to committee with voting authority
No. (% of 45) No. (%)
Institutional review board 23 (51) 15/23 (65)
Scientific steering 22 (49) 9/22 (41)
Standard operating procedures 4 (9) 3/4 (75)
Safety 7 (16) 7/7 (100)
Research and development 6 (13) 3/6 (50)
Quality assurance/quality control 6 (13) 5/6 (83)
Human research protection program 9 (20) 7/9 (78)
Biosafety 3 (7) 1/3 (33)
Conflict of interest 7 (16) 3/7 (43)
Translational research council 7 (16) 5/7 (71)
CTSA governance 14 (31) 8/14 (57)
Medical ethics/research ethics 11 (24) 7/11 (64)

The most common mechanisms for integrating processes and problem-solving across the human subject research protection program were ad hoc meetings (23/45, 51%), standing meetings (18/45, 40%), and the use of a shared reporting mechanism (17/45, 38%). The most common mechanisms for integrating processes and problem-solving across institutional CTSA core function groups were standing meetings (27/45, 60%), ad hoc meetings (23/45, 51%), and/or the use of a shared reporting structure (20/45, 44%). Some institutions reported no mechanism for integrating these functions into the human subject research protection program (7/45, 16%), or across institutional CTSA core function groups (5/45, 11%).

Survey Part II

RSA activities

Respondents attributed a wide variety of services across the protocol life cycle to RSA programs, often in collaboration with other departments. RSAs participated in, but were usually not the dominant organizers of, required training in human subject protection or Good Clinical Practice (GCP). RSAs led activities related to informed consent and supported the design and review of Data and Safety Monitoring Plans and Boards. They were the main providers of education in DSMP requirements (24/44, 55%), elective education in human subject protection (24/44, 55%) and GCP (19/44, 43%), training in adverse event reporting (16/44, 36%), regulatory compliance updates (13/44, 30%), and training in response to audit findings (12/44, 27%). Many respondents also indicated a role for the RSA in addressing rights and safety concerns for CTSA-supported projects. These advocacy activities included real-time compliance oversight (25/42, 60%), and the investigation of complaints initiated by staff (21/42, 50%) or participants (26/42, 62%) regarding research conduct (see Table 4).

Table 4.

Institutional Offices Conducting Activities in Support of Human Subject Research Conducted under the Clinical and Translational Science Award, According to Part II of the Research Subject Advocacy (RSA) Taskforce Survey, 2010*

Activity Offices providing RSA services or conducting RSA activities Not conducted
Education Research Subject Advocate Quality assurance/compliance IRB Human research protection program Other
Human subject research training requirements and procedures
 Orientation of investigators/staff to HSP requirements, no. (% of 42) 5 (12) 2 (5) 5 (12) 5 (12) 20 (48) 5 (12) 0 (0)
 Maintenance of institutional standard operating procedures for human subjects research, no. (% of 42) 0 (0) 2 (5) 2 (5) 11 (26) 19 (45) 7 (17) 1 (2)
 Harmonization of policies for HSP training across departments, no. (% of 41) 1 (2) 3 (7) 3 (7) 7 (17) 20 (49) 4 (10) 3 (7)
Verification of human subject training certification
 Protocol-nonspecific research personnel, no. (% of 42) 1 (2) 1 (2) 1 (2) 21 (50) 10 (24) 5 (12) 3 (7)
 Protocol-specific research personnel, no. (% of 42) 1 (2) 1 (2) 1 (2) 23 (55) 10 (24) 4 (10) 2 (5)
Protocol development
 Study design support/service, no. (% of 41) 3 (7) 8 (20) 1 (2) 1 (2) 1 (2) 26 (63) 1 (2)
 Recruitment of participants guidance, no. (% of 42) 3 (7) 6 (14) 1 (2) 5 (12) 2 (5) 24 (57) 1 (2)
 Informed consent document development, no. (% of 42) 2 (5) 12 (29) 0 (0) 13 (31) 2 (5) 13 (31) 0 (0)
 Informed consent process oversight, no. (% of 42) 1 (2) 15 (36) 2 (5) 13 (31) 6 (14) 5 (12) 0 (0)
 HIPAA, no. (% of 42) 0 (0) 3 (7) 7 (17) 17 (40) 6 (14) 9 (21) 0 (0)
 DSMP design/development, no. (% of 42) 1 (2) 20 (48) 2 (5) 6 (14) 2 (5) 11 (26) 0 (0)
Protocol review
 Design review, no. (% of 41) 1 (2) 10 (24) 2 (5) 8 (20) 4 (10) 16 (39) 0 (0)
 Recruitment plan review, no. (% of 42) 2 (5) 10 (24) 1 (2) 10 (24) 3 (7) 12 (29) 4 (10)
 Informed consent process review, no. (% of 42) 1 (2) 18 (43) 2 (5) 14 (33) 4 (10) 3 (7) 0 (0)
 Informed consent document review, no. (% of 42) 1 (2) 17 (40) 4 (10) 13 (31) 4 (10) 3 (7) 0 (0)
 DSMP review, no. (% of 42) 1 (2) 22 (52) 2 (5) 9 (21) 5 (12) 3 (7) 0 (0)
Protocol implementation
 Verification of initial IRB approval, no. (% of 42) 0 (0) 9 (21) 2 (5) 6 (14) 4 (10) 18 (43) 3 (7)
 Verification of program readiness, no. (% of 41) 0 (0) 7 (17) 2 (5) 4 (10) 2 (5) 17 (41) 9 (22)
 Monitoring of DSMB, no. (% of 42) 0 (0) 19 (45) 2 (5) 9 (21) 1 (2) 8 (19) 3 (7)
Protocol amendment/revision tracking
 IRB approval obtained (verify), no. (% of 42) 0 (0) 11 (26) 3 (7) 11 (26) 5 (12) 11 (26) 1 (2)
 Re-consenting of participant (verify), no. (% of 40) 0 (0) 6 (15) 5 (13) 10 (25) 4 (10) 9 (23) 6 (15)
Addressing rights/safety concerns in real time
 Real-time compliance oversight, no. (% of 42) 0 (0) 25 (60) 5 (12) 6 (14) 4 (10) 2 (5) 0 (0)
 Investigation of staff complaints about research conduct, no. (% of 42) 0 (0) 21 (50) 8 (19) 6 (14) 5 (12) 2 (5) 0 (0)
 Investigation of participant complaints about research conduct, no. (% of 42) 0 (0) 26 (62) 3 (7) 8 (19) 5 (12) 0 (0) 0 (0)
Safety and compliance monitoring
 Monitoring of AEs tracking/reporting, no. (% of 42) 0 (0) 17 (40) 2 (5) 16 (38) 3 (7) 4 (10) 0 (0)
 Monitoring of DSMP execution, no. (% of 42) 0 (0) 13 (31) 3 (7) 14 (33) 4 (10) 5 (12) 3 (7)
 Monitoring of IND/IDE, no. (% of 42) 1 (2) 7 (17) 7 (17) 7 (17) 4 (10) 14 (33) 2 (5)
 Monitoring of deviation/violation reporting, no. (% of 42) 0 (0) 12 (29) 4 (10) 18 (43) 5 (12) 3 (7) 0 (0)
Audits
 Conducts not-for-cause audits, no. (% of 42) 0 (0) 8 (19) 12 (29) 8 (19) 8 (19) 3 (7) 3 (7)
 Conducts for-cause audits, no. (% of 42) 0 (0) 11 (26) 12 (29) 8 (19) 9 (21) 2 (5) 0 (0)
Education
 Provides mandated education in HSP, no. (% of 44) 13 (30) 9 (20) 11 (25) 16 (36) 26 (59) 6 (14) 0 (0)
 Provides elective education in HSP, no. (% of 44) 18 (41) 24 (55) 12 (27) 22 (50) 18 (41) 12 (27) 0 (0)
 Provides mandated education in GCP, no. (% of 44) 12 (27) 7 (16) 6 (14) 8 (18) 14 (32) 14 (32) 7 (16)
 Provides elective education in GCP, no. (% of 44) 18 (41) 19 (43) 12 (27) 13 (30) 16 (36) 14 (32) 0 (0)
 Provides training as part of audit response corrective action plan, no. (% of 44) 11 (25) 12 (27) 16 (36) 18 (41) 15 (34) 5 (11) 0 (0)
 Provides training in AE reporting, no. (% of 44) 11 (25) 16 (36) 8 (18) 22 (50) 15 (34) 8 (18) 1 (2)
 Provides regulatory compliance updates, no. (% of 44) 10 (23) 13 (30) 17 (39) 17 (39) 13 (30) 9 (20) 0 (0)
 Provides education in DSMP requirements, no. (% of 44) 11 (25) 24 (55) 5 (11) 17 (39) 8 (18) 7 (16) 0 (0)
* HSP indicates human subject protection; HIPAA, Health Insurance Portability and Accountability Act; DSMP, data and safety monitoring plan; IRB, institutional review board; DSMB, Data and Safety Monitoring Board; AE, adverse event; IND/IDE, investigational new drug/investigational device exemption; GCP, good clinical practice.

RSAs both shared in the delivery of collaborative services and provided unique services to protocols affiliated with the CTSA center. For non-CTSA research protocols, the IRB and compliance group provided otherwise-shared services without RSA participation. Respondents also reported that services provided uniquely by the RSA for CTSA research, such as assistance with Data and Safety Monitoring Plans and Boards, direct advocacy, and consent oversight, were often not conducted for non-CTSA research. The activity most commonly not conducted for both non-CTSA (11/39, 28%) and CTSA research (9/41, 22%) was “verification of program readiness to implement a protocol” (see Supplemental Digital Table 1).

Survey Parts I and II: Evaluation activities

In Part I, respondents were asked to describe “a program or activity that provides exceptional value, importance, or innovation in the fulfillment of the RSA functions,” and to describe how the quality and value of the program or activity is assessed. Respondents most often described programs for the education of researchers or coordinators (12/26, 46%) or programs to enhance participant safety (8/26, 31%). Other self-reported programs included informed consent oversight, quality assurance, and support of research ethics (each at 7/26, 27%). Among the 26 RSA programs, respondents described three types of assessment methods: (1) qualitative assessments, including both general feedback such as satisfaction surveys, questionnaires, and verbal praise (15/26, 58%) and feedback provided in response to specific activities such as monitoring and/or the review of audit reports (4/26, 15%); (2) quantitative assessments, including tallies of provided services, protocols reviewed, investigators/trainees assisted, and audits performed (4/26, 15%); and (3) outcome-based measures, such as an evaluation of the impact of the RSA services on protocol review turn-around time, adverse events, audit findings, and the elimination of specific research conduct errors after corrective education (3/26, 12%). Six of the 45 institutions (13%) specifically reported that they did not assess the value of their RSA activities. No respondents reported measuring participant-based outcomes to evaluate their research participant advocacy or human subject protection activities. In Part II, narrative descriptions of these evaluation methods provided no additional information for us to assess.

Most programs tracked collaborative and uniquely RSA-provided activities electronically: DSMP design and development (24/38, 63%), informed consent process and document review (30/38, 79% to 32/38, 84%), data safety monitoring (22/37, 59%), and the investigation of complaints lodged by staff (19/37, 51%) or participants (21/37, 57%) about research conduct.

Discussion

The first step in a multi-step process to develop a robust system of evaluation for the RSA Best Practice Functions is to assess the current state of practice. The RSA taskforce survey collected information on the organizational structure, activities, and evaluation methods of current CTSA center RSA programs. Currently, these programs provide a wide variety of RSA activities, many of which are complementary to or integrated with other institutional programs to support the safe and ethical conduct of research, and some of which are provided solely by RSA programs. The survey also revealed that RSA programs generally have senior level supervision and CTSA funding. The inventory of specific RSA activities that we compiled allows us now both to examine how RSA programs fulfill the RSA Best Practice Functions and to identify important issues to consider when designing formal evaluation recommendations.

Function 1: Inclusion of reporting pathways that lead to the appropriate authority and are conflict-of-interest free

RSA programs are led by senior institutional officials within a variety of underlying organizational structures. In general, these structures provide the appropriate reporting pathways for access to individuals who have the authority to implement and act upon institutional policy. Thus, we can assess an institution’s fulfillment of Function 1 by examining the authority and reporting pathway afforded to those who implement the RSA functions and the institution’s support for alternate reporting pathways in the event of a conflict of interest or of commitment.

Special challenges emerge for the large CTSA centers for which the official overseeing the fulfillment of the RSA functions has no formal authority at the affiliated institutions otherwise within the scope of the RSA functions, and for which there is no binding reporting pathway. Multi-institutional CTSA centers may require new organizational models to ensure that Function 1 is fulfilled, perhaps modeled after those recently developed for aligning IRB functions across many independent but cooperating institutions.13

Function 2: Facilitation of integrative, complementary, and unique activities

RSA programs include many complementary and integrated activities that support the safe and ethical conduct of research. In addition to providing broadly applicable research education, RSA programs also fulfill needs that are context-specific and enhance human subject protections through education, oversight, or advocacy. Whereas federally mandated research education may focus on the regulatory aspects of human protections, RSA-provided education targets operational training, and training and assistance with protocol-specific research ethics or safety challenges. These contextual, responsive services are common mechanisms for fulfilling RSA Best Practice Function 2. Of note for future evaluation, we found: (1) that some duplicative functions exist, which should prompt institutions to weigh the safety-net value of this redundancy against the need to streamline and ensure the cost-effectiveness of their RSA activities; and (2) that institutions reported that some activities uniquely provided by RSA programs were not conducted for non-CTSA research. This contrast in provision of services affords a unique opportunity to evaluate the impact of those activities in these two groups.

Function 3: Promotion of an authority with the ability to temporarily suspend activities for safety or ethical reasons

We found that for RSA programs the authority to influence the course of a clinical research activity could be conferred formally on the designated RSA by the senior RSA official, or informally through the RSA's relationships, status, and credibility within the CTSA. We assessed who held this authority indirectly, relying on surrogate survey questions about committee membership and voting authority, in the hope of mitigating concerns that the institutional integration of RSA activities might contribute to misleading responses. We also found that RSAs were represented on IRBs, scientific review committees, and CTSA council or governance boards at approximately half of the institutions, and RSAs often held voting rights. Holding these positions gives RSAs credibility within the clinical research enterprise, which can afford them the influence to effect change in a research project, averting the need to halt a study. In addition, these relationships provide RSAs with access to and influence on the appropriate authorities who can halt a study, if the RSAs themselves do not hold that power. In retrospect, we should have included both surrogate and direct questions on our survey to learn more about the authority that RSAs hold within their institutions.

Function 4: Act as a resource to the research community and to research participants

We found that most institutions engage the expertise of the RSA in areas such as regulatory compliance and participants' rights for the benefit of both their research community and their research participants. As a resource for investigators and staff, RSAs provide expertise by delivering operational and specialized research training, consultation, and on-demand targeted services in research conduct, oversight, and the protection of rights and safety, primarily, but not exclusively, for CTSA-associated protocols. As a resource for human subject research participants, RSAs often fulfill unique roles by providing services to assure participants' rights and advocacy, informed consent oversight, and participant safety protections. Recently, several RSA programs have collaborated to serve as a resource to the public at large through community engagement initiatives to raise awareness of participant protections and rights.14

Limitations

Our study had a few notable limitations. First, some centers found Part I of the survey difficult to complete because of the complex nature of their program’s organization or Part II difficult to complete because the relatively limited survey response choices could not accurately capture complex program activities. Second, our survey did not directly assess Function 3, relying instead on surrogate questions. Finally, the survey data that we collected provided limited definitive information about the fulfillment of the RSA Best Practice Functions or the impact of RSA programs. While these limitations may be perceived to reflect our survey design, we believe that they reflect the current state of the research subject advocacy field and the limitations of existing evaluation methods. In this regard, our survey represents a critical, early step in the process of developing robust evaluation mechanisms.

Recommendations for the future of RSA Best Practice Functions evaluation

The CTSA consortium model of research subject advocacy is based on the fulfillment of the RSA Best Practice Functions rather than on a requirement that institutions conduct specific activities. The RSA activities reported here generally fulfill the broadly worded RSA functions of appropriate reporting, service and education integration, the ability to halt a study for ethical reasons, and acting as a resource to the research enterprise. Challenges to the development of more meaningful evaluation methods for assessing these activities include: (1) heterogeneous organizational structures, (2) the lack of a definition for what constitutes fulfillment of programmatic and organizational objectives, and (3) few existing measures to assess the magnitude of RSA activity value and impact.

Respondents primarily reported qualitative approaches to evaluating RSA programs, including investigator satisfaction measures and activity tallies based on locally defined metrics. Rarely did they report outcome-based measures. Although most institutions reported tracking both compliance data and RSA activities, few described initiatives that specifically correlate program and compliance outcomes using available data. This gap may represent an important opportunity for RSA programs to incorporate the compliance outcome data that they already collect into an evaluation plan with which to assess, organize, and implement their programs. Qualitative evaluation data remain important because they provide feedback on how best to deliver RSA services. To advance RSA evaluation, we must develop CTSA consortium consensus both to define the expected outcomes of RSA programs and to develop measures for those outcomes, while continuing to respect the heterogeneity of locally appropriate program structures.

We recommend dividing potential outcome measures for RSA functions into three categories: operational, research team-based, and participant-based. Operational outcomes should assess the impact of policy, teaching, and services on the research team-based and participant-based outcomes. Research team-based outcomes should measure a program's adherence to protocol and policy and its research documentation. Notably, RSA programs should assess these outcomes by analyzing training, IRB, and compliance data and investigating any correlations with RSA services. Participant-based outcomes should measure the efficacy of investigator training (e.g., in informed consent) or the impact of direct-to-participant research advocacy activities on participant outcomes. While we can use compliance data to glean quantitative assessments of operational and research team-based outcomes, we have no validated measures for assessing participant-based outcomes. To address this gap, two of the authors (RGK, KGS) have led and continue to participate in an effort by 15 academic research centers to develop validated participant-based measures of the research experience.15 These outcome measures assess aspects of the participant experience, such as the adequacy of informed consent (itself a reflection of policy, training, and conduct), and may prove to be valuable tools for evaluating participant-based outcomes across the CTSA consortium.

We find it particularly challenging to evaluate how well RSA programs mitigate risk and prevent ethical or safety lapses by providing support for appropriate protocol design, DSMPs, or real-time oversight. We may be able to detect reductions in protocol deviations or adverse events by comparing the frequency of these lapses before and after an RSA intervention; however, doing so will require novel approaches to data collection. For decades, the Joint Commission has required hospitals to conduct analyses of the potential impact of their “near misses” in patient care using the Failure Mode and Effects Analysis (FMEA) tool.16 Only one report in the literature describes applying this tool to assess and reduce risk in research.17 Applying the FMEA tool, or a modification of it, may represent an opportunity for us to assess the impact of RSA-prevented harms.
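To make the FMEA approach concrete, the sketch below applies the tool's conventional scoring, in which each failure mode receives 1–10 ratings for severity, occurrence, and detectability, and the product of the three is a risk priority number (RPN) used to rank remediation targets. The failure modes and scores shown are hypothetical illustrations, not findings from our survey.

```python
# Minimal sketch of a Failure Mode and Effects Analysis (FMEA) applied to
# research "near misses." The failure modes and 1-10 scores are hypothetical;
# RPN = severity x occurrence x detectability is the conventional formula.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: higher values are remediated first."""
        return self.severity * self.occurrence * self.detectability

modes = [
    FailureMode("Outdated consent form version used at enrollment", 7, 4, 6),
    FailureMode("Adverse event reported after the deadline", 8, 3, 4),
    FailureMode("Eligibility criterion misread at screening", 9, 2, 5),
]

# Rank failure modes so that RSA oversight targets the highest-risk steps.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:3d}  {mode.description}")
```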

Based on the RSA taskforce survey results that we reported here, we recommend the following steps to advance the development of methods to evaluate the value and impact of RSA programs:

  1. Foster a culture among RSA programs that values the capture and utilization of existing data sources to evaluate the impact of ongoing RSA activities on regulatory compliance, scientific integrity, and participants’ rights and safety.

  2. Implement pilot demonstration projects to develop common definitions and procedures for a limited test set of outcome measures and disseminate the results to the CTSA consortium. Such projects could include: (1) comparing the type and frequency of protocol violations and deviations before and after the delivery of RSA services or any relevant changes in policy (a minimal sketch of such a comparison follows this list); and (2) incorporating RSA-provided activities into classic performance improvement initiatives conducted in response to participant-based outcomes.

  3. Develop formal RSA program outcome-based measures in alignment with RSA Best Practice Functions, for use consortium-wide.
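A minimal sketch of the before-and-after comparison proposed in recommendation 2 follows. The audit counts are hypothetical placeholders, and Fisher's exact test is one reasonable choice for comparing deviation frequencies, not a consortium-endorsed method.

```python
# Minimal sketch of pilot project (1): compare the frequency of protocol
# deviations before vs. after RSA services begin. All counts are
# hypothetical placeholders, not survey data.
from scipy.stats import fisher_exact

# [protocols with >=1 deviation, protocols with none] per audit period.
before_rsa = [14, 86]  # 14 of 100 audited protocols had deviations
after_rsa = [6, 94]    # 6 of 100 audited protocols had deviations

odds_ratio, p_value = fisher_exact([before_rsa, after_rsa])
print(f"Odds ratio: {odds_ratio:.2f}, p = {p_value:.3f}")
# A sustained drop (here 14% -> 6%) across comparable audit samples would
# suggest, though not by itself prove, an effect of the RSA services.
```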

The implementation of such evaluation methods is complex. The heterogeneity of RSA programs grew not only out of the flexibility that the NCRR encouraged during the early stages of the RSA program but also out of the diverse structures and needs of institutions that persist across the CTSA consortium. The development of outcome-based evaluation measures must take this heterogeneity into account while advancing the value and effectiveness of RSA programs.

Supplementary Material

Supplemental Digital Appendix 1
Supplemental Digital Table 1

Acknowledgments

Funding/Support: This project was funded in part by the National Center for Research Resources, National Center for Advancing Translational Sciences, and NIH, and through the CTSA Program (UL1 TR000043, UL1RR025755, and UL1 RR024140) as part of the Roadmap Initiative, Re-Engineering the Clinical Research Enterprise. The CTSA consortium publications committee approved this report.

The authors wish to thank Dr. Jody Sachs for her encouragement, insightful comments, and administrative support during the development, fielding, and analysis of this survey, and Ms. Tyler-Lauren Rainer for her technical assistance. The authors also wish to thank the members of the Research Subject Advocacy Survey taskforce who were involved in the design or analysis of this survey: Jane Alexander, Dr. Enrico Cagliero, Dr. Dawn Lantero, Dr. Hal Jenson, Robert Kolb, Liz Martinez, Halia Melnyk, Andrea Nassen, Nancy Needler, Dr. Kathy Powell, Eric Rubinstein, Laurel Yasko, and Jan Zolkower.

Footnotes

Other disclosures: None.

Ethical approval: The IRB chairman at the Rockefeller University confirmed that this study did not constitute human research and therefore was exempt from IRB review.

Contributor Information

Dr. Rhonda G. Kost, Director, Clinical Research Support Office, Center for Clinical and Translational Science, Rockefeller University, New York, New York, and co-chair, National Clinical and Translational Science Award (CTSA) Regulatory Knowledge Key Function Committee and Research Subject Advocacy Taskforce.

Dr. Carson Reider, Clinical research consultant and research subject advocate, Center for Clinical and Translational Science, The Ohio State University College of Medicine, and adjunct research professor, The Ohio State University College of Nursing, Columbus, Ohio.

Ms. Julie Stephens, Biostatistician, Center for Clinical and Translational Science and Center for Biostatistics, The Ohio State University College of Medicine, Columbus, Ohio.

Dr. Kathryn G. Schuff, Director of regulatory support services, Oregon Clinical and Translational Research Institute, Oregon Health and Science University School of Medicine, Portland, Oregon, and co-chair, National CTSA Regulatory Knowledge Key Function Committee.

References

1. National Center for Research Resources. Research Subject Advocates. NCRR Division for Clinical Research Resources: Guidelines of the General Clinical Research Centers Program. 2005. Available at: http://webcache.googleusercontent.com/search?q=cache:http://www.ncrr.nih.gov/clinical_research_resources/GCRC_archives/GCRC_Guidelines_September2005.pdf. Accessed May 28, 2012.
2. Neill KM. Research subject advocate: a new protector of research participants. Account Res. 2003;10:159–174. doi: 10.1080/714906094.
3. O'Lonergan T. Creative solutions: research subject advocates: increase in reports of human subject protection deficiencies bring scrutiny as well as more efforts at education and support. Prot Hum Subj. 2003;(8):10–11.
4. Clinical & Translational Science Awards. Regulatory Knowledge - Research Subject Advocacy. Available at: https://www.ctsacentral.org/committee/regulatory-knowledge-research-subject-advocacy. Accessed May 14, 2012.
5. Society of Research Subject Advocates. Available at: http://www.srsa.us. Accessed May 14, 2012.
6. Bramstedt KA. Research subject advocates: to whom are they loyal? Clin Invest Med. 2003;26:64–69.
7. Carroll PR. The impact of patient advocacy: the University of California-San Francisco experience. J Urol. 2004;172:S58–61; discussion S61–62. doi: 10.1097/01.ju.0000142247.16452.f1.
8. Easa D, Kim K, Kato K, et al. The research subject advocate at the University of Hawai'i Clinical Research Center: an added resource for protection of human subjects. Hawaii Med J. 2006;65:50–52.
9. Martinez RA. Role of research subject advocates in the development of data safety and monitoring plans. J Investig Med. 2004;52:464–469. doi: 10.1136/jim-52-07-39.
10. Stroup S, Appelbaum P. The subject advocate: protecting the interests of participants with fluctuating decisionmaking capacity. IRB. 2003;25:9–11.
11. Silber TJ. Protection of children in research: beyond pediatric risk levels: the emergence of the research subject advocate. J Clin Ethics. 2010;21:221–223.
12. Kost RG, Research Subject Advocacy Taskforce, Regulatory Knowledge Committee. Personal communication with Meryl Sufian, Evaluation Committee. February 13, 2012.
13. Bierer B, Winkler S. Alternative IRB approvals for multisite trials, Harvard reliance agreement. Paper presented at: The 4th Annual CTSA Clinical Research Management Workshop; 2011; Bethesda, MD.
14. Winkler S. Making sausages from silos. Paper presented at: CTSA Regulatory Knowledge Key Function Committee Face to Face Meeting; 2011; Bethesda, MD.
15. Kost RG, Lee LM, Yessis J, Coller BS, Henderson DK. Assessing research participants' perceptions of their clinical research experiences. Clin Transl Sci. 2011;4:403–413. doi: 10.1111/j.1752-8062.2011.00349.x.
16. Institute for Healthcare Improvement. Failure Modes and Effects Analysis Tool. Available at: http://app.ihi.org/Workspace/tools/fmea/. Accessed May 14, 2012.
17. Cody RJ. Anticipating risk for human subjects participating in clinical research: application of Failure Mode and Effects Analysis. Cancer Invest. 2006;24:209–214. doi: 10.1080/07357900500524678.
