Abstract
Background:
Meaningfully evaluating the quality of institutional review boards (IRBs) and human research protection programs (HRPPs) is a long-recognized challenge. To be accredited by the Association for the Accreditation of Human Research Protection Programs (AAHRPP), organizations must demonstrate that they measure and improve HRPP “quality, effectiveness, and efficiency” (QEE). We sought to learn how AAHRPP-accredited organizations interpret and satisfy this standard, in order to assess strengths, weaknesses, and gaps in current approaches and to inform recommendations for improvement.
Methods:
We conducted 3 small group interviews with a total of 19 participants representing accredited organizations at the 2019 AAHRPP annual meeting. Participants were eligible if they were familiar with their organization’s approach to satisfying the relevant QEE standard.
Results:
Participants reported lacking clear definitions for HRPP quality or effectiveness but described various approaches to assessing QEE, typically focused on turnaround time, compliance, and researcher satisfaction. Evaluation of IRB members was described as relatively superficial, and information regarding research subject experience was not reported as central to QEE assessment, although participants described several efforts to improve consideration of patient, subject, and community perspectives in IRB review. Participants also described efforts to educate and build relationships with key stakeholders as important features of a high-quality HRPP. While generally satisfied with their approaches, participants expressed concern about resource and time constraints that pushed them to be reactive and automatic about QEE, rather than proactive and critical.
Conclusions:
The relevant AAHRPP accreditation standard may obscure critical gaps in defining and measuring QEE elements. We recommend that AAHRPP: (1) offer a definition of QEE or require accredited organizations to provide their own, to help clarify the rationale and goals behind assessment and improvement efforts, and (2) require accredited organizations to establish QEE objectives and measures focused on participant outcomes and deliberative quality during protocol review.
The challenge of evaluating whether, and to what extent, institutional review boards (IRBs) and the human research protection programs (HRPPs) of which they are often a part succeed in protecting the rights and welfare of research participants and in promoting scientifically valid, valuable, and ethical research has long been recognized but remains unresolved (Lynch, Eriksen, and Clapp 2022). Largely because the regulatory and ethical standards that IRBs and HRPPs are tasked with applying leave room for substantial discretion, among other reasons, it is difficult to identify specific and sensitive measures of the quality and effectiveness of these research oversight bodies.
As a result of this challenge, regulatory compliance is often used as a default indicator of IRB and HRPP quality, with the benefit of being fairly straightforward to measure (Tsan and Nguyen 2018; Tsan 2019a; 2019b; Tsan and Van Hook 2022). Some acknowledge that regulatory compliance is not sufficient but suggest that appropriate and feasible quality measures may be limited to those focused on IRB and HRPP structures and processes (Scherzinger and Bobbert 2017). Others insist that the only way to truly measure quality is to directly assess IRBs and HRPPs based on the outcomes they achieve, especially those related to participant protection (Lynch et al. 2019; Coleman and Bouësseau 2008). This outcomes-based approach is the most challenging, both because the question of what counts as adequate participant protection is precisely what IRBs are supposed to decide and because many factors and parties beyond the IRB and HRPP can influence these outcomes.
At present, most of the tools that exist to evaluate IRB and HRPP quality emphasize structure and process measures (Lynch et al. 2020). Among the most comprehensive of these is the set of standards used by the Association for the Accreditation of Human Research Protection Programs (AAHRPP). AAHRPP is a U.S.-based, independent, nonprofit organization that works to “promote[] high-quality research through an accreditation process that helps organizations worldwide strengthen their [HRPPs]” (AAHRPP 2022c). AAHRPP accreditation is entirely voluntary, but self-described as a “gold seal” (AAHRPP 2022c), allowing an organization to “demonstrate the overall excellence of its research program by providing the most comprehensive protections for research participants” (AAHRPP 2022b). As of May 2022, the AAHRPP website lists 261 organizations as having full or qualified accreditation, 217 of which are in the U.S.
To secure AAHRPP accreditation, organizations must conduct a self-assessment based on the AAHRPP Evaluation Instrument for Accreditation (AAHRPP 2019), followed by an on-site visit in which program performance is evaluated with respect to the AAHRPP Accreditation Standards (AAHRPP 2022a). These standards are organized according to three domains: I. Organization; II. IRB; and III. Researcher and Research Staff. Following the site visit and organizational response, AAHRPP’s Council on Accreditation determines the organization’s accreditation status.
Under AAHRPP Accreditation Standard I-5, organizations are asked to demonstrate that they “measure, and improve, when necessary, the quality, effectiveness, and efficiency of the [HRPP].” This requirement is distinct from another component of the same standard that calls for a demonstration of compliance with relevant policies, laws, and guidance. More specifically, under Element I.5.B, organizations must demonstrate that they conduct audits or surveys, or use other methods, to assess HRPP quality, efficiency, and effectiveness (hereafter, QEE), and that they identify strengths, weaknesses, and necessary improvements. Relevant commentary goes on to state that these efforts should be undertaken by an organization’s quality improvement program to monitor QEE on an ongoing basis and that results should be used to design and implement improvements. Organizations are expected to have a written quality improvement plan that states goals with respect to achieving “targeted levels” of QEE and that defines at least one objective and one measure of QEE, in addition to defining methods for assessment and improvement (see Appendix).
Absent from this set of accreditation requirements is any specific definition of QEE, either as a whole or for any individual term. Moreover, accredited institutions must identify only a single objective and single measure of quality, efficiency, or effectiveness, rather than addressing all three components. Although the intent is likely to provide organizations with flexibility, the overall lack of specificity may undercut the goal of meaningfully demonstrating quality, efficiency, and effectiveness.
We therefore sought to learn from AAHRPP-accredited organizations how they interpret the I-5 standard and I.5.B element, what they currently do to satisfy these provisions, and what barriers they face in doing so. Our goals were to: (1) better understand the practical meaning of IRB and HRPP quality and effectiveness from the perspective of those “in the trenches” working to lead accredited HRPPs; (2) identify any helpful methods and measures that potentially could be implemented more broadly if shared; (3) assess the strengths and weaknesses of approaches taken by accredited organizations; (4) identify relevant gaps; and (5) offer recommendations to improve this component of HRPP accreditation. This work was conducted as part of the broader research agenda of the Consortium to Advance Effective Research Ethics Oversight (www.AEREO.org), a collaborative initiative of IRB professionals, research ethicists, and others aiming to empirically evaluate and improve the quality and effectiveness of IRBs and HRPPs.
Methods
We conducted 3 small group interviews, each with 6–7 representatives of accredited organizations, on-site at the May 2019 AAHRPP Annual Meeting in New Orleans, LA. Group interviews were chosen rather than individual interviews due to limited in-person time and to allow participants to interact in a way that could produce richer data. Interviews were held on 3 successive days and did not conflict with any other conference events. Individuals were eligible to participate if they were attending the meeting, were affiliated with an AAHRPP-accredited organization, and had familiarity with their organization’s approach to satisfying the I-5 standard and I.5.B element.
AAHRPP initially shared our study invitation by email with individuals from accredited organizations registered to attend the Annual Meeting. Two reminder invitations were sent directly by the research team. The invitation indicated AAHRPP’s support for the project but acknowledged that the research team was independent, that participation was voluntary, and that AAHRPP would not be informed of any individual’s or organization’s participation. Participants were offered a $20 gift card as a token of appreciation.
The interview guide addressed what organizations do in relation to QEE, as well as what they show AAHRPP. More specifically, participants were asked to discuss how their organizations define QEE terms for purposes of accreditation; the objectives and measures included in their quality improvement plans; how organizations use collected information about QEE to make improvements; the activities and approaches reported to AAHRPP to satisfy I-5 accreditation requirements; and overall confidence and satisfaction with current approaches.
Interviews lasted 45–60 minutes and were recorded with permission. One author (HFL) conducted the interviews in person while the other author (HAT) joined by video and took notes. The project was deemed exempt by the IRBs at the University of Pennsylvania and Johns Hopkins University.
Immediately upon completion of the interviews, both members of the research team recorded initial impressions of the discussions. Interviews were transcribed and all identifying information was removed. Each team member open-coded the transcripts before the team generated a preliminary list of codes and created a codebook. Transcripts were uploaded to Dedoose, a cloud-based application for qualitative data management, coding, and analysis (Dedoose (version 1), n.d.). Both authors individually coded each transcript and then met to compare coding and resolve discrepancies. One author then generated narrative reports by key codes. We used a straightforward content analysis of each data report to identify themes across interviews, yielding the descriptive set of core findings presented below (Sandelowski 2010).
Results
In total, 19 individuals participated in the small group interviews. The majority were HRPP, IRB, or compliance office directors or assistant directors. All but two individuals were from organizations in the U.S. The participating accredited organizations included public and private universities, specialty medical centers, and private health systems.
In what follows, we describe how these HRPPs define key terms relevant to Standard I-5; the most common ways they measure quality; whether and how HRPPs incorporate quality-related findings into their practice; their educational efforts to promote quality; HRPP satisfaction with their current quality measures; and finally, organizational approaches to quality improvement (Table 1). In each section, we highlight the most common responses, as well as important outliers.
Table 1. Summary of core findings.

Definitions of key terms
• Absence of formal definitions of quality, effectiveness, and efficiency
• Informal definitions of quality as compliance, effectiveness as researcher satisfaction, and efficiency as turnaround time
• Mixed views about the value of more standardized definitions

Reported assessment efforts
• Turnaround time (with acknowledgment of different ways to measure and concern not to make speed a goal unto itself)
• Compliance, audits
• Feedback from/about IRB members
• Researcher satisfaction (through passive and active channels)
• Infrequently assessed: research subject perspectives

Learning from data and feedback
• Data collection regarding HRPP performance is purpose-driven, with the goal of identifying areas in need of improvement and implementing changes
• Improvements most often relate to efficiency, researcher satisfaction, and compliance (including changing requirements to avoid “administrative” or non-substantive noncompliance)

Education to promote quality
• Educating investigators about HRPP standards and requirements
• Educating communities and participants about research
• Educating IRB members about role and rules

Satisfaction with quality assessments
• Generally satisfied with assessments and overall performance, although always room to improve
• Challenged by inadequate resources, forcing reactive rather than proactive approaches to quality assessment and improvement

Organizational structure for quality assessment
• Quality assessment may happen within the HRPP or via a separate office
• Challenges of each model relate to expertise v. independence
Key terms and definitions
When asked how their organizations define the key terms relevant to Standard I-5, participants overwhelmingly indicated the lack of any formal definitions of quality, effectiveness, or efficiency. Nonetheless, they often indicated having a general sense of what these terms mean. This general sense was not always consistent across participants, however, and some terms were acknowledged as harder to define than others.
Efficiency was usually the clearest term for participants, who emphasized significant attention to turnaround time on submitted protocols, including subcomponents like how long it takes for protocols to be placed on an IRB meeting agenda, how much back and forth is needed with investigators, and how quickly decisions are communicated, as described in more detail below. Participants also discussed frequent assessment of organizational processes to improve speed.
When referring to effectiveness, participants most frequently described relationship building with investigators and relying on researcher perceptions of the HRPP or IRB. Effectiveness seemed to receive less attention than either efficiency or quality. According to one participant:
I think it’s hard to find measurables for effectiveness… . The only way that I can say we could define this now is by the amount of complaints that our vice president for research gets about the process… . from the research community … . Group 3, Speaker 5
Quality was commonly defined as synonymous with compliance, both with regulatory requirements and institutional policies and procedures. Some participants also referenced efficiency in their definitions of quality or acknowledged that their organizations focused more on operational quality than on the quality of IRB review itself. In the words of two participants:
Quality, we would define that as being … the standards that when our [Quality Assurance] team comes in to review us as an IRB - not as an HRPP necessarily, but as the IRB - that we are conducting our reviews in accordance with the regulations, that our minutes meet the requirements of the regulations, and that we would be ‘audit ready.’ Group 3, Speaker 6
I think the term quality is really narrowly defined. I mean, I’ve been doing site visits for 14, 15 years and I’ve yet to see a site, including my own, that measured quality in terms of the quality of the actual IRB review. It’s much more of an operational quality than it is a scientific quality. Group 1, Speaker 1
Despite acknowledged variability between institutions, and overall ambiguity, participants had mixed views about whether it would be helpful to have more uniform or specific definitions of QEE, from AAHRPP or otherwise. Some indicated that they struggled to come up with definitions on their own and would appreciate assistance, especially distinguishing between quality, effectiveness, and efficiency, rather than “bucketing” them all together, and identifying appropriate examples and benchmarks. In particular, some participants noted that uniform definitions could help them demonstrate their performance compared to others, which might help support their requests for resources, as well as promote “fairness” and clarity in measuring things like turnaround time between sites. Others acknowledged the importance of flexibility and the desire to be able to respond to institution-specific circumstances or changing priorities, including potential differences between sites that focus on biomedical versus social/behavioral research. Conveying different sides of the argument, two participants shared:
I think it would benefit AAHRPP to have a definition of HRPP quality and quality improvement. Because I’ve seen a lot of institutions sort of say we did that by auditing studies and that’s not the mission of that standard. Group 1, Speaker 1
I think it would be a mixed blessing. It will be nice to have the definitions, but then we have to adhere to them, and maybe it’s not as easy. I like the concept of saying at least give us a framework. Group 2, Speaker 2
Assessment efforts
Participants described a variety of approaches, measures, and activities used for assessing QEE, many of which they indicated were included in the quality improvement plans required for AAHRPP accreditation. The most common of these were turnaround time, compliance, feedback from and about IRB members, and researcher satisfaction, with some additional discussion of research subject perspectives and engagement.
Turnaround time was frequently mentioned by participants as among their quality metrics and as something that gets significant attention within their HRPP and institution, although participants did not clearly differentiate between IRB turnaround time specifically (i.e., focusing on board decisions) and HRPP turnaround time more generally (i.e., focusing on required procedures before, after, apart from, or in addition to IRB review). This was recognized as a more quantitative, “hard” metric than other things they sought to measure, but even so, participants indicated that turnaround time was not always straightforward. For example, reasonable turnaround times were acknowledged to depend on workload. In addition, delays may occur at various stages between submission, review, and communication of the ultimate determination, and as noted above, different sites may measure turnaround time differently in terms of starting and stopping the clock. To improve turnaround times, participants described seeking to learn from high-performing teams, while making sure boards have the resources needed to move quickly. A few participants cautioned not to make speed a goal unto itself:
[I]f I look only at turnaround times and then I make a plan based on turnaround time, then I could actually totally [be] missing that the times that I was actually slow, I was the most protective. Group 2, Speaker 3
And no matter how quick your turnaround time is, if you are out of compliance it’s still not particularly efficient or effective. Group 1, Speaker 1
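To make the clock-starting and clock-stopping point concrete, the following minimal sketch (in Python, with hypothetical event names and dates not drawn from any participant’s system) shows how a site that excludes time a protocol spends back with the research team can report a much shorter turnaround than one counting elapsed calendar days, illustrating why participants worried about the fairness of cross-site comparisons:

```python
from datetime import datetime

# Hypothetical status events for a single protocol (not study data).
events = [
    ("2019-01-02", "submitted"),           # clock starts: protocol with IRB
    ("2019-01-10", "returned_to_team"),    # clock stops: awaiting investigator revisions
    ("2019-01-20", "resubmitted"),         # clock restarts
    ("2019-01-28", "determination_sent"),  # clock stops: decision communicated
]

CLOCK_STARTS = {"submitted", "resubmitted"}
CLOCK_STOPS = {"returned_to_team", "determination_sent"}

def irb_held_days(events):
    """Sum only the days during which the protocol sat with the IRB/HRPP."""
    total, started = 0, None
    for stamp, kind in events:
        day = datetime.strptime(stamp, "%Y-%m-%d")
        if kind in CLOCK_STARTS:
            started = day
        elif kind in CLOCK_STOPS and started is not None:
            total += (day - started).days
            started = None
    return total

calendar_days = (datetime.strptime(events[-1][0], "%Y-%m-%d")
                 - datetime.strptime(events[0][0], "%Y-%m-%d")).days
print(f"Calendar days: {calendar_days}; IRB-held days: {irb_held_days(events)}")
# Calendar days: 26; IRB-held days: 16
```

Under this illustration, the same protocol history yields a 26-day turnaround under one convention and a 16-day turnaround under another, so uniform definitions of when the clock starts and stops would be a precondition for the kind of cross-site benchmarking participants described.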
Compliance with both regulatory and institutional obligations and processes was also a clear assessment priority reported by participants, recognized as being both important to quality and relatively straightforward to evaluate. Participants predominantly described audits as a compliance mechanism, which might be routine or for-cause. Only one individual explicitly described using audits to assess internal consistency in applying HRPP policies and procedures, although others mentioned consistency as a broader quality goal; some noted conducting re-reviews of selected protocols or having HRPP staff and IRB members review the same test protocol as a prospective learning exercise. Audits of HRPP and IRB review and approval activities were typically distinguished from post-approval monitoring of active studies, which in some cases was carried out by a separate team or office.
As another mode of quality assessment, some participants reported evaluating and/or collecting feedback from IRB members, usually annually. Evaluations might include board member self-assessment or review by the board chair or an HRPP staff member, examining things like how well members are fulfilling their role, their turnaround time, completion of review checklists, or even whether they have opened electronic review materials. However, one acknowledged challenge is the difficulty of evaluating individuals who are playing a volunteer role on top of other obligations, sometimes leading board members to be treated as “customers.” As one participant noted:
[O]nce our IRB member complained to us, “We are working so hard, why are you always mentioning we still have a lot of problems to improve?” Group 1, Speaker 2
When IRB member surveys were discussed, participants described them as addressing factors such as acceptability of the workload or suggestions for overall improvement, although these surveys were not uniformly valued. As described by one participant:
We haven’t found [the IRB member survey] to be very useful… . It comes down to kind of like, are they attending the meetings and are they filling out checklists? It really seems like a pro forma kind of thing that we’re doing without much benefit. Group 1, Speaker 3
Several participants flagged researcher or “user” satisfaction as an important mechanism for gauging how well the HRPP is doing, of particular interest to institutional leadership. However, they differed in whether this information was gathered passively or actively. For example, some participants described receiving and acting on investigator complaints, while others described formally surveying investigators and research teams. Among those using surveys, the approaches also differed, with some organizations relying on protocol-specific HRPP performance surveys circulated with approval memos or as part of the broader study activation process and others relying on annual surveys about the overall review experience, turnaround times, submission software, and the like. Those using surveys indicated that they were interested in feedback not only from principal investigators but also from broader study teams. As is common for surveys, however, participants sometimes described challenges associated with survey fatigue and non-response. One participant indicated that their institution sometimes conducts focus groups with the research community to solicit feedback about specific HRPP topics or policies.
Finally, some participants indicated that their institutions took various steps to obtain and consider feedback from research subjects as a mechanism of quality assessment, although this was less common than efforts to consider investigator perspectives and satisfaction. As one participant explained, the accreditation process has focused primarily on research review, not subject experience:
From our experience in the accreditation that we’ve had in the past it’s always been focused on … quality about the process with researchers… . It’s always been about the research project itself… . [I]t hasn’t been about going out and actually observing a consent process. Group 1, Speaker 5
Other participants similarly acknowledged that understanding research subject experience has not been central to their assessments of quality, although perhaps should be:
I’d say the one thing that we don’t measure is, are subjects better off as a result of our IRB review. Group 1, Speaker 1
Traditionally I guess the question is then, is HRPP effectiveness and quality really in terms of … I mean, can we measure it at a patient level to see whether the patients are satisfied with the process? I don’t necessarily have an answer to that, except for the fact that maybe that’s where it has to go. Versus just thinking about, “Are your investigators happy or not?” and is that effectiveness or not? Group 3, Speaker 4
Participants described passively receiving subject questions and complaints (often related to delayed compensation), as well as the occasional use of subject surveys, both of which could help them identify and address concerns. Although at least one organization had an extensive system of surveying a sample of subjects through its clinical research management system several times a year, others emphasized numerous barriers to subject surveys. For example, it can be difficult for the HRPP to capture all research subjects given the lack of a direct interface with them, the results may be biased depending on which subjects are motivated to complete surveys, subjects may be overburdened with research tasks or need assistance in completing surveys, timing surveys may be challenging, and the HRPP needs to have sufficient resources to manage the data and respond to any issues that arise. Nonetheless, participants acknowledged that it might improve research conduct if investigators knew that their subjects might be randomly surveyed. One participant described their institution’s routine assessment of research subject satisfaction:
[W]e do send out surveys, and it’s typically two to three times a year….[The survey] asks a number of questions including things like, how satisfied were you with your research experience? Then also asks some questions about, did you feel like you knew who to go to, to have your questions answered? Some items that directly ask about satisfaction, but also other things like, would you recommend participation in another study? Would you participate again? [Other items ask] whether they understood the objectives of the study. Those questions are a little harder because they might not be temporally associated with their participation. Group 2, Speaker 3
More common than relying on subject perspectives and experiences in research as a mechanism of evaluating the quality of the IRB or HRPP, however, were efforts to increase consideration of these perspectives to improve IRB and HRPP decisions and activities around challenging issues. For example, participants described collaborations with community engagement cores, patient and family advisory committees, committees of former subjects, and community IRB members who are “closest to being actual subjects” to provide input on potential improvements based on their experiences and what they have heard in the community, to improve the consent process, and to better understand perspectives on risks and benefits. As described by one participant:
[W]e have started to kind of see the outcome of the IRB realm from a patient perspective and so we’re trying to get their voice very actively involved… . [T]here’s this kind of gap that we find, at least in our institution, between what we think is right and what patients either want or patients think is right. Group 1, Speaker 5
Another participant noted that these engagement efforts may have a “dual focus” to both improve subject experience and facilitate recruitment. Yet another expressed challenges in evaluating the impact of subject engagement:
We have started to partner with [patients] for consent review. They all had been patients, many of them had been part of research. While we are gaining in the processes realm, we haven’t figured out what’s success. Is it greater compliance, is it greater satisfaction, is it fewer people who are lost to follow-up? Group 2, Speaker 5
On the topic of informed consent, a small number of interview participants also indicated that they, or researchers at their institutions, had carried out studies regarding improvements to informed consent.
Learning from data and feedback
When discussing the various approaches used to assess HRPP and IRB activities and performance relevant to QEE, as well as the types of feedback available, several participants emphasized the importance of actively learning from these data to make improvements if and where needed. For example, one participant noted that their HRPP had created a “what went wrong” committee tasked with reviewing all noncompliance, protocol deviations, and other violations to assess how they happened and to identify responsive actions, such as eliminating expiration dates from consent forms where they were causing technical violations without serving any substantive purpose, or clarifying to investigators who had enrolled participants outside of approved accrual windows that the IRB prefers realistic timeframes to artificially narrow ones. Another participant also reported using feedback to determine when the HRPP was inadvertently making things more difficult:
[O]nce we see that we have repeated noncompliance, we go back and assess, “Is that non-compliance really non-compliance? Or is that administrative burden non-compliance?” Group 3, Speaker 6
Overall, participants indicated that their HRPP’s current efforts at assessment are purpose-driven:
We identify what we want to QI [quality improve] because we want to take a look to see if a change is needed. We do not want this to be an exercise or report that goes on the shelf. Group 1, Speaker 1
When you investigate it, what are you finding? How can we improve upon it? … [T]hey’re looking at different areas of vulnerability within the office structure and what they’re supposed to do and what their expectations are, and identifying areas for the following year to dig deeper and find out where they’re maybe systemic problems that we’re not aware of until something rises to the occasion. Group 2, Speaker 2
Other participants similarly described using data to identify priority areas for attention and change, for example noting efforts to improve efficiency and workflow, to respond to IRB member suggestions, to inform IRB members and HRPP staff about investigator and subject feedback, or to identify and resolve areas of investigator confusion, especially based on frequently asked questions. Some participants explained that when they found things working well, such as a particularly efficient review group, they would make efforts to identify the specific processes leading to that success so that it could be replicated. Occasionally, however, participants noted difficulty integrating different types of data to determine how to proceed or getting feedback that they felt was outside their control:
[W]e do an annual [investigator] survey and we take all that information in and 90% of it is complaining about the electronic system, the other 10% is something we can actually do something about. Group 3, Speaker 5
Some participants also indicated that evaluation data might be shared with those outside the HRPP, such as a vice provost for research or investigators, sometimes through an annual report. Overall, feedback seemed to be most often used for improvements related to efficiency, compliance, and investigator satisfaction, rather than factors directly connected to subject experience and protection.
Education and capacity-building to promote quality
When discussing quality and effectiveness, some participants went beyond measurements and assessments of how the HRPP or IRB is performing to describe efforts that extend past the narrow review of research protocols, including education of and relationship building with investigators and research subjects. They viewed these activities as critical to promoting a high-quality HRPP.
With regard to investigators, participants indicated a variety of training and education approaches led by the HRPP, including office hours for investigators and teams to ask questions that may lead to the development of broader informational programs, regular group meetings with research coordinators for bidirectional sharing of information, and formal training requirements. One participant described an approach through which researchers would receive a “teach visit” from an audit team prior to study initiation to help explain compliance obligations. Participants viewed this approach of addressing potential problems before they arise as particularly useful, sometimes lamenting not having the resources to support a dedicated education lead within the HRPP. Others acknowledged that providing investigators with context for HRPP requirements and making concerted efforts to build strong relationships between the HRPP and investigators can help avoid problems. A few participants also described HRPP efforts to educate the community about research participation or to assist participants during the consent process. Finally, some participants mentioned efforts to ensure the expertise and training of their IRB members as an important component of achieving quality.
Satisfaction with current approaches
Overall, most participants reported being generally satisfied with their organization’s approaches to QEE assessment and improvement, notwithstanding the definitional and other challenges described above. However, a common theme was that there is always room to do more or do better, which demands resources that are not always available. In the words of one participant:
[W]e do love the idea that “always better, not perfect.” … So, I do believe that we are in the evolving process and I’m sure there will be always different problems but certain kind of flexibility is great. You just identify the most important and the priority problems, then [get] working on it. Group 1, Speaker 2
Several participants indicated that they are often forced to take a reactive approach to quality rather than a proactive one, or to triage attention to regulatory issues rather than thinking critically about broader improvements or deeper implications of their research oversight work. This is often because they are facing such a high volume of protocols, as well as a variety of internal and external requirements, including recently changed regulations. Against this backdrop, participants indicated that it is important, but difficult, to make sure HRPP staff have adequate support to learn, advance, and think critically. As two participants explained:
It would be a little bit less reactive in my evaluations and more proactive into thinking about how to develop something of quality upfront… . In my dream world, there would be more time in this changing environment to do a little bit more of the quality planning rather than just the quality evaluation… . That I think would actually lead to potentially a more effective and efficient program. I’d rather not just do it and then evaluate what I made a mistake doing … . Group 2, Speaker 3
[W]ith so many items and such a high volume, people are automatic. Group 2, Speaker 6
Whereas HRPPs in larger academic organizations may be able to draw on certain grant-funded resources, such as a Clinical and Translational Science Award, smaller organizations reported struggling with staffing, which significantly limits the quality initiatives they are able to pursue.
Organizational structure for quality improvement/quality assurance
In terms of organizational structure relevant to assessing, maintaining, and improving quality, participants often described their HRPPs liaising with other institutional entities and individuals, such as a clinical trials office, office of research compliance, or an institutional director of operations. Participants agreed on the importance and value of having designated individuals or teams responsible for quality assessment and improvement. However, there was a split in approaches, with some describing a quality team housed within their HRPP, some describing a separate quality office outside the HRPP, and some describing a more hybrid model with relevant quality efforts occurring both inside and outside the HRPP. Participants perceived benefits and drawbacks to each approach, with an embedded model facilitating expertise and understanding of specific regulatory requirements and other realities, but also potentially leading to conflicts or a lack of independence.
Discussion
We conducted small group interviews with leaders of AAHRPP-accredited organizations to gain insight into how they understand and satisfy accreditation requirements regarding the assessment and improvement of HRPP and IRB quality, effectiveness, and efficiency (QEE), concepts that are related but best understood as distinct, as we have argued elsewhere (Lynch et al. 2019). Whereas most AAHRPP accreditation standards focus on the presence of policies and procedures, Standard I-5 is especially important because it aims to go further, requiring organizations to consider whether and how well those policies and procedures are working to promote HRPP QEE.
In our interviews, we found a clear commitment to overarching QEE assessment and improvement, with numerous metrics and evaluation efforts in place at every site, typically focused on turnaround time, compliance, and researcher perspectives. Interview participants reported general satisfaction with their approaches, while recognizing opportunities for improvement. In particular, interviewees reported high workloads and resource constraints that lead to reactive rather than proactive approaches to assessing and advancing quality, as these efforts must be balanced with the day-to-day work of research review and oversight. They also reported definitional and measurement challenges that result in little attention to research participant outcomes and experience. Overall, our findings offer important insights about the overall utility of Standard I-5’s requirements, as well as how those requirements could be made more meaningful going forward.
We heard from accredited organizations that they had difficulty compensating for the absence of definitions from AAHRPP, and most had not adopted formal institutional definitions of the QEE components. Although interview participants described efficiency as the most straightforward concept, it still raised challenges related to variation between institutions. Interview participants viewed quality and effectiveness as far more ambiguous overall, often leading them to offer more instrumental definitions that referred back to the approaches their institutions currently take to measurement. This aligns with findings from a recent interview study involving IRB and HRPP directors (not focused on accreditation requirements) in which participants tended to define quality in terms of what they are able to measure (Lynch, Eriksen, and Clapp 2022). Yet definitions would ideally be distinct from, and precede, measures, since the reverse may lead to measures that fail to reflect the intended objective or that mask important variation in what is truly being assessed. For example, although AAHRPP lists quality, effectiveness, efficiency, and compliance separately in Standard I-5, interview participants often merged these concepts.
While not ideal, the fact that AAHRPP has not yet offered formal definitions for these important terms is unsurprising, for at least two reasons. First, some participants indicated a desire for flexibility and an associated concern that specific definitions would demand specific measures that may be more difficult to implement or otherwise less valuable than their institution’s current approaches. Second, and relatedly, definitions in this context will inevitably be linked to measures and assessments, since the goal of this accreditation standard is to demonstrate – through observable measures – that accredited institutions are in fact achieving high quality. However, appropriate measures have proven exceedingly difficult to develop in this context (Lynch, Eriksen, and Clapp 2022; Scherzinger and Bobbert 2017; Coleman and Bouësseau 2008; Taylor 2007; Abbott and Grady 2011; Nicholls et al. 2015). AAHRPP may therefore be understandably reluctant to offer concrete definitions before it can offer concrete measures to accompany them, instead preferring to leave this to individual institutions. Yet this approach leaves at least some organizations feeling adrift. Although AAHRPP should be praised for taking the important first step of encouraging accredited institutions to pay attention to QEE elements, however broadly or narrowly they conceive of them, we think it is possible to go further, even in the absence of perfect measures. Otherwise, there is a risk that this accreditation standard will offer a veneer of quality while missing important substance, the opposite of AAHRPP’s presumed goal.
To demonstrate this concern, consider what our interview participants indicated they do to assess QEE within their HRPPs. We observed substantial attention to turnaround time and compliance audits, both of which are important but secondary to core IRB and HRPP functions related to meaningful deliberation and participant protection. By this we mean that rapid approvals and perfect regulatory compliance matter if and only if research in fact meets core ethical standards, as some interviewees themselves acknowledged. But regulatory compliance is not necessarily a guarantee of ethical research; if the regulations were that straightforward, we might not need IRBs at all. Instead, the regulations are fairly general because it is not possible to anticipate every issue that could arise in every protocol. They therefore “delegate to IRBs the authority to make critical judgments” (Lynch and Rosenfeld 2020) and intentionally leave IRBs with a great deal of discretion – for example, to determine whether there is an acceptable balance between research risks and benefits, whether informed consent disclosures are adequate, and whether informed consent may be acceptably waived. These are things about which reasonable people (and boards) may disagree, which is precisely why the board’s deliberative function is so crucial. IRBs exist not simply to follow a regulatory checklist but rather to engage in ethical analysis and judgment together as a board, with consideration of various perspectives informed by different expertise and experience. The regulations are intended to guide that ethical analysis, but cannot substitute for it, nor does regulatory compliance provide insight into how things actually go for research participants (Lynch et al. 2019; Coleman and Bouësseau 2008; Lynch and Rosenfeld 2020). Importantly, there are also several ethical considerations that are either ignored or insufficiently addressed by existing regulations, such as approaches to participant payment, plans for and responses to participant injury, meeting participant needs for ancillary care and post-trial access, return of research results, inclusion of vulnerable populations, and research risks to third parties, among others. Measuring quality based on regulatory compliance, then, risks missing the forest for the trees.
With this in mind, the shortcomings in reported approaches to assessing QEE become clear. Reviews of IRB members, where used by interview participants’ organizations, appear to be both challenging to conduct and relatively superficial, focused on members’ attendance, participation, and satisfaction rather than digging more deeply into the quality of their deliberation and engagement around core ethical issues raised by research. Several interview participants described reliance on researcher satisfaction as a measure of quality, although actively surveying these stakeholders often raised practical hurdles. In contrast, research participant experiences were rarely mentioned in definitions of effectiveness and were given substantially less attention than those of researchers, at least in terms of relevance to QEE, despite research participants being primary stakeholders in everything HRPPs do – and the fact that research participants might have different views than investigators regarding what they might want out of IRB oversight (Lynch and Rosenfeld 2020). (We note, however, that our interviews do suggest a promising trend toward including patient and community perspectives in research review.) Overall, our findings align with the recent interview study of IRB and HRPP directors noted above in which directors defined quality through a focus on efficiency, compliance, board and staff qualifications, research facilitation and investigator satisfaction, and AAHRPP accreditation itself (Lynch, Eriksen, and Clapp 2022), all of which are relatively measurable. In contrast, the directors in that study – like our small group interviewees – did not highlight careful deliberation and participant protection in their quality definitions (Lynch, Eriksen, and Clapp 2022), areas that are simultaneously most difficult to measure and most central to why we have IRBs and HRPPs in the first place.
Based on our findings, there appears to be an important disconnect between the breadth of an accreditation standard that emphasizes measurement and improvement of HRPP QEE as part of an overarching credential intended as “a public affirmation of [a] commitment to protecting research participants” and real-world satisfaction of that standard in ways that narrowly focus on operational quality. Our interview participants reported dutiful efforts to engage in self-assessment, develop quality improvement plans, and learn from the collected data, all of which are expected activities to meet the I-5 accreditation standard. What gets obscured, however, are the broader debates and challenges about what exactly counts as QEE in this context and how it can be meaningfully measured (Lynch et al. 2019; Scherzinger and Bobbert 2017; Coleman and Bouësseau 2008; Abbott and Grady 2011; Nicholls et al. 2015). When the accrediting body requires quality measurement and improvement without offering specific parameters or expectations, the reasonable perception may be that everyone just knows how to do this. The reality is that there are critical gaps in existing approaches and in our understanding of how to best engage in these QEE assessments. AAHRPP is not alone in facing these gaps, which are especially important to recognize against the backdrop of an investigation currently underway by the U.S. Government Accountability Office (GAO) to determine whether “existing standards of quality, efficiency, and effectiveness provide adequate protection for participants in IRB-approved clinical trials” and how to “address any shortcomings in the current system to improve quality and patient outcomes” (Office of Senator Warren 2020).
In drawing attention to these concerns, our goal is not to criticize either AAHRPP or accredited organizations, nor is it to suggest that accredited organizations are not in fact achieving high quality. It is instead to highlight the inherent difficulty of doing what Standard I-5 aims to do. We acknowledge that AAHRPP accreditation is itself a resource-intensive undertaking and that there are numerous other standards and elements in the accreditation requirements that aim to promote quality in human research protection through policies and procedures regarding, for example, independence, ethical standards, compliance, resources, expertise and representation, responsiveness to participant concerns, IRB review, risk-benefit assessment, consent, and researcher obligations, among other relevant topics. In fact, previous research determined that of 10 available instruments for evaluating research ethics committees and HRPPs, the AAHRPP accreditation standards are “by far the most comprehensive” (Lynch et al. 2020). AAHRPP’s standards do not, however, address participant outcomes, such as those related to consent comprehension, systematic assessment of adverse events in relation to the adequacy of IRB review, or other participant experiences in research. And although they address the people, approaches, and topics that should be part of IRB review, the standards also do not provide substantial guidance around determining the quality of board deliberation or the reasonableness of board decisions. This is ideally what the QEE standard has the potential to achieve.
Thus, without purporting to have all the answers, we have two recommendations for modest improvement. First, Standard I-5 should either define what AAHRPP means by quality, effectiveness, and efficiency, or require accredited organizations to provide their own explicit definitions of these terms in order to specify the rationale and goals behind what they are ultimately measuring and improving. AAHRPP definitions need not be unduly restrictive and could be offered in the spirit of examples but would help clarify the intent of this standard. As noted by our interview participants, efficiency could be relatively straightforwardly defined in terms of turnaround times, with specific attention to the relevant periods of interest during which a protocol sits with the IRB rather than with the research team (or other ancillary offices whose approval is required for study initiation), perhaps with further attention to segmenting protocols according to type and complexity of review. Quality and effectiveness are more challenging, with quality best defined in a cumulative fashion combining procedural and substantive elements and effectiveness ideally defined in terms of outcomes of interest (Lynch et al. 2019). It may work best for AAHRPP and accredited organizations to focus on definitions that highlight procedural elements, such as quality of submissions, completion of review checklists, satisfaction of training requirements, and overall burden, separately from definitions that focus on more substantive elements, such as those more directly related to participant protection; this could help draw attention to circumstances in which substantive elements are missing altogether or only weakly addressed in assessment efforts. Alternatively, or in addition, attention could be bifurcated to focus on the quality of activities that occur before protocol approval and the outcomes that occur thereafter. Again, because HRPPs exist not merely to review but ultimately to protect participants and promote ethical research, definitions and assessments that focus exclusively on the “front-end” of what HRPPs do are insufficient. Yet that is largely where current Standard I-5 attention appears to be focused.
This relates to our second recommendation. Rather than allowing accredited organizations complete flexibility to identify relevant QEE objectives and measures for purposes of satisfying Standard I-5, which risks too much attention to “low-hanging fruit,” AAHRPP should require accredited organizations to set out QEE objectives and measures in specific subdomains, including required domains centered on (1) deliberative quality during protocol review and (2) participant protection and outcomes. To go further, AAHRPP could create a novel standard that specifically emphasizes these domains, apart from efficiency and compliance. Organizations might reasonably take different approaches to assessing deliberative quality and participant protection, and we should be transparent about the fact that we lack perfect metrics. Yet more explicit and directive AAHRPP requirements could encourage attention, resources, and creativity to be devoted to these challenging areas.
To facilitate this shift toward QEE assessment in the domains of deliberative quality and participant experience, AAHRPP could offer several examples. These might include measures that aim to assess the presence or absence of meaningful and participatory discussion and debate around core ethical issues during IRB meetings, to examine how systematically risks and benefits are evaluated, to determine whether and how prior board decisions are considered when reviewing new protocols (i.e., development and use of precedent) (Seykora et al. 2021), and to evaluate efforts to proactively engage with research participants both about their experiences and how they view relevant risks and benefits. In addition to offering examples of acceptable approaches, AAHRPP could also create opportunities for accredited organizations to learn from one another. During our small group interviews, participants valued the opportunity to compare notes around Standard I-5. This could be built into AAHRPP conferences and AAHRPP could also assemble an accessible toolkit of ideas or resource bank based on promising approaches it sees during the accreditation process.
We note that I-5 measures need not always be quantitative but could include more qualitative assessments that encourage organizations to broadly consider “how things are going” within relevant domains (Serpico 2021). Even without a certain benchmark for what success looks like, examination is better than ignoring important elements of quality simply because we are not sure precisely how to score them. Because it is often those things that are subjected to evaluation that receive the most resources and attention (i.e., “what gets measured gets done”), encouraging accredited organizations to evaluate HRPPs in ways that extend beyond their policies, procedures, compliance, speed, and investigator satisfaction to include some participant-facing and deliberation-specific assessments of quality would be an important step forward.
Importantly, HRPPs need not wait for AAHRPP to amend its requirements before taking steps in these directions, although AAHRPP can push them to do so and offer helpful guidance. In the meantime, previous work from the AEREO Consortium has recommended that HRPPs adopt the following approaches to improving quality and effectiveness (Lynch et al. 2020; Lynch, Eriksen, and Clapp 2022; Seykora et al. 2021; Serpico, Rahimzadeh, Anderson, et al. 2022; Serpico, Rahimzadeh, Gelinas, et al. 2022):
Attend to compliance but minimize “audit culture” that focuses on achieving metrics for their own sake over attention to substantive goals;
Prioritize assessments directly related to participant protection outcomes (e.g., quality of the informed consent process and understanding, robust review of adverse events and whether they could have been avoided, and overall participant experience in research);
Emphasize assessment of processes and standards likely to promote participant protection through identification of risks and implementation of safeguards (e.g., how systematically IRBs evaluate research risks and benefits, data monitoring plans, and plans to provide care and compensation for participant injury);
Make sure reviewers are fully informed about relevant details of proposed and ongoing research and that they have adequate time to review materials and deliberate;
Examine the extent of meaningful discussion and debate about key ethical and regulatory issues during board meetings;
Promote diversity, expertise, and independence among board members to ensure consideration of a variety of relevant perspectives and call in outside experts, as needed;
Encourage reliance on prior decisions to promote consistency, where appropriate, and efficiency, as well as to clarify the meaning of ambiguous ethical and regulatory standards;
Include attention to whether and how the HRPP monitors the conduct of research beyond protocol review (i.e., what happens after IRB approval); and
Adopt assorted quality assessments that incorporate feedback from and about a variety of stakeholders (e.g., board members, HRPP staff, research teams, study participants and others) obtained through a variety of methods (e.g., checklists, site visits, record review, interviews, and surveys).
We recognize, of course, that adopting the full slate of these recommendations will be resource intensive, but even small steps to realign current HRPP and IRB assessments away from compliance and efficiency alone will be a sign of progress. We also recognize that some of these proposals require further specificity in ways that demand empirical support.
In that vein, the AEREO Consortium has several projects in development or underway to flesh out these recommendations across three primary domains. First, we are examining who is around the IRB “table,” including through work to learn more about the selection and role of lay (nonscientific) members serving IRBs; how IRBs incorporate community perspectives into their work; how they engage with outside experts; and what types of diversity they find important among board membership. Second, we are at the beginning stages of examining what it means to deliberate well, recognizing that longer deliberation is not necessarily better deliberation and that not every protocol will demand the same level of attention. Anticipated efforts in this domain will aim to identify analogies to assessing deliberative quality in other areas outside the IRB world and key elements of deliberation that can be said to lead to reasonable decisions. This work will also expand on prior efforts to examine what it would take to facilitate IRB use of “precedent” in their decisions (Seykora et al. 2021), through pilot testing the use of precedent in prospective IRB decision-making. In addition, we are also exploring ways to better understand how IRB member deliberations might differ from deliberations more reflective of participant views. Third, we are working to better understand what stakeholders think. Selected efforts in this domain will delve into more detail regarding whether and how IRBs and HRPPs gather participant experience data, what barriers they face to doing so, and whether they view it as useful to quality assessment, as well as seek to understand from the investigator’s perspective what IRBs do that is particularly helpful to promoting ethical research. Notably, AAHRPP has indicated a willingness to accept participation in relevant AEREO empirical projects toward satisfaction of the I-5 standard (AEREO n.d.).
Limitations
Our small-group interviews included representatives of only 19 of AAHRPP’s 261 currently accredited organizations around the world and were composed largely of individuals from U.S. sites. In addition, our sampling was not random, but instead included only individuals who attended an in-person AAHRPP conference and who responded with interest to an invitation to discuss issues related to QEE and Standard I-5.
Because we interviewed participants in groups rather than individually, it is possible that they faced some social desirability bias in their responses, perhaps withholding information that they felt might reflect poorly on their organization among peers. It is also possible that participants held back details that they worried might jeopardize their accreditation status, despite our assurance that identifiable information would not be shared with AAHRPP or anyone else. We note, however, that participants seemed relatively open with us and with one another, often sharing challenges and perceived institutional shortcomings. It is also possible that participants mentioned approaches or ideas they would not otherwise have raised if not prompted by points from other participants, although such prompting is also one of the benefits of the small-group method.
Because AAHRPP-accredited organizations are already a select group, given the resources and institutional support needed to seek this credential, it is likely that non-accredited organizations would differ in whether and how they evaluate their HRPP’s QEE. Finally, we note that these interviews took place shortly before the COVID-19 pandemic and that new or different approaches may have been adopted in the interim. Future survey work could helpfully complement these interviews by providing broader insight into how both accredited and non-accredited organizations evaluate QEE.
Conclusion
AAHRPP accreditation is itself often touted as an independent indicator of HRPP quality (Lynch, Eriksen, and Clapp 2022), given that accreditation requires “tangible evidence – through policies, procedures, and practices – of [institutional] commitment to scientifically and ethically sound research and to continuous improvement” (AAHRPP 2022c). The I-5 accreditation standard specifically requires organizations to go beyond the presence of policies and procedures to measure and improve their HRPP’s quality, effectiveness, and efficiency. However, it is unclear how well this standard achieves its goals. In our small-group interview study, we found an abundance of purported QEE measures but a dearth of QEE definitions, leading to an important gap in efforts to assess whether and how well HRPP activities are achieving core goals related to deliberative quality and participant protection.
Overall, the I-5 accreditation standard may obscure the difficulty of evaluating HRPPs and provide a false sense of confidence that the HRPP community has this all figured out. AAHRPP is not alone in facing this challenge and should be commended for its efforts to encourage organizations to engage in continuous quality assessment and improvement. To make these efforts more meaningful, however, AAHRPP should consider offering specific definitions of quality, effectiveness, and efficiency and requiring accredited organizations to include objectives and measures that go beyond speed, compliance, and investigator satisfaction to emphasize deliberative quality and participant protection.
Acknowledgments
We thank AAHRPP leaders Elyse Summers, Michelle Feige, and Lori Kravchick for allowing us to conduct small group interviews at the 2019 AAHRPP annual conference and facilitating invitations to registered participants. We also acknowledge the contributions of these individuals who agreed to participate in our interviews.
Appendix A
AAHRPP Evaluation Instrument for Accreditation, May 2019 https://admin.aahrpp.org/_layouts/15/download.aspx?SourceUrl=/Website%20Documents/AAHRPP%20Evaluation%20Instrument%20(2018-05-31)%20published.pdf
Note: This text is the same as the prior version of the Evaluation Instrument from October 2018.
Relevant Excerpts
Standard I-5 The Organization measures and improves, when necessary, compliance with organizational policies and procedures and applicable laws, regulations, codes, and guidance. The Organization also measures and improves, when necessary, the quality, effectiveness, and efficiency of the Human Research Protection Program.
Element I.5.b The Organization conducts audits or surveys or uses other methods to assess the quality, efficiency, and effectiveness of the Human Research Protection Program. The Organization identifies strengths and weaknesses of the Human Research Protection Program and makes improvements, when necessary, to increase the quality, efficiency, and effectiveness of the program.
Commentary on I.5.b An organization’s quality improvement program should include measures of quality, efficiency, and effectiveness to evaluate the performance of the HRPP. The organization should use results from the quality improvement program to design and implement improvements. The organization should collect objective data through audits, surveys, or other methods and use the data to make improvements and monitor quality, efficiency, and effectiveness on an ongoing basis.
Required written materials for I.5.b
Essential requirements:
(a) The organization has a quality improvement plan that periodically assesses the quality, efficiency, and effectiveness of the HRPP.
(b) The plan states the goals of the quality improvement plan with respect to achieving targeted levels of quality, efficiency, and effectiveness of the HRPP.
(i) The plan defines at least one objective of quality, efficiency, or effectiveness.
(ii) The plan defines at least one measure of quality, efficiency, or effectiveness.
(iii) The plan describes the methods to assess quality, efficiency, and effectiveness and make improvements.
Common types of materials that may be used to meet I.5.b
Quality improvement plan
Audits, surveys, or other data collection tools
Evaluation reports
Outcomes for I.5.b
The organization:
Identifies targets for quality, efficiency, and effectiveness of the HRPP.
Plans improvements based on measures of quality, efficiency, and effectiveness.
Implements planned improvements.
Monitors and measures the effectiveness of improvements.
Appendix B
The following information was provided to participants in advance of the interview and again as a handout on the day of the interview.
Interview questions
What do you do?
How does your organization define HRPP quality, effectiveness, and efficiency for purposes of satisfying the I-5 standard?
What objectives and measures do you include in your required quality improvement plan? Why were these selected?
How satisfied are you with your approach to evaluating your HRPP’s quality, effectiveness, and/or efficiency? How confident do you feel that what you track is a valid measure of these elements?
What have you learned about your HRPP’s quality, effectiveness, and/or efficiency and how have you used that information to make prospective improvements?
What did you submit or show to AAHRPP to demonstrate satisfaction of the I-5 standard?
Footnotes
Disclosure statement
Holly Fernandez Lynch and Holly A. Taylor are co-chairs of the Consortium to Advance Effective Research Ethics Oversight (AEREO). Members of AAHRPP’s leadership team are also members of AEREO, although their membership post-dates the interviews described in this article. There is no financial relationship between AEREO and AAHRPP. Holly Fernandez Lynch receives funding from the Greenwall Foundation as a Faculty Scholar. She is an unpaid board member of Public Responsibility in Medicine & Research (PRIM&R). She is a paid ethics consultant to the Robert Wood Johnson Foundation. In 2019, she received payments from a law firm for consulting regarding a whistleblower complaint alleging misconduct in research oversight at a public university. Holly Taylor’s effort is funded by the Intramural Program, Clinical Center, National Institutes of Health (NIH). The views expressed by Dr. Taylor do not represent the position or policy of the NIH, the Department of Health and Human Services, or the U.S. government.
Contributor Information
Holly Fernandez Lynch, Department of Medical Ethics & Health Policy, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA.
Holly A. Taylor, Department of Medical Ethics & Health Policy, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Department of Bioethics, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA.
References
- AAHRPP. 2019. Evaluation instrument for accreditation. https://admin.aahrpp.org/_layouts/15/download.aspx?SourceUrl=/Website%20Documents/AAHRPP%20Evaluation%20Instrument%20(2018-05-31)%20published.pdf.
- AAHRPP. 2022a. AAHRPP accreditation standards. http://www.aahrpp.org/apply/process-overview/standards.
- AAHRPP. 2022b. Considering accreditation. https://www.aahrpp.org/learn/considering-accreditation.
- AAHRPP. 2022c. Our mission, vision, and values. http://www.aahrpp.org/learn/about-aahrpp/our-mission.
- Abbott L, and Grady C. 2011. A systematic review of the empirical literature evaluating IRBs: What we know and what we still need to learn. Journal of Empirical Research on Human Research Ethics: JERHRE 6 (1):3–19. doi: 10.1525/jer.2011.6.1.3.
- AEREO. n.d. Membership benefits. https://www.med.upenn.edu/aereo/membership-benefits.html.
- Coleman CH, and Bouësseau MC. 2008. How do we know that research ethics committees are really working? The neglected role of outcomes assessment in research ethics review. BMC Medical Ethics 9 (1). doi: 10.1186/1472-6939-9-6.
- Dedoose (version 1). n.d. https://www.dedoose.com/.
- Lynch HF, Abdirisak M, Bogia M, and Clapp JT. 2020. Evaluating the quality of research ethics review and oversight: A systematic analysis of quality assessment instruments. AJOB Empirical Bioethics 11 (4):208–22. doi: 10.1080/23294515.2020.1798563.
- Lynch HF, Eriksen W, and Clapp JT. 2022. ‘We measure what we can measure’: Struggles in defining and evaluating institutional review board quality. Social Science & Medicine 292:114614. doi: 10.1016/j.socscimed.2021.114614.
- Lynch HF, Nicholls SG, Meyer MN, and Taylor HA. 2019. Of parachutes and participant protection: Moving beyond quality to advance effective research ethics oversight. Journal of Empirical Research on Human Research Ethics: JERHRE 14 (3):190–6. doi: 10.1177/1556264618812625.
- Lynch HF, and Rosenfeld S. 2020. Institutional review board quality, private equity, and promoting ethical human subjects research. Annals of Internal Medicine 173 (7):558–62. doi: 10.7326/M20-1674.
- Nicholls SG, Hayes TP, Brehaut JC, McDonald M, Weijer C, Saginur R, and Fergusson D. 2015. A scoping review of empirical research relating to quality and effectiveness of research ethics review. PLoS One 10 (7):e0133639. doi: 10.1371/journal.pone.0133639.
- Office of Senator Warren. 2020. GAO accepts Warren, Brown, and Sanders’ request to investigate for-profit institutional review boards (IRBs), August 19. https://www.warren.senate.gov/newsroom/press-releases/gao-accepts-warren-brown-and-sanders-request-to-investigate-for-profit-institutional-review-boards-irbs.
- Sandelowski M. 2010. What’s in a name? Qualitative description revisited. Research in Nursing & Health 33 (1):77–84. doi: 10.1002/nur.20362.
- Scherzinger G, and Bobbert M. 2017. Evaluation of research ethics committees: Criteria for the ethical quality of the review process. Accountability in Research 24 (3):152–76. doi: 10.1080/08989621.2016.1273778.
- Serpico K. 2021. Making metrics meaningful: How human research protection programs can efficiently and effectively use their data. Ethics & Human Research 43 (5):26–35. doi: 10.1002/eahr.500102.
- Serpico K, Rahimzadeh V, Anderson EE, Gelinas L, and Lynch HF. 2022. Institutional review board use of outside experts: What do we know? Ethics & Human Research 44 (2):26–32. doi: 10.1002/eahr.500121.
- Serpico K, Rahimzadeh V, Gelinas L, Hartsmith L, Lynch HF, and Anderson EE. 2022. Institutional review board use of outside experts: A national survey. AJOB Empirical Bioethics. In press.
- Seykora A, Coleman CH, Rosenfeld SJ, Bierer BE, and Lynch HF. 2021. Steps toward a system of IRB precedent: Piloting approaches to summarizing IRB decisions for future use. Ethics & Human Research 43 (6):2–18. doi: 10.1002/eahr.500106.
- Taylor HA. 2007. Moving beyond compliance: Measuring ethical quality to enhance the oversight of human subjects research. IRB 29 (5):9–14.
- Tsan M-F. 2019a. From moving beyond compliance to quality to moving beyond quality to effectiveness: Realities and challenges. Journal of Empirical Research on Human Research Ethics: JERHRE 14 (3):204–8. doi: 10.1177/1556264619850710.
- Tsan M-F. 2019b. Measuring the quality and performance of institutional review boards. Journal of Empirical Research on Human Research Ethics: JERHRE 14 (3):187–9. doi: 10.1177/1556264618804686.
- Tsan M-F, and Nguyen Y. 2018. Assessing the quality and performance of human research protection programs to guide compliance oversight activities. Journal of Empirical Research on Human Research Ethics: JERHRE 13 (3):270–5. doi: 10.1177/1556264618776460.
- Tsan M-F, and Van Hook H. 2022. Assessing the quality and performance of institutional review boards: Impact of the revised common rule. Journal of Empirical Research on Human Research Ethics. doi: 10.1177/15562646221094407.