Author manuscript; available in PMC: 2013 Jan 22.
Published in final edited form as: J Law Med Ethics. 2011 Fall;39(3):513–528. doi: 10.1111/j.1748-720X.2011.00618.x

Views and Experiences of IRBs Concerning Research Integrity

Robert Klitzman 1
PMCID: PMC3551536  NIHMSID: NIHMS433400  PMID: 21871046

Institutional Review Boards (IRBs) can play vital roles in observing, monitoring, and responding to research integrity (RI) issues among researchers, yet many questions remain concerning whether, when, and in what ways these boards in fact adopt these roles. Increasingly, RI is being challenged due to many factors, yet the extent of violations, and institutional responses to these, remain unknown. As the amount and complexity of experiments on human participants, often funded by for-profit industry, mushroom, scandals have occurred,1 posing dilemmas concerning how best to oversee research to protect these participants from harm.

For over 15 years, many institutions have been developing research compliance programs that monitor misconduct and conflict of interest (COI), and may interact with IRBs. In 2002, the Institute of Medicine (IOM) report, “Responsible Research: A Systems Approach to Protecting Research Participants,” called for increased assessment of the overall human research protections system, and oversight of research.2 This report recommended several activities, including identifying, adopting, refining, and disseminating best practices, enhancing Quality Assurance/Quality Improvement (QA/QI), examining the type and number of FDA and OHRP investigations, and enhancing accountability and transparency. The report also suggested differentiating IRBs from other institutional compliance, risk management, and COI offices, and having institutions provide adequate resources for these activities. Yet authors of this IOM report felt “repeatedly confounded by the lack of data regarding the scope and scale of current protection.”3

In 2005, the HHS Office of Inspector General also produced guidelines for compliance offices that include “the use of audits or other techniques to monitor compliance,” and a “hotline” for anonymous complaints.4 Yet many questions remain as to whether such recommendations have been followed, and if so, when, to what degree, where, and how, and with what subsequent problems or benefits. Institutions appear to vary widely in how they establish compliance offices, what the responsibilities of these offices are, and to whom these report.5 How these offices relate to IRBs also remains unknown.

IRBs are charged with monitoring research, but may do so in a variety of ways, including through review of continuing renewals, informed consent processes, adherence to protocols, and unapproved activities.6 Such monitoring by IRBs is important in avoiding research scandals, and in optimizing public trust of science.7 Protection of subjects can include monitoring ongoing studies and adverse events, but little is known about whether IRBs find RI problems, broadly defined, and if so, what, how, and when.

IRBs may be the only detailed reviewer of protocols within an academic institution, and hence can potentially serve as an important lens for examining RI. Yet IRBs are known to vary widely within and among institutions,8 and may be influenced by institutional and social contexts and other factors. Surprisingly, though, very little, if any, empirical research has probed how IRBs view and approach these pressing RI issues.

IRB members may play unique and critical roles in monitoring and addressing RI. Among hospital IRB chairs, “17% had dealt with scientific misconduct allegations,” but 42% did not “have a written policy regarding” RI.9

Presumably, even in the IOM recommendations, ensuring compliance of individual researchers remains under the purview of the IRB, though how this compliance is assessed, and with what effectiveness, is not clear. Anecdotally, for instance, most consent forms direct subjects with complaints to the IRB, not to a compliance office.

From a theoretical perspective, Talcott Parsons10 and others have suggested that social systems face underlying tensions between conformity and deviance, and seek to establish mechanisms of social control. Institutions need to determine how to counteract unacceptable behavior and resistance to conformity that inevitably arise due to alienation, difficulty conforming, and other factors. RI problems may potentially constitute “deviance,” yet it is not clear when, in what ways, and how effectively institutions do and should respond.

Definitions of RI themselves may vary. “Integrity” is defined as “freedom from moral corruption…. Soundness of moral principle; especially in relation to truth and fair dealing; uprightness, honesty, sincerity.”11 Yet how these terms become interpreted and operationalized in research can differ. The Office of Research Integrity (ORI) defines RI as “the use of honest and verifiable methods in proposing, performing, and evaluating research and reporting research results with particular attention to adherence to rules, regulations, guidelines, and commonly accepted professional codes or norms.”12

RI violations include not only major lapses (e.g., clear falsification of data), but also “minor” ones. In one study of PIs, 27.5% reported keeping inadequate records; 15.3% dropped observations or data points from analyses based on a gut feeling that these were inaccurate; 13.5% used inadequate or inappropriate research design; 10.8% withheld details of methodology or results in papers or proposals; 6.0% failed to present data that contradict their own previous research; and 7.6% circumvented certain minor aspects of human subject requirements.13 Yet it is unclear what some of these categories (e.g., “circumventing certain minor aspects of human-subject requirements”) include, whether IRBs are ever aware of these issues, and if so, when and how.

Wider institutional responses to violations of RI are also not always clear, and can vary. For instance, in responding to lapses in RI, PIs were more likely to inform their colleagues, while administrators were more likely to notify supervisors and deans.14

IRBs can potentially promote RI, influencing adherence to, and deviation from, ethical guidelines. RI violations may in fact stem from perceptions of procedural injustice in IRB reviews and other institutional decisions15 (i.e., PIs feeling that “the system is unfair”).

IRBs have also been facing growing criticism. Their effectiveness is said to be “in jeopardy” because of ever-increasing workloads, leading some to conduct minimal continuing review of approved research and to provide little training for members and investigators. In addition, too little attention is paid to evaluating IRB effectiveness.16

The relatively little empirical research that has been published about IRBs has focused on logistical issues (e.g., number and types of members17), and length of time to approve protocols, rather than the content of IRB decisions. Discrepancies exist in IRB reviews of multisite studies in the types of reviews used (i.e., expedited versus full board review), and the number of changes requested.18

But I have found no published studies exploring whether, when, and how IRBs are involved in RI, and with what results. Hence, I conducted an in-depth semi-structured interview study of views and approaches toward RI, broadly defined, among IRB chairs, administrators, and members. The interviews shed light on several other issues as well, concerning central IRBs,19 COIs,20 variations between IRBs,21 and research ethics in the developing world,22 but focused on RI.

Methods

As described elsewhere,23 I conducted 2-hour phone interviews with each of 46 IRB chairs, directors, administrators, and members. I contacted the leadership of 60 IRBs around the country, representing every fourth one in the list of the top 240 institutions by NIH funding, and interviewed IRB leaders from 34 of these institutions (response rate = 55%). In some cases, I interviewed both a chair/director and an administrator (e.g., when the chair thought that the administrator might be better able to answer certain questions). Hence, I interviewed a total of 39 chairs/directors and administrators from these 34 institutions. The institutions ranged in location, size, and public/private status. Including IRBs from a wide range of institutions allowed for elucidation of the roles of different social and institutional contexts in these issues. I also asked half of these leaders (every other one, following the list by amount of NIH funding) to distribute information about the study to members of their IRBs, in order to recruit 1 member of each of these IRBs to be interviewed for the study as well. In this way, I interviewed an additional 7 members (1 community member and 6 regular members).
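As a purely illustrative sketch (not part of the study's own materials), the following Python fragment shows one way the systematic sampling described above could be expressed: selecting every fourth institution from a ranked list of the top 240 institutions by NIH funding yields the 60 IRB leaderships contacted. All names and variables here are hypothetical placeholders.

```python
# Illustrative sketch only: systematic selection of every fourth institution
# from a hypothetical ranked list of the top 240 institutions by NIH funding.

def systematic_sample(ranked_items, step=4):
    """Return every `step`-th item, starting from the first (rank 1)."""
    return ranked_items[::step]

# Hypothetical placeholder names standing in for the 240 ranked institutions.
ranked_institutions = [f"Institution_{rank}" for rank in range(1, 241)]

contacted = systematic_sample(ranked_institutions, step=4)
print(len(contacted))  # 60 IRB leaderships contacted, per the text above
```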

As summarized in Table I, these 46 individuals included 28 chairs/co-chairs; 1 IRB director; 10 administrators (including 2 directors of compliance offices); and 7 members. In all, 27 were male and 19 were female. One was Asian/Pacific Islander, while the remaining 43 were Caucasian. Twenty-one came from institutions in the Northeast, 6 from the Midwest, 13 from the West, and 6 from the South. From institutions ranked 1-50, 51-100, 101-150, 151-200, and 201-250, the numbers of interviewees were 13, 13, 7, 1, and 12, respectively.

Table I.

Characteristics of the Sample

                                        N     % (N=46)
Type of IRB Staff
  Chairs/Co-Chairs                      28    60.87%
  Directors                             1     2.17%
  Administrators                        10    21.74%
  Members                               7     15.22%
Gender
  Male                                  27    58.70%
  Female                                19    41.30%
Institution Rank
  1-50                                  13    28.26%
  51-100                                13    28.26%
  101-150                               7     15.22%
  151-200                               1     2.17%
  201-250                               12    26.09%
State vs. Private
  State                                 19    41.30%
  Private                               27    58.70%
Region
  Northeast                             21    45.65%
  Midwest                               6     13.04%
  West                                  13    28.26%
  South                                 6     13.04%
Total # of Institutions Represented: 34

The interviews explored participants’ views of RI (e.g., PIs’ noncompliance with regulations), IRB responses (e.g., auditing), and factors involved in decisions, and shed important light as well on many other, broader issues that arose concerning IRB decision-making. Relevant sections of the interview guide are attached (Appendix A), through which I sought to obtain detailed descriptions of the above issues. From a theoretical standpoint, Geertz24 has advocated studying aspects of individuals’ lives, decisions, and social situations not by imposing theoretical structures, but by trying to understand the individuals’ own experiences, drawing on their own words and perspectives to obtain a “thick description.”

In the methods, I adapted elements from grounded theory.25 The approach was thus informed by techniques of “constant comparison,” in which data from different contexts are compared for similarities and differences, to see if they suggest hypotheses. This technique of “constant comparison” generates new analytic categories and questions, and checks them for reasonableness. During the ongoing process of in-depth interviewing, I constantly considered how participants resembled or differed from each other, and the social, cultural, and medical contexts and factors that contributed to differentiation. Grounded theory also involves both deductive and inductive thinking, building inductively from the data to an understanding of themes and patterns within the data, and deductively, drawing on frameworks from prior research and theories.

In conducting thematic content analyses, I also triangulated methods, referring to the published literature, as presented above. I drafted the questionnaire, drawing on prior research and the published literature. Transcriptions and initial analyses of interviews occurred during the period in which the interviews were being conducted, enhancing validity, and these analyses helped shape subsequent interviews. Interviews were conducted at participants’ offices or homes or in the PI’s office — whichever was more convenient for participants. The Columbia University Department of Psychiatry Institutional Review Board approved the study, and all participants gave informed consent.

Once the full set of interviews was completed, subsequent analyses were conducted in two phases, primarily by a trained research assistant (RA) and me. In phase I, we independently examined a subset of interviews to assess factors that shaped participants’ experiences, identifying categories of recurrent themes and issues that were subsequently given codes. We read each interview, systematically coding blocks of text to assign “core” codes or categories (e.g., instances of audits of protocols by IRBs, RI problems found by IRBs, and issues concerning industry funding). While reading the interviews, a topic name (or code) was inserted beside each excerpt of the interview to indicate the themes being discussed. We then worked together to reconcile these independently developed coding schemes into a single scheme. Next, we prepared a coding manual, defining each code and examining areas of disagreement until reaching consensus. New themes that did not fit into the original coding framework were discussed, and modifications were made in the manual when deemed appropriate.

In phase II of the analysis, the RA and I independently content analyzed the data to identify the principal subcategories, and ranges of variation within each of the core codes. We reconciled the sub-themes identified by each coder into a single set of “secondary” codes and an elaborated set of core codes. These codes assess subcategories and other situational and social factors. Such subcategories include, for instance, different types of audits (e.g., random or for-cause), and specific types of RI problems found (e.g., researchers not submitting protocols or protocol changes to the IRB for review).

Codes and sub-codes were then used in analysis of all of the interviews. To ensure coding reliability, two coders analyzed all interviews. Where necessary, we used multiple codes. We assessed similarities and differences between participants, examining categories that emerged, ranges of variation within categories, and variables that may be involved.

We examined areas of disagreement through closer analysis until we reached consensus through discussion. We checked regularly for consistency and accuracy in ratings by comparing earlier and later coded excerpts.
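As a minimal, hypothetical sketch of this kind of intercoder check (the code labels and data structures below are invented for illustration, not drawn from the study), the following Python fragment compares two coders’ independently assigned codes for the same excerpts and flags disagreements for the consensus discussions described above.

```python
# Hypothetical illustration: flag excerpts where two coders' code assignments
# differ, so the disagreements can be discussed until consensus is reached.
coder_a = {
    "excerpt_01": {"audit_for_cause", "non_submission"},
    "excerpt_02": {"informed_consent_problem"},
    "excerpt_03": {"industry_funding"},
}
coder_b = {
    "excerpt_01": {"audit_for_cause"},
    "excerpt_02": {"informed_consent_problem"},
    "excerpt_03": {"industry_funding", "record_keeping"},
}

agreements = 0
disagreements = []
for excerpt, codes_a in coder_a.items():
    codes_b = coder_b[excerpt]
    if codes_a == codes_b:
        agreements += 1
    else:
        # Symmetric difference: codes assigned by only one of the two coders.
        disagreements.append((excerpt, sorted(codes_a ^ codes_b)))

print(f"Exact agreement on {agreements} of {len(coder_a)} excerpts")
for excerpt, codes in disagreements:
    print(f"Discuss {excerpt}: codes to reconcile -> {codes}")
```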

To ensure that the coding schemes established for the core codes and secondary codes were both valid (i.e., well grounded in the data and supportable) and reliable (i.e., consistent in meaning), they were systematically developed and well documented.

In this process, we were able to explore “cases” of problems that arose (e.g., difficult decisions chairs faced) to examine the range and patterns of issues that emerged. We triangulated data, drawing on the range of issues identified through the literature, posing questions and collecting sufficient details to substantiate points that arose.

Results

Overall, as seen in Table II, IRBs encountered a range of problems concerning how studies were carried out post-approval. IRBs varied in how they defined, discovered, viewed, and responded to RI problems; how they interacted with other institutional offices concerning these issues; and what types of RI violations they encountered.

Table II.

Themes Concerning Involvement of IRBs in Research Integrity

Roles of IRBs Concerning PI Problems
  • IRBs define RI differently
  • IRBs interact differently with other institutional offices
  • IRBs differ in amounts of responsibility they feel they have for RI problems

Severity of Problems
  • Mostly minor
    ◆ Not involving harm to subjects

What Are the Problems?
  • Poor informed consent
  • Non-submission to the IRB
    ◆ Entire protocols
    ◆ Changes to protocols
  • “Merely paperwork”
    ◆ Changes in study design
      • “Arms” of study
      • Inclusion/exclusion criteria
      • Number of subjects

How IRBs Learn of Problems
  • Continuing review
  • PI self-report
  • Audits
    ◆ Audits for cause
    ◆ Questions of when to audit
    ◆ Random audits
  • Complaints by subjects
  • Complaints by staff (“whistleblowers”)
  • Serendipity

Causes of Problems
  • Generally not malicious intent
  • Poor supervision of staff or students by PI
  • Rarely, PI mental health problems

Responses to Problems
  • Vary in degree of severity and type
    ◆ Educating PI
    ◆ Suspending study
    ◆ Reporting problem to federal agencies

Implications
  • Most problems are minor, but not negligible or dismissible
  • Questions of whether better monitoring is warranted, and if so, what

These interviewees provided examples of RI problems they confronted. Each instance involved a particular problem, an underlying cause, and a way in which the IRB discovered and responded to the lapse. These domains are closely interwoven, but for purposes of analysis, are separated below. Hence, examples provided here in each domain apply to the other domains as well.

Definitions of RI

Interviewees define RI broadly, and in different ways. A few chairs defined RI as conducting and reporting research accurately (“an accurate reflection and disclosure of the results” IRB9). But other interviewees viewed RI even more broadly — akin to “research ethics” itself. RI can include the highest ideals and principles in science — e.g., even working to disprove one’s own biases and favorite hypotheses. As one chair said,

For me, integrity works at several levels, including the conceptualization and design of studies, so that one works hard against one’s own biases, to unpack them, and try to disprove one’s favorite hypothesis…. Also, how you deal with your subjects afterwards…involve them in dissemination, or give credit for their contribution. It could be about 9,000 other things, too. IRB22

Though IRBs are mandated to protect human subjects, most interviewees seek to “go beyond” the narrowest possible definition of that mandate, to include key aspects of RI, though how they do so varies. Many felt that though others were involved, the IRB can play an important role in RI, and may even ultimately share part of the fault for problems that occur. “Everyone — including the PI, the IRB, the heads of departments, the department chairmen — are to blame.” IRB9

Others struggled to distinguish carefully between what they are and are not responsible for. IRBs generally thought that their concerns were broad, but did not include all potential aspects of RI compliance per se.

I don’t think that integrity in reporting data and publishing, and acknowledging contributions, sources of data, and others’ work falls under the IRB’s purview. But following protocol, and decency in treating human subjects…is the IRB’s jurisdiction. IRB7

New, expanding information technologies may provide opportunities for new kinds of PI violations. For instance, Photoshop can enable researchers to alter images that they publish of their findings (e.g., of cells). “Problems are now occurring with the use of Photoshop. The degree to which people go to the trouble of doctoring a picture is startling beyond belief.” IRB11

Types of Problems Found

Seriousness of PI Noncompliance Problems

IRBs occasionally, though only rarely, faced problems of data fabrication, falsification, or major, willful misconduct; more often, they faced other, milder RI issues. The most frequent RI problems that many IRBs confront concern researcher noncompliance with regulations. These violations were mostly minor, but occasionally raised concerns.

IRBs confronted a variety of aspects of research design that were not rigorously adhered to. Some IRBs saw only such minor deviations: “protocol deviations, or a PI who exceeded the number of approved subjects, or is sloppy.” IRB40 PIs may also not follow the time frames dictated in their protocols. For instance, patient revisits that were supposed to occur at 60 days did not occur until 90 days.

Poor Informed Consent

IRBs also found various problems related to obtaining informed consent. PIs may have non-approved personnel do the consenting, or staff may be rushed in obtaining consent. These deficiencies may result not from intent, but from pressures on PIs.

A new staff person just does random audits. She tells researchers, “You have to give subjects more time.” Researchers say, “Well, I was pressed for time. It wasn’t that I didn’t want the subject to know.” IRB39

Non-Submission of Protocols or Changes

In addition, IRBs discover noncompliance problems of PIs not submitting protocols, or changes to approved studies. For instance, PIs have argued that they “don’t need IRB approval for chart reviews,” but breaches of confidentiality can occur.

Nice patients come for treatment here, thinking that only their doctors look at their records. Researchers can write things down, but don’t carry them around in an open tote bag, with patients’ names on the data collection sheets that you’re taking to the office once you’ve gotten coffee at Starbucks, with all this stuff flopping around. IRB13

IRB members may also see recruitment flyers posted, or lists of dissertations based on studies that the IRB never reviewed. Yet IRBs do not know the number of studies that are conducted but not submitted for review. Non-submission may occur particularly in fields where regulations are not entirely clear, as in educational research.

It is hard to know how often researchers do not submit to the IRB at all. It is a black hole. I think it happens more in areas that are a little gray without clear consensus, like educational research. From the federal level down, we have not been clear about what is and isn’t exempt. So there is confusion, and researchers want to avoid the hassle of submitting a project. So they will just go ahead and do the research. IRB40

Many problems that IRBs find in audits involve “only” paperwork. These deficiencies vary in severity. These problems may indeed be relatively small. For example, staff may inadvertently be using outdated consent forms.

Our random audits have found that a number of investigators have not done a very good job record-keeping. They may have different versions of the consent form. One version was approved, and then an RA prints another one that’s not stamped, and there are 12 versions, and no one can remember which version is right. IRB3

Yet at times, problems with “record keeping” can have potentially more serious implications. One chair (IRB25) saw PIs “giving medication they weren’t supposed to” — i.e., medication that was not mentioned in the protocol.

IRBs faced questions of whether these lapses indeed represent only minimal problems or constitute “the tip of the iceberg.” Poor record keeping can extend beyond mere sloppiness and solely aesthetic concerns, and can have ethical implications — e.g., for patient confidentiality. A former chair described how one PI retained identifying information on biological samples.

We made a site visit because a cell biologist was egregiously late in submitting some continuing reviews. His record keeping was in shambles. His protocols stated that he was getting cell samples from a couple of other, de-identified hospitals. Yet information sheets that had accompanied the samples were in his files with patients’ names, addresses, and diagnoses. He had mixed records from different studies. It was very hard to figure out if he was compliant in the numbers of samples that he had taken, and when he’d taken them. That has been going on for a couple of years, and hasn’t been resolved. We have disapproved him to do the research. The finance office asks us every so often if they can tell NIH to clear the flow of funds for the PI. We’ve said no. IRB7

Such sloppiness can cause confusion that can be hard to disentangle.

How IRBs Learn of Problems

IRBs learn about these problems with RI in a variety of ways — particularly through continuing review, audits, complaints by staff or, occasionally, subjects, and/or serendipity.

PI Self-Reports

IRBs are mandated to review research annually, and occasionally learn of RI problems through this means. An IRB administrator reported, for instance:

In reviewing one investigator’s materials for his annual review, I happened to notice that, unbeknownst to the IRB, he had just dropped one condition, and changed it. So I wrote him a little memo: “That’s not OK. Please tell us why you thought it was OK to do that without coming to the IRB.” IRB26

Other times, PIs separately report RI problems, or designate their staff to do so. The majority of these are minor, and discovered by PIs themselves, or their employees.

Most noncompliance issues are self-reported by the investigators. The number one problem, which we consider relatively minor, depending on the study, is over-enrollment. The IRB will approve of enrollment of X number of subjects, and at annual review or earlier, we will detect if the investigator is approaching that threshold. On the continuing review forms, we ask if they want to adjust the number of subjects to be enrolled. Or the PI will discover inadvertently, prior to continuing the review, that they overenrolled, and will report that. We then investigate what the circumstances were, and we’ll take it from there. Depending on the relative risk in the study, we may simply ask the investigator to amend the number. IRB4

When learning of their own lapses or errors, PIs generally feel bad, upset by their discovery of an inadvertent lapse. “Some researchers come in and say, ‘Oh my God, I just did this terrible thing. I’m freaked out. What can I do to make it better?’” IRB26

PIs may express feelings of moral guilt and trespass. “I hear about occasional lapses or mistakes because an investigator will come and confess to me.” IRB26

One PI, for instance, misplaced a study videotape of family interactions. The researcher reported the loss to the IRB only after it was found. The video had been at a receptionist’s desk, not in a locked location, as the protocol had promised. The PI then agreed to take steps to protect against such potential breaches of confidentiality.

The PI videotaped family interactions, and misplaced tapes. The staff kept referring to them as “missing data,” which the PI thought meant: missing statistically. But they were literally missing. The videotape was found by the receptionist’s desk. The PI was concerned that that videotape had not been under lock and key, and came to me and confessed that this lapse had occurred, and told me what steps they were taking to make sure the staff understood. We kind of went on high alert. I can slap the PI’s wrists, and say, you really screwed up. But they’re already coming to me, telling me that, and showing me steps they’re taking to rectify it. IRB26

This metaphor of confession suggests wrongdoing, and need for forgiveness.

As we shall see, IRBs then face questions of how to respond to these lapses — e.g., how fully or aggressively. This IRB administrator continued,

I always report an instance like that to the IRB chair, and he determines whether it should go to the full IRB. Generally it does, but, on something that small, as an information item. IRB26

The fact that the PI herself was very concerned led this administrator to think that a relatively minimal response was appropriate. Instead, IRBs could potentially mandate that a PI disclose the problem to all subjects in the study. “I might recommend that the scientist contact the parents and explain that it was lost for a period of time.” IRB26

Audits

IRBs varied widely as well in whether, when, and to what degree they monitored and audited studies, and what these investigations then found. Overall, audits either resulted from suspicion of a problem, or were random. Institutions varied in whether IRBs themselves or other offices conducted audits, and if the latter, what relationships existed between these entities.

Who Conducts the Audits?

Institutions differ in whether audits are conducted by IRBs and/or Compliance Offices, and if the latter, how these two entities then interact. Boundaries between IRBs and Compliance Offices varied widely — from close and collaborating to distant, with minimal, if any, communication. These two organizational entities can differ, and have overlapping and synergistic roles shaped by complex institutional histories and cultures that can facilitate or impede interactions.

For instance, one IRB did not know if a PI, whose study was suspended for keeping identifying information on samples, was still conducting research. The IRB reported the problem to institutional officials, but has not received any follow-up.

We’ve shut down several protocols. But it isn’t clear to me: maybe the PI is still doing research. I don’t know if he’s doing nonhuman research, or working on those samples properly. IRB7

Most chairs felt that the IRB was already over-worked and under-resourced, and hence neither could, nor should, do more. But uncertainties lingered. This chair added:

There should be a way to find out what this faculty member is doing. That is in the Compliance Officer’s domain. The IRB forwarded to him a list of what he needs to do to reactivate his studies. He didn’t do those. His department chair and dean are aware — from being copied on the correspondence. So, the IRB met its obligations. I didn’t think that we needed to be police and put yellow tape across his lab doors. IRB7

Questions thus persist as to the degree to which IRBs should be involved — how far they should go in knowing what transpires versus letting other offices be completely responsible.

To assist PIs more fully, many institutions establish QA/QI offices, yet the relationships between these offices and IRBs can vary. The two offices may share members, and range in what, how, and when they communicate.

At times, IRB leaders establish within their own office a QA/QI division that may help with other IRB functions. One IRB director said, “I created a QA/QI division, which serves underneath the IRB. They perform internal and external audits — for cause, and not for cause.” IRB9

Separating IRB and QA offices can have potential advantages and disadvantages, though these may not always be clear. IRBs or PIs can request a QA investigation. But if researchers instigate the assessment, IRBs may not learn the result. As a member of both a QA committee and an IRB said:

If the IRB has a concern about a study, they will ask the QA/QI committee to go in and take a look. But we are independent from the IRB. I don’t know why, or whether that’s good or bad. Sometimes I think communication would be better if we were a part of the IRB. But the independence means that when researchers ask us to come, our review is very confidential, unless we find something that’s reportable to the IRB. PIs tend to fix it, because it needs to be fixed. IRBs only get a report when they request us to take a look — that’s happened maybe 30 times. Frequently, we find miscommunication — a sloppy IRB submission, or concerns that are now being taken care of. A study may be reporting a large number of violations. We find: six months ago, a new coordinator, really on the ball, has gone back, and done an internal audit, and found problems, and fixed them, but knew to report them as violations. That’s why we’re hearing about them now. IRB11

Thus, IRBs work in dynamic social systems that can make these issues of monitoring and reporting complex. The problems found may be concerning, but might have already been addressed by the PI, and turn out to be minor.

Many IRBs attempt to become broader “human subject protection programs,” and to “change institutional culture,” often reflecting national discussions. Yet the nature of these changes is not always clear, and desires to establish enhanced Compliance Offices at times appear to arise primarily from perceived needs to avoid federal audits, more than from concern about human subjects per se. One IRB chair said,

We have the elements and structure of a Human Research Protection program, but are not calling it that. I have suggested to the authorized institutional official that we do so, but the system now works OK. IRB5

Still, this interviewee thinks it could be improved in certain ways.

Often, a Compliance Office, not the IRB per se, performs audits, but in the complex exigencies of institutions, IRBs themselves may subsume these functions. As one interviewee said, “For the past year, the positions have been vacant, so I’ve been handling it.” IRB9

Types of Audits

Audits can result from known problems, or be random. Evidence or suspicions of serious or ongoing problems and lapses can prompt “audits for cause.” But the thresholds used in deciding whether to perform such an audit vary. Triggers can range from major to minor. “The two major reasons we do for-cause audits are if there’s a death in a study, and if there are lapses and the PI hasn’t submitted continuations on time.” IRB3

Other IRBs audit based on intimations of possible deficiencies, looking for signs or suggestions (e.g., over-enrollment) as reasons to conduct audits for cause.

Seeing a lot of serious adverse events, or a greater frequency of risk on the consent form, or an investigator over-enrolling by, say, a 100 people, are red flags that maybe we should do an audit for-cause. Are the researchers seeing a lot of unanticipated problems, or deviating from the protocol? IRB1

Occasionally, a PI may have a track record of ongoing problems, prompting the IRB to audit his or her research — neither wholly random, nor entirely for-cause.

There are a couple of problem investigators, where the track record is such that we tend to go looking for things a little. We may have pharmacy staff go audit them. One PI has a track record, raising enough concern that we end up being proactive, trying to make sure there aren’t problems that we aren’t being told about. But that is rare. IRB40

The IRB then has to decide what to do when it has such suspicions about a researcher.

Other IRBs recognize the potential benefits of audits for-cause, but rarely, if ever, perform these because of lack of staff. Instead, these boards often rely only on continuing annual renewals as opportunities to assess studies post-approval. These IRBs usually expressed desires for additional staff to investigate certain studies more fully, but for the present, felt restrained.

A few rare IRBs performed random audits that were not triggered by suspicion or evidence of problems, though such investigations required additional resources. Relatively well-resourced IRBs at large institutions may even have a full-time staff person dedicated only to such random checks.

Usually, random audits uncover at least a few problems. Invariably, IRBs can discover some deficiencies. One chair said that a neurologist, for instance, who looks at a person

can almost always find some evidence that looks like brain damage. So when the Research Compliance people go in there, they can almost always find something not done right. IRB5

But, though random audits usually identify lapses, the significance of these deficiencies varies widely, and can be very small.

It depends what you consider problems. They almost always find something, but it may be very minor. We really look at it as an opportunity for re-enforcing education. IRB17

IRBs thus face decisions of how to respond — e.g., how aggressively to intervene based on the results of an audit. Depending on the findings, IRBs may stop or substantially alter a study, or only recommend additional staff and/or PI training.

The minor lapses these audits generally find, posing no real danger, do not significantly worry many IRBs.

We find things we don’t know, or want to know about — slip-ups in doing research, things done out of the time frame window, incomplete, or undated — nothing that really concerns us yet. IRB25

These lapses would not have been known otherwise. But this chair sees these problems as not only minor, but as occasionally better off not known, because the administrative consequences are more cumbersome than the shortcoming warrants.

Broader policy questions thus emerge of whether more such random audits should be conducted, though doing so would require additional resources. One chair whose IRB conducted random audits wondered if such additional monitoring could get too aggressive or invasive. “I don’t know if that would be considered too intrusive, or not.” IRB8 Not surprisingly, these audits, by their very nature, can be stressful for PIs, and IRBs may or may not try to reduce this stress to varying extents.

IRBs that conduct random audits tend to concentrate on PI-initiated, rather than industry-initiated or large multi-site studies. These IRBs felt that PI-initiated protocols have the least external oversight, and thus, IRBs seem to fear, the highest risk.

We usually audit investigator-initiated protocols, because drug companies audit theirs pretty well. Most NCI or pediatric AIDS clinical trial protocols get pretty good oversight. Investigator-initiated studies are highest risk, so we focus on them. IRB17

Surprisingly, IRBs tended to trust the pharmaceutical industry to oversee the studies it funds at universities, believing that drug companies are motivated to comply since the FDA reviews the results. But given drug company scandals, and the fact that IRBs discover problems beyond strict compliance, this assumption may raise concerns.

Many IRBs would like to conduct additional types of random audits, but are limited by resources.

We don’t audit pharmacy logs because of time. But to come up with a complete picture, you have to cross-reference different documents — looking at pharmacy logs, doses logged and given to subjects, protocol violations or deviations, trying to catch them in research charts. We’re left with a partial, not a complete, picture. IRB9

But the notion that there are essentially always problems, undetected unless actually sought, poses questions of whether that phenomenon should be accepted as inevitable, or seen as grounds for more auditing (with additional government or other funds to support such efforts).

Whistleblowers

IRBs can also learn of problems not by audits, but by complaints from subjects or concerned staff who alert the IRB without the PI’s knowledge. Complaints by subjects are rare, but do occur — e.g., up to once or twice every few years, but not more — and can prompt federal investigations. However, not all such reports of problems by subjects or junior staff prove valid.

Complaints are pretty rare: maybe one or two a year. A couple of people have called saying, “I was asked to be in a study. They said it was going to be this and that, and it turned out to be something else. I’m really pissed off.” IRB3

Often, the problem turns out to be due to bad informed consent interactions.

From the patient’s view, the complaint is valid. But it tends to be a failure of the informed consent process. The form said there are unknown risks. However, consent should be ongoing. IRB3

Research staff may complain to the IRB without the PI’s knowledge, also then triggering audits for cause.

Usually, a staff person or one of the nurses thinks that things haven’t been done appropriately. Some improprieties have occurred — usually through oversight, neglect, or lack of staff. Data should have been recorded in the charts, but were written on pieces of paper. Or test results should have been filed earlier. Or researchers are going back, writing and correcting things. IRB13

Yet the examples of lapses offered appear relatively minor.

Nonetheless, problems emerge because whistleblowers may get penalized. Institutions may protect the PI, rather than the whistleblower. Such a fate may further discourage such complaints.

Investigators were doing unapproved procedures outside of their protocol. It turned out to be a major match of wills between the whistleblower and the upper administration. The administration didn’t react very well, and ended up protecting the researcher. The whistleblower eventually left, and the investigator was more or less protected. IRB28

Another chair perceived such unfortunate repercussions against these informants more broadly, and would actually caution such potential informants about the risks they may face.

Whistleblowers get screwed. I see how they are treated, regardless of policy. Everyone has to hesitate before they come forward, because there are going to be negative reactions. Everywhere. IRB27

In part, countervailing financial and personal bonds may be long-standing, and an institution may judge the reported violation to be minor. As one IRB member commented,

There are strong personal, professional, and financial relationships — collaborations between investigators and administrators. A federal regulator might consider procedures on lab mice that are not on the protocol to be major noncompliance. But to a local administrator, it’s very minimal, and not worth sacrificing the researcher’s career. A small tarnish, at the expense of a few little mice, is worthwhile. Our institution then gave more money to the IACUC. IRB28

This institution acknowledged the problem not by censuring the PI in any way, but by devoting more resources to preventing such difficulties in the future.

Serendipity

IRBs may learn of RI problems relatively unexpectedly, through happenstance. For example, one research coordinator reported to the IRB that a PI was “making up” subjects. The IRB then pursued the allegation.

The project coordinator was answering some questions the IRB asked, and identified a researcher who claimed to have seen patients, but didn’t actually see them. It was one of those accidental, serendipitous things, and the IRB jumped right on that and met with the researcher and her supervisor. That was the most outlandish thing I’ve heard of. It happens once every eight years. IRB39

Eventually, the IRB felt that the PI was not intentionally malicious, but had a psychiatric disorder. Still, the IRB found these problems surprising. She had strong letters of recommendation, but it turned out that her previous institution had wanted to get rid of her.

This PI is not evil, but has some serious mental health issues. It was surprising, because she had gotten here in the past two or three years, with magnificent letters of recommendation. Our university called around, and found that the letters of recommendation were magnificent because they needed to dump her. The recommenders said, “I realize she’s now stuck at your institution, but we needed to dump her.” There’s no recourse. IRB39

A researcher’s ostensible reputation may thus not predict RI violations. Here, too, the RI deviation discovered by the IRB is more significant than that usually found.

IRBs Cannot Detect Problems Well

Clearly, IRBs face challenges in discovering problems that may exist — in particular, locating more serious violations (e.g., fraud). IRBs have limited abilities to uncover such deficiencies.

We’re not equipped to detect the most serious kind of problem. We can tell if PIs are sloppy or late, but not if they are outright defrauding us, lying, withholding, or making stuff up. IRB32

Research participants or staff who observe problems may not in fact report them to the IRB. Indeed, chairs may worry that they do not receive more complaints — especially from subjects. Yet study participants may feel intimidated or unempowered in expressing criticism.

It bothers me that more problems aren’t reported. Either researchers are all doing a great job, or participants may not feel comfortable complaining. We’ve got this big university, with big fancy researchers with big names and titles. Maybe subjects don’t feel comfortable saying, “The needle prick hurt, and I’ve now got a big hematoma.” IRB28

Similarly, a few chairs say that they expect a certain prevalence of protocol violations and serious, unanticipated adverse events, and may be suspicious if PIs do not report any.

If we expect to have adverse events from a study, and hear nothing, we get concerned. If we don’t get a satisfactory answer at the continuing review, we will usually audit, and find bad record keeping. IRB11

Yet, as noted above, not all IRBs have the resources to conduct audits, and not all proceed to do so.

Causes of Problems

As suggested above, RI problems may result from a range of factors. Occasionally, psychiatric problems can lead to lapses. But more commonly, problems stemmed from poor education about the regulations among PIs and their staff and trainees, and unintentional errors (“miseducation of a co-investigator, RA, nurse, or coordinator” IRB9). Specifically, researchers or their staff may not fully know the regulations. Interviewees tended to feel that almost all PIs conducted research without serious RI violations. Worrisome lapses, when occurring, usually resulted from ignorance, not intentional deception. “The vast majority of people do things right. Those who don’t mostly err out of ignorance, not because they are trying to play the system, or get rich.” IRB4

In addition, PIs were overextended, with insufficient time to adequately train or monitor staff (e.g., “The PI’s off in China and India doing research much of the time”). Email contact alone can prove inadequate for supervising and monitoring. Staff turnover can also pose problems — e.g., a project coordinator may leave and not be rapidly or adequately replaced.

The responsibilities of PIs for graduate student research can also be blurry. Senior PIs may insufficiently supervise graduate students who in turn have inadequate training and experience working with IRBs and/or conducting research. In these situations, PIs may not fully admit or fulfill responsibilities.

Researchers who lost their privileges due to research integrity issues mostly…weren’t bad apples. It was a culmination of being overworked and overstretched, giving a lot of responsibility to a graduate student, but not overseeing or supervising appropriately. A lot of things got out of hand, which the grad student didn’t realize. But the PI was ultimately responsible, and did not follow-up, or properly supervise. IRB27

IRBs may thus also perceive and respond to a cumulative pattern of violations over time, rather than a single error.

A trainee might make errors without the PI knowing — e.g., starting subjects in a protocol before it is approved.

In a relatively moderate risk protocol, the graduate student enrolled subjects before the protocol was even approved, because some consent form issues were still up in the air. The student collected data. The PI was not aware. He was on this campus, but the grad student was elsewhere. The PI just lost track of what was going on, and of how important supervision was. IRB27

Tensions exist as to whether supervisors should be solely, versus primarily, responsible for graduate students’ work, and how much the students themselves should be accountable. IRBs may want to hold faculty completely responsible, but faculty may resist this role, or perform it perfunctorily or half-heartedly. Moreover, if faculty are fully responsible for graduate students, questions surface as to whether these students would then be less accountable for their own work.

We allow a graduate student to be the PI. But we are changing that to require the faculty member be the PI, and have the student be a collaborator. But the faculty feel they’d be directly accountable for the students’ actions, and that the students would not be, if they act inappropriately on that protocol. IRB28

Responses to PI Problems

As mentioned earlier, once detecting a problem, IRBs have to decide how, if at all, to respond, and face a range of options from more to less serious. IRBs can “mandate change, or terminate the study, or report these to the FDA or OHRP.” IRB5 As above, IRBs may suspend a faculty member’s ability to conduct research based not on a single episode, but on repeated violations, even if each is relatively minor.

Several types of problems present additional dilemmas — e.g., what to do if data were already collected though the study had not complied with regulations. If a PI has enrolled participants without having renewed the protocol, or without proper informed consent, the IRB then has to judge whether the PI must re-contact all of the participants, or whether the data can be used nonetheless. The IRB might allow use of the data from already enrolled subjects in minimal risk studies, but not from more invasive or risky studies.

Conclusions

These data suggest that IRBs become involved in a variety of RI problems, broadly defined, and face challenges in doing so — e.g., in deciding how and when. IRBs often define and view RI broadly to include issues of researcher noncompliance with regulations in ways that can affect human subject protection. While many institutions establish separate Compliance Offices, the boundaries and relationships between these entities and IRBs vary considerably, such that many IRBs themselves discover and monitor RI violations, and struggle with questions of how to respond to these.

IRBs detect RI problems that appear mostly minor, but are not all negligible or dismissible. IRBs find unsubmitted studies, and undisclosed changes to inclusion/exclusion criteria, sample size, study arms, and timing of participant visits (i.e., outside of prespecified periods). Though many problems involve only paperwork (i.e., “no one is hurt”), others raise greater concerns (e.g., fabricating subjects), though rarely.

IRBs may learn of RI problems through continuing annual reviews, self-reports by PIs or their staff, random or for-cause audits, and complaints by staff (of which the PI may not be aware) or subjects. Subjects contacted IRBs to complain only rarely: at most one to two times every year or so. Yet these mechanisms of detection are all limited. In the end, IRBs cannot detect all RI problems.

IRBs emerge here as operating within, and as part of, complex social systems that involve larger academic institutions, including compliance offices, PIs, research staff, subjects, and outside federal agencies. The relationships between these entities involve intricate types and patterns of communication, with various kinds of information requested and/or provided about a range of experiences and behaviors. The flow of this information can be hampered or facilitated in several ways.

Studies of IRBs thus need to include understandings of not only official formal regulations, but day-to-day interactions and experiences — the manifold ways policies and guidelines are in fact interpreted, applied, and shaped within dynamic social systems.

Parsons wrote that social systems establish formal rewards and punishments, as well as “unplanned and largely unconscious mechanisms which serve to counteract deviant tendencies”26 to reverse vicious cycles that may trigger alienation and hence more deviance. The data here highlight the importance of understanding how institutions do or should try to identify cases of “deviance,” and decide how many resources to use in so doing.

In drawing on Parsons’s theoretical framework, questions arise as to whether RI violations indeed constitute deviance. Parsons defines deviance as “to depart from conformity with the normative standards which have come to be set up as the common culture.”27 But operationalizing this definition poses problems. These data highlight how definitions of deviance (i.e., of “integrity”) can differ, causing strains. In certain ways, RI violations clearly constitute deviance, since the federal regulations are the stated rules of the organization. However, the present data suggest that these violations are generally minor. Still, a researcher may feel that she is making only “minor” changes in a protocol, and therefore not diminishing the integrity of the project. Tensions can thus emerge because researchers do not always agree with IRBs that these behaviors are in fact deviant.

These data suggest, too, that in assessing RI violations, IRBs seek to gauge issues of intent, and in so doing, confront gray areas. The cause of RI problems often appears unintentional. But conscious or unconscious attitudes may underlie some PIs’ “sloppiness,” and be difficult to discern. Work motivation can stem from both social and psychological factors.28

In general, deviance may result partly from perceived injustice.29 PIs may feel that increased regulations and amounts of paperwork are unjustifiably being imposed on them. IRBs should be aware of these views, and possibly consider them in making and carrying out decisions.

Robert Merton30 suggests that while conformity may meet cultural goals (e.g., career success) and institutional means of attaining these, innovation may entail acceptance of cultural goals, but rejection of institutional means. Thus, conformity may be particularly hard for researchers, who, by definition, tend to be innovators, opposed to accepted ways of thinking. IRBs might take this into account as well.

These data pose critical questions of whether IRBs should be more fully and systematically involved in monitoring and responding to RI issues, and if so, how. The IOM31 calls for institutions to enhance research monitoring, but also for separation of IRBs from institutional compliance offices. The present data suggest that such differentiation can be hazy and challenging. IRBs review protocols in detail and can potentially spot problems. To have other offices monitor studies as well could create duplication of efforts, and thus inefficiencies. Oversight of the compliance of an institution overall, and of individual researchers and studies, can in fact overlap, and be hard to disentangle. Moreover, separate compliance offices do not always disclose their findings and decisions to IRBs, generating frustration, confusion, and inefficiency.

One could argue that according to the federal regulations, narrowly defined, such duties lie outside the scope of the IRB, and that IRBs should thus not be engaged. But the boundaries between assessments of integrity and of potential risks and benefits to subjects often appear blurry: RI issues can affect the potential risks and benefits to subjects, and thus fall within the IRB’s purview. At times, IRBs can perform unique functions, uncovering and responding to lapses in RI.

But these functions, if they fall under IRBs’ mandate, pose dilemmas — e.g., whether these boards should devote more resources to these activities than at present. Currently, discovery of these problems often relies on limited mechanisms and serendipity. If these activities are deemed important, more resources would be beneficial. Especially as the amount and complexity of research increase, more support for such monitoring would appear indicated. Yet questions then emerge of how much is needed and when such resources should be used. Research that is potentially more invasive may warrant heightened scrutiny. Yet surprisingly, these chairs often felt that they could monitor industry-funded research less than other studies, since industry funders were already overseeing it. This belief raises concerns, given drug company scandals that have occurred. Though one might argue that such scandals frequently relate to marketing rather than to compliance, RI issues, broadly construed, are often entailed as well, and can involve IRBs. These phenomena require additional research, too.

Dilemmas also emerge as to what thresholds should be used for standards. The fact that an audit can almost invariably uncover problems poses quandaries as to whether more audits should be conducted, and why or why not — e.g., whether the kinds and extents of the violations found justify the expenditures that such additional investigations would require. Alternatively, if the standard in research ethics is deemed not to be “perfection,” questions emerge of whether the expectation should be changed. In clinical care, frequently involving the “art” of doctoring, with uncertainties and subjective human judgments, “perfection” is not always expected. Errors occur, though hopefully they are only minor. But it is unclear whether a lowered expectation and standard should then be employed in research, where presumably, findings and procedures are objective and replicable.

Clarification of policies may also be beneficial — e.g., whether PIs need to notify IRBs of every minor change in a protocol, and if not, which minor alterations require review and approval (that in some institutions can take a few weeks, delaying studies). Concerns arise, however, since PIs and IRBs may define “mere paperwork” and “small changes” differently, and PIs may consider certain changes “minor” that are actually more significant, and alter the risk-benefit ratio. Questions surface, too, of whether PIs can use data collected when protocols were non-compliant, and if permissible, when and why.

Larger questions underlie these issues: ultimately, how IRBs do and should decide whether to trust or to closely monitor individual PIs’ integrity. Trust has received increased attention from philosophers and social scientists, but in many ways remains an amorphous concept.32 Trust can facilitate and streamline many interactions, but it can become attenuated and fragile as the size and complexity of research enterprises burgeon. IRBs may need evidence (through monitoring) to assess whether trust is warranted, but how much and what kinds of such data are necessary remains unclear.

The good news here is that, overall, the frequency of RI problems appears relatively low. Hence, IRB critics might aver that IRBs need not expand their monitoring activities, or could even decrease them. But IRBs do discover RI problems that are of concern and that might otherwise go undetected. In addition, PIs may not be relied upon to wholly self-regulate, since they may not know or correctly apply the federal regulations. Unfortunately, the actual baseline frequency of RI problems remains unknown. Such data are needed to enable policymakers to consider whether the rate and seriousness of these problems are high enough to warrant additional resources. Yet determining such rates of non-compliance itself faces several obstacles, as violations may be under-reported or even hidden by PIs.

Understandably, PIs may resent such monitoring, but research depends on public trust, especially since the government funds much of it, and such transparency and scrutiny are therefore vital. Inevitably, however, tensions may continue with PIs, who may see these regulations as an imposition.

It is also worrisome that staff who report problems to IRBs may be seen as “whistleblowers” and then penalized within the institution. Such castigation can deter reporting to the IRB. Mechanisms to counter and avoid such negative repercussions, though established by institutions, can still prove insufficient. Indeed, state protections vary, and their effectiveness has been questioned.33

It is also not clear whether the current relatively low frequency of complaints by subjects is appropriate, or whether IRBs should encourage participants to provide more feedback. Staff complaints may be few because of fears of backlash, but such reports can potentially help improve research within institutions. Arguably, IRBs should encourage subjects to provide more feedback (whether good or bad) concerning study participation. A disadvantage of seeking such feedback is that complaints may require investigation by the IRB, which takes resources, and surely not all complaints will prove valid. But such input can potentially yield valuable information about studies already approved by the IRB. Perhaps subjects could more routinely be asked to complete “evaluation” forms indicating how satisfied or dissatisfied they were with their participation, and whether aspects of the experience could be improved. Such information could be useful to both researchers and IRBs, illuminating not only potential problems but also effective aspects of the research process and of subject protection. Thus, feedback could yield potential benefits that should be seriously considered.

Questions arise of whether other means may be effective besides monitoring and responding to deviance — i.e., whether institutions can and should promote conformity with RI norms in other ways. Katz and Kahn34 identified four patterns of motivation within institutions that can promote organizational effectiveness: legal compliance (backed by penalties), use of rewards, self-expression, and internalization of organizational goals. Institutions seek to have researchers conform to RI norms, but rely primarily on legal compliance and fear of repercussions, not rewards per se. Hence, PIs may not fully embrace this institutional goal. Perhaps rewards could help and should be considered (e.g., public announcements of successful research grants, papers, and findings). IRBs may want researchers to internalize these values, but encourage only legal compliance. Internalization, in contrast, may depend on the congruence of organizational goals with the individual’s needs and values, active involvement in organizational decisions, and fair distribution of the rewards the organization receives, and may thus be far more difficult to achieve.35 Unfortunately, PIs may see their needs (e.g., advancing research) as conflicting with RI regulations.

These data have critical implications for future studies — e.g., examining more fully, with larger samples, how often subjects and staff in fact complain to the IRB about studies, and how IRBs respond. More research is also needed to understand how IRBs in fact function within the complex dynamics and social systems of medical institutions (e.g., the nature of their relationships with compliance and other institutional offices, and with individual PIs and studies when they suspect compliance or other problems), and how IRBs develop, investigate, and judge potential problems. Unfortunately, relatively few in-depth studies have been conducted of how IRBs make decisions, and anecdotally, IRBs have often resisted examination of their decision making. Yet such research can have important benefits, and IRBs should encourage it in order to improve trust from researchers, study subjects, and the public at large.

This study has several potential limitations. These data are based on in-depth interviews with individual IRB members and chairs, and did not include direct observation of IRBs making decisions or examination of IRBs’ written records. Future research can observe IRB deliberations and examine such records, though these additional data may be difficult to obtain if, for instance, doing so requires consent from all IRB members as well as from the PIs and funders of protocols. Nonetheless, the present data provide important insights on these issues. In addition, these interviews probed respondents’ experiences and views at present and in the recent past, but not prospectively, and thus could not assess whether respondents changed their views over time, and if so, why. Future research can explore these issues longitudinally as well.

In short, these data illustrate that IRBs are in fact involved in RI issues, broadly construed, in complex ways, monitoring studies and finding RI problems, but varying in whether and how they respond to these lapses. IRBs’ roles here are often indirect and not fully systematic, raising questions of whether these functions should be enhanced, and if so, to what degree and how. As the complexity and amount of research rise, these realms require heightened investigation and discussion.

IRBs varied widely as well in whether, when, and to what degree they monitored and audited studies, and what these investigations then found. Overall, audits either resulted from suspicion of a problem, or were random. Institutions varied in whether IRBs themselves or other offices conducted audits, and if the latter, what relationships existed between these entities.

IRBs can also learn of problems not by audits, but by complaints from subjects or concerned staff who alert the IRB without the PI’s knowledge. Complaints by subjects are rare, but do occur — e.g., up to once or twice every few years, but not more — and can prompt federal investigations. However, not all such reports of problems by subjects or junior staff prove valid.

Studies of IRBs need to include understandings not only of official, formal regulations, but also of day-to-day interactions and experiences — the manifold ways policies and guidelines are in fact interpreted, applied, and shaped within dynamic social systems.

Acknowledgements

The author would like to thank Meghan Sweeney, B.A., Jason Keehn, B.S., Renée Fox, Ph.D., and Paul Appelbaum, M.D., for their assistance with this manuscript.

Appendix A. Sample Questions from Semi-Structured Interview*

  • How do you define research integrity (RI)? What has been the most difficult case concerning RI that you have faced? What kinds of issues arose? Do you think IRBs and PIs view RI differently or apply RI standards differently, and if so, how? Have you seen problems in researcher noncompliance with IRB regulations or mandates? If so, what kinds of problems?

  • What are the barriers and facilitators in IRBs monitoring and addressing RI problems? Do you perceive any gray areas or problems weighing issues about RI? If so, what?

  • Is your IRB more cautious about some researchers than others? Why? In general, do PIs treat your IRB with respect?

  • What kinds of conflicts, if any, has your IRB faced with your institution?

  • Has your IRB discussed sanctions against PIs?

  • Do you think a centralized IRB rather than local IRBs would have advantages concerning RI and other areas? If so, what?

  • What do you think makes an IRB work well or not in monitoring and responding to RI?

  • Do you have any other thoughts about these issues?

*Note: Additional follow-up questions were asked, as appropriate, with each participant.

References

  • 1. Kennedy D. Responding to Fraud. Science. 2006;314(no. 5804):1353. doi: 10.1126/science.1137840.
  • 2. Institute of Medicine, Committee on Assessing the System for Protecting Human Research Participants. Responsible Research: A Systems Approach to Protecting Research Participants. Federman DD, Hanna KE, Rodriguez LL, editors. National Academies Press; Washington, D.C.: 2002.
  • 3. NBAC, the DHHS Office of Inspector General, the General Accounting Office, the Advisory Committee on Human Radiation Experiments, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, and the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, as cited in Responsible Research: A Systems Approach to Protecting Research Participants. 2003.
  • 4. Department of Health & Human Services, Office of Inspector General. Draft OIG Compliance Program Guidance for Recipients of PHS Research Awards. U.S. Government Printing Office; Washington, D.C.: 2005. Available at <http://oig.hhs.gov/fraud/docs/complianceguidance/PHS Research Awards Draft CPG.pdf> (last visited May 12, 2011).
  • 5. Grant G, Guyton O, Forrester R. Creating Effective Research Compliance Programs in Academic Institutions. Academic Medicine. 1999;74(no. 9):951–971. doi: 10.1097/00001888-199909000-00007.
  • 6. Heath E. The IRB’s Monitoring Function: Four Concepts of Monitoring. IRB: Ethics and Human Research. 1979;1(no. 5):1–3, 12.
  • 7. Weijer C, Shapiro S, Fuks A, Glass KC, Skrutkowska M. Monitoring Clinical Research: An Obligation Unfulfilled. Canadian Medical Association Journal. 1995;152(no. 12):1973–1980.
  • 8. Greene SM, Geiger AM. A Review Finds That Multi-center Studies Face Substantial Challenges but Strategies Exist to Achieve Institutional Review Board Approval. Journal of Clinical Epidemiology. 2006;59(no. 8):784–790. doi: 10.1016/j.jclinepi.2005.11.018.
  • 9. Jones J, White L, Pool L, Dougherty J. Structure and Practice of Institutional Review Boards in the United States. Academic Emergency Medicine. 1996;3(no. 8):804–809. doi: 10.1111/j.1553-2712.1996.tb03519.x.
  • 10. Parsons T. The Social System. Free Press; Glencoe, Ill.: 1951.
  • 11. Stevenson A, Siefring J, Brown L, Trumble WR, editors. Oxford English Dictionary. Oxford University Press; Oxford: 2002.
  • 12. U.S. Department of Health and Human Services, Office of Research Integrity. Research – Extramural [retrieved September 8, 2005]. Available at <http://ori.hhs.gov/research/extra/index.shtml> (last visited May 12, 2011).
  • 13. Martinson B, Anderson M, de Vries R. Scientists Behaving Badly. Nature. 2005;435(no. 9):737–738. doi: 10.1038/435737a.
  • 14. Korenman SG, Berk R, Wenger NS, Lew V. Evaluation of the Research Norms of Scientists and Administrators Responsible for Academic Research Integrity. JAMA. 1998;279(no. 1):41–47. doi: 10.1001/jama.279.1.41.
  • 15. Mello M, Clarridge B, Studdert D. Academic Medical Centers’ Standards for Clinical-Trial Agreements with Industry. New England Journal of Medicine. 2005;352(no. 21):2202–2210. doi: 10.1056/NEJMsa044115; Koocher G, Keith-Spiegel P. IRB Researcher Assessment Tool (IRB-RAT): A User’s Guide. Harvard Medical School; Boston: 2005.
  • 16. Office of Inspector General. Institutional Review Boards: Their Role in Reviewing Approved Research. U.S. Government Printing Office; Washington, D.C.: 1998a. DHHS Publication No. OEI-01-97-00190; Office of Inspector General. Institutional Review Boards: Promising Approaches. U.S. Government Printing Office; Washington, D.C.: 1998b. DHHS Publication No. OEI-01-91-00191; Office of Inspector General. Institutional Review Boards: The Emergence of Independent Boards. U.S. Government Printing Office; Washington, D.C.: 1998c. DHHS Publication No. OEI-01-97-00192; Office of Inspector General, Department of Health and Human Services. Institutional Review Boards: A Time for Reform. U.S. Government Printing Office; Washington, D.C.: 1998d. DHHS Publication No. OEI-01-97-00193; Office of Inspector General. Protecting Human Research Subjects: Status of Recommendations. U.S. Government Printing Office; Washington, D.C.: 2000a. DHHS Publication No. OEI-01-97-00197; Office of Inspector General. Recruiting Human Subjects: Pressure in Industry-Sponsored Clinical Research. U.S. Government Printing Office; Washington, D.C.: 2000b. DHHS Publication No. OEI-01-97-00195; Office of Inspector General. Recruiting Human Subjects: Sample Guidelines for Practice. U.S. Government Printing Office; Washington, D.C.: 2000c. DHHS Publication No. OEI-01-97-00196; Office of Inspector General. Clinical Trial Web Sites: A Promising Tool to Foster Informed Consent. U.S. Government Printing Office; Washington, D.C.: 2002. DHHS Publication No. OEI-01-97-00198; Moreno J, Caplan A, Wolpe P, and the Members of the Project on Informed Consent, Human Research Ethics Group. Updating Protections for Human Subjects Involved in Research. Journal of the American Medical Association. 1998;280(no. 22):1951–1958. doi: 10.1001/jama.280.22.1951; Burman W, Reves R, Cohn D, Schooley R. Breaking the Camel’s Back: Multicenter Clinical Trials and Local Institutional Review Boards. Annals of Internal Medicine. 2001;134(no. 2):152–157. doi: 10.7326/0003-4819-134-2-200101160-00016.
  • 17. Campbell EG, Weissman JS, Clarridge B, Yucel R, Causino N, Blumenthal D. Characteristics of Medical School Faculty Members Serving on Institutional Review Boards: Results of a National Survey. Academic Medicine. 2003;78(no. 8):831–836. doi: 10.1097/00001888-200308000-00019.
  • 18. Greene SM, Geiger AM. A Review Finds That Multi-center Studies Face Substantial Challenges but Strategies Exist to Achieve Institutional Review Board Approval. Journal of Clinical Epidemiology. 2006;59(no. 8):784–790. doi: 10.1016/j.jclinepi.2005.11.018.
  • 19. Klitzman R. How Local IRBs View Central IRBs in the US. BMC Medical Ethics. 2011;12(no. 13). doi: 10.1186/1472-6939-12-13.
  • 20. Klitzman R. ‘Members of the Same Club’: Challenges and Decisions Faced by US IRBs in Identifying and Managing Conflicts of Interest. PLoS ONE. doi: 10.1371/journal.pone.0022796. (in press)
  • 21. Klitzman R. The Myth of Community Differences as the Cause of Variations among IRBs. AJOB Primary Research. doi: 10.1080/21507716.2011.601284. (in press)
  • 22. Klitzman R. US IRBs Confronting Research in the Developing World. Developing World Bioethics. doi: 10.1111/j.1471-8847.2012.00324.x. (in press)
  • 23. See Klitzman, supra note 19; Klitzman, supra note 21.
  • 24. Geertz C. The Interpretation of Cultures: Selected Essays. Basic Books; New York: 1973.
  • 25. Strauss A, Corbin J. Basics of Qualitative Research – Techniques and Procedures for Developing Grounded Theory. Sage Publications; Newbury Park: 1990.
  • 26. See Jones et al., supra note 9.
  • 27. Id.
  • 28. Leonard NH, Beauvais LL, Scholl RW. Work Motivation: The Incorporation of Self-Concept-Based Processes. Human Relations. 1999;52(no. 8):969–998.
  • 29. See Jones et al., supra note 9; Koocher GP. The IRB Paradox: Could the Protectors Also Encourage Deceit? Ethics & Behavior. 2005;15(no. 4):339–349. doi: 10.1207/s15327019eb1504_5.
  • 30. Merton RK. Social Structure and Anomie. American Sociological Review. 1938;3(no. 5):672–682.
  • 31. See Kennedy, supra note 1.
  • 32. Tyler T, Kramer R. Whither Trust? In: Kramer R, Tyler T, editors. Trust in Organizations. Sage; Thousand Oaks, Calif.: 1996; Hardin R. Trust and Trustworthiness. Russell Sage Foundation; New York: 2002; Nagel T. Concealment and Exposure. Philosophy and Public Affairs. 1998;27(no. 1):3–30.
  • 33. Callahan ES, Dworkin TM. The State of State Whistleblower Protection. American Business Law Journal. 2000;38(no. 1):99–175.
  • 34. Katz D, Kahn RL. The Social Psychology of Organizations. Wiley; New York: 1996.
  • 35. Id.
