Journal of General Internal Medicine
Editorial
. 2006 Sep;21(9):1003–1004. doi: 10.1111/j.1525-1497.2006.00581.x

Why Disclosure?

Raymond De Vries 1, Carl Elliott 2
PMCID: PMC1831613  PMID: 16918750

In March 2006, 6 previously healthy research subjects in London were nearly killed in a Phase 1 trial of an investigational monoclonal antibody called TGN1412. Shortly after being given the investigational drug, the subjects developed multisystem organ failure and were rushed to an intensive care unit at a nearby hospital.1 The subjects had been promised £2000 ($3500) apiece upon completion of the trial, which was conducted by a contract research organization called Parexel.2 According to its website, Parexel partners with clients “to accelerate time-to-market, control development costs, reduce risk, and maximize return on investment.”3

It is phrases like “maximize return on investment” that should be troubling to potential research subjects. The TGN1412 study is only one among a number of recent industry-funded clinical trials where financial interests have arguably placed research subjects at risk.

  • In December 2005, news reports revealed that 9 healthy subjects had tested positive for tuberculosis after taking part in an industry-sponsored trial of an immunosuppressant drug at the Anapharm research facility in Montreal. A later investigation reported that 11 employees at the trial site were also infected.4

  • In November 2005, it was reported that SFBC, Anapharm's parent company, had paid undocumented Latino immigrants to take untested drugs in a converted Miami motel. The trials were supervised by an unlicensed medical director whose degree came from an offshore medical school in the Caribbean. The for-profit IRB that approved many SFBC trials was owned by the wife of an SFBC vice-president.5

  • In April 2002, according to news reports, the Fabre Research Clinic in Houston recruited a homeless Vietnam veteran named Garry Polsgrove for a trial of clozapine. The trial was funded by Ivax Corporation, the nation's largest manufacturer of generic drugs. Twenty-two days after Polsgrove checked into the clinic, he died of myocarditis in the care of an unlicensed clinic assistant. The FDA allowed the clinic to operate for 3 more years before closing it down.6

  • In February 2004, Traci Johnson, a 19-year-old healthy volunteer, committed suicide in Eli Lilly's testing facility in Indianapolis, while taking the antidepressant duloxetine (Cymbalta). Johnson had no previous history of mental illness. She was being paid $150/day.7

  • In May 2006, the Washington Post updated the on-going story of Pfizer's 1996 trial of the antibiotic Trovan in Kano, Nigeria. At least 11 children died in a clinical trial of Trovan, which was conducted in the midst of a meningitis epidemic. Six children died after being given Trovan, while 5 died after being given an inadequate dose of the comparison drug. According to the Post, a Nigerian government report condemning the Pfizer study mysteriously disappeared for 5 years before a copy was finally found and leaked to the press. The leaked report said that Pfizer had not told the children or their parents that they were part of an experiment, and that a letter of approval from a Nigerian ethics committee, which Pfizer used to justify its actions, had been concocted and backdated by the company's lead researcher in Kano. The report also said that the oral form of Trovan with which the children in the trial were treated had never been given to children with meningitis before.8

Medical research, once largely the province of academic researchers working in universities, has become a multinational corporate enterprise. Only about a quarter of clinical trials now take place in academic settings; academic researchers themselves have considerable financial ties to industry; and even the ethics review of clinical trials has become a major commercial enterprise. Clearly the enormous amount of money at stake in medical research today presents potential subjects with a problem. How can a research subject be sure that the emphasis on “return on investment” will not translate into a dangerous trial? Are subjects even aware that the investigators conducting the trials may have a considerable financial stake in the research?

As reported in this issue of the Journal, Weinfurt et al.9 decided to put these questions directly to research subjects themselves. In a series of focus groups, Weinfurt and his colleagues asked subjects what they would want to know about clinical researchers' financial interests. What did they find? Unsurprisingly, they found that research subjects really have no idea what to think. Some subjects apparently do not want to know about conflicts of interest; others say they do want to know about conflicts of interest, on the grounds that this knowledge will help them make better decisions; and still others do not understand enough to know what they want. Acknowledging this confusing result, the authors conclude on a rather equivocal note: “This does not necessarily mean…that disclosures should not be made.”

What are we to make of the subjects who simply do not want to know about a researcher's conflicts of interest? Weinfurt and his colleagues write that these subjects apparently assume that “ignorance is bliss.” Perhaps, but it is not irrational of them to distrust disclosure as a remedy for conflict of interest. In fact, the most interesting research yet published on conflict of interest, from a group at Carnegie Mellon, suggests that subjects who do not hear disclosures may make better decisions than those who do.10 Not only has this research cast doubt on the assumption that disclosure is a good remedy for conflict of interest; it suggests that disclosure may make the effects of the conflicts even worse.

In the Carnegie Mellon study, researchers devised an experiment where one group of people was instructed to look from a distance at large jars filled with coins. These people, called “estimators,” were asked to estimate how much money the jars had in them. The closer an estimator came to the right amount, the more money that particular estimator would get. Thus the estimators had a strong financial incentive to get their estimates right.

Another group, the so-called “advisors,” had a different job. Their job was to get closer to the jars, look at the coins more carefully, and then give written advice to the estimators. But the advisors had a different set of financial incentives. They were paid according to how high their estimators guessed. That is, they were given financial incentives based not on how close to the truth their estimators got, but on how high their estimators' guesses were. Thus, they had an incentive to give misleading advice.
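The misaligned incentives described above can be captured in a loose sketch. The payoff rules and dollar amounts below are purely illustrative, not the study's actual parameters; they show only why accuracy pays for the estimator while inflation pays for the advisor:

```python
# Illustrative payoff rules for the two roles in the Carnegie Mellon
# experiment. The specific formulas and amounts are hypothetical.

def estimator_payoff(guess: float, true_value: float) -> float:
    """Estimators earn more the closer their guess is to the true amount."""
    return max(0.0, 10.0 - abs(guess - true_value) / 10.0)

def advisor_payoff(estimator_guess: float) -> float:
    """Advisors earn more the HIGHER the estimator guesses,
    regardless of accuracy -- hence the conflict of interest."""
    return estimator_guess * 0.05

true_value = 100.0
accurate_guess, inflated_guess = 100.0, 150.0

# The estimator is better off guessing accurately...
assert estimator_payoff(accurate_guess, true_value) > \
       estimator_payoff(inflated_guess, true_value)
# ...but the advisor is better off if the estimator guesses high.
assert advisor_payoff(inflated_guess) > advisor_payoff(accurate_guess)
```

The key feature is that the advisor's payoff depends only on the guess, not on its accuracy, so advice that inflates the estimate serves the advisor at the estimator's expense.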

The results? Predictably, when the estimators listened to the advisors, they made higher guesses. That was unsurprising, as the advisors were being paid to advise them to guess high. Less predictable, however, was the effect of disclosure. When the advisors disclosed to the estimators that they were getting paid to have them guess high, the disclosure did not lead the estimators to guess any lower. They still guessed high, even though they had been told that their advisors had a conflict of interest. Disclosure did not make the estimators any more skeptical.

Perhaps even more interesting was the effect of disclosure on the advisors themselves. Once the advisors disclosed their conflicts, their advice got even worse. They started giving advice that was even more biased than before. It was as if disclosure had given them moral license to exaggerate. The Carnegie Mellon authors of the study conclude that “coming clean” leads to “playing dirtier.”11 They argue that the solution to the bias created by conflicts of interest is not simply to disclose the conflict, which makes the bias even worse. Rather, the solution is to eliminate the financial conflicts.

The important ethical question raised by conflicts of interest is not, as Weinfurt and his colleagues seem to assume, “Should researchers disclose?” but rather “Why has disclosure become such a popular way of managing financial conflicts of interest in medicine?” The standard, boilerplate answer to this question is that researchers must disclose in order to show “respect for persons.” Respect for persons demands that researchers provide all relevant information to would-be research subjects so that their decisions to participate (or to refuse) are made in full and open awareness of all the risks they are assuming. However, a more cynical explanation of the popularity of disclosure suggests that it is a “remedy” for financial conflicts of interest that allows those conflicts to stay in place. Thus it does nothing to threaten the existing funding arrangements for clinical research.

In the hands of many research ethicists, “respect for persons” becomes synonymous with “autonomy,” the right of a research subject to choose or refuse to take part in a trial. This easy translation of respect into autonomy allows researchers to ignore the problems that are part and parcel of industry-funded research. The obligation of researchers to their subjects is reduced to a process of plying subjects with information, including information about funding. Autonomy has replaced the overweening paternalism of medicine with a new kind of “distant paternalism” where researchers, like a distant father, remain aloof and detached. The helplessness of research subjects in the face of medical power is transformed into helplessness in the face of incomprehensible and misleading information.

What is being overlooked here is the need to protect the welfare of potential research subjects. The real problem raised by industry-funded research is that the pursuit of financial gain might lead researchers to place subjects at risk in dangerous studies. Researchers have yet to find a way to disclose their financial conflicts of interest that shows a true respect for the varied needs of would-be subjects, but even if they—or some hard-working research ethicists—discovered the ideal way to disclose, subjects would still need to be protected from dangerous trials. The work of Weinfurt and his colleagues and the researchers at Carnegie Mellon suggests that we are placing far too much faith in disclosure as a means of protection. Disclosure works like a warning flag: it alerts us to potential problems, but it does not fix them. For that, we need to look elsewhere.

REFERENCES

