Teaching responsibility is hard. Anyone who’s ever been a parent understands that truism. It’s especially difficult when the very behaviors you want to prevent are sometimes the ones that secretly make you very proud. I recall when my son was in a soccer league for four-year-olds and was trash-talking other children. I didn’t want my kid to be name-calling, but I was also quietly pleased that he had the intestinal fortitude to stand up for himself.
Assigning responsibility for problems in health care is much more difficult than overcoming a bit of parental ego. If an individual patient is nonadherent to a prescribed course of care, is that the fault of the person or of the system? Is it because people lack access to the resources or education that would help them understand and comply with the recommended plan, or is it because they consciously made a bad choice? (As an emergency physician, I find that most of what I see on shift is a manifestation of bad choices, and most of my efforts at patient education center on the phrase “Don’t do that!”) If we focus on systemic factors to the exclusion of individual behaviors, proposed solutions risk toppling public health from the unbiased moral high ground into the abyss of the partisan wars. If we assign responsibility to the individual, we have a different set of problems. If patients with pulmonary disease continue to use tobacco despite multiple admonitions not to do so, to what extent are they still given carte blanche for care? And if assigning responsibility is difficult, learning accountability—the ability to accept the consequences of our actions—is even harder. Bil Keane, the original artist behind the Family Circus comic strip, used to draw two ghostly children in the house called Ida Know and Not Me. Most of us still have them living somewhere in our emotional basements.
RELIANCE
In “Nondetected: The Politics of Measures of Asbestos in Talc,” Rosner et al. (p. 969) review an interesting chapter in the history of consumer safety. As they describe well, battles over meaning are fundamental to the process of regulation, and it seems intuitive to presume that warring parties will promote different standards and meet somewhere in the middle. In this case, the cosmetic industry appears to have held the better hand. If fault is to be assigned, it would seem to be shared between industry and government, the former for pushing for a lower standard for fiber detection and the latter for failing to adhere to its own proposal. A critical review might also find that although terms such as “nondetected” and “asbestos-free” may well be misleading, their use within the regulated limits of detection was factually correct. And if we accept the authors’ implied premise that any exposure to asbestos is unacceptable, one wonders whether any lower limit would have calmed the waters nearly fifty years later. But the larger issue here is the uneasy and often adversarial relationship among environmental epidemiology, corporate economics, and government regulation. More broadly, one can view this episode as industry failing in its responsibility to both recognize and act on what science said about the risks of asbestos in talc.
But does industry really have a responsibility to science? I would propose that a better term to describe the relationship of industry to science is “reliance.” In a free market, industry depends on science to maintain a competitive edge. The physical sciences allow companies to develop products, processes, and services that are less expensive, more innovative, or of better quality than those of their competitors; the social sciences guide marketing, sales, and corporate management. Corporate research is funded not for the pursuit of knowledge for its own sake but in the hope that such knowledge leads to that competitive edge. And although we may bemoan the fact, science can certainly be owned; that’s called a patent.
In this model of reliance, the accountability inherent in the term “responsibility” is still present, but it goes by the name of “risk management.” A company has a vested interest in the results of studies showing that a product or service offers no benefit or causes harm. Work that shows no benefit signals a waste of corporate capital; studies demonstrating harm illustrate legal and financial risks to the company itself. The wise industry leader will be aggressive in studying the risks and benefits of all facets of corporate activity, because if he or she doesn’t, most assuredly someone else will. Indeed, the latest revision of the International Organization for Standardization (ISO) 9001 standard for quality management systems emphasizes risk-based thinking, helping users address both risks and opportunities in a structured and systematic way. The work of science is of necessity a major force in these programs of critical thought.
It’s also harder today to keep a lid on science. Although “inside information” was undoubtedly more secure in the early 1970s, and the deliberations of regulatory bodies probably even more so, today our connected lives offer more opportunities for transparency than a glass-walled bedroom and more opportunities for leakage than a home with bad plumbing. Both industry and regulatory bodies can be held accountable for their actions in more ways than were dreamed of almost a half century ago.
There are certainly other ways to view the relationship between business and research. The word “dependence” might be used, and it’s true that, as previously discussed, industry is in many ways dependent on science. But the reverse does not hold: although research is often corporate-funded, industry dollars are not a necessary prelude to lab work. And strictly speaking, industry is not “accountable” to science but is instead accountable to its shareholders and customers for the ways it uses science. Considering the limitations of these more specific terms, “reliance” seems to best encompass the full scope of the interaction.
ACQUIRING THE CORPORATE EAR
The word “responsibility” implies a moral judgment, and it is asking for conflict when public health advocates condemn corporate behavior in moral terms. Framing the relationship between industry and science in ethical terms creates a win–lose scenario: someone is right and someone is wrong. Emphasizing the reliance of industry on the work of science carries no such weight, and turning the relationship into a practical problem to be cooperatively managed can be a win–win. When science wants to talk to industry, we might do better to say that science fosters innovation, competition, and accountability, all three of which keep the dividends flowing. The fact that too many corporate leaders have promoted short-term profits at the expense of both consumer safety and long-term corporate survival is undoubtedly of concern, and it has likely furnished tenure for any number of business school ethicists, but it is not strictly part of the relationship between the worlds of business and science. Noting that risk-based thinking may also drive what we consider to be ethical behavior is icing on the cake.
If public health wants to talk about the responsibility of industry to science and have the argument understood, that argument needs to be framed in a context that attracts corporate attention. Simple appeals to what the scientific community considers right or wrong won’t do the trick; understanding that the best use of science optimizes the free market just might. Otherwise, our concerns register like those of trash-talking toddlers: simply words on the pitch while the contest rages on.
ACKNOWLEDGMENTS
The author wishes to thank Stephen Cohen, associate professor, School of Humanities, University of New South Wales, Sydney, Australia, for his thoughts and discussion, as well as the AJPH reviewers for their insights.
CONFLICTS OF INTEREST
The author has no financial disclosures or conflicts of interest to report.
