Public Health Reports
2014 Mar-Apr;129(2):124–126. doi: 10.1177/003335491412900204

On the Hard and Soft Sciences in Public Health

Mark VanLandingham
PMCID: PMC3904890  PMID: 24587545

Many of my friends and colleagues who conduct public health research in a controlled environment regularly and casually invoke the moniker “hard and soft science.” My reflexive query, “What does that mean exactly?” generally leads to a self-conscious reply and a fairly quick change of subject, and so I have decided to explore that question in this essay.

Oxford defines science as “the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.”1 On the supposed distinction between “hard” and “soft” science, Oxford is silent. I think what my friends and colleagues sometimes have in mind by hard science is physicality, such as the hard surfaces of laboratory counters, centrifuges, and pipettes; and the physical presence of things such as skin, bones, and water samples. Perhaps “soft” here implies abstract. Sometimes I think what they have in mind has to do with reliability, i.e., that the results of “hard” science are more reproducible or consistent than are results from “soft” science (in fact, Hedges concluded from his systematic empirical comparison that “the results of physical experiments may not be strikingly more consistent than those of social or behavioral experiments”2). Sometimes I think they're trying to say something about validity or epistemology, i.e., the “hard” truth vs. the “soft” or squishy truth. What is clear when they use the terms is that, from their point of view, “hard” is highly valued, while “soft” is much less so.

From where does the denigration of the so-called “soft” by the self-proclaimed “hard” originate? Dr. Massimo Pigliucci, a philosophy of science professor writing on the website Science 2.0,3 attributes the distinction to an article by Dr. John R. Platt published in Science 50 years ago.4 In his thoughtful essay, Platt explores why he thinks some fields are more productive than others. However, rather than disparage the behavioral and social sciences—a frequent modern target of the “soft” epithet—Platt, a physicist, directs much of his ire toward chemistry. Near the end of his essay, he includes social problems on his list of the most complex and challenging scientific issues that lie ahead.

Platt makes a strong case for induction, hypothesis testing, and the “fruitfulness of interconnecting theory and experiment so that the one checked the other.”4 So does every good scientist I know.

The “hard” vs. “soft” science metaphor enjoys casual and widespread use within public health, nearly always employed by those who fancy themselves to be on the “hard” side of the coin. Perhaps this distinction results from having cursory contact with such a wide range of disciplines. In our field, bench scientists, managerial experts, legal analysts, statisticians, program managers, epidemiologists, social scientists, physicians, nurses, and other clinicians often work in close proximity with the mutual goal of improving public health. That effective collaboration is possible at all across such a wide range of expertise is remarkable.

That such close proximity does not always break down insularity is not surprising. Sometimes we know just enough about other subfields focusing on public health to be dangerous. One such danger involves a tendency to deride an approach as “soft” when it does not fit within a familiar paradigm. When the familiar paradigm is the two-by-two table, this tendency will severely limit the scope of public health problems that can be viewed as fruitful topics of scientific investigation.

The classic two-by-two research design is a very powerful scientific tool. Randomized controlled trials, for example, can help illustrate whether treatment (or program) A is better than B, all things being equal.

The problem is that, in situ, all things seldom are equal. Random assignment of individuals to experimental or control groups can be silly to contemplate for many of our central problems in public health (e.g., “you will drink soft drinks for 10 years; you will drink water”). And so we require more complex tools to accommodate the complex context within which these health problems occur.
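The hazard can be made concrete with a small sketch. The counts below are entirely hypothetical, invented only to illustrate the arithmetic: when assignment is not random, a confounder (here, an imagined age split) can make the pooled two-by-two comparison point in the opposite direction from every stratum-specific comparison, the pattern known as Simpson's paradox.

```python
# Hypothetical counts illustrating why, without random assignment,
# a crude two-by-two comparison can mislead (Simpson's paradox).
# Each stratum: (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
strata = {
    "younger": (1, 100, 19, 500),   # risk 1% vs. ~3.8%
    "older":   (30, 200, 5, 25),    # risk 15% vs. 20%
}

def risk(cases, total):
    return cases / total

# Within every stratum, the exposed group fares *better* ...
within = all(risk(a, b) < risk(c, d) for a, b, c, d in strata.values())

# ... yet pooling into a single two-by-two table says the opposite,
# because the age confounder is unevenly distributed across groups.
ec = sum(s[0] for s in strata.values())   # pooled exposed cases
et = sum(s[1] for s in strata.values())   # pooled exposed total
uc = sum(s[2] for s in strata.values())   # pooled unexposed cases
ut = sum(s[3] for s in strata.values())   # pooled unexposed total
crude_exposed, crude_unexposed = risk(ec, et), risk(uc, ut)
```

Stratifying (or modeling) the confounder recovers the correct direction of the effect; the naive pooled table cannot.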

Indeed, many self-described “hard” science disciplines are moving beyond simple research designs and analytical strategies to more complex approaches. But usually my colleagues in the laboratory-based and clinical sciences are surprised when I point out that these more sophisticated strategies are typically pioneered and developed within the disciplines they deride as “soft.” Decades before these more complex models and techniques gained traction among scientists who typically work in more controlled environments, statisticians and social scientists were employing multivariate analysis to address multiple sources of simultaneous causation, survival models to address right-censored observations, multilevel analysis to address micro and macro sources of influence, latent variable analysis to address unobserved sources of influence, content analysis to effectively use qualitative data, and instrumental variables to address endogeneity. Certainly, no one employing these more complex and sophisticated tools would consider a two-by-two research design to be “hard” in the sense of valid when potentially confounding influences have not been addressed.
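As one illustration of the methods named above, here is a minimal sketch of a Kaplan-Meier estimator, the standard survival-analysis tool for right-censored observations. The follow-up times are hypothetical and the implementation is deliberately bare-bones.

```python
# Hypothetical follow-up times (in months). event=False means the subject
# was censored: still event-free when last observed, then lost to follow-up.
observations = [(2, True), (3, False), (4, True), (4, True),
                (5, False), (7, True), (9, False), (10, True)]

def kaplan_meier(obs):
    """Return [(time, estimated survival probability)] at each event time."""
    obs = sorted(obs)
    at_risk = len(obs)
    surv, curve = 1.0, []
    i = 0
    while i < len(obs):
        t = obs[i][0]
        events = sum(1 for time, e in obs if time == t and e)
        count = sum(1 for time, _ in obs if time == t)
        if events:
            # Survival drops only at event times; censored subjects
            # simply leave the risk set afterward.
            surv *= (at_risk - events) / at_risk
            curve.append((t, surv))
        at_risk -= count
        i += count
    return curve

curve = kaplan_meier(observations)
```

The point of the method is visible in the risk-set bookkeeping: censored subjects contribute information for as long as they are observed, rather than being discarded as a simple "event vs. no event" two-by-two table would require.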

So, why the staying power of the “hard” science moniker among those who use it? Provinciality and isolation are surely part of the answer. Many researchers have limited exposure to approaches used outside of their disciplinary enclave. Another part of the answer is likely due to the equipment, uniforms, and rituals that distinguish the clinical and bench sciences from the so-called “soft” disciplines that function largely without them. A third part of the answer likely has to do with the appeal, to some, of things that can be readily counted. The number of micrograms of lead per deciliter of blood can be easier to wrap one's head around than, say, determinants of differential racial trends in attitudes regarding unwed pregnancy during a period of increasing age at marriage.

But is the “hard vs. soft” disparagement really a problem? Yes, in two ways. First, complacency invariably leads to ignorance, and ignorance of state-of-the-art methods in field-based research is a serious impediment to scientific progress. Second, the most fruitful lines of public health research and practice require effective collaboration across fields. For example, decades ago, leaders in family planning realized that substantial cross-talk among bench scientists, clinicians, social scientists, and program specialists would be required to move modern contraceptives from laboratories to villages.5 Similarly, looking forward, as public health research incorporates more and more biomarkers into population-based assessments, interdisciplinary teams consisting of highly specialized scientists will be required to effectively implement and interpret this work. Organizers of such teams will find it a lot easier to invite a bench scientist or a clinician to discuss a potential collaboration when there is some indication that the prospective team member is reasonably knowledgeable—or at least open to learning—about subfields outside of their own. More importantly, if the potential collaborator is reasonably knowledgeable about other fields of science, he or she will be in a much stronger position to facilitate synergy than would a self-proclaimed “hard scientist,” who will more likely be a red herring on an interdisciplinary team.

I contend that the hard/soft science moniker is vacuous, vapid, complacent, and ultimately counter-productive. I propose that we set an example for other fields and strike the terms “hard science” and “soft science” from our public health lexicon. Doing so will speed our collective progress toward good science.

Footnotes

The editorial benefited from comments on the topic, and on an earlier draft, by Paul Hutchinson, Martina Morris, Becky Mowbray, Adrian Raftery, and Susan Weller. Research assistance by Mengxi Zhang was also helpful.

REFERENCES


Articles from Public Health Reports are provided here courtesy of SAGE Publications
