Do you suffer from pre-arthritis syndrome? You may feel fine, but your joints might be quietly conspiring against you, poised to fail when you least expect it. Arthritis has many risk factors [3]: a history of injury, overuse, smoking, alcohol use, or infection; extremes of body mass; certain occupations; and genetic influences, among others. If you can claim more than a few items on that list, perhaps the pre-arthritis syndrome label will suit you well.
The specific category of “pre-arthritis syndrome” is new—I just made it up—but the general designation of accumulated risk factors as a pre-disease syndrome is well established in medicine. For example, the American Diabetes Association [1] classifies people with impaired fasting glucose or impaired glucose tolerance, yet without a single overt symptom of disease, as having “prediabetes syndrome”. Interestingly, diabetes itself is a pre-disease syndrome, as elevated blood sugar can damage the arteries and nerves well before any symptoms appear. It won’t be long, I predict, before a “pre-prediabetes syndrome-syndrome” is defined as well.
The justification for finding pre-disease syndromes is the prevention of complications. Diabetes can be silent as it smolders, and prediabetes quieter still. If patients are identified before tissue damage irrevocably takes root, modification of the risk factors might avert clinical problems.
Orthopaedics does not traffic much in pre-disease syndromes, but mild developmental dysplasia of the hip (DDH), to name one example, might qualify. Infants whose hips are reduced but can be subluxed will have no symptoms, and unless the hip dislocates, there may be no outward manifestation of the condition as the child is growing, either. Nonetheless, we vigilantly seek and diligently treat DDH in the neonate, all in the name of preventing degeneration later in life.
Pre-disease syndrome diagnoses are labels created by doctors. The question is how much predisease we should manufacture. The adage “an ounce of prevention is worth a pound of cure” establishes the value of detection and preemption (some amount of labeling is laudable) but does not tell us how much. After all, an equally famous adage, “a stitch in time saves nine,” also establishes the value of prevention, but with an effectiveness ratio only 9/16ths (56.3%) as large.
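For the numerically inclined, the adage arithmetic above can be checked in a few lines of Python (a tongue-in-cheek sketch; the 16:1 ratio assumes 16 ounces to the pound):

```python
# "An ounce of prevention is worth a pound of cure":
# one unit of prevention averts 16 units of cure (16 ounces per pound).
prevention_to_cure = 16

# "A stitch in time saves nine": one stitch averts nine.
stitch_to_saved = 9

# Relative effectiveness of the second adage versus the first.
ratio = stitch_to_saved / prevention_to_cure
print(ratio)  # 0.5625, i.e., roughly 56.3%
```

Neither adage, of course, tells us where the break-even point for preventive labeling actually lies.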
More predisease will be created if the threshold defining abnormality is lowered. Metrics that were once “high normal” according to an older criterion would be placed in the abnormal category if the threshold line were to be moved downward. Yet dropping the threshold—what I’ll call defining deviancy up [14]—also runs the risk of medicalizing many people who would benefit more from simply being left alone. Using the definition from the CDC [4], 29.1% of the adult population in the United States is said to experience hypertension (with three-quarters of them taking medications for it). Increase that rate just a bit, and the term “normotensive”, currently used to designate the healthy state, will be obsolete: Abnormal is the new normal.
The obvious problem with identifying too much predisease is that there is no natural limit to how much “care” can be provided. Already, spending in the United States on modifying just two asymptomatic risk factors, elevated cholesterol and high blood pressure, exceeds USD 135 billion [6], representing 6.5% of all healthcare spending. To put that in perspective: USD 135 billion is more than is spent on all neurological disorders combined.
I can define pre-arthritis syndrome so liberally (Fig. 1) that just about everybody qualifies as having pre-arthritis. In turn, some entrepreneur might respond to this epidemic by inventing a cartilage rejuvenation procedure of dubious value. If so, a decent (or is it indecent?) number of these procedures will be performed. Whether the burgeoning volume of surgery is a feature, or a bug, depends entirely on whether you are buying or selling; what is certainly true is that more money will be spent.
Fig. 1. A hypothetical pre-arthritis syndrome questionnaire (created by the author).
Yet there are even greater costs than the financial ones and greater negative consequences than the exposure to the potential harms of treatment. When we label our patients as having pre-disease syndromes, we can rob them of their sense of well-being. All human beings inevitably wear out. Nonetheless, our day-to-day happiness depends on keeping this out of mind [2]. More to the point, excessive labeling of this sort can produce a paradoxical increase in risk. Telling patients that they have osteoporosis may cause them to limit their activities to avoid breaking their bones, while decreased activity “might accelerate bone loss and even increase the risk for fracture” [18].
There is a proper role for vigilance regarding the detection of early disease states. When my wife and I became parents, we had our children screened for DDH; and when I turned 50, I swallowed my pride and gallons of GoLYTELY® to have a colonoscopy. Screening makes sense when the testing is adequately specific, and when we have a modest-yet-effective action we can take in response to a positive test. For pre-arthritis syndrome, neither criterion is met. Until we have good evidence that screening is cost-effective, socially effective, and psychologically effective, a bit of parsimony, a willingness to miss a case, may be the best approach.
It behooves us to lose the race to discover new pre-disease syndromes. Yes, there is truth to the adage, “The early bird gets the worm”; but as comedian Steven Wright correctly points out, “It is the second mouse that gets the cheese.”
Shannon Brownlee MSc
Senior Vice President, The Lown Institute
Author of Overtreated: Why Too Much Medicine Is Making Us Sicker and Poorer
When my son was 10 years old, an eager young pediatric intern recommended a radiograph of his back. My son appeared to be developing the same slight mid-thoracic scoliosis that I have, and the intern wanted to get a “baseline study” so that we would know if it was getting worse as he entered adolescence. I was not enthusiastic. If my son’s spine curved badly enough to warrant a discussion of treatment, wouldn’t we know it then? In the meantime, why would I want to expose him to unnecessary radiation?
The notion that we can prevent disease and disability by “catching it early” has become so deeply ingrained in medical theory and practice that both clinicians and patients sometimes seem to forget that there is no free lunch in medicine. No test, no drug, no surgical intervention comes without some risk of harm. As a result, clinicians routinely recommend interventions that have little or no possibility of easing suffering, changing the course of treatment, or improving, much less curing, a condition, and patients regularly go along with it.
This bias towards perceiving benefit at the expense of recognizing harm is understandable. After all, modern medicine has much to offer patients. But there are other reasons for this bias, principally the dearth of reporting on harms in our journals [11, 19]. And in particular, some kinds of harm—such as psychological effects and financial toxicity, both of which matter very much to patients—are only rarely considered. Nearly half of medical interventions are either entirely untested, so the balance of benefit and harm is unknown, or the evidence is insufficient to determine the ratio [7].
This lack of information on harms trickles down first into guidelines, which often fail to convey when patients should not receive a test or treatment because the harms outweigh the benefits, and then into practice, where the use of screening tests in particular is reinforced by our collective faith in the wisdom of catching disease early. When new, negative data emerge showing that a test is ineffective or causes more harm than benefit, the news is often met with resistance by practitioners [16] perhaps because before then, they had little knowledge of the possibility of harm.
Twenty-five years ago, Clifton Meador, an author and endocrinologist at Vanderbilt University, published an essay in the New England Journal of Medicine titled “The Last Well Person,” in which he decried the proliferation of testing that served mostly to increase the anxiety of patients [13]. In his view, a public in dogged pursuit of perfect knowledge of disease, combined with clinicians in possession of increasingly powerful tests that can detect ever more insignificant lesions, was not a recipe for creating health and well-being. “If the behavior of doctors and the public continues unabated, eventually every well person will be labeled sick,” Meador wrote [13].
We aren’t quite there yet, but given the number of tests we have today, we would do well to heed Dr. Bernstein’s message: Missing a case may be a small price to pay for avoiding harm to large numbers of patients, and clinicians have an important part to play in conveying the dangers of excessive testing to their patients and the public. I’m glad I knew that 13 years ago. My son managed to grow up straight and tall with only the slightest kink in his back and without that baseline radiograph.
Nortin M. Hadler MD, MACP, MACR, FACOEM
Emeritus Professor of Medicine and Microbiology/Immunology, University of North Carolina at Chapel Hill
Author of Worried Sick: A Prescription for Health in an Overtreated America
At any given moment, nearly half of the population of the United States are patients, medical providers or both. Is life in America lived under a pall of poor health? Or should we consider this a triumph of modern medicine? Both are plausible, though “triumph” holds sway. After all, longevity in America has increased some 30 years since my parents’ generation passed into history. Is longevity the fruit of all the care seeking?
Medicine has its triumphs, but these relate to the health of persons, not the health of the people. Measures of population health such as longevity largely reflect the structure of the society in which individuals live. Disparities in the health of populations result, predominantly, from social, economic, and environmental disadvantage [20]. That being the case, why are so many Americans lining up at a HIPAA-dictated distance from the intake clerk at clinics and hospitals across the nation?
The question speaks to the social constructions of health, healthcare, and healthfulness. These vary greatly over time and between individuals. Understanding each is an exercise in semiotics. In the United States whatever we eat, whatever we weigh, whatever we feel, with whomever we interact, however we appear, and how much we are in motion are viewed as aspects of health. That encourages leaps of clinical inference whenever things are not right. And they are not right (or not quite right) often. Modern epidemiology has gone out into the community to document, describe, and monitor this iceberg of morbidity. We all face “loss”. We all face intermittent and remittent discomforts. We all face changes in our bodily functions that take us by surprise. We are all encouraged to take morbidity seriously. Ignoring or just coping is considered ignorance or denial.
Consequently, we are the most medicalized population I am aware of. We are poised to accept all sorts of health-related advice, forewarnings, and alarms with little regard for validity or effectiveness. We are bombarded with marketing schemes from “providers” even when the offerings are far more profitable than salutary. That brings us full circle to Dr. Bernstein’s polemic. He has chosen medical screening as his object lesson. No one should submit to a screening test unless the test is accurate, the disease is important, and we can offer clinically meaningful recourse if the test is positive. But there is a corollary object lesson: We must be prepared to offer reassurance whenever we advise against a screening test, or any other medical intervention, because there is too little likelihood of a clinically meaningful benefit.
Reassuring the patient is not a lost art, just a vanishing one, in the face of contemporary impediments to doing it well [9]. Reassurance “makes patients better able to bear what must be borne, both of illness and of treatment, and helps them concentrate upon recovery” [12]. However, without a trusting and trustworthy relationship it will fall on ears that are deafened by the preconceptions fueled by marketing and confounded by insurance and disability. Prescribing an anticipated intervention is more acceptable to the patient than proscribing futility despite reassurance [5]. Worse yet, recommending against a test or intervention is likely to be interpreted as a sign of disinterest, and perhaps even an affront. We are at a time in medicine when skillfulness in intervening must share center stage with skillfulness in reassuring.
David A. Rier PhD
Department of Sociology & Anthropology
Bar-Ilan University
Dr. Bernstein’s thoughtful column raises several points about the difficulties of extending diagnosis to the “pre-disease syndrome.” Although he generally focuses on economic concerns, there are other implications to consider.
For example, it isn’t just our budgets that are limited—so are our attention spans. The public is bombarded by information about lifestyle and environmental risks. Yet our ability to concentrate on risk is finite. In 1988, Hilgartner and Bosk [10] described their “public arenas” model of social problems, noting that at any given time, the public will attend to only a limited number of issues. Since their paper was published, smartphones, tracking apps, and social media have amplified risk news, and made their own immense claims on the public’s finite attention.
Additionally, laypeople generally lack experience with such concepts as statistical significance or sample bias. Therefore, they may focus on essentially theoretical risks, ignoring those far likelier to harm them. Dr. Bernstein, indeed, observed that the public might adopt risk-avoidance strategies riskier than the risks they flee. In such conditions, why boost the noise-to-signal ratio?
Dr. Bernstein also touched on an even more profound problem: creating legions of prodromal sick people is a far-from-innocent business. Illness threatens societal function. Modern society must therefore formulate social roles for the ill that encourage their return to health and productive function [15] where possible.
But when does this “sick role” begin? When does it end? When human immunodeficiency virus testing began in 1985, seropositive individuals assumed the sick role; many lived with stress, and feared disclosure and stigma, sometimes decades before contracting AIDS [17]. As AIDS, cancer, and other treatments improve, so too can survival rates, producing a “remission society” [8] of those occupying a liminal stage between sickness and health, never sure if their illness is really “over”. Predisease diagnosis, perhaps facilitated by expanding genetic testing, creates the reciprocal stage of a “subclinical” society. This threatens to land us all in a “pre-disease remission society” for whom the ax is always about to fall. Shouldn’t we preserve some space for a “healthy society”?
Expanding diagnostic categories could lead to—or excuse—the failure to open that business, pursue that degree, start that family. If not because of “prepatient” despondency, then perhaps because others choose not to invest—as business partners, employers, or spouses—in “prediseased” individuals. For, one may assume the “pre-emptive” sick role—or society may assign it to one.
Given all this, predisease only really makes sense where: (1) the risk appears substantial, (2) it is likely to manifest, and (3) effective, realistic counter-measures exist. If you must diagnose a disease that isn’t yet a disease, make it count.
Footnotes
A note from the Editor-in-Chief: We are pleased to present to readers of Clinical Orthopaedics and Related Research® the next Not the Last Word. The goal of this section is to explore timely and controversial issues that affect how orthopaedic surgery is taught, learned, and practiced. We welcome reader feedback on all of our columns and articles; please send your comments to eic@clinorthop.org.
The author certifies that neither he, nor any members of his immediate family, have any commercial associations (such as consultancies, stock ownership, equity interest, patent/licensing arrangements, etc.) that might pose a conflict of interest in connection with the submitted article.
All ICMJE Conflict of Interest Forms for authors and Clinical Orthopaedics and Related Research® editors and board members are on file with the publication and can be viewed on request.
The opinions expressed are those of the writer, and do not reflect the opinion or policy of Clinical Orthopaedics and Related Research® or The Association of Bone and Joint Surgeons®.
References
1. American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care. 2014;37:S81-S90.
2. Becker E. The Denial of Death. New York, NY: The Free Press; 1973.
3. Centers for Disease Control and Prevention. Factors that increase risk of getting arthritis. Available at: https://www.cdc.gov/arthritis/basics/risk-factors.htm. Accessed January 3, 2018.
4. Centers for Disease Control and Prevention. Hypertension among adults in the United States: National Health and Nutrition Examination Survey, 2011–2012. Available at: https://www.cdc.gov/nchs/products/databriefs/db133.htm. Accessed December 28, 2018.
5. Chou R. Reassuring patients about low back pain. JAMA Intern Med. 2015;175:743-744.
6. Dieleman JL, Baral R, Birger M, Bui AL, Bulchis A, Chapin A, Hamavid H, Horst C, Johnson EK, Joseph J, Lavado R, Lomsadze L, Reynolds A, Squires E, Campbell M, DeCenso B, Dicker D, Flaxman AD, Gabert R, Highfill T, Naghavi M, Nightingale N, Templin T, Tobias MI, Vos T, Murray CJ. US spending on personal health care and public health, 1996-2013. JAMA. 2016;316:2627-2646.
7. El Dib RP, Atallah AN, Andriolo RB. Mapping the Cochrane evidence for decision making in health care. J Eval Clin Pract. 2007;13:689-692.
8. Frank AW. At the Will of the Body. Boston, MA: Houghton Mifflin; 2002.
9. Hadler NM. By the Bedside of the Patient: Lessons for the Twenty-first Century Physician. Chapel Hill, NC: UNC Press; 2016.
10. Hilgartner S, Bosk CL. The rise and fall of social problems: A public arenas model. Am J Sociol. 1988;94:53-78.
11. Ioannidis JP, Evans SJ, Gøtzsche PC, O'Neill RT, Altman DG, Schulz K, Moher D; CONSORT Group. Better reporting of harms in randomized trials: An extension of the CONSORT statement. Ann Intern Med. 2004;141:781-788.
12. Kessel N. Reassurance. Lancet. 1979;1(8126):1128-1133.
13. Meador CK. The last well person. N Engl J Med. 1994;330:440-441.
14. Moynihan DP. Defining deviancy down. The American Scholar. 1993;62:17-30.
15. Parsons T. The Social System. New York, NY: Free Press of Glencoe; 1951.
16. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.
17. Rier DA. The patient’s experience of illness. In: Bird C, Conrad P, Fremont A, Timmermans S, eds. Handbook of Medical Sociology. 6th ed. Nashville, TN: Vanderbilt University Press; 2010:163-178.
18. Rubin SM, Cummings SR. Results of bone densitometry affect women’s decisions about taking measures to prevent fractures. Ann Intern Med. 1992;116:990-995.
19. Seruga B, Templeton AJ, Badillo FE, Ocana A, Amir E, Tannock IF. Under-reporting of harm in clinical trials. Lancet Oncol. 2016;17:209-219.
20. U.S. Department of Health and Human Services, Office of Disease Prevention and Health Promotion. Healthy People 2020. Available at: https://www.healthypeople.gov/2020/about/foundation-health-measures/Disparities. Accessed January 23, 2019.
