Author manuscript; available in PMC 2014 Mar 1.
Published in final edited form as: Acupunct Med. 2013 Mar;31(1):98–100. doi: 10.1136/acupmed-2013-010312

Responses to the Acupuncture Trialists' Collaboration individual patient data meta-analysis

Andrew J Vickers, Alexandra C Maschino, George Lewith, Hugh MacPherson, Karen J Sherman, Claudia M Witt, on behalf of the Acupuncture Trialists' Collaboration
PMCID: PMC3658608  NIHMSID: NIHMS467484  PMID: 23449559

Abstract

In September 2012, the Acupuncture Trialists' Collaboration published the results of an individual patient data meta-analysis of close to 18,000 patients in high-quality randomized trials. The results favored acupuncture. Although there was little argument about the findings in the scientific press, a controversy played out in blog posts and the lay press. This controversy was characterized by ad hominem remarks, anonymous criticism, phony expertise, and the use of opinion to contradict data, predominantly by self-proclaimed skeptics. There was a near-complete absence of substantive scientific critique. The lack of any reasoned debate about the main findings of the Acupuncture Trialists' Collaboration paper underlines that mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterize the skeptics’ movement.

Keywords: Pain, Skepticism, Headache, Acupuncture, Statistics and Research Design

Introduction

In September 2012, the Acupuncture Trialists' Collaboration published the results of an individual patient data meta-analysis of close to 18,000 patients in high-quality randomized trials. The results favored acupuncture, with statistically significant differences from both no-acupuncture and sham acupuncture controls.1 Acupuncture has long been controversial and so, as might be imagined, the findings of the Acupuncture Trialists' Collaboration generated a fair degree of controversy.

What is remarkable is that this controversy played out predominantly in blog posts and the lay press. There has been little argument about the findings in the usual scientific channels. Three letters to the editor were submitted, but they dealt with peripheral issues, such as whether acupuncturists might miss diagnoses in routine clinical practice. Only one substantive critique of the paper has appeared in a scientific forum.2

Style of criticism

In addition to appearing outside of traditional scientific forums, criticisms of the Acupuncture Trialists' Collaboration meta-analysis have been characterized by one or more of the following.

Ad hominem remarks

In a typical blog post, the study authors were accused of displaying “considerable pro-acupuncture bias”;3 a comment on another blog described senior author Klaus Linde as a “homeopath”.4 One poster, who claims that the study shows the “desperation of NCCAM” and the “gullibility” of the media, opined that “Dr. Vickers … needs to go back and take an introductory course on statistics” and warned that “like loaded guns, some people shouldn’t be left alone with a statistical software program”.5 A somewhat bizarre accusation leveled at two authors was that they may have “read only the abstract of the paper”.2 One post made a direct accusation of statistical misconduct: “The whole thing looks like a number the authors pulled out of their nether regions and then plugged into their meta-analysis software in order to see if it would affect anything.”4

Anonymity

The individual who advised Dr Vickers to take a statistics course signed off as “A. Skeptic”.5 The blog post about the Collaboration’s nether regions was posted by “Orac”.4

Phony expertise

Many blog posters threw around methodological concepts such as I² or funnel plots, or made claims about the nature of chronic pain or acupuncture placebo techniques. At the same time, many admitted to not having read the paper,4 and none appear to have published scientific research on pain, acupuncture or meta-analysis.

Science as self-proclamation

Blog postings were made on sites with names such as “Science Based Medicine”3 or on sites that claim to do “battl[e]” in a world “where facts, rationality and truth have been sacrificed upon the altar of entertainment”.5

Opinion beats data

Edzard Ernst was cited in several media outlets as stating: “I fear that, once we manage to eliminate this bias [that operators are not blind] … we might find that the effects of acupuncture exclusively are a placebo response”.6 One blogger asserted that acupuncture “has an effect size that is very small and, in my opinion, overlaps with no effect at all”.3 It is simply bizarre to dismiss years of careful statistical analysis on the grounds that results “might” change; similarly, it should go without saying that whether an effect size overlaps with no effect is not a matter of opinion but of confidence intervals.
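To make the point concrete, here is a minimal sketch, using purely illustrative numbers rather than the trial data, of how a 95% confidence interval settles whether an estimated effect size overlaps with no effect:

```python
# Minimal sketch: a 95% confidence interval, not opinion, decides
# whether an effect size "overlaps with no effect at all".
# The numbers below are illustrative, not the meta-analysis results.
effect_size = 0.26   # hypothetical standardized effect estimate
se = 0.04            # hypothetical standard error of that estimate

z = 1.96             # two-sided 95% normal critical value
lower = effect_size - z * se
upper = effect_size + z * se

print(f"95% CI: ({lower:.2f}, {upper:.2f})")
# The interval either includes 0 or it does not; no opinion required.
print("overlaps no effect:", lower <= 0 <= upper)
```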

Response to critiques

What of the substantive content of the critiques? The one critique in the scientific press, published on the website of the British Medical Journal, came from David Colquhoun, a well-known critic of complementary therapies. He stated that “acupuncture does not work to any useful extent … Vickers et al showed that the difference is far too small to be of the slightest clinical interest”.2 Colquhoun’s point appears to be that “clinical interest” depends on the difference between acupuncture and sham, whereas we argue that the decision taken in clinical practice is between referring to acupuncture or not doing so. Nonetheless, it is of interest that the effect size against sham is in some cases comparable to that for widely accepted treatments. For example, in one meta-analysis of NSAIDs for osteoarthritis of the knee, the effect size for NSAIDs vs. placebo in trials that did not preselect NSAID responders was 0.23,7 compared with 0.26 for acupuncture, or 0.16 after exclusion of an outlying trial.1
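For readers unfamiliar with this scale, the figures above are standardized mean differences. A brief sketch, with hypothetical group summaries rather than data from either meta-analysis, of how such a figure is calculated:

```python
import math

# Sketch of a standardized mean difference (Cohen's d), the scale
# on which figures such as 0.23 and 0.26 are expressed.
# Group summaries below are hypothetical.
m_t, sd_t, n_t = 38.0, 24.0, 150   # treatment arm: mean pain, SD, n
m_c, sd_c, n_c = 44.0, 23.0, 150   # control arm: mean pain, SD, n

# pooled standard deviation across the two arms
sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                      / (n_t + n_c - 2))

d = (m_c - m_t) / sd_pooled        # positive = greater pain reduction
print(f"standardized mean difference: {d:.2f}")  # about 0.26 here
```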

A critique by Ernst appears to be that, because the effect of acupuncture vs. sham is relatively small, it might disappear altogether if account were taken of the bias associated with the acupuncturists being unblinded: “Not only does the acupuncturist know whether they are performing the therapy, or a sham, but the patient will tend to notice as well. Find some way around this … and it may be the case that the effect disappears into the sands of placebodom.”8 He has also stated that “a trial is either both patient and therapist-blind, or not blind at all”.6 There are several problems with this critique. First, the most obvious mechanism whereby operator unblinding could cause bias is if information about allocation were transmitted to the patient, that is, the patient was meant to be blind but picked up hints from the practitioner. But there is no evidence that this happens, either in general or in the trials in our review, many of which included assessment of blinding, such as asking patients what group they thought they were in and why (a simple version of such a check is sketched below). Second, there may be some unknown mechanism by which lack of blinding of the practitioner leads to differential outcome, perhaps some effect working at the subconscious level. But again there is no evidence whatsoever of such an effect: Ernst is trying to argue away data on the basis of an entirely unsubstantiated hypothesis. Third, there is evidence that the effects posited by Ernst are not important. Despite Ernst’s chiding remarks (“acupuncturists tend to tell us that therapist blinding is impossible, but this is clearly not true”), a trial has been conducted by members of the Acupuncture Trialists' Collaboration in which the practitioner was kept blind by use of a laser acupuncture device that had been detuned. If Ernst is correct that operator unblinding leads to bias, then the results of this trial should show equivalence between needling and detuned laser. This was not the case: there was a statistically significant difference between acupuncture and this type of sham.9 Or, to put it another way, investigators did find a way around the problem of therapist blinding and the effect did not disappear into “placebodom”.
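As an illustration of what such a blinding assessment involves, a minimal sketch, with fabricated counts rather than data from any trial in the review, of testing whether patients guess their allocation more often than chance:

```python
import math

# Sketch of a simple blinding check: do patients guess their true
# allocation more often than chance? Counts below are fabricated.
correct, total = 212, 400                   # correct guesses / patients asked

p_hat = correct / total
se = math.sqrt(0.5 * 0.5 / total)           # SE under H0: guessing at random
z = (p_hat - 0.5) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

print(f"correct-guess rate {p_hat:.2f}, z = {z:.2f}, p = {p_value:.2f}")
# A rate near 50% with a non-significant p is consistent with
# allocation not leaking from practitioner to patient.
```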

In an extensive critique published anonymously online, “Orac” makes several points.4 The Collaboration was accused of “comparing apples and oranges” due to our “mixing studies that compare acupuncture to no treatment [with those that compare acupuncture] to sham treatment”. This is false: comparisons of acupuncture vs. sham and acupuncture vs. no acupuncture were kept entirely separate. With respect to our analysis of publication bias, Orac asks, “Why 47 unpublished RCTs of 100 subjects and not a smaller number of larger RCTs?” and then accuses us of pulling numbers out of our “nether regions”. The answer to the question “why … RCTs of 100 subjects?” is that this was pre-specified in the protocol, which was previously published and referenced in the paper.10 Orac claims that our failure to report I² was “sloppy” (in fact, we chose not to cite this statistic because we believe it to be invalid) and criticizes the lack of a funnel plot (a test that is highly underpowered for the number of trials in our analysis). Orac also complains about our characterization of the study results, stating that “it’s uncommon to have a 50% reduction in pain scores”. But in fact we chose 50% precisely because it was close to what was reported in the trials.11
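The logic of that pre-specified analysis can be sketched as follows: add a large number of hypothetical unpublished null trials to the pooled estimate and see whether the conclusion survives. The sketch below uses simplified fixed-effect inverse-variance pooling and made-up inputs; the Collaboration's actual methods are those given in the published protocol.10

```python
# Sketch of a publication-bias sensitivity analysis: add hypothetical
# unpublished null trials and recompute the pooled effect.
# Effect sizes and variances below are made up for illustration.

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

published_effects = [0.30, 0.25, 0.22, 0.28, 0.24]
published_vars = [0.010, 0.008, 0.012, 0.009, 0.011]
print(f"published only: {pooled_effect(published_effects, published_vars):.2f}")

# 47 hypothetical unpublished null trials of n=100 (50 per arm); the
# variance of a standardized mean difference near zero is then roughly
# (50 + 50) / (50 * 50) = 0.04.
effects = published_effects + [0.0] * 47
variances = published_vars + [0.04] * 47
print(f"with 47 null trials: {pooled_effect(effects, variances):.2f}")
```

Even under this crude model, the question “why trials of 100 subjects?” has a mechanical answer: the assumed trial size fixes the variance, and hence the weight, of each hypothetical null trial.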

Scientific debate or political muckraking?

The Acupuncture Trialists' Collaboration meta-analysis was published during the US presidential campaign of 2012, and it is remarkable how closely the debate about our paper mirrored the election.

Contemporary politics seems characterized by anonymous blog posts, press releases, phony expertise (how many political commentators really understand how health insurance works?), ad hominem attacks, and attempts to fight data with opinion, something that culminated in the bizarre spectacle of a leading Republican denying live on TV that Obama had won.

There is an interesting debate to be had about the clinical implications of the Acupuncture Trialists' Collaboration meta-analysis. Indeed, the results provide plenty of reasons to be skeptical about much of acupuncture. With respect to the debate about clinical implications, the Collaboration argued that while a treatment should ideally be shown to be superior to placebo, evaluation of clinical significance should be based on overall benefit, including any non-specific effects. The editorial accompanying the paper appears to agree.12 But it is not unreasonable to suggest that doctors should only refer patients to treatments that have a large specific effect; all of us in the Acupuncture Trialists' Collaboration would be willing to debate that point. With respect to skepticism about acupuncture, we found only a small difference between inserting a needle to the right depth in the right place vs. insertion to the wrong depth in the wrong place. This raises serious questions concerning some acupuncturists’ belief that acupuncture will only be effective if it conforms strictly to specific theories of point selection, a belief that persists despite considerable variation in clinical practice.13

But these are not debates that many self-appointed “acupuncture skeptics” want to have, appearing to prefer instead the comfort of nay-saying and the thrill of adversarial campaigning. It is far less work to make a comment about a researcher’s “nether regions” than to spend the time getting to grips with a complex paper, and it is clearly more fun to make a cutting remark about another scientist’s supposed statistical cluelessness than, say, to write a thoughtful critique of different approaches to handling the problem of publication bias. The lack of any reasoned debate about the main findings of the Acupuncture Trialists' Collaboration paper underlines that mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterize the skeptics’ movement.

References
