Blinding – where is the bias?

Stimulated by Moustgaard et al 2020.[1]

Photo by Mitchell Luo on Unsplash.
“We are only going to look at double-blind randomised sham-controlled trials of acupuncture. No matter what the cost.”

The BMJ still comes in paper form through my door every week. It is usually in time for leafing through with my coffee on a Saturday morning, if I am not travelling. I had fallen out of the habit of looking through it, but was attracted by the cover of the most recent issue with the words:

Why blinded trials are not always better

Intrigued, I first turned to Fiona Godlee’s comments at the front in her Editor’s Choice section.[2] This was titled:

Blinding may be unnecessary, but please divest

Divest here refers to fossil fuels, which is certainly laudable, but in the broader ecological context reminds me that I still get the glossy paper BMJ in a plastic wrapper every week, despite trying to stop it being sent. I can of course read it online, like I do everything else these days, but then I would not be adequately exposed to the advertising content mostly purchased at eye-watering rates by Big Pharma.

But let’s get to blinding. What is the fuss about?

It is a research piece – a meta-epidemiological study – that tries to assess the impact of blinding on estimates of treatment effects and their variation between trials. In other words, does it matter whether or not patients, healthcare providers or outcome assessors are blind to treatment allocation?


Well, according to this huge data set, aka MetaBLIND, it doesn’t seem to matter much whichever way you look at it.

Before getting to grips with this piece of meta-epidemiology, I had a look at the other commentaries. One was a commissioned editorial that was not peer reviewed,[3] and the other was an analysis piece that was not commissioned but was peer reviewed.[4]

The editorial was cautious, noting that blinding had become an established part of the EBM furniture, and should not be discarded without better evidence. It went on to urge readers that if open trials were to be considered, this should be done in the SPIRIT of EBM, where SPIRIT stands for Standard Protocol Items: Recommendations for Interventional Trials.[5] This all seems somewhat in contrast to the title:

Blindsided: challenging the dogma of masking in clinical trials

…and subtitle of the piece:

Fresh evidence should encourage critical thinking about when blinding really matters

The analysis paper by contrast focused on the negative aspects of blinding, and came with another snappy title:

Fool’s gold? Why blinded trials are not always best

…and the subtitle gives a forerunner of the main content:

Blinding is intended to reduce bias but can make studies unnecessarily complex or lead to results that no longer address the clinical question

I think I prefer the ‘Fool’s gold’ to the ‘Blindsided’, but I might have placed the apostrophe after the s. After all, there are quite a lot of characters in the EBM world who unquestioningly strap on their blinkers and aim for the gold standard – the double-blind RCT.

I prefer the ‘Fools’ gold’

But enough of word play and punctuation, what has all this got to do with acupuncture? More than I first thought, actually!

Before reading the paper itself, I read the rapid responses, and could not believe one of the Bandolier Boys (BBs) was in there saying this all cannot apply to nonsense like acupuncture, repeating the same old mantra about bias in open studies while completely ignoring the issues of sham controls. He references a web-based piece commenting on the first ever meta-analysis of acupuncture.[6] I have commented on this before in the blog: The problem with sham.

When I regained composure I looked at the full details of MetaBLIND. The main thing that struck me was the very wide CIs (usually 95% Confidence Intervals, but in this case 95% Credible Intervals), and that reminded me of ‘stirring mud’.

I decided to read what the authors thought about their own results, and that led to a discussion of other studies with quite different results. Indeed, they singled out CAM studies as being particularly prone to large differences in measured effects between blinded and unblinded trials. I was suspicious, so I looked at the reference.[7] The lead author had a very familiar name (AH), and of the 12 trials included in this analysis, 10 were acupuncture studies. He has made the same assumption as the BBs, but done it in a way that looks entirely credible and has been cited by the different BMJ authors.

You will cringe when I explain!

AH and friends compared the effect of acupuncture over sham (ie proper needling vs gentle needling) with the effect of acupuncture over waiting list, and attributed the entire difference to the bias of unblinding. In other words, this analysis assumes that any effect of sham acupuncture over a waiting list control is caused by bias. This seems rather odd when sham acupuncture outperforms conventional care, such as in low back pain,[8] and is associated with significantly greater improvements in quality of life than all non-acupuncture comparators.[9]
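To see why that assumption matters, here is a minimal numeric sketch of the decomposition. The effect sizes are entirely invented for illustration; they are not taken from any of the trials discussed.

```python
# Invented illustrative effect sizes (standardised mean differences) --
# NOT data from any actual trial.
acu_vs_sham = 0.2      # blinded comparison: real needling vs sham needling
acu_vs_waitlist = 0.8  # unblinded comparison: real needling vs waiting list

# The criticised analysis attributes the entire gap to unblinding bias:
claimed_bias = acu_vs_waitlist - acu_vs_sham  # approx 0.6

# But that gap is really (genuine sham effect) + (any actual bias).
# Suppose sham needling genuinely outperforms no treatment by this much:
sham_effect = 0.5
actual_bias = claimed_bias - sham_effect  # approx 0.1

print(f"claimed bias: {claimed_bias:.1f}, actual bias: {actual_bias:.1f}")
```

On these invented numbers, assuming the entire sham effect is bias inflates the bias estimate six-fold. The assumption only holds if sham needling is truly inert, which the cited back-pain and quality-of-life data contradict.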

I simply cannot believe that they continue to peddle this nonsense, and I am alarmed that this particular piece of nonsense was referenced by all three BMJ papers I have mentioned above.

So back to the title of this blog: Blinding – where is the bias? It seems to me that the bias comes from the deliberately blinkered EBM advocates desperately trying to hang on to their golden holy grail, and thus we have an explanation for my choice of image at the start.


1         Moustgaard H, Clayton GL, Jones HE, et al. Impact of blinding on estimated treatment effects in randomised clinical trials: meta-epidemiological study. BMJ 2020;368:l6802. doi:10.1136/bmj.l6802

2         Godlee F. Blinding may be unnecessary, but please divest. BMJ 2020;368:m255. doi:10.1136/bmj.m255

3         Drucker AM, Chan A-W. Blindsided: challenging the dogma of masking in clinical trials. BMJ 2020;368:m229. doi:10.1136/bmj.m229

4         Anand R, Norrie J, Bradley JM, et al. Fool’s gold? Why blinded trials are not always best. BMJ 2020;368:l6228. doi:10.1136/bmj.l6228

5         Chan A-W, Tetzlaff JM, Gøtzsche PC, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 2013;346:e7586. doi:10.1136/bmj.e7586

6         The Bandolier Boys. Acupuncture for back pain? Bandolier website. 1999.

7         Hróbjartsson A, Emanuelsson F, Skou Thomsen AS, et al. Bias due to lack of patient blinding in clinical trials. A systematic review of trials randomizing patients to blind and nonblind sub-studies. Int J Epidemiol 2014;43:1272–83. doi:10.1093/ije/dyu115

8         Haake M, Müller H-H, Schade-Brittinger C, et al. German Acupuncture Trials (GERAC) for chronic low back pain: randomized, multicenter, blinded, parallel-group trial with 3 groups. Arch Intern Med 2007;167:1892–8. doi:10.1001/archinte.167.17.1892

9         Saramago P, Woods B, Weatherly H, et al. Methods for network meta-analysis of continuous outcomes using individual patient data: a case study in acupuncture for chronic pain. BMC Med Res Methodol 2016;16:131. doi:10.1186/s12874-016-0224-1

Declaration of interests MC