The Cancer Drugs Fund is producing dangerous, bad data: randomise everyone, everywhere!

September 28th, 2016 by Ben Goldacre in bad science | 6 Comments »

There are recurring themes in my work. One of them is this: in general, if you don’t know which intervention works best, then you should randomise everyone, everywhere. This is for good reason: uncertainty costs lives, through sub-optimal treatment. Wherever randomised trials are the right approach, you should embed them in routine clinical care.

This is an argument I’ve made, with colleagues, in endless different places. New diabetes drugs are approved with woeful data, small numbers of patients in trials that only measure blood tests, rather than real-world outcomes such as heart attack, renal failure, or death: so let’s roll out new diabetes treatments in the NHS through randomised trials. We rely on observational studies to establish whether Tamiflu reduces complications of pneumonia: that’s silly, we can do trials, and we should. Statin treatment regimes in widespread use have never been compared head-to-head, using real-world outcomes such as heart attack, stroke, and death: so let’s embed randomised trials as cheaply as possible in routine clinical care (we’ve done two pilots, to document the barriers).

This week a dozen colleagues and I published yet another application of this basic, simple principle, as an editorial in the BMJ. The Cancer Drugs Fund is being marketed as a way to generate new knowledge: but in reality, the data that will be collected is weak, observational evidence, riven through with confounders. There’s no need for us to squander patient experience like this. When we spend vast sums of money on new treatments with uncertain benefits (and hazards, and costs), then we should do so through randomised trials wherever possible. That’s how we can find out whether these expensive new treatments are effective, how effective they are, and whether they’re cost effective.

That’s the argument today. It was the argument last year. And it will be the argument next year. But it is part of a growing, furious thread: we should not be tolerating poor-quality evidence. We should be randomising, as a matter of routine, throughout the health service, whenever we lack good evidence on what works best. We should be turning the NHS, and its sisters around the world, into learning health systems that test, learn, and adapt with every move they make. “Big Data” is a tedious buzzword. We need better data, and where that requires randomisation we should be identifying the ethical and practical barriers, and smashing them down. For that battle, we should reach outside the ivory tower and use every tool available to us, including harassing policymakers, as a matter of routine.

There are lives at stake. Weak evidence exposes patients to avoidable harm. Onward!

 


++++++++++++++++++++++++++++++++++++++++++
If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!
++++++++++++++++++++++++++++++++++++++++++

6 Responses



  1. Thomas said,

    September 28, 2016 at 9:37 pm

    It’s a nice idea, but I’m not sure if there truly is any added value compared to epidemiological studies.

Besides, where I’m from (Belgium), the regulatory issues will be too great to overcome. There should be some academic lobbying of the government to lighten the regulatory burden for trials like this.

I hope you will demonstrate the feasibility of such trials and that it will have an impact outside the UK.

  2. Tom Boyles said,

    September 29, 2016 at 7:46 pm

It’s incredible what a pain in the ar*e it is to run an RCT. We want to trial the duration of antibiotic prophylaxis in high-risk women undergoing C-section. The red tape involved in doing this makes it all but impossible. Ethics committees in particular need to realise that every piece of red tape causes harm to patients by discouraging research, so they had better be damn sure it’s all justified.

  3. Tom Boyles said,

    September 30, 2016 at 11:25 am

It’s incredible what a pain in the ar*e it is to run an RCT. We have 400 patients per month undergoing C-section and want to randomise high-risk women to different lengths of antibiotic prophylaxis, but the red tape involved in doing such a simple thing is prohibitive.

Ethics committees in particular need to understand that every new requirement they insist on is a harm to patients, as it discourages research and researchers. They must therefore show a commensurate benefit to patients that outweighs that cost. Rarely if ever have I seen an ethics committee prove its benefits to patients.

  4. Craig Jones said,

    September 30, 2016 at 3:11 pm

I totally agree, Ben. Has anyone thought about the funding for the manpower required to enter and collect the data from randomising everyone? That would be a big sticking point.

  5. Ben Goldacre said,

    October 22, 2016 at 9:09 pm

Hi, briefly… the cost of information is very likely to be less than its value, especially for very expensive drugs, and we are generally bad at reflecting that in spending/measuring priorities. Also, data analysis is a trivial part of the cost of running a trial, and if you are running simple, low-cost pragmatic RCTs you would aim to collect your follow-up data from existing sources, e.g. cancer registry records, so the marginal cost is minimal.

  6. Ben Goldacre said,

    October 22, 2016 at 9:11 pm

    I agree.