The reliability of early-stage cancer biology research is called into question by an investigation that concludes more than half of experimental results can’t be replicated by independent scientists
An eight-year-long investigation into the reliability of preclinical cancer biology research has found that fewer than half of the results published in 23 highly cited papers could be successfully reproduced.
Tim Errington, director of research at the Center for Open Science in Virginia – which conducted the investigation – says the original plan was to reproduce 193 experiments from 53 papers. But, as explained in one of two studies the team published today, this was reduced to 50 experiments from 23 papers.
“Just trying to understand what was done and reported in the papers in order to do it again was really hard. We couldn’t get access to the information,” he says.
In total, the 50 experiments included 112 potentially replicable binary “success or failure” outcomes. However, as detailed in the second study published today, Errington and his colleagues could replicate the effects of only 51 of these – or 46 per cent.
The experiments were all in-vitro or animal-based preclinical cancer biology studies, and didn’t include genomic or proteomic experiments. They were from papers published between 2010 and 2012 and were selected because they were all “high-impact” studies that had been read and heavily cited by other researchers.
The results are “a bit eye-opening”, says Errington.
The investigation’s findings do, however, align with those of earlier reports published by the big pharmaceutical companies Bayer and Amgen. C. Glenn Begley, who recently co-founded US biotech Parthenon Therapeutics, was a senior cancer biologist at Amgen and an author of its report, which was published in 2012.
“We looked back at the papers that we had relied upon at Amgen and found that we could only reproduce 11 per cent of the studies,” says Begley.
The Amgen report was applauded by some in the research community for shining a light on an important problem. But Begley says the report was also criticised for a lack of openness about exactly which studies it tried and failed to replicate.
This criticism can’t be levelled at the new investigation. Errington and his colleagues have published all the data about the studies they included on the Open Science Framework, a website and data repository run by the Center for Open Science to facilitate data sharing. They also invited peer review of their replication methods before the study was completed.
Although the investigation focused on preclinical studies, the replicability problems it uncovered might help explain problems with later-stage studies in people too. For instance, a previous survey of the industry showed that less than 30 per cent of phase II and less than 50 per cent of phase III cancer drug trials succeed.
Even if there isn’t a direct link between the problems at the preclinical and clinical trial stages of scientific investigation, Errington says the high rate of failure of later clinical trials in this area is very concerning.
“At that point, you’ve already invested in the very expensive clinical trial pipeline,” he says. “This is people’s lives, hopes and livelihood on the line here.”
He adds that the Center for Open Science is now advocating for a scientific culture change that places more focus on data sharing and good-quality early-stage studies, which could help highlight any replicability issues in this sort of research.
Emily Sena at the University of Edinburgh, UK, agrees this is important, but says more needs to be done to persuade scientists to get on board. “It requires institutions and their appointment panels and promotion panels to value the fact that you have done this, but the incentive structure just isn’t there at the moment,” she says.
There are promising signs of change on the horizon. The US National Institutes of Health, one of the largest funders of health-related research, is instituting a new policy in early 2023 that will make data sharing the default for the projects it funds. Several journals have also changed their publishing systems in recent years to encourage open science and data sharing.
Begley says he has seen a real change in the decade since he co-authored the Amgen report. “When I first started talking about this issue, people would get very angry and say, ‘Well, this just proves that Amgen scientists are incompetent’,” he says. “Now, when I give a talk, the focus is on what should we be doing about this.”