
Rates and Replication Issues

More travelling for me… and during those travels, two stories struck me as worthy of comment. These were widely reported, but I’m using links from the Guardian for both stories.

Rates:

The revelation that “Thousands have died after being found fit for work”

http://www.theguardian.com/society/2015/aug/27/thousands-died-after-fit-for-work-assessment-dwp-figures

However, these figures do not give any baseline mortality rates.   This leads to the question of what the baseline should be.  What exactly should we compare these values to?  Some preliminary ideas include:

  1. Compare to the mortality rates of a typical fortnight of those on Employment and Support Allowance (ESA)
  2. Compare to the mortality rates of those on jobseekers’ allowance [probably not a good baseline measure]
  3. Compare to the general population – ignoring disability and employment status, but trying to account for other demographic factors.

The first option would be of immediate relevance.  If the mortality rate of those remaining on ESA is lower than that of those who have been removed from ESA because they were declared “fit-for-work”, then there is an immediate and obvious major problem.
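To make that first comparison concrete, here is a rough sketch of the kind of calculation involved. Every number in it is invented purely for illustration – none of them come from the DWP figures – and the function name is just my own shorthand.

```python
# Hypothetical illustration of option 1 – all counts below are made up.
# Compares crude mortality rates (per 10,000 person-fortnights) between
# a cohort found fit for work and a cohort remaining on ESA.

def mortality_rate(deaths, people, fortnights, per=10_000):
    """Deaths per `per` person-fortnights of follow-up."""
    return per * deaths / (people * fortnights)

# Invented cohort of people found fit for work and removed from ESA.
fit_for_work_rate = mortality_rate(deaths=90, people=50_000, fortnights=26)

# Invented baseline cohort remaining on ESA over the same period.
esa_rate = mortality_rate(deaths=60, people=50_000, fortnights=26)

rate_ratio = fit_for_work_rate / esa_rate
print(f"fit-for-work: {fit_for_work_rate:.2f} per 10,000 person-fortnights")
print(f"ESA baseline: {esa_rate:.2f} per 10,000 person-fortnights")
print(f"rate ratio:   {rate_ratio:.2f}  (>1 would point to the problem above)")
```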

The second option isn’t really a runner.  Why?  Well, although people have been ruled “fit-for-work”, this assessment does not mean that they are fit and fully healthy, so the general jobseeker population is unlikely to be a comparable group.

The third option is a population baseline – an okay measure but obviously needing terms and conditions!
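For that third option, one standard way of “accounting for other demographic factors” is indirect standardisation: compare the deaths actually observed in the cohort with the number expected if general-population age-specific rates applied. Again, a sketch only – the age bands, person-years and rates below are all invented.

```python
# Hypothetical illustration of option 3 – indirect standardisation.
# Age band -> (person-years of follow-up in the cohort, deaths observed).
cohort = {
    "16-34": (20_000, 10),
    "35-49": (30_000, 35),
    "50-64": (25_000, 80),
}

# Invented general-population mortality rates (deaths per person-year).
population_rates = {
    "16-34": 0.0005,
    "35-49": 0.0015,
    "50-64": 0.0060,
}

observed = sum(deaths for _, deaths in cohort.values())
expected = sum(person_years * population_rates[band]
               for band, (person_years, _) in cohort.items())

smr = observed / expected  # standardised mortality ratio
print(f"observed deaths: {observed}")
print(f"expected deaths: {expected:.1f}")
print(f"SMR: {smr:.2f}  (1.0 = same as the age-matched general population)")
```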

Not being able to compare these figures to anything meaningful makes them essentially meaningless. We can’t even really assess whether they are unexpectedly high or low!  Insufficient detail on the causes of death [related to their disability, to the assessment process, accident etc.?] is another issue that would need to be examined before this could properly be deemed a fully fleshed-out story.

Replication:

The story of the attempt to replicate the results of 100 major psychology studies (in which just over a third of the results could be replicated) is welcome in that it stirs debate about the direction of science.

http://www.theguardian.com/science/2015/aug/27/study-delivers-bleak-verdict-on-validity-of-psychology-experiment-results

We would like, as a community, to be able to claim that science progresses in leaps and bounds. It is more typically a long hard slog, making tiny incremental progress.  The “publish or perish” culture is not healthy for long-term scientific thinking.  Unless a replication study sets out to confirm the results of many other studies [such as the one quoted in this story http://www.sciencemag.org/content/349/6251/aac4716 ], checking other people’s results carefully can be seen as a waste of resources, both time and money.  Editors deem it not to be “innovative” and discount it out of hand, unless it contradicts the findings of a major study and thus has some controversy attached.

However, it is only by checking the results that we can hope to strengthen the foundations of future research.  Who should provide funding for the “boring” (not innovative) process of replicating others’ results before the community can assume those results to hold?  Furthermore, what is the point of doing these replication studies without others knowing that the study has been undertaken and the findings found to either support or undermine the original results?  There is definitely a gap for a good open-access repository for the results of replication studies.  Without a large amount of time spent on the write-up, a simple record – the experimental design, any adaptations / amendments to the original study with the reasons for them, the results, and a headline of whether the original study was confirmed or undermined – would provide a useful tool for researchers.  If this could then be extended so that online versions of the original study had to link to the relevant replication studies within the repository, I would be ecstatic.
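To make the idea of such a repository entry concrete, here is a minimal sketch of what a single record might contain. The field names and example values are purely my assumptions – this is not an existing schema or service.

```python
# A minimal, assumed sketch of one entry in a replication-results repository.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplicationRecord:
    original_study: str                     # DOI or stable link to the original paper
    replication_team: str
    design_summary: str                     # brief description of the experimental design
    adaptations: List[str] = field(default_factory=list)  # changes from the original, with reasons
    sample_size: int = 0
    original_effect_size: float = 0.0
    replication_effect_size: float = 0.0
    headline: str = "inconclusive"          # "confirmed" / "undermined" / "inconclusive"

# Example entry – every value here is a placeholder, not a real study.
entry = ReplicationRecord(
    original_study="doi:10.xxxx/placeholder",
    replication_team="Example lab",
    design_summary="Direct replication of the original two-group comparison.",
    adaptations=["Online rather than lab-based sample, for recruitment reasons"],
    sample_size=250,
    original_effect_size=0.48,
    replication_effect_size=0.12,
    headline="undermined",
)
print(entry.headline)
```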

With publication bias still a major issue, effect sizes and the statistical significance of results in the published literature should be taken with a pinch of salt.  We shouldn’t have to replicate each study ourselves before relying on its findings.  But who is going to properly lead the charge to ensure that replication studies are undertaken, and then that the subsequent findings are accessible, in order to avoid wasted effort?
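As a footnote on that pinch of salt: a small simulation shows how a “file drawer” of non-significant results, on its own, inflates the effect sizes that make it into print. The true effect, sample sizes and number of studies below are arbitrary choices for the sketch.

```python
# Assumed sketch of publication bias: only "significant" positive results
# are "published", so the mean published effect overstates the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, n_studies = 0.2, 30, 2_000

published = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    observed_d = treatment.mean() - control.mean()  # effect in SD units (sd = 1)
    if p < 0.05 and observed_d > 0:                 # the "file drawer" filter
        published.append(observed_d)

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")
print(f"fraction 'published':  {len(published) / n_studies:.2f}")
```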


Author:

I was previously an academic applied statistician (based at the University of the West of England, Bristol) with a variety of interests. This blog reflects that variety! I now work in official statistics - which will not be covered at all here.
