
We have a statistically rigorous and scientifically meaningful definition of replication. Let's use it

Researchers writing about the ‘Reproducibility Crisis’ often conflate the terms reproducibility, repeatability and replicability, but it is quite important to distinguish them. There is a great discussion of the distinction in the SimplyStatistics blog post titled: We need a statistically rigorous and scientifically meaningful definition of replication. But I actually now think we DO have that definition! It is that repeatability is the same as replication and involves a new sample with new measurement errors, while reproducibility uses the same data to recalculate the result.

I think it is vital that we labour the point so that the distinction between repeatability and reproducibility is made clear.

I follow the definition that ‘reproducible’ means exact re-computation, whereas repeatable/replicable means a new analysis of a new sample yielding a new result (plus or minus some variance from measurement error); if the original study is replicated, the same conclusions are reached from the new analysis.
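To make the distinction concrete, here is a minimal simulation sketch in Python. It is my own illustration, not taken from any of the papers cited below, and the run_study function, seeds and parameters are purely hypothetical: re-running the same pipeline on the same data must return the identical number (reproduction), while drawing a new sample with new measurement errors should give a consistent but not identical estimate (replication).

```python
import numpy as np

def run_study(rng, true_mean=10.0, sd=2.0, n=50):
    """Simulate one experiment: draw a sample and estimate its mean."""
    sample = rng.normal(loc=true_mean, scale=sd, size=n)
    return sample.mean()

# Original study: data collected once, analysis pipeline recorded.
original_estimate = run_study(np.random.default_rng(seed=1))

# Reproduction: same data and same pipeline give exactly the same result.
reproduced_estimate = run_study(np.random.default_rng(seed=1))
assert reproduced_estimate == original_estimate

# Replication: a new sample with new measurement errors gives a different
# number, but one that should support the same conclusion within
# sampling variance.
replicated_estimate = run_study(np.random.default_rng(seed=2))
print(original_estimate, replicated_estimate)
```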

I used this paragraph and recent reference in my thesis:

Reproducibility is defined as ‘the ability to recompute data analytic
results given an observed dataset and knowledge of the data analysis
pipeline’ (Leek & Peng 2015). This definition distinguishes
reproducibility from replicability which is ‘the chance that an
independent experiment targeting the same scientific question will
produce a consistent result’ (Leek & Peng 2015).  

Leek, J.T. & Peng, R.D. (2015). Opinion: Reproducible research can
still be wrong: Adopting a prevention approach. Proceedings of the
National Academy of Sciences of the United States of America, 112(6),
1645–1646.  

But I am also a big fan of the definitions in Peng 2011 and Cassey 2006.

  • Peng 2011:
With replication, independent investigators address a scientific
hypothesis and build up evidence for or against it...

Reproducibility calls for the data and computer code used to analyze
the data be made available to others. This standard falls short of
full replication because the same data are analysed again, rather than
analysing independently collected data.

Peng, R. D. (2011). Reproducible research in computational
science. Science, 334(6060), 1226–1227. doi:10.1126/science.1213847

  • Cassey 2006:
[For a repeatable study] a third party must be able to perform a study
using identical methodological protocols and analyze the resulting
data in an identical manner... [Further] a published result
must be presented in a manner that allows for a quantitative
comparison in a later study...

We consider a study reproducible if, from the information presented in
the study, a third party could replicate the reported results
identically.

CASSEY, P., & BLACKBURN, T. M. (2006). Reproducibility and
Repeatability in Ecology. BioScience,
56(12), 958. doi:10.1641/0006-3568

To fully endorse any scientific claims, the experimental findings should be completely repeatable by many independent investigators who ‘address a scientific hypothesis and build up evidence for or against it’ (Peng, 2011). It is important to note that a repeatable study need not recover the exact results of the original. This is because experimentation involves probability: if the experiment is performed again, with a different sample and a new set of measurement errors, some variance between experiments is to be expected.

Posted in: disentangle, reproducible research

