Raiders of the lost HARK: a reproducible inference framework for big data science

Mattia Prosperi, Jiang Bian, Iain Buchan, James Koopman, Matthew Sperrin, Mo Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Hypothesizing after the results are known (HARK) has been disparaged as data dredging, and safeguards including hypothesis preregistration and statistically rigorous oversight have been recommended. Despite its potential drawbacks, HARK has deepened thinking about complex causal processes. Some of the HARK precautions can conflict with the modern reality of researchers’ obligations to use big, ‘organic’ data sources, from high-throughput genomics to social media streams. Here we propose a HARK-solid, reproducible inference framework suitable for big data, based on models that represent formalizations of hypotheses. Reproducibility is attained by employing two levels of model validation: internal (relative to data collated around hypotheses) and external (independent of the hypotheses used to generate the data, or of the data used to generate the hypotheses). With a model-centered paradigm, the focus of reproducibility shifts from the ability of others to reproduce both the data and the specific inferences of a study to the ability to evaluate models as representations of reality. Validation underpins ‘natural selection’ in a knowledge base maintained by the scientific community. The community is thereby supported in becoming more productive at generating and critically evaluating theories that integrate wider, complex systems.
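The two validation levels lend themselves to a brief sketch. As a minimal illustration only (the article ships no code; the datasets, model, and metric below are hypothetical stand-ins), internal validation scores a model by cross-validation on the data collated around the hypothesis, while external validation scores the already-fitted model on an independent sample:

```python
# Illustrative sketch (assumptions, not the authors' method): a model stands in
# for a formalized hypothesis and is scored at two levels.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

# Hypothesis-generation data (stand-in for the 'organic' big-data source).
X_hark, y_hark = make_classification(n_samples=500, n_features=20, random_state=0)

# Internal validation: cross-validated performance relative to the data the
# hypothesis was collated around.
model = LogisticRegression(max_iter=1000)
internal_auc = cross_val_score(model, X_hark, y_hark, cv=5, scoring="roc_auc").mean()

# External validation: an independent sample that played no role in generating
# the hypothesis (here simulated under a different regime via random_state).
X_ext, y_ext = make_classification(n_samples=500, n_features=20, random_state=1)
model.fit(X_hark, y_hark)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

print(f"internal AUC: {internal_auc:.3f}  external AUC: {external_auc:.3f}")
```

Because the independent sample is generated under a different regime, the external AUC collapses toward chance even when the internal AUC looks strong, which is exactly the failure mode of HARKed models that the second validation level is meant to expose.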
Original language: English
Article number: 125
Journal: Palgrave Communications
Volume: 5
Publication status: Published - 22 Oct 2019
