Reproducibility in Research
Dr Alexandra Bannach-Brown
Institute for Evidence-Based Healthcare, Bond University
@ABannachBrown
Background • Systematic review of biomedical literature describing animal models • Translating systematic review findings • Research Quality • Open Science & Open Access
Research • Aim: to improve scientific theories for increased understanding of natural phenomena • Analysis and interpretation of observations • Experimental or spontaneous • Leads to knowledge claim • Usually involves statistical analysis
Research Cycle
Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish / conduct next experiment
“Manifesto for Reproducible Science”, Munafò et al., 2017
Bench to Bedside
The research cycle repeats at every stage of development: generate and specify hypothesis → design study → conduct and collect data → analyse data and test hypothesis → interpret results → publish / conduct next experiment.
Nonclinical Development: • Formulation • Laboratory development • Quality control & assurance
Preclinical Development: • Animal studies • Bioavailability • Pharmacokinetic & pharmacodynamic studies
Clinical Development: • Phase I • Phase II • Phase III • Phase IV
Reproducibility and Replication
“Reproducibility” – re-analysis of existing data using the same analytical procedures.
“Replication” – collection of new data, following the same methods.
Reproducibility Spectrum: publication only (not reproducible) → publication + reported methods & data → publication + linked methods, data & analysis code → fully replicable (gold standard). A minimal sketch of the right-hand end follows below.
Peng (2011), “Reproducible Research in Computational Science”, Science, 334:1226–1227
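In practice, “fully replicable” at the right-hand end of the spectrum means a single script that re-runs the published analysis on the deposited data. A minimal sketch, assuming a deposited file "archived_data.csv" with columns "group" and "infarct_volume" (file and column names are hypothetical):

```python
# reanalysis.py -- a minimal sketch of a fully reproducible analysis
import pandas as pd
from scipy import stats

df = pd.read_csv("archived_data.csv")                     # the exact data deposited with the paper
ctrl = df.loc[df["group"] == "control", "infarct_volume"]
trt = df.loc[df["group"] == "treatment", "infarct_volume"]

t, p = stats.ttest_ind(trt, ctrl)                         # the same test the paper reports
print(f"t = {t:.2f}, p = {p:.4f}")                        # anyone can regenerate the headline numbers
```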
Research Quality
• How credible are the findings presented?
• Is the design appropriate?
• Rigorous
• Designed to mitigate risk of bias
Replication Ioannidis et al., 2014 “Reproducibility in Science”, Begley & Ioannidis, Circulation Research. 2015;116:116-126
Replication
The average neuroscience study has statistical power of only 8–31% (Button et al., 2013); a quick sanity check of this figure follows below.
“Reproducibility in Science”, Begley & Ioannidis, Circulation Research, 2015;116:116-126
“Estimating the reproducibility of psychological science”, Open Science Collaboration, Science, 2015;349(6251)
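The 8–31% figure is easy to sanity-check. A sketch using statsmodels, assuming a two-sample design with ten animals per group and a medium effect (d = 0.5); both parameter choices are my assumptions, picked as typical of the field:

```python
# power_check.py -- what power does a typical small study actually have?
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(effect_size=0.5,  # assumed medium effect (Cohen's d)
                                    nobs1=10,         # assumed 10 animals per group
                                    alpha=0.05)
print(f"power = {power:.2f}")                         # ~0.18, squarely in the 8-31% range
```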
Threats to reproducible science
“Manifesto for Reproducible Science”, Munafò et al., 2017
Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish / conduct next experiment
Research Plan • Your research proposal is based on what is already known • How do you know that this knowledge is reliable? • Were the experiments on which this knowledge is based at risk of bias? • Is the summary of knowledge skewed by publication bias?
Bias in in vivo stroke studies
• Infarct volume
• 11 publications, 29 experiments, 408 animals
• Improved outcome by 44% (35–53%)
[Figure: efficacy estimates stratified by randomisation, blinded conduct of experiment, and blinded assessment of outcome]
Macleod et al., 2008
SAINT II Phase 3 Clinical Trial
Bias in other in vivo domains
Stroke, Alzheimer's disease, Multiple Sclerosis, Parkinson's disease
Reporting of measures to reduce risk of bias in laboratory studies
• Reporting of measures to reduce bias in 254 reports of in vivo, ex vivo or in vitro studies involving non-human animals, identified from a random sample of 2,000 publications from PubMed
Reporting of measures to reduce the risk of bias in publications from ‘leading’ UK institutions
Reporting of measures to reduce bias in 1,173 in vivo studies involving non-human animals from leading UK institutions, published in 2009 and 2010
Reporting risk of bias by journal impact factor
[Figure: prevalence of reporting of risk-of-bias measures plotted against journal impact factor]
Is the summary of knowledge skewed by publication bias?
Different patterns of publication bias in different fields

Field               Outcome       Observed   Corrected   Direction of bias
Disease models      improvement   40%        30%         less improvement after correction
Toxicology models   harm          0.32       0.56        more harm after correction
Publication bias
Roughly 20% of experiments remain unpublished; correcting for the missing studies shrinks apparent efficacy (a simulation of the mechanism follows below).

Model – outcome           n expts   Estimated unpublished   Reported efficacy   Corrected efficacy
Stroke – infarct volume   1359      214                     31.3%               23.8%
EAE – neurobehaviour      1892      505                     33.1%               15.0%
EAE – inflammation        818       14                      38.2%               37.5%
EAE – demyelination       290       74                      45.1%               30.5%
EAE – axon loss           170       46                      54.8%               41.7%
AD – water maze           80        15                      0.688 sd            0.498 sd
AD – plaque burden        632       154                     0.999 sd            0.610 sd
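Why does discarding “negative” experiments inflate reported efficacy? A simulation sketch; the true effect, group size, and publication rule are illustrative assumptions, not estimates from the data above:

```python
# pub_bias_sim.py -- small studies + publish-only-if-significant = inflated effects
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n = 0.3, 10                     # assumed true effect and animals per group
published = []
for _ in range(10_000):                 # simulate many small experiments
    ctrl = rng.normal(0.0, 1.0, n)
    trt = rng.normal(true_d, 1.0, n)
    p = stats.ttest_ind(trt, ctrl).pvalue
    if p < 0.05 and trt.mean() > ctrl.mean():   # only positive, significant results appear
        published.append(trt.mean() - ctrl.mean())

print(f"true effect: {true_d}")
print(f"mean published effect: {np.mean(published):.2f}")   # roughly 3x the true effect
```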
Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish / conduct next experiment
Is the current study at risk of bias?
Expectancy effects
• 12 graduate psychology students
• 5-day experiment: rats in a T-maze with the dark arm alternating at random, and the dark arm always reinforced
• 2 groups – “Maze bright” and “Maze dull”

Group           Day 1   Day 2   Day 3   Day 4   Day 5
“Maze bright”   1.33    1.60    2.60    2.83    3.26
“Maze dull”     0.72    1.10    2.23    1.83    1.83
Δ               +0.60   +0.50   +0.37   +1.00   +1.43

Rosenthal and Fode (1963), Behav Sci 8, 183–9
It’s not just in the measurement
[Figure: improvement in behavioural outcome (standardised effect size, 0–1.2) with and without blinded assessment of behavioural outcome]
Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish / conduct next experiment
Lack of an a priori plan (protocol) leaves you chasing shadows
Perils of testing non-prespecified hypotheses
• Second International Study of Infarct Survival (ISIS-2) – aspirin improves outcome in myocardial infarction, BUT –
• Non-significant worsening of outcome for patients born under Gemini or Libra
• What if it had instead been patients with migraine? (A simulation of how easily such subgroup “findings” arise follows below.)
Baigent et al., 1998. ISIS-2. BMJ. Peto, R., 2011. Current misconception 3. Br J Cancer
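The zodiac example generalises: test enough non-prespecified subgroups and chance alone will hand you a “finding”. A sketch simulating twelve subgroups under a true null effect (all parameters are assumptions):

```python
# subgroup_fishing.py -- 12 post-hoc subgroups, zero true effect
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
trials, subgroups, n = 2_000, 12, 50
hits = 0
for _ in range(trials):
    # in every subgroup the treatment truly does nothing
    ps = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
          for _ in range(subgroups)]
    hits += any(p < 0.05 for p in ps)

print(f"P(at least one 'significant' subgroup) = {hits / trials:.2f}")
# ~0.46, close to 1 - 0.95**12: a Gemini/Libra result is almost a coin flip
```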
Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish / conduct next experiment
The final paper doesn’t provide useful information
Poor Reporting & Quality
• Clinical trials:
• Cochrane reviews: high/unclear risk of bias in many of 20,920 RCTs (Dechartres et al., 2017, BMJ;357:j2490)
• TIDieR: poor reporting of interventions (Hoffmann et al., 2014)
• Animal studies:
• Poor reporting of measures to reduce the risk of bias
• Poor reporting of general methods
Robustness Crabbe et al., 1999, Science
Researchers are different …
[Figure: variation among researchers in open science practices, risks of bias, preregistration, HARKing, F.F.P. (fabrication, falsification, plagiarism) and research quality]
How we do research integrity
Research Improvement Strategy
Biomedical Research Investment
• ~$300bn spent globally each year, €50bn in Europe
• Glasziou & Chalmers estimated that 85% of research is wasted
• Even if waste is only 50%, improvements that cut it by one percentage point of total spend would free $3bn globally every year (1% of $300bn)
• Investing ~1% of research expenditure in improvement activity would go a long way
Research Cycle
Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish / conduct next experiment
“Manifesto for Reproducible Science”, Munafò et al., 2017
Take home messages
Before the experiment:
1. Thoroughly check the quality of the prior knowledge base
2. Ensure an a priori plan or protocol, including a priori hypotheses
During:
3. Implement measures to reduce the risk of bias in experiments; keep thorough records
After:
4. Transparent reporting: report the study in full, make the data available (e.g. on Zenodo), preprint (e.g. on bioRxiv)
Executing Best Practice: Before 1. Thoroughly check the quality of your knowledge base Systematic review of the field + critical appraisal
Executing Best Practice: Before
2. Make an a priori plan or protocol – register the protocol
• Exploratory or confirmatory experiment?
• Study population, intervention, primary outcome, sample size calculation (a scripted example follows below), hypothesis, statistical analysis plan
• Time-stamped, read-only, with a persistent unique digital identifier
• Before beginning data collection
• Can remain private until the work is published
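A protocol’s sample size calculation can itself be scripted, so it is reproducible and auditable. A sketch using statsmodels; the target effect size and power are illustrative assumptions:

```python
# sample_size.py -- a priori sample size for the protocol
from math import ceil
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5,  # smallest effect worth detecting (assumed)
                                power=0.8,        # conventional 80% power
                                alpha=0.05)
print(f"animals per group: {ceil(n)}")            # ~64 per group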
Executing Best Practice: During
3. Implement measures to reduce the risk of bias in your experiments
• Randomisation, blinding, handling of drop-outs, etc. (a minimal allocation sketch follows below)
• Traceability of materials, antibodies, code, etc.
• Record deviations from the study protocol
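Randomisation and blinding can be implemented in a few lines. A hypothetical sketch (all names invented): coded labels go to the experimenter, and the key stays with a third party until outcome assessment is complete.

```python
# allocate.py -- randomised, blinded allocation (sketch; names hypothetical)
import random

def blinded_allocation(animal_ids, groups=("vehicle", "treatment"), seed=2024):
    rng = random.Random(seed)                        # fixed seed: the allocation is auditable
    ids = list(animal_ids)
    rng.shuffle(ids)                                 # random allocation sequence
    codes = rng.sample(range(1000, 9999), len(ids))  # unique blind codes
    allocation, key = {}, {}
    for i, (animal, code) in enumerate(zip(ids, codes)):
        allocation[animal] = f"tube-{code}"          # all the experimenter ever sees
        key[f"tube-{code}"] = groups[i % len(groups)]  # held back until unblinding
    return allocation, key

alloc, key = blinded_allocation([f"rat{i:02d}" for i in range(1, 21)])
```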
Executing Best Practice: After
4. Transparent reporting:
• Ensure the study is reported in full – methods, reagents, intervention
• Make the data and analysis code available – “as open as possible, as closed as necessary”
• Link all outputs with DOIs
Reporting guidelines (useful both before and after the study): https://www.equator-network.org/
Recommended reading
More recommended reading