Intervention Studies: Principles, Opportunities and Pitfalls Prof. Dr. phil. Gabriele Meyer Martin Luther University Halle-Wittenberg, Germany
• 20 English-language nursing journals with highest IF (1.221–2.103)
• Inclusion: 223 studies from 21 EU-European countries
• Results:
  – 34% report on nursing interventions
  – 45% observational studies
  – 39% qualitative
  – 12% experimental
  – 4% randomised controlled trials
• Confirmed by: Mantzoukas Int J Nurs Stud 2009; Forbes Int J Nurs Stud 2009; Yarcheski et al. Int J Nurs Stud 2012
LINKING EVIDENCE TO ACTION • Researchers in nursing should design, undertake, and report fewer descriptive studies and more experimental research into the effectiveness of nursing interventions to ensure a more balanced proportion of intervention and descriptive research in nursing. • Researchers should structure their studies to explicitly link the development, testing, evaluation, and implementation of nursing interventions in coherent programs of research activity rather than as stand-alone projects.
LINKING EVIDENCE TO ACTION • Nursing researchers should consider using the UK Medical Research Council’s “Complex Interventions Research Framework” to organize studies that will deliver an increased evidence base for nursing interventions. • Doctoral education programs for nurses should encourage students to undertake experimental work into the efficacy and effectiveness of nursing interventions.
(Craig et al. 2012; IJNS)
Examples • Organisation – Stroke unit • Health professions – Guideline implementation • Defined Groups – School based programmes for smoking cessation • Individuals – Lifestyle intervention in diabetes
Terms used for complex interventions • Multicomponent • Multifaceted • Multifactorial
Introduction of such a system did not significantly reduce the incidence of our study outcomes. Possible explanations for our findings are that the MET system is an ineffective intervention; the MET is potentially effective but was inadequately implemented in our study; we studied the wrong outcomes; control hospitals were contaminated as a result of being in the study; the hospitals we studied were unrepresentative; or our study did not have adequate statistical power to detect important treatment effects.
Interventional study designs
• Also called "experimental study designs"
• The researcher intervenes at some point during the study
• Used to evaluate study questions related to either treatment (prophylactic agents, treatments, surgical approaches, diagnostic tests) or prevention (protective equipment, engineering controls, management, policy, or any element that should be evaluated as a potential cause of disease or injury)
• Strongest design: randomised controlled trial
• Others: pre-post study designs, non-randomised controlled trials, quasi-experiments
Most RCTs aim to determine whether one intervention is superior to another. Equivalence trials aim to determine whether one (typically new) intervention is therapeutically similar to another, usually an existing treatment. A non-inferiority trial seeks to determine whether a new treatment is no worse than a reference treatment. Proof of exact equality is impossible; therefore, a pre-stated margin of non-inferiority for the treatment effect in a primary patient outcome is defined.
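The non-inferiority decision described above can be sketched in a few lines of code. This is an illustrative sketch, not from the slides: the function name, the confidence-interval values, and the margin are all hypothetical. The rule it implements is the standard one: non-inferiority is claimed when the lower bound of the confidence interval for (new minus reference) lies above the pre-stated margin −Δ.

```python
def non_inferior(ci_lower: float, margin: float) -> bool:
    """Return True if the lower CI bound for the treatment-effect
    difference (new - reference) does not cross the pre-specified
    non-inferiority margin -margin."""
    return ci_lower > -margin

# Hypothetical example: 95% CI for the risk difference is (-0.02, 0.05)
# and the pre-stated margin is delta = 0.04.
print(non_inferior(-0.02, 0.04))  # True: -0.02 lies above -0.04
print(non_inferior(-0.05, 0.04))  # False: the CI crosses the margin
```

Note that the margin must be fixed before the trial; choosing it after seeing the data would defeat the purpose of the pre-stated margin.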
Heal et al. BMJ 2006; 332
Explanatory trial
• Benefit of a treatment under ideal conditions (efficacy)
• Population as homogeneous as possible; aims primarily to further scientific knowledge
• Standardised intervention
• Intermediate outcomes often used

Pragmatic trial
• Benefit of a treatment in routine clinical practice (effectiveness)
• Variations between patients as in real clinical practice; aims to inform choices between treatments
• Protocol-based administration of different treatment to IG
• Full range of health gains
Bias
"Any process at any stage of inference tending to produce results that differ systematically from true values." (Murphy 1976)
Internal validity — threatened by:
• Selection bias
• Performance bias
• Detection bias
• Attrition bias
Schulz KF & Grimes DA Lancet 2002
Blinding …
• minimizes
  – cointervention bias (differential use of cointerventions)
  – attrition bias (differential patient dropout)
  – response bias (differential reporting of symptoms)
• ensures a similar degree of placebo effects in compared groups
• protects against observer/detection/ascertainment bias
Trials that are not double blind exaggerate treatment effects (odds ratios) by 13%, and by 23% when outcomes are subjective (Savović J et al. Ann Intern Med 2012)
Conclusions: The risk of selection bias could not be ascertained for most trials due to poor reporting. Many trials that did provide details on the randomisation procedure were at risk of selection bias due to poorly chosen randomisation methods. Techniques to reduce the risk of selection bias should be more widely implemented.
n=475 RCTs with mITT
Modified ITT (Abraha et al. BMJ 2010)
• Treatment: "mITT consisted of patients who received at least six doses of study drug."
• Baseline assessment: "mITT included patients with at least one baseline observation."
• Target condition: "The mITT population consisted of those patients who were randomly assigned to study treatment minus those who were not H pylori positive."
Modified ITT (Abraha et al. BMJ 2010)
• Post-baseline assessment: "mITT includes all randomized patients who have . . . at least one post-baseline measurement."
• Follow-up: "All participants who completed follow-up were analyzed as a part of the group to which they were randomized. This was not a strict intent-to-treat analysis as some study participants were lost to follow-up."
Time for a short exercise
Stages of waste in the production and reporting of research evidence
• Regulatory bodies are not in charge (e.g. FDA, EMA)
• Prospective registration is voluntary; recommendations by the International Committee of Medical Journal Editors, the WHO Statement on Public Disclosure of Clinical Trial Results, the World Medical Association's Declaration of Helsinki, reporting guidelines (e.g. CONSORT, SPIRIT), the AllTrials campaign, the United Nations
• Non-regulated RCTs are less often registered than regulated RCTs, although they account for 40% of all published RCTs
• Survey, 220 RCTs from clinical geriatric journals
• Was the published RCT registered in a publicly accessible register?
• Was it prospectively registered before participants' recruitment?
• Agreement between registration, published study protocol, and published report of results?
• 140/220 RCTs were registered.
• Only 15% of RCTs were registered prospectively.
• Half of the RCTs report the results of registered outcomes or at least refer to registered outcomes.
• The time of enrolment of participants remains unclear in one third of registered trials and in half of non-registered studies.
Sample: 133 trials with 137 interventions
Outcome instrument: 8-item checklist
Results: 53/137 (39%) interventions were adequately described; after contacting the authors: n=81 (59%), based on 63 answers from 88 contacted authors.
Transparency in reporting of complex interventions — number of studies (out of 5) reporting each item (Möhler et al. 2011):
Intervention
• Theoretical basis: 3
• Piloting: 0
• Costs: 0
Education
• Description of participants: 3
• Curriculum: 2
• Access to education material: 0
• Process evaluation: 0
• Description of standard care: 0
• Implementation of study protocol: 0
• Staff turnover: 2
CReDECI • Designed to integrate relevant aspects of the complete research process of development and evaluation of a complex intervention • 13 Items in 3 sections: – development (n=4 items) – feasibility and piloting (n=1) – introduction of the intervention and evaluation (n=8) • Not focussing on a particular study design
What are cluster randomised controlled trials? cRCTs are experiments in which (interacting) social units rather than individuals are randomly allocated to study groups: communities, schools, families, hospitals, nursing homes …
Reasons for cRCT?
• How many levels are involved? – General practice - patient – Nursing home - ward - resident – Single person (with limbs, teeth, eyes)
Trials with one cluster per arm? • Minimum number of clusters per arm to ensure a valid analysis should be at least four (Hayes & Moulton, 2008)
Challenges of cRCT • Outcome for each participant cannot be assumed to be independent of that for any other participant since those within the same cluster are more likely to have similar outcomes (clustering effect). • The reduction in effective sample size depends on average cluster size and the degree of correlation within clusters, ρ, also known as the intracluster (or intraclass) correlation coefficient (ICC).
Challenges of cRCT
• Standard sample size formulas will lead to underpowered studies; larger sample sizes are required
• Cluster adjustment has to be taken into account in the statistical analysis to avoid unit-of-analysis bias
Sample size calculation cRCT - example