A Comparison of Strategies for Assessing Fidelity to Evidence-Based Interventions
Shannon Wiltsey Stirman, PhD
National Center for PTSD and Stanford University
@slwiltsey
Acknowledgements
• Coauthors
  • Candice Monson, PhD (Co-PI)
  • Norman Shields, PhD (Co-I)
  • Patricia Carreno
  • Kera Mallard
  • Matthew Beristianos
  • Sharon Hasslen
• Research funding
  • Canadian Institutes of Health Research RN327031
  • National Institute of Mental Health R01 MH106506
• The authors have no conflicts of interest to report.
Importance of assessing fidelity
• A measure of the success of implementation strategies such as training
• A key implementation outcome (Proctor et al., 2009)
• Necessary to understand unexpected outcomes (e.g., voltage drop) (Schoenwald et al., 2010)
• Fidelity support has been shown to improve training outcomes (Lu et al., 2014) and decrease turnover (Aarons et al., 2009)
Relationship between fidelity and outcomes
• Observation: mixed findings
  • A meta-analysis found no overall relationship (Webb et al., 2010)
  • Fidelity predicted changes in depression (Webb et al., 2010)
  • Temporal confounds
  • Subsequent findings for CPT for PTSD (Farmer et al., 2015)
• Self-report
  • Some researchers have found associations with outcomes (Hanson et al., 2015)
• Clinical worksheets
  • Fidelity predicted subsequent symptom change (Stirman et al., 2015)
Exploring associations with clinical outcomes
• Implications for data collection
  • Important to rule out temporal confounds
  • Potential for moderating variables to also impact fidelity
• Possible strategies
  • Rate all early sessions and examine their impact on subsequent symptom change
  • Look at session-to-session change
• All strategies require numerous fidelity ratings; lower-burden, reliable methods would advance this line of research
Considerations in assessing fidelity

Strategy                            | Advantages                                     | Disadvantages
Observation                         | Accuracy                                       | Rater agreement, time
Self-report                         | Less time intensive than observation           | Accuracy unknown, response bias
Clinical documentation              | Integrated into care, accessible               | Clinician burden, response bias
Interview                           | Interviewer can probe for details              | Clinician burden, potential response bias
Survey                              | Typically brief                                | Clinician burden
Work samples (e.g., CBT worksheets) | Integrated into care, minimizes clinician burden | Requires rating
Method
Study design
• Fidelity to cognitive processing therapy (CPT) was assessed in a sample of clinician participants from a study on implementation support strategies
• Clinician participants completed the following:
  • One-time interview
  • Monthly self-report (re: adherence to CPT)
  • Session note with adherence checklist
  • CPT worksheets
  • Recordings of therapy sessions
Sample Characteristics

Therapists:
• N=40
• 32% M; 68% F
• Age 42 (SD=11)
• 86% White, 4% Hispanic
• 49% PhD/PsyD/MD; 33% Master's; 18% Bachelor's/Other
• Years of practice = 11 (SD=8)
• 36% private practice; 21% community mental health; 11% federal; 18% provincial; 15% other

Clients:
• N=77
• 41% M; 57% F; 1% T
• Age 40 (SD=14)
• 75% White; 3% Black; 3% South Asian; 5% Hispanic/Latino; 9% Other
• 78% English first language; 9% French
• 40% military or veteran
• 65% 12 or more years of education
Observer ratings
• Raters were trained to 90% agreement on adherence and competence ratings
• Raters reviewed full audio recordings of CPT sessions
• Dichotomous adherence rating for each unique and essential CPT item
• Seven-point competence rating for each CPT item
• Decision rules to foster agreement
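Agreement on dichotomous adherence ratings of the kind described above is conventionally summarized with Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch in Python (the two rater vectors are hypothetical, not study data):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' dichotomous (0/1) ratings."""
    n = len(rater1)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal base rates
    p1_yes = sum(rater1) / n
    p2_yes = sum(rater2) / n
    expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (observed - expected) / (1 - expected)

# Hypothetical adherence ratings for eight sessions
rater_a = [1, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1]
kappa = cohens_kappa(rater_a, rater_b)
print(round(kappa, 2))  # 0.71: 87.5% raw agreement, corrected for chance
```

Note that kappa can be much lower than raw agreement when one category dominates, which is why the decision rules above matter for items that are almost always delivered.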
Interview
• Interviewers asked about:
  • The extent to which the therapist followed the CPT protocol
  • The type, nature, and frequency of adaptations
• Global rating of adherence (generally adherent vs. generally non-adherent)
Self-Reports
• "In the past month, how closely have you followed the CPT protocol with your cases?" (0-3 scale)
• Clinical note checklist
  • Checked off each unique and essential item completed in a given session
CPT worksheets
Worksheets combined with clinical notes can provide richer information

Example ABC worksheet:

A: Activating Event (Something happens) | B: Belief (I tell myself something) | C: Consequence (I feel something)
"Tom told me to get over it" | "I hate him" | "upset"
"Commanding officer making orders that got us into crossfire." | "People in authority cannot be trusted. He put us in harm's way." | "I feel fearful and distrusting. I avoid people in authority, or argue with them about their decisions when I have to interact with them."

Does it make sense to tell yourself "B" above? "Yes. He doesn't understand what happened and he's said hurtful things."
What can you tell yourself on such occasions in the future? "It's probably best not to talk with him about it."

Clinical note: "CPT session 3. Reviewed ABC sheets, identified stuck points. Assigned ABC sheets and trauma narrative for homework."
CPT worksheet
• Rater scores each section for adherence (0-1)
• Rater assigns a competence rating for each column/section
• Previous research found high inter-rater reliability
  • Adherence: k = .68-.98
  • Competence: ICC = .63-.89
Results
Results: Observer
• Adherence: m = .85, SD = .25 (0-1 scale)
• Competence: m = 2.98, SD = 1.13 (0-6 scale)
• Feasibility
  • 60-75 minutes per rating
  • 40 clinicians turned in 485 sessions
• Reliability: k = .87 (adherence), ICC = .78 (competence)
• Treated as the "gold standard"
Results: Interview
• Rater agreement: simple to reach 95% agreement
• Feasibility
  • 30 completed (75%)
  • One-hour interview (included other topics)
  • Coding is brief
• Less precise, as it encompasses a larger timeframe
• 37% rated as generally adherent (0-1 scale)
• Agreement with observer ratings
  • Adherence: r = .048, p = .57
  • Competence: r_pb = .12, p = .16
Results: Monthly self-report
• m = 2.36, SD = .65 (1-4 scale)
• Feasibility: ~65% response rate (including dropouts)
  • Received 56 reports that could be matched with a randomly selected observer rating
• Agreement with observer ratings
  • Adherence: r = .42, p = .001
  • Competence: r = .13, p = .31
Results: Clinical note checklist (self-report)
• m = .73, SD = .30 (73% of session elements)
• Feasibility: depends on system
  • We requested one per month because the checklist wasn't embedded in clinicians' documentation systems
  • Received 42 that could be matched with randomly selected observer ratings
• Agreement with observer ratings
  • Adherence: r = .87, p < .001
  • Competence: r = .77, p = .003
Results: Worksheet ratings
• Adherence: m = .15, SD = .05 (0-1 scale)
• Competence: m = .25, SD = .17 (0-2 scale)
• Feasibility: depends on system of collection
  • Challenges in matching worksheets with specific sessions
  • Therapists posted worksheets for clinical challenges
  • 12 could be matched
• Correspondence with observer ratings
  • Adherence: r = .08, p = .85
  • Competence: r = .21, p = .62
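Each of the agreement analyses reported above comes down to correlating a low-burden fidelity score with the matched observer rating for the same sessions. A minimal Pearson-correlation sketch in Python (all data below are hypothetical illustrations, not the study's):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical matched pairs: checklist proportion vs. observer adherence
checklist = [0.60, 0.75, 0.80, 0.90, 0.55, 0.70]
observer = [0.58, 0.80, 0.78, 0.95, 0.50, 0.65]
r = pearson_r(checklist, observer)
print(round(r, 2))
```

With a dichotomous variable such as the interview's adherent/non-adherent rating, the same formula applied to 0/1 codes yields the point-biserial correlation (r_pb). Note the small matched samples in the results above (n = 12 for worksheets) leave such correlations very imprecise.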
Discussion
Discussion
• Clinical notes appeared to be the most reliable proxy for observer ratings
• Monthly self-reports may be adequate under some circumstances
• Data on worksheets should be interpreted with caution
  • In previous research, worksheet ratings correlated highly with observer ratings
  • Low sample size in the current study
• Data collection procedures need to be carefully considered
• Interviews should probably be rated on a different scale
Future directions
• Larger datasets
• Prospective research
• Consider/refine strategies for data capture
• Examine associations with outcomes
Contact
• sws1@Stanford.edu