
Getting SMART About Developing Individualized Sequences of Health Interventions. University of Minnesota, NIMH Prevention Center, June 8. Susan A. Murphy & Daniel Almirall. Outline 3:20-3:45: Adaptive Treatment Strategies (Murphy)


  1. Delayed Therapeutic Effects Why not use data from multiple trials to construct the adaptive treatment strategy? Positive synergies: Treatment A may not appear best initially but may have enhanced long term effectiveness when followed by a particular maintenance treatment. Treatment A may lay the foundation for an enhanced effect of particular subsequent treatments.

  2. Delayed Therapeutic Effects Why not use data from multiple trials to construct the adaptive treatment strategy? Negative synergies: Treatment A may produce a higher proportion of responders but also result in side effects that reduce the variety of subsequent treatments for those who do not respond. Or the burden imposed by treatment A may be sufficiently high that nonresponders are less likely to adhere to subsequent treatments.

  3. Prescriptive Effects Why not use data from multiple trials to construct the adaptive treatment strategy? Treatment A may not produce as high a proportion of responders as treatment B but treatment A may elicit symptoms that allow you to better match the subsequent treatment to the patient and thus achieve improved response to the sequence of treatments as compared to initial treatment B.

  4. Sample Selection Effects Why not use data from multiple trials to construct the adaptive treatment strategy? Subjects who enroll in, remain in, or adhere to a trial of the initial treatments may be quite different from the subjects in a SMART.

  5. Summary: •When evaluating and comparing initial treatments in a sequence of treatments, we need to take into account (i.e., control for) the effects of the secondary treatments; hence the SMART design. •Standard one-stage randomized trials may yield information about different populations than SMART trials.

  6. Sequential Multiple Assignment Randomization [flow diagram] Columns: Initial Txt → Intermediate Outcome → Secondary Txt. Participants are first randomized (R) to Tx A or Tx B. In each arm, responders are re-randomized (R) to Early Relapse Prevention or Low-level Monitoring, and nonresponders are re-randomized (R) to Switch to Tx C or Augment with Tx D.

  7. Examples of “SMART” designs: •CATIE (2001) Treatment of Psychosis in Schizophrenia •Pelham (primary analysis) Treatment of ADHD •Oslin (primary analysis) Treatment of Alcohol Dependence •Jones (in field) Treatment for Pregnant Women who are Drug Dependent •Kasari (in field) Treatment of Children with Autism •McKay (in field) Treatment of Alcohol and Cocaine Dependence

  8. SMART Design Principles •KEEP IT SIMPLE: At each stage (critical decision point), restrict the class of treatments only by ethical, feasibility, or strong scientific considerations. Use a low-dimensional summary (responder status) instead of all intermediate outcomes (adherence, etc.) to restrict the class of next treatments. •Collect intermediate outcomes that might be useful in ascertaining for whom each treatment works best; this information might enter into the adaptive treatment strategy.

  9. SMART Design Principles •Choose primary hypotheses that are both scientifically important and aid in developing the adaptive treatment strategy. •Power the trial to address these hypotheses. •Choose secondary hypotheses that further develop the adaptive treatment strategy, and use the randomization to eliminate confounding. •The trial is not necessarily powered to address these hypotheses.

  10. SMART Designing Principles: Primary Hypothesis •EXAMPLE 1 (sample size is highly constrained): Hypothesize that, controlling for the secondary treatments, initial treatment A results in lower symptoms than initial treatment B. •EXAMPLE 2 (sample size is less constrained): Hypothesize that among non-responders a switch to treatment C results in lower symptoms than an augment with treatment D.

  11. EXAMPLE 1 [flow diagram: the SMART design of slide 6 (Initial Txt → Intermediate Outcome → Secondary Txt), highlighting the comparison of initial Tx A versus initial Tx B]

  12. EXAMPLE 2 [flow diagram: the SMART design of slide 6, highlighting the nonresponder comparison of Switch to Tx C versus Augment with Tx D]

  13. SMART Designing Principles: Sample Size Formula •EXAMPLE 1: (sample size is highly constrained): Hypothesize that given the secondary treatments provided, the initial treatment A results in lower symptoms than the initial treatment B. Sample size formula is same as for a two group comparison. •EXAMPLE 2: (sample size is less constrained): Hypothesize that among non-responders a switch to treatment C results in lower symptoms than an augment with treatment D. Sample size formula is same as a two group comparison of non-responders.

  14. Sample Sizes (N = trial size; α = .05, power = 1 − β = .85)
      Δμ/σ = .3:  Example 1: N = 402;  Example 2: N = 402 / nonresponse rate
      Δμ/σ = .5:  Example 1: N = 146;  Example 2: N = 146 / nonresponse rate
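The slide's two-group sample sizes can be sketched with the standard normal-approximation formula. A Python version follows; the slide's exact rounding convention is not shown, so these land near, not exactly on, 402 and 146, and the 0.5 nonresponse rate below is a hypothetical value:

```python
import math
from statistics import NormalDist

def two_group_total_n(effect, alpha=0.05, power=0.85):
    """Total N for a two-sided, two-group comparison of means with
    standardized effect size effect = delta_mu / sigma (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n_per_group = math.ceil(2 * (z_alpha + z_beta) ** 2 / effect ** 2)
    return 2 * n_per_group

# Example 1: compare initial treatments; every participant contributes.
n_03 = two_group_total_n(0.3)   # near the slide's 402
n_05 = two_group_total_n(0.5)   # near the slide's 146

# Example 2: compare second-stage options among non-responders only,
# so the trial must enroll N / (nonresponse rate) participants.
nonresponse_rate = 0.5          # hypothetical value
n_example2 = math.ceil(n_03 / nonresponse_rate)
```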

  15. An analysis that is less useful in the development of adaptive treatment strategies: deciding whether treatment A is better than treatment B by comparing intermediate outcomes (proportion of early responders).

  16. SMART Designing Principles •Choose secondary hypotheses that further develop the adaptive treatment strategy, and use the randomization to eliminate confounding. •EXAMPLE: Hypothesize that non-adhering non-responders will exhibit lower symptoms if their treatment is augmented with D as compared to a switch to treatment C (e.g. augment D includes motivational interviewing).

  17. EXAMPLE 2 [flow diagram: the SMART design of slide 6]

  18. Outline • What are Sequential Multiple Assignment Trials (SMARTs)? • Why SMART experimental designs? – “new” clinical trial design • Trial Design Principles and Analysis • Examples of SMART Studies • Summary & Discussion

  19. Pelham ADHD Study [flow diagram] Random assignment to: A. Begin low-intensity behavior modification, or B. Begin low-dose medication; assess adequate response at 8 weeks in each arm.
  Arm A, response adequate (Yes): A1. Continue, reassess monthly; randomize if deteriorate. Response not adequate (No), random assignment to: A2. Add medication (bemod remains stable but medication dose may vary), or A3. Increase intensity of bemod with adaptive modifications based on impairment.
  Arm B, response adequate (Yes): B1. Continue, reassess monthly; randomize if deteriorate. Response not adequate (No), random assignment to: B2. Increase dose of medication with monthly changes as needed, or B3. Add behavioral treatment (medication dose remains stable but intensity of bemod may increase with adaptive modifications based on impairment).

  20. Oslin ExTENd [flow diagram] Random assignment to Early Trigger for Nonresponse or Late Trigger for Nonresponse; both arms begin Naltrexone for 8 wks. In each arm, responders are re-randomized to second-stage options including TDM + Naltrexone, and nonresponders are re-randomized to CBI or CBI + Naltrexone.

  21. Discussion • We have a sample size formula that specifies the sample size necessary to detect an adaptive treatment strategy that results in a mean outcome δ standard deviations better than the other strategies with 90% probability (A. Oetting, J. Levy & R. Weiss are collaborators) • We also have sample size formulas that specify the sample size for time-to-event studies. • Aside: Non-adherence is an outcome (like side effects) that indicates a need to tailor treatment.

  22. Kasari Autism Study [flow diagram] Random assignment to: A. JAE + EMT or B. JAE + AAC; assess adequate response at 12 weeks in each arm. Arm A, Yes: continue JAE + EMT; No, random assignment to JAE + EMT+++ or JAE + AAC. Arm B, Yes: B1. continue JAE + AAC; No: B2. JAE + AAC++.

  23. Jones' Study for Drug-Addicted Pregnant Women [flow diagram] Random assignment to rRBT or aRBT for 2 wks. In each arm, responders and nonresponders are re-randomized between second-stage RBT variants (tRBT, eRBT, or rRBT).

  24. SMART Designing Principles: Primary Hypothesis •EXAMPLE 3 (sample size is less constrained): Hypothesize that adaptive treatment strategy 1 (in blue) results in improved symptoms as compared to strategy 2 (in red).

  25. EXAMPLE 2 [flow diagram: the SMART design of slide 6, with the two embedded strategies of Example 3 marked in blue and red]

  26. Preparing for a SMART Study Getting SMART About Developing Individualized Sequences of Health Interventions University of Minnesota, NIMH Prevention Center, June 8 Daniel Almirall & Susan A. Murphy

  27. Outline • We discuss scientific, logistical, and statistical issues specific to executing a SMART that should be considered when planning a SMART (in a SMART pilot study) • Sample size calculation for SMART pilots

  28. Primary Aim of Pilot Studies (in general) • Is to examine feasibility of the full-scale trial: e.g., – Can the investigators execute the trial design? – Will participants tolerate treatment? – Do co-investigators buy in to the study protocol? – To manualize treatment(s) – To devise trial protocol quality control measures • Is not to obtain preliminary evidence about efficacy of a treatment or treatment strategy.

  29. Review the ADHD SMART Design (PI: Dr. Pelham, FIU) [flow diagram] Random assignment (R) to first-line Medication or Behavioral Intervention. Responders continue their first-line treatment. Non-responders to Medication are re-randomized (R) to Increase Medication Dose or Add Behavioral Intervention; non-responders to Behavioral Intervention are re-randomized (R) to Increase Behavioral Intervention or Add Medication.

  30. Primary/Design Tailoring Variable • Explicitly/clearly define early non/response • We recommend binary measure – Theory, prior research, conventions, and/or preliminary data can be used to find a cut-off. • Need estimate of the non/response rate • Should be associated with long-term response – Surrogate marker or mediation theories • Should be easily assessed/measured in practice

  31. Protocol for Missing Primary Tailoring Variable • Suppose participant misses clinic visit when the primary tailoring variable is assessed – How do we assign second stage treatment if/when participant returns? • This is a non-standard missing data issue • Need a fixed, pre-specified protocol for determining responder status based on whether/why primary tailoring variable is missing. Guided by actual clinical practice.

  32. Example Protocol for Missing Primary Tailoring Variable • Need a fixed, pre-specified protocol for determining responder status based on whether/why primary tailoring variable is missing. Guided by actual clinical practice. • Example 1: Classify all participants with missing response as non-responders. • Example 2: Classify all participants with missing response as responders.

  33. Manualizing Treatment Strategies • Recall: SMART participants move through stages of treatment as part of embedded ATSs • Treatment strategies are manualized – Not just the treatment options by themselves – Includes transitions between treatment options • Treatment has an expanded definition – Example: stepping down is a treatment decision • Recall: randomization is not part of treatment

  34. Prepare to Collect Other Potential Tailoring Variables • Use pilot study to pilot new scales, instruments, or items that could be used as tailoring variables in practice • Have protocols for discovering unanticipated tailoring variables: – Process measures (e.g., allegiance with therapist, families that are difficult to schedule) – Use focus groups during and at end of pilot – Use exit interviews during and at end of pilot

  35. Evaluation Assessment versus Treatment Assessment • It makes sense to use (blinded) independent evaluators to collect the outcome measures used to evaluate effectiveness of the embedded ATSs • But it is acceptable to use treating clinicians to measure the primary tailoring variable used to move to the second stage of treatment • A SMART pilot study can be used to practice protocols that keep these distinct

  36. Staff Acceptability of Changes in Treatment • Challenges in a SMART: – Researchers may not be accustomed to protocolized treatment sequences/strategies – A SMART may limit use of clinical judgement • Use a pilot SMART to identify concerns by staff and co-investigators about – Assessment of early non/response – Sequences of treatment provided

  37. Participant Adherence/Concerns about Changes in Treatment • Use the pilot SMART to identify concerns by participants using – Focus groups, exit interviews, or additional survey items • May ask participants about – Their experience transitioning between treatments – Was the rationale for treatment changes adequate? – Was the information you shared with clinician(s) in stage 1 appropriately understood by clinician(s) in stage 2?

  38. Randomization Procedure • A SMART pilot will allow investigators to practice randomization procedures • Up-front versus real-time randomization – Up-front: After baseline, randomize participants to the embedded ATSs – Real-time: Randomize sequentially • We recommend real-time because we can balance randomized second stage options based on responses to initial treatment.
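A minimal sketch of what real-time, response-balanced randomization might look like, assuming permuted blocks of size 2 within each responder stratum; the option labels switch_C and augment_D are hypothetical:

```python
import random

class StratifiedRandomizer:
    """Real-time randomization of second-stage options, balanced within
    responder strata via permuted blocks of size 2 (block size is an assumption)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.blocks = {}  # stratum -> assignments remaining in the current block

    def assign(self, responder):
        stratum = "R" if responder else "NR"
        if not self.blocks.get(stratum):
            block = ["switch_C", "augment_D"]  # hypothetical second-stage options
            self.rng.shuffle(block)
            self.blocks[stratum] = block
        return self.blocks[stratum].pop()
```

Because each block contains one of each option, any even number of assignments within a stratum is perfectly balanced; up-front randomization cannot guarantee this, since responder status is unknown at baseline.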

  39. ADHD SMART Design (PI: Pelham) Continue Responders Medication Medication Increase Medication Dose Non-Responders R Add Behavioral Intervention R Continue Responders Behavioral Intervention Behavioral Intervention Increase Behavioral Non-Responders R Intervention Add Medication

  40. Sample Size for a SMART Pilot • Sample size calculation based on feasibility aims, not treatment effect detection/evaluation • Approach 1: Primary feasibility aim is to ensure investigative team has opportunity to implement protocol from start to finish. – Assume: Need 2-3 children in each of the 6 cells – Assume: 10% drop out, 40% response rate – Need to recruit approximately 20 children for the SMART pilot study
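The arithmetic behind "approximately 20" can be reproduced directly, assuming 3 completers per cell as in the diagram on the next slide:

```python
import math

cells = 6          # 2 responder cells + 4 non-responder cells in the ADHD SMART
per_cell = 3       # completers needed per cell (the slide assumes 2-3)
dropout = 0.10

completers_needed = cells * per_cell                     # 18
recruit = math.ceil(completers_needed / (1 - dropout))   # 20
```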

  41. ADHD SMART Design (PI: Pelham) with pilot cell sizes [flow diagram] N = 18 total; N = 9 per first-line arm. Medication arm: N = 3 responders continue Medication; N = 6 non-responders re-randomized, N = 3 to Increase Medication Dose and N = 3 to Add Behavioral Intervention. Behavioral Intervention arm: N = 3 responders continue Behavioral Intervention; N = 6 non-responders re-randomized, N = 3 to Increase Behavioral Intervention and N = 3 to Add Medication.

  42. Sample Size for a SMART Pilot • Approach 2: To obtain estimate of overall non/response rate with a given margin of error – This is a more statistical justification – Usually requires larger sample than Approach 1 – Use if concern about large/small response rate • 95% MOE = 2*SQRT( p (1-p) / N ) • Example 1: p=0.35, MOE=0.15 requires N=41 • Example 2: p=0.50, MOE=0.10 requires N=100
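Inverting the slide's MOE formula gives the required N directly; a small Python check reproduces both examples:

```python
import math

def pilot_n_for_moe(p, moe):
    """Smallest N such that 95% MOE = 2*sqrt(p*(1-p)/N) <= moe."""
    return math.ceil(4 * p * (1 - p) / moe ** 2)

n1 = pilot_n_for_moe(0.35, 0.15)   # Example 1
n2 = pilot_n_for_moe(0.50, 0.10)   # Example 2
```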

  43. Primary Aims Using Data Arising from a SMART Getting SMART About Developing Individualized Sequences of Health Interventions University of Minnesota, NIMH Prevention Center, June 8 Daniel Almirall & Susan A. Murphy

  44. Primary Aims Outline • Review the Adaptive Interventions for Children with ADHD Study design – This is a SMART design • Two typical primary research questions in a SMART – Q1: Main effect of first-line treatment? – Q2: Comparison of two embedded ATSs? • Results from a worked example • SAS code snippets for the worked example

  45. Review the ADHD SMART Design [flow diagram: same design as slide 29, annotated with the timeline O1, A1, O2 / R status, A2, Y]

  46. There are 2 “first-line” treatment decisions [flow diagram: same annotated ADHD design as slide 45]

  47. Response/non-response at Week 8 is the primary tailoring variable [flow diagram: same annotated ADHD design as slide 45]

  48. There are 6 future or “second-line” treatment decisions [flow diagram: same annotated ADHD design as slide 45]

  49. There are 4 embedded adaptive treatment strategies in this SMART; here is one [flow diagram: same annotated ADHD design as slide 45, with one embedded strategy highlighted]

  50. There are 4 embedded adaptive treatment strategies in this SMART; here is another [flow diagram: same annotated ADHD design as slide 45, with another embedded strategy highlighted]

  51. Sequential randomizations ensure between-treatment-group balance [flow diagram: same annotated ADHD design as slide 45]

  52. A subset of the data arising from a SMART may look like this (simulated data):

  ID  O11 (ODD Dx)  O12 (Baseline ADHD Score)  O13 (Prior Med?)  A1 (First-Line Txt)  R (Resp/Non-resp)  A2 (Second-Line Txt)  Y (School Perfm)
   1       1              1.18                      0               -1 (MED)                 1                .                       3
   2       0             -0.567                     0               -1                       0                1 (INTSFY)              4
   3       0              0.553                     1                1 (BMOD)                0               -1 (ADDO)                4
   4       0             -0.013                     0                1                       0               -1                       4
   5       0             -0.571                     1                1                       0                1                       2
   6       0             -0.684                     1                1                       0               -1                       4
   7       0              1.169                     0               -1                       1                .                       3
   8       0              0.369                     1               -1                       0                1                       3

  53. Typical Primary Aim 1: Main effect of first-line treatment? • What is the best first-line treatment on average, controlling (by design) for future treatment? • Among children with ADHD: Is it better on average, in terms of end of study mean school performance, to begin treatment with a behavioral intervention or with medication?

  54. Primary Question 1 is simply a comparison of two groups! [flow diagram: same annotated ADHD design as slide 45, with the two first-line groups highlighted]

  55. Primary Question 1 is simply a comparison of two groups [diagram] Mean end-of-study outcome for all participants initially assigned to Medication versus mean end-of-study outcome for all participants initially assigned to Behavioral Intervention.

  56. SAS code for a 2-group mean comparison in end of study outcome

  * center covariates prior to regression;
  data dat1;
    set libdat.fakedata;
    o11c = o11 - 0.2666667;
    o12c = o12 - (-0.05561650);
    o13c = o13 - 0.2688887;
  run;

  * run regression to get between groups difference;
  proc genmod data = dat1;
    model y = a1 o11c o12c o13c;
    estimate 'Mean Y under BMOD' intercept 1 a1 1;
    estimate 'Mean Y under MED' intercept 1 a1 -1;
    estimate 'Between groups difference' a1 2;
  run;

  This analysis is with simulated data.

  57. The SAS code corresponds to a simple regression model

  proc genmod data = dat1;
    model y = a1 o11c o12c o13c;
    estimate 'Mean Y under BMOD' intercept 1 a1 1;
    estimate 'Mean Y under MED' intercept 1 a1 -1;
    estimate 'Between groups difference' a1 2;
  run;

  The regression logic:
  Y = b0 + b1*A1 + b2*O11c + b3*O12c + b4*O13c + e
  Mean Y under BMOD = E( Y | A1 = 1 ) = b0 + b1
  Mean Y under MED = E( Y | A1 = -1 ) = b0 - b1
  Between groups diff = E( Y | A1 = 1 ) - E( Y | A1 = -1 ) = (b0 + b1) - (b0 - b1) = 2*b1
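The contrast logic is easy to verify outside SAS: with ±1 coding and no other covariates, the OLS fit reduces to the two group means, so the between-groups difference equals 2*b1 exactly. A Python sketch on freshly simulated data (not the deck's data set):

```python
import random
from statistics import mean

random.seed(1)
a1 = [random.choice([-1, 1]) for _ in range(150)]        # first-line txt, +/-1 coded
y = [3.3 + 0.04 * a + random.gauss(0, 1) for a in a1]    # simulated outcome

# With an intercept and +/-1-coded A1, the fitted values are the group means:
y_bmod = mean(yi for yi, a in zip(y, a1) if a == 1)      # b0 + b1
y_med = mean(yi for yi, a in zip(y, a1) if a == -1)      # b0 - b1
b1 = (y_bmod - y_med) / 2
diff = 2 * b1                                            # between-groups difference
```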

  58. Primary Question 1 Results (Contrast Estimate Results)

  Label                    Estimate   95% Conf Limits     P-value
  Mean Y under BMOD        3.3443     (3.1431, 3.5436)    <.0001
  Mean Y under MED         3.2653     (3.0469, 3.4838)    <.0001
  Between groups diff      0.0780     (-0.2229, 0.3789)   0.6115

  In this simulated data set/experiment, there is no average effect of first-line treatment on school performance: mean diff = 0.08 (p = 0.61). This analysis is with simulated data.

  59. Or, here is the SAS code and results for the standard 2-sample t-test

  data dat2;
    set dat1;
    if a1 = 1 then a1tmp = "BMOD";
    if a1 = -1 then a1tmp = "MED";
  run;

  proc ttest data = dat2;
    class a1tmp;
    var y;
  run;

  The TTEST Procedure results:

  a1tmp             N    Mean      Std Err   P-value
  BMOD              82   3.2927    0.1090    -
  MED               68   3.3088    0.1053    -
  Diff (BMOD-MED)        -0.0161   0.1534    0.91

  This analysis is with simulated data.

  60. Side Analysis: Impact of first-line treatment on early non/response rate [diagram] Response rate for all participants initially assigned to Medication versus response rate for all participants initially assigned to Behavioral Intervention.

  61. Side analysis: SAS code and results for “myopic effect” of first-line treatment

  proc freq data = dat1;
    table a1*r / chisq nocol nopercent;
  run;

  Frequency (Row Pct)    R = 0        R = 1        Total
  A1 = -1 (MED)          34 (50.00)   34 (50.00)    68
  A1 =  1 (BMOD)         55 (67.07)   27 (32.93)    82
  Total                  89           61           150

  In terms of early non/response rate, initial MED is better than initial BMOD by 17% (p-value = 0.03). This analysis is with simulated data.
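The chi-square result can be reproduced from the 2x2 counts alone. A plain-Python check, using the Pearson chi-square statistic for a 2x2 table and the 1-df chi-square tail via erfc:

```python
import math

# Counts from the slide: (R=0, R=1) within each first-line group
med = (34, 34)    # 50% early response
bmod = (55, 27)   # ~33% early response

a, b = med
c, d = bmod
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
p = math.erfc(math.sqrt(chi2 / 2))        # upper tail of chi-square with 1 df

rate_diff = b / (a + b) - d / (c + d)     # MED minus BMOD early response rate
```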

  62. Typical Primary Question 2: Best of two adaptive interventions? • In terms of average school performance, which is the best of the following two ATSs? First treat with medication, then: • If response, continue treating with medication • If non-response, add behavioral intervention. Versus: First treat with behavioral intervention, then: • If response, continue behavioral intervention • If non-response, add medication.

  63. Comparison of mean outcome had the population followed the red ATS versus… [flow diagram: same annotated ADHD design as slide 45, red strategy highlighted]

  64. …versus the mean outcome had the whole population followed the blue ATS [flow diagram: same annotated ADHD design as slide 45, blue strategy highlighted]

  65. But we cannot compare mean outcomes for participants in red versus those in blue. [flow diagram: same annotated ADHD design as slide 45]

  66. There is imbalance in the non/responding participants following the red ATS… [flow diagram: starting from R(N), responders to MED continue MED with probability 1.00, while non-responders are re-randomized, with probability 0.5, to Add BMOD]
  …because, by design: • Responders to MED had a 0.5 = 1/2 chance of having followed the red ATS, whereas • Non-responders to MED had only a 0.5 x 0.5 = 0.25 = 1/4 chance of having followed the red ATS.
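One standard remedy for this imbalance (not shown on this slide, but standard in the SMART literature) is to weight each participant consistent with the red ATS by the inverse of the probability of following it: responders get weight 1/(1/2) = 2 and non-responders weight 1/(1/4) = 4. A tiny worked sketch with hypothetical outcomes:

```python
# (weight, outcome Y) for participants whose treatment path is consistent
# with the red ATS; weights follow the slide's probabilities, outcomes are
# illustrative only.
followers = [
    (2.0, 3.5),  # responder to MED, continued MED      (weight 1 / (1/2))
    (2.0, 3.1),  # responder to MED, continued MED
    (4.0, 2.0),  # non-responder, re-randomized to Add BMOD (weight 1 / (1/4))
]

# Inverse-probability-weighted mean outcome under the red ATS:
mean_red = sum(w * y for w, y in followers) / sum(w for w, _ in followers)
# = (2*3.5 + 2*3.1 + 4*2.0) / (2 + 2 + 4) = 21.2 / 8 = 2.65
```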
