How Should CMMI Evaluations Attempt to Account for Model Overlap?


  1. How Should CMMI Evaluations Attempt to Account for Model Overlap?
   Workgroup members: Gregory Boyer, Susannah Cafardi, Philip Cotterill, Tim Day, Franklin Hendrick, Jennifer Lloyd, Patricia Markovich, Kelsey Weaver

  2. Objective
   Develop a strategy to identify and understand the impact of CMMI model tests in a constantly changing landscape that includes rule changes and co-occurring models and initiatives.

  3. Evaluation
   Goal:
   • Obtain an accurate and unbiased estimate of the impact of the model test on measures related to:
     – Quality of care
     – Spending
     – Health outcomes
   Quantitative Methods:
   • Require a comparison group for credibility
     – Compare estimates from the intervention group to those of a comparison group
     – The comparison group serves as the counterfactual for the intervention group

  4. Counterfactual: What would have happened without the intervention?
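The deck states the counterfactual idea without formalizing it. One common way to write it down is potential-outcomes notation with a difference-in-differences comparison; the notation below is illustrative only and is not taken from the slides.

```latex
% Potential-outcomes sketch (illustrative notation, not from the deck).
% Y_i(1): outcome for unit i under the model; Y_i(0): outcome without it.
% The evaluation target is the average effect on participants:
\[
  \tau = \mathbb{E}\bigl[\,Y_i(1) - Y_i(0) \mid D_i = 1\,\bigr],
\]
% but Y_i(0) is never observed for participants, so a comparison group
% proxies it. A difference-in-differences estimator uses pre/post changes:
\[
  \hat{\tau}_{\mathrm{DiD}}
    = \bigl(\bar{Y}^{\mathrm{int}}_{\mathrm{post}} - \bar{Y}^{\mathrm{int}}_{\mathrm{pre}}\bigr)
    - \bigl(\bar{Y}^{\mathrm{comp}}_{\mathrm{post}} - \bar{Y}^{\mathrm{comp}}_{\mathrm{pre}}\bigr),
\]
% which is unbiased only if the comparison group's trend reflects what the
% intervention group would have experienced absent the model.
```

The estimator is only as credible as that parallel-trends condition, which is the same "well-matched at baseline" requirement the deck raises later when discussing the dynamic counterfactual.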

  5. Considerations for Addressing Overlap in the Model Design Phase

  6. The Impact of Model Design Decisions
   • Need to know up front what questions we want answered, and set up the design to answer those questions
     – Beneficiary Engagement and Incentives (BEI) Model
   • May require development of a model that is larger in scope; other factors may influence whether the sample required to examine interactions is feasible or desired (illustrated in the sketch below)
     – Comprehensive Primary Care Plus (CPC+)
   • Must also consider implications of model design choices on existing models
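The point that examining interactions may require a model "larger in scope" is, at bottom, a statistical-power question. The sketch below is a generic illustration using statsmodels; the effect sizes and thresholds are placeholder assumptions, not CMMI figures.

```python
# Hypothetical illustration (not from the deck): why examining interaction
# effects between co-occurring models can require a much larger sample.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed standardized effect sizes -- placeholders, not CMMI estimates.
main_effect = 0.20         # overall model impact
interaction_effect = 0.10  # differential impact when models overlap

n_main = analysis.solve_power(effect_size=main_effect, alpha=0.05, power=0.80)
n_interaction = analysis.solve_power(effect_size=interaction_effect, alpha=0.05, power=0.80)

print(f"Per-group n to detect main effect:        {n_main:.0f}")
print(f"Per-group n to detect interaction effect: {n_interaction:.0f}")
# Required n scales roughly with 1 / effect_size**2, so an interaction half
# the size of the main effect needs roughly four times the sample.
```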

  7. The Impact of Model Design Decisions
   • Design decisions can substantially impact the ability of the comparison group to serve as a true counterfactual
   • Isolating the intervention's impact requires comparison group(s) of non-participants similar to intervention participants
   • Restricting eligibility for the model complicates comparison group construction (see the sketch below):
     – Allowing the comparison group to participate in the prohibited overlapping initiative may lead to a comparison of the model vs. another intervention
     – However, applying the same restrictions may eliminate too much of the similar population from comparison eligibility
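A minimal sketch of the trade-off just described, using invented pandas column names (meets_model_eligibility, in_overlapping_initiative) rather than any actual CMMI data layout:

```python
# Hypothetical sketch of the trade-off on this slide; column names are
# invented for illustration and do not reflect actual CMMI data structures.
import pandas as pd

def build_comparison_pool(candidates: pd.DataFrame,
                          exclude_overlap: bool) -> pd.DataFrame:
    """Return non-participants eligible to serve as comparisons."""
    pool = candidates[candidates["meets_model_eligibility"]]
    if exclude_overlap:
        # Mirrors the model's restriction: drop anyone in the prohibited
        # overlapping initiative. Cleaner counterfactual, smaller pool.
        pool = pool[~pool["in_overlapping_initiative"]]
    # If exclude_overlap is False, the pool keeps overlap participants, and
    # the evaluation risks estimating "model vs. other intervention" rather
    # than "model vs. no intervention."
    return pool

# Toy data to show how quickly the restricted pool can shrink.
candidates = pd.DataFrame({
    "meets_model_eligibility": [True, True, True, True, False],
    "in_overlapping_initiative": [True, True, False, True, False],
})
print(len(build_comparison_pool(candidates, exclude_overlap=True)))   # 1
print(len(build_comparison_pool(candidates, exclude_overlap=False)))  # 4
```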

  8. To Randomize or Not to Randomize? That is the question.
   Advantages:
   • Improves comparability of intervention and comparison groups on factors not matched on (not available in claims)
   • Generates an appropriate counterfactual
   • Ensures intervention status is independent of participation in other initiatives at baseline
     – Helpful for those initiatives not in CMMI data
   Potential Challenges:
   • Not always a viable option
   • May not fully address differences between intervention and comparison group
   • Will require an increased number of participants to reach the model's target
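For concreteness, a minimal unit-level randomization sketch follows; it is one of many possible designs and not the assignment procedure of any specific CMMI model.

```python
# Minimal randomization sketch -- a generic illustration, not the
# procedure used by any particular CMMI model.
import random

def randomize(unit_ids: list[str], treat_fraction: float = 0.5,
              seed: int = 2024) -> dict[str, str]:
    """Randomly assign units to 'intervention' or 'comparison'."""
    rng = random.Random(seed)   # fixed seed keeps assignment reproducible and auditable
    shuffled = unit_ids[:]
    rng.shuffle(shuffled)
    n_treat = round(len(shuffled) * treat_fraction)
    return {uid: ("intervention" if i < n_treat else "comparison")
            for i, uid in enumerate(shuffled)}

# Because assignment depends only on the random draw, baseline participation
# in other initiatives is balanced across arms in expectation -- the
# independence advantage listed on this slide.
arms = randomize([f"site-{i:03d}" for i in range(10)])
print(arms)
```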

  9. Randomization: Example
   Identification of an Appropriate Comparison Group – Accountable Health Communities (AHC)
   • Impossible to ascertain social needs data from claims
   Advantages:
   • Intervention and comparison beneficiaries more likely to be similar
     – Reduces the need for data for matching
   • Improves ability to isolate the impact of the model, with an increased level of certainty and fewer caveats
   • Allows us to obtain data on comparison beneficiaries beyond what is readily available in claims
   Challenges:
   • Difficult for providers to identify a social need in comparison beneficiaries without addressing it
   • Variation in how randomization is implemented

  10. Non-Randomization: Example
   Identification of an Appropriate Comparison Group – Comprehensive Primary Care Plus (CPC+)
   [Diagram: timeline comparing the pre-intervention landscape with the performance years, showing the CPC+ intervention and comparison groups overlapping with OCM, MSSP, and CPC participation]
   • What are the implications of MSSP and CPC inclusion?

  11. Generalizability
   When model participants are not reflective of the larger healthcare landscape (e.g., they are high performers or early adopters), this limits our ability to generalize to other populations and to properly assess the scalability of the model.

  12. Considerations for Addressing Overlap in the Evaluation Design Phase

  13. Similarity of the Initiatives and Expected Impacts on Outcomes

  14. The Counterfactual is not Static

  15. The Counterfactual is not Static
   • CMMI implements models in an ever-changing healthcare landscape
     – Drivers of change include the following:
       • Natural evolution of clinical practice
       • New models
       • Changing MA penetration rates
       • Medicare programs
       • State health initiatives
   • If well matched at baseline, changes observed in the comparison group should reflect what would have happened to the intervention group in the absence of the intervention
     – That said, these changes have the potential to substantially impact the interpretation of the evaluation results
   • Qualitative data and thoughtful supplemental analyses are essential for understanding the context within which the model is operating

  16. Dynamic Counterfactual: Example
   Importance of the Counterfactual – Health Quality Partners Medicare Coordinated Care Demonstration (MCCD) Site
   • Randomized trial
     – Half of enrollees received coordinated care management
     – Half received usual care
   • 2002-2010
     – Individuals in the program had significantly lower hospitalizations and spending
     – The program was viewed as very successful and was continued
   • 2010-2014
     – The program did not reduce hospitalizations or generate Medicare savings
   • The differences during the later period disappeared due to changes in "usual care" with the ACA's Hospital Readmissions Reduction Program and the introduction of hospital care coordination programs

  17. Telling the Story: The Role of Qualitative Data
   A strong qualitative approach can help inform and interpret the quantitative analyses:
   – What else is happening; what does the landscape look like?
     • How significantly are co-occurring initiatives impacting the model and its outcomes?
   – What is changing over time?
   – Do the results make sense given the timeframe of the intervention?
   – Could external factors be driving the results?

  18. Conclusions
   • We must thoughtfully consider the role of co-occurring initiatives throughout the design of the model and the evaluation
   • Making informed decisions and understanding their impact on evaluation results requires the following:
     – That we are asking the right questions
     – That we are able to determine what to do with the answers
   • The development and evaluation of each model face a unique compilation of challenges related to co-occurring initiatives
   • Interpreting model results in the context of similar models with similar goals often requires more framing and additional caveats
     – i.e., it's the effects of a given model compared to what, exactly?
