Evidence-Based Policy Reform: Exploring the Role of Evidence in States' Model Selection for the Maternal, Infant, and Early Childhood Home Visiting Program
Kawar, Anna Neal (2013-04-19)

Policy Question

Did the U.S. Department of Health and Human Services (DHHS) successfully signal to states that the driving factor for model selection for the Maternal, Infant, and Early Childhood Home Visiting Program (MIECHV) should be strong evidence of effectiveness?
o On which factors did states base their selection of models?
o Were states rewarded through competitive funding for selecting stronger models?

Program Overview

Title V of the Social Security Act of 1935 included federal aid for maternal and child health services (part 1), services for children with disabilities (part 2), and child welfare services (part 3), all to be administered through the Children's Bureau. Today, Title V remains the only federal program solely devoted to the health of all mothers and children.

In the current political climate, with a strong emphasis on deficit reduction and continued debates over the size and role of government, federal entitlement programs have come under increased scrutiny. It is important to ensure that each dollar is optimally spent and that the programs that are funded are known to be effective. Randomized evaluations have been hailed as the best way to precisely measure impact and gather evidence regarding the true effectiveness of a social program. Policymakers can then use this evidence to make better decisions about which programs to fund and to garner support for social programs that have been proven effective.

In 2010, President Obama signed into law the Patient Protection and Affordable Care Act (ACA), which included authorization for MIECHV, a program designed to strengthen and improve Title V programs and services. The majority of funding is reserved for evidence-based programs: models that have been developed, evaluated, and shown to produce significant improvements in outcomes. DHHS launched the Home Visiting Evidence of Effectiveness (HomVEE) project and created a team to review the existing home visiting programs and literature. HomVEE's review of 32 models resulted in a list of 13 models that met DHHS's criteria for an evidence-based home visiting model.

The 13 models that HomVEE approved as evidence-based home visiting models for MIECHV are:
1) Child FIRST
2) Early Head Start-Home Visiting
3) Early Intervention Program for Adolescent Mothers (EIP)
4) Early Start (New Zealand)
5) Family Check-Up
6) Healthy Families America (HFA)
7) Healthy Steps
8) Home Instruction for Parents of Preschool Youngsters (HIPPY)
9) Nurse Family Partnership (NFP)
10) Oklahoma's Community-Based Family Resource and Support (CBFRS) Program
11) Parents as Teachers (PAT)
12) Play and Learning Strategies (PALS) Infant
13) SafeCare Augmented

The Coalition for Evidence-Based Policy is interested in evaluating the success of the Home Visiting Program to determine how clearly the need for evidence-based programs was signaled to states and to learn more about the barriers states may have faced in selecting evidence-based models and programs. The Coalition found a wide variety of evidence of effectiveness among the 13 models selected by HomVEE. Only one program, NFP, is ranked as "strong" by the Coalition, primarily because its effects were significant, strong, and sustained, but most importantly because they were replicated.
Replication lowers the likelihood that effects are observed by chance and therefore increases confidence that the program is delivering real effects.

Methodology

To determine the driving factors in states' model selection, my analysis combined desk research with telephone interviews with a select number of states. I selected 13 states and interviewed them by telephone about their process for, and the driving factors in, model selection.

Key Findings

Based on the interviews, the driving forces behind model selection were:
• Models already exist or are established in the state with a strong statewide network.
o Of the 13 states interviewed, 11 (85%) selected models that already existed in their state; only two states chose to "start over" with brand new models.
• Models would have the most impact on the federal benchmarks for MIECHV.
• Models are the best fit for the needs of the state's target population given its capacity.
• Cost of implementation.
• Models target a specific need and/or risk factor identified in the state.

Only six of the 13 models listed by HomVEE were chosen across all 38 states. The three most commonly selected models were: 1) HFA, selected by 28 states; 2) NFP, selected by 25 states; and 3) PAT, selected by 22 states. HFA represents 29% of all models selected, NFP represents 26%, PAT represents 23%, and the remaining three models represent 21%. NFP is the only selected model that the Coalition ranks as "strong"; therefore, only 26% of all models selected under MIECHV are ranked as having strong evidence of effectiveness. A key hypothesis regarding these findings is that, because NFP restricts enrollment to first-time mothers, many states chose other models in parallel to ensure that all at-risk populations could be served.

Recommendations

Considering Evidence: Add to the definition of evidence-based models, and to the selection criteria, the requirement that effects be "substantial and important" as well as statistically significant. By not focusing solely on statistical significance, this language would close a loophole that allows weaker models to be selected. Statistically significant effects can exist for trivial outcomes, can be so small in magnitude that they are of little practical importance, or can be chance findings when a program is evaluated on a large number of outcomes (see the brief illustrative calculation at the end of this summary).

Model Selection: As MIECHV evolves and the list of models continues to develop, it may be important to select models that have strong evidence of effectiveness in varying contexts and for varying outcomes. Ideally, this process would result in a list of models that are each individually strong and designed to target specific populations in specific contexts; taken together, the models would cover all target populations.

Providing Information and Tools to States: DHHS should provide states with more materials, toolkits, and matrices that they can use to thoroughly research and compare models and budget estimates. DHHS should also create a feedback loop that improves communication, standardization, and the sharing of best practices, and that fosters a culture conducive to improvement and innovation. Standardizing model selection and implementation would not only help create centralized state systems but also improve data collection, reporting, oversight, and planning at the federal level. Costs, resources, and staff time could then be controlled, optimized, and subsidized as needed.
Implementation: Implementation is a commonly overlooked aspect of programs; the assumption is often that if a program is designed well, it will work as designed. It has become increasingly evident that the effectiveness of the same program can vary significantly, not only because of the varying contexts within which it is implemented, but also because of varying processes, management styles, and service delivery on the ground.

Innovation and Continuous Improvement: MIECHV was designed with two tiers of funding to allow promising models to be evaluated to determine the strength of their evidence of effectiveness. A more thorough understanding of improvement science can help ensure continued innovation by learning from failures and allowing small tests of change in understanding and redesigning the system of maternal and child health.
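Illustrative note on the multiple-outcomes concern: the short calculation below is a sketch only; the choice of 20 outcomes, the 5% significance level, and the assumption of independent tests are illustrative and are not drawn from the MIECHV or HomVEE evaluations.

% Probability that a program with no true effects produces at least one
% "statistically significant" result when k independent outcomes are each
% tested at significance level \alpha:
\[
  P(\text{at least one false positive}) = 1 - (1 - \alpha)^{k}
\]
% Illustrative (assumed) values: \alpha = 0.05, k = 20 outcomes
\[
  1 - (0.95)^{20} \approx 0.64
\]

Under these illustrative assumptions, an evaluation that tests 20 outcomes has roughly a two-in-three chance of reporting at least one significant finding even if the program has no real effect, which is why the Considering Evidence recommendation emphasizes substantive importance alongside statistical significance.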