Evaluations of education interventions can help decision-makers improve schools and programs in ways that help all learners succeed and achieve their full potential. High-quality impact studies of education interventions provide the strongest evidence about whether interventions improve academic outcomes such as school readiness, achievement, learning, persistence, or graduation; social and behavioral competencies; or employment and earnings. Yet information about whether, and by how much, a tested intervention improves outcomes is only part of the story. To learn why and how impact findings vary, and to support the broader use of effective interventions, educators need to understand how, and under what conditions, an intervention was implemented. High-quality implementation research can build this understanding.
Implementation analyses conducted as part of impact studies can help educators judge whether a tested intervention is likely to be a good fit for their own settings. This guide, developed by Mathematica and MDRC with support from the Institute of Education Sciences, can help researchers design and conduct these kinds of analyses. The guide provides steps and recommendations for specifying implementation research questions, assessing whether and how the planned intervention is implemented, documenting the context in which the intervention is implemented, and measuring the difference between the intervention and what members of the control group receive. It presents strategies for analyzing and reporting on these topics and for linking implementation findings to impact findings. The guide offers key definitions, examples, templates, and links to resources.