Beyond measuring average program impacts, it is important to understand how impacts vary. This paper gives a broad overview of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from variation in program effects across individuals, across subgroups of individuals, and across program sites.
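As a stylized illustration of the statistical setup, the sketch below simulates a multisite trial and fits a random-coefficient model in which the variance of the treatment slope captures cross-site variation in program impacts. The site counts, effect sizes, and use of statsmodels' MixedLM are illustrative assumptions, not details drawn from the paper.

```python
# Hypothetical sketch: simulate a multisite randomized trial and estimate
# cross-site impact variation with a random-coefficient model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site = 30, 100

rows = []
for site in range(n_sites):
    site_effect = rng.normal(0.0, 1.0)       # site-specific intercept
    site_impact = rng.normal(0.25, 0.15)     # site-specific treatment effect
    treat = rng.integers(0, 2, n_per_site)   # within-site random assignment
    y = site_effect + site_impact * treat + rng.normal(0, 1, n_per_site)
    rows.append(pd.DataFrame({"site": site, "treat": treat, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Random intercept and random treatment slope by site: the estimated variance
# of the 'treat' slope is the cross-site impact variation of interest.
model = smf.mixedlm("y ~ treat", data, groups="site", re_formula="~treat")
result = model.fit()
print(result.summary())
```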
Lessons from a Simulation Study
This paper makes valuable contributions to the literature on multiple-rating regression discontinuity designs (MRRDDs). It makes concrete recommendations for choosing among existing MRRDD estimation methods, for implementing any chosen method using local linear regression, and for providing accurate statistical inferences.
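To make the local linear regression step concrete, here is a minimal sketch of a sharp regression discontinuity estimate at a single cutoff, the building block that MRRDD methods apply across multiple rating variables. The simulated data, bandwidth, and triangular kernel are illustrative assumptions, not the paper's recommended settings.

```python
# Hypothetical sketch of local linear RD estimation at one cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
rating = rng.uniform(-1, 1, n)            # rating variable, centered at cutoff
treat = (rating >= 0).astype(float)       # treatment assigned at the cutoff
y = 0.5 * rating + 0.3 * treat + rng.normal(0, 0.5, n)

h = 0.25                                  # assumed bandwidth
in_window = np.abs(rating) <= h
w = 1 - np.abs(rating[in_window]) / h     # triangular kernel weights

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on 'treat' is the estimated impact at the cutoff.
X = sm.add_constant(np.column_stack([
    treat[in_window],
    rating[in_window],
    rating[in_window] * treat[in_window],
]))
fit = sm.WLS(y[in_window], X, weights=w).fit()
print("Estimated impact at the cutoff:", fit.params[1])
```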
Design Options for an Evaluation of Head Start Coaching
Using a study of coaching in Head Start as an example, this report reviews potential experimental design options that get inside the “black box” of social interventions by estimating the effects of individual components. It concludes that factorial designs are usually most appropriate.
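A minimal sketch of why factorial designs are attractive here: randomizing two components independently lets a single experiment estimate each component's main effect and their interaction. The component names and effect sizes below are illustrative assumptions, not findings from the report.

```python
# Hypothetical sketch of a 2x2 factorial design: each participant is
# independently randomized to two program components.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1200
data = pd.DataFrame({
    "coaching": rng.integers(0, 2, n),   # component A: one-on-one coaching
    "training": rng.integers(0, 2, n),   # component B: group training
})
data["y"] = (
    0.20 * data["coaching"]
    + 0.10 * data["training"]
    + 0.05 * data["coaching"] * data["training"]
    + rng.normal(0, 1, n)
)

# One experiment yields both main effects and the interaction.
fit = smf.ols("y ~ coaching * training", data).fit()
print(fit.summary())
```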
Design Report
This report provides recommendations for an evaluation of coaching aimed at improving teacher and classroom practices in Head Start and other early childhood settings, covering the research questions; the design of the impact study, implementation research, and cost analysis; and the logistical challenges of carrying out the design.
In many evaluations, individuals are randomly assigned to experimental arms and then grouped to receive services. In this situation, the grouping may need to be accounted for when estimating the standard error of the impact estimate. This paper demonstrates that nonrandom sorting of individuals into groups can bias the standard errors reported by common estimation approaches.
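A stylized simulation of the problem: treatment-group members are sorted into service groups that share a common shock, so a conventional OLS standard error understates uncertainty relative to a cluster-robust one. The group structure and variance components below are illustrative assumptions, not the paper's data or method.

```python
# Hypothetical sketch: individuals are randomized, then treatment members
# receive services in groups with shared group-level shocks.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, group_size = 1000, 20
treat = rng.integers(0, 2, n)

n_groups = n // (2 * group_size)
group = np.where(treat == 1, rng.integers(0, n_groups, n), -1)
group_shock = rng.normal(0, 0.5, n_groups)
y = (0.2 * treat
     + np.where(treat == 1, group_shock[np.maximum(group, 0)], 0.0)
     + rng.normal(0, 1, n))

# Untreated individuals act as singleton clusters (unique negative ids).
data = pd.DataFrame({
    "y": y,
    "treat": treat,
    "cluster": np.where(group >= 0, group, -np.arange(n) - 1),
})

naive = smf.ols("y ~ treat", data).fit()
clustered = smf.ols("y ~ treat", data).fit(
    cov_type="cluster", cov_kwds={"groups": data["cluster"]})
print("Conventional SE:  ", naive.bse["treat"])
print("Cluster-robust SE:", clustered.bse["treat"])
```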
Planning for the Jobs-Plus Demonstration
Statistical Implications for the Evaluation of Education Programs
A How-To Guide for Planners and Providers of Welfare-to-Work and Other Employment and Training Programs