Beyond measuring average program impacts, it is important to understand how impacts vary. This paper gives a broad overview of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from variation in program effects across individuals, across subgroups of individuals, and across program sites.
Lessons from a Simulation Study
This paper makes valuable contributions to the literature on multiple-rating regression discontinuity designs (MRRDDs). It makes concrete recommendations for choosing among existing MRRDD estimation methods, for implementing any chosen method using local linear regression, and for obtaining accurate statistical inferences.
Design Options for an Evaluation of Head Start Coaching
Using a study of coaching in Head Start as an example, this report reviews potential experimental design options that get inside the “black box” of social interventions by estimating the effects of individual components. It concludes that factorial designs are usually most appropriate.
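The appeal of a factorial design can be illustrated with a toy simulation (the two components and all effect sizes below are invented for illustration, not taken from the report): because each participant is independently randomized to every component, a single regression recovers each component's main effect, and their interaction, using the full sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# 2x2 factorial: each participant is independently assigned to each component
a = rng.integers(0, 2, n)   # hypothetical component A (e.g., coaching) on/off
b = rng.integers(0, 2, n)   # hypothetical component B on/off
y = 0.3 * a + 0.1 * b + 0.05 * a * b + rng.normal(0, 1, n)

# One regression recovers both main effects and the interaction,
# drawing on the full sample for every contrast
X = np.column_stack([np.ones(n), a, b, a * b])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"effect of A: {beta[1]:.2f}, effect of B: {beta[2]:.2f}, A x B: {beta[3]:.2f}")
```

This is the sense in which factorial designs are efficient: every participant contributes to the estimate of every component's effect, rather than being split across separate single-component arms.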
This report provides recommendations for an evaluation of coaching that may impact teacher and classroom practices in Head Start and other early childhood settings — including about the research questions; the design of the impact study, implementation research, and cost analysis; and logistical challenges for carrying out the design.
In many evaluations, individuals are randomly assigned to experimental arms and then grouped to receive services. In this situation, the grouping may need to be accounted for when estimating the standard error of the impact estimate. This paper demonstrates that nonrandom sorting of individuals into groups can bias the standard errors reported by common estimation approaches.
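A small simulation (all numbers hypothetical, not from the paper) illustrates the mechanism: when individuals are served in groups that share a common random effect, a standard-error formula that assumes independent observations understates uncertainty relative to a cluster-robust (sandwich) estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups, group_size = 40, 25
n = n_groups * group_size

# Individuals are randomly assigned, then served in groups; each group
# adds a shared random effect to its members' outcomes.
treat = rng.permutation(np.repeat([0, 1], n // 2))
group = np.empty(n, dtype=int)
# hypothetical grouping: treated and control individuals grouped separately
group[treat == 1] = np.arange((treat == 1).sum()) % (n_groups // 2)
group[treat == 0] = n_groups // 2 + np.arange((treat == 0).sum()) % (n_groups // 2)

group_effect = rng.normal(0, 0.5, n_groups)
y = 0.2 * treat + group_effect[group] + rng.normal(0, 1, n)

# OLS impact estimate (intercept + treatment indicator)
X = np.column_stack([np.ones(n), treat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Naive (iid) standard error ignores the grouping
naive_var = resid @ resid / (n - 2) * XtX_inv
se_naive = np.sqrt(naive_var[1, 1])

# Cluster-robust (sandwich) standard error accounts for it
meat = np.zeros((2, 2))
for g in range(n_groups):
    s = X[group == g].T @ resid[group == g]
    meat += np.outer(s, s)
se_cluster = np.sqrt((XtX_inv @ meat @ XtX_inv)[1, 1])

print(f"naive SE:   {se_naive:.3f}")
print(f"cluster SE: {se_cluster:.3f}")
```

With the shared group effect in this setup, the cluster-robust standard error comes out substantially larger than the naive one, which is the gap the paper's analysis is concerned with.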
This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. The paper looks at the internal validity and precision of these two designs, using the example of the federal Reading First program as implemented in a midwestern state.
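The DD logic can be sketched in a few lines (the panel of schools, effect sizes, and timing below are invented, not Reading First data): the impact estimate is the pre-to-post change in treated schools minus the change in comparison schools, which nets out any common trend and any fixed baseline gap.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical school-level panel: treated vs. comparison schools, 4 years
n_schools, years = 30, 4
treated = np.repeat([0, 1], n_schools // 2)
scores = np.zeros((n_schools, years))
for t in range(years):
    post = int(t >= 2)                       # program starts in year 2
    scores[:, t] = (50 + 2 * t               # common trend
                    + 1.5 * treated          # fixed baseline gap
                    + 3.0 * treated * post   # true program effect
                    + rng.normal(0, 2, n_schools))

# Difference-in-differences: change for treated schools minus change for comparisons
pre, post_ = scores[:, :2].mean(axis=1), scores[:, 2:].mean(axis=1)
gain = post_ - pre
dd = gain[treated == 1].mean() - gain[treated == 0].mean()
print(f"DD impact estimate: {dd:.2f}")  # true effect parameter is 3.0
```

A CITS design extends this logic by modeling each group's pre-program trend rather than just its pre-program level, which is one source of the validity differences the paper examines.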
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
This paper explores the use of instrumental variables analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome.
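One common implementation of this idea uses random assignment, interacted with site indicators, as instruments for the mediator. The sketch below (with invented data and effect sizes, not the paper's analysis) shows a plain-NumPy two-stage least squares version in which site-varying compliance generates the cross-site instrument variation.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sites, n_per = 20, 100
n = n_sites * n_per
site = np.repeat(np.arange(n_sites), n_per)

# Random assignment within each site
z = np.tile(np.repeat([0, 1], n_per // 2), n_sites)

# Site-varying compliance: assignment moves the mediator by a site-specific amount
compliance = rng.uniform(0.3, 0.9, n_sites)
u = rng.normal(0, 1, n)                      # unobserved confounder
m = compliance[site] * z + 0.5 * u + rng.normal(0, 1, n)
y = 0.4 * m + 0.5 * u + rng.normal(0, 1, n)  # true mediator effect is 0.4

# 2SLS: instrument the mediator with site-by-assignment interactions,
# controlling for site fixed effects
S = np.eye(n_sites)[site]                  # site dummies (exogenous controls)
Z = np.column_stack([S, S * z[:, None]])   # instruments: dummies + interactions
X = np.column_stack([S, m])                # site dummies + endogenous mediator

PZ_X = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first-stage fitted values
beta = np.linalg.lstsq(PZ_X, y, rcond=None)[0]
print(f"2SLS mediator effect: {beta[-1]:.3f}")
```

A naive regression of the outcome on the mediator would be biased by the confounder; instrumenting with randomized assignment removes that bias, at the cost of the assumptions (notably the exclusion restriction) that the paper's framework scrutinizes.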
Despite the growing popularity of regression discontinuity analysis, there is only a limited amount of accessible information to guide researchers in the implementation of this research design. This paper provides an overview of the approach and, in easy-to-understand language, offers best practices and general guidance for practitioners.
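As a minimal illustration of the sharp case (the data, jump size, and bandwidth below are invented for this sketch), a local linear regression fit separately on each side of the cutoff estimates the discontinuity in outcomes at the cutoff:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Hypothetical sharp RDD: treatment is assigned when the rating crosses a cutoff
rating = rng.uniform(-1, 1, n)
cutoff = 0.0
treat = (rating >= cutoff).astype(float)
y = 0.5 * treat + 0.8 * rating + rng.normal(0, 1, n)  # true jump is 0.5

# Local linear regression within a bandwidth around the cutoff, allowing
# separate slopes on each side; the treatment coefficient is the estimated jump
h = 0.3
w = np.abs(rating - cutoff) <= h
r = rating[w] - cutoff
X = np.column_stack([np.ones(w.sum()), treat[w], r, treat[w] * r])
beta, *_ = np.linalg.lstsq(X, y[w], rcond=None)
print(f"estimated discontinuity: {beta[1]:.2f}")
```

Bandwidth choice, functional form on each side of the cutoff, and inference are exactly the implementation decisions for which practitioner guidance of the kind this paper offers is needed.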
Using Bayesian methods as an alternative to classical statistics, this paper reanalyzes results from three published studies of interventions to increase employment and reduce welfare dependency. The analysis formally incorporates prior beliefs about the interventions, characterizing the results in terms of the distribution of possible effects, and generally confirms the earlier published findings.
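The core of such a reanalysis can be sketched with a conjugate normal-normal update (all numbers below are hypothetical, not taken from the reanalyzed studies): the posterior distribution of the effect combines the prior belief and the study estimate in proportion to their precisions.

```python
# Normal-normal conjugate update: combine a prior belief about a program's
# impact with a study's estimate (all numbers hypothetical).
prior_mean, prior_sd = 0.0, 0.10      # skeptical prior about the impact
est, se = 0.08, 0.03                  # published point estimate and its SE

post_prec = 1 / prior_sd**2 + 1 / se**2
post_mean = (prior_mean / prior_sd**2 + est / se**2) / post_prec
post_sd = post_prec ** -0.5
print(f"posterior: {post_mean:.3f} +/- {post_sd:.3f}")
```

Because the (hypothetical) study estimate here is much more precise than the prior, the posterior sits close to the published estimate, shrunk slightly toward the prior mean, which is the sense in which such a reanalysis can "generally confirm" earlier findings.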