This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. It assesses the internal validity and precision of these two designs, using the example of the federal Reading First program as implemented in a Midwestern state.
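The core logic of a DD estimate can be sketched in a few lines. The numbers below are invented for illustration and do not come from the Reading First study; the point is only the arithmetic of netting out the comparison group's change over time.

```python
# Minimal difference-in-differences (DD) sketch on hypothetical
# school-level mean reading scores; all values are invented.

pre = {"reading_first": 48.0, "comparison": 50.0}   # mean score before the program
post = {"reading_first": 55.0, "comparison": 52.0}  # mean score after the program

# Change over time within each group
change_treat = post["reading_first"] - pre["reading_first"]  # 7.0
change_comp = post["comparison"] - pre["comparison"]         # 2.0

# DD estimate: the treatment group's change, net of the
# comparison group's change (which proxies for the trend
# the treatment group would have followed anyway)
dd_estimate = change_treat - change_comp
print(dd_estimate)  # 5.0
```

A CITS design extends this idea by modeling each group's pre-program trend over multiple time points rather than a single pre-period mean.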
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
This paper provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of educational interventions.
This paper provides a detailed discussion of the theory and practice of modern regression discontinuity analysis. It describes how regression discontinuity analysis can provide valid and reliable estimates of causal effects, both in general and for the effects of a particular treatment on outcomes for specific persons or groups.
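The basic regression discontinuity estimator can be illustrated on simulated data. The cutoff, effect size, bandwidth, and local linear specification below are illustrative assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

# Regression discontinuity sketch on simulated data: units are assigned
# to treatment when their rating (assignment) variable crosses a cutoff.
rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(-1, 1, n)       # rating variable, cutoff at 0
treated = score >= 0                # treatment assigned by the cutoff rule
true_effect = 2.0                   # simulated impact at the cutoff
outcome = 1.0 + 0.5 * score + true_effect * treated + rng.normal(0, 0.3, n)

# Fit separate local linear regressions within a bandwidth on each side
bw = 0.5
left = (score < 0) & (score > -bw)
right = (score >= 0) & (score < bw)
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)

# RD impact estimate: the gap between the two fitted lines at the cutoff
rd_estimate = np.polyval(fit_right, 0.0) - np.polyval(fit_left, 0.0)
print(rd_estimate)  # close to the simulated effect of 2.0
```

The estimate is local to the cutoff: it identifies the effect for units whose ratings fall near the threshold, which is the sense in which RD recovers effects for particular persons or groups.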
This MDRC research methodology working paper examines the core analytic elements of randomized experiments for social research. Its goal is to provide a compact discussion of the design and analysis of randomized experiments for measuring the impact of social or educational interventions.