As an alternative to random assignment, a regression discontinuity design takes advantage of situations where program eligibility is determined by whether a score exceeds a threshold. With careful attention to assumptions, analysis, and interpretation, this quasi-experimental design can provide rigorous estimates of program effects. Reflections on Methodology outlines some considerations.
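The core of the design can be illustrated with a small sketch: fit a regression on each side of the eligibility threshold and read off the jump in outcomes at the cutoff. Everything below is simulated and illustrative; the bandwidth, effect size, and variable names are assumptions for the example, not values from any study.

```python
# A minimal sharp regression discontinuity sketch on simulated data.
# All numbers here are illustrative assumptions, not from any evaluation.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
score = rng.uniform(-1, 1, n)          # eligibility score, cutoff centered at 0
treated = (score >= 0).astype(float)   # sharp design: treatment flips at the threshold
true_effect = 0.5
outcome = 1.0 + 0.8 * score + true_effect * treated + rng.normal(0, 0.3, n)

# Local linear regression within a bandwidth on each side of the cutoff;
# the difference in fitted values at score = 0 is the impact estimate.
h = 0.25
left = (score < 0) & (score > -h)
right = (score >= 0) & (score < h)
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)
rd_estimate = np.polyval(fit_right, 0.0) - np.polyval(fit_left, 0.0)
print(round(rd_estimate, 2))  # recovers an estimate near the simulated effect of 0.5
```

The credibility of the estimate rests on the assumptions the post discusses, such as no manipulation of scores around the cutoff and correct specification near the threshold.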
Schools use individual screening tests to identify students at risk of falling behind in their reading levels. Could predictive analytics, incorporating multiple composite and subsection scores from a series of tests over time, do a better job of identifying at-risk students? Reflections on Methodology gives an example of this approach.
Lessons from the Grameen America Formative Evaluation
Random assignment is prized for its rigor, but it’s not always feasible to carry out. This Reflections on Methodology post outlines other strong options for studying the effects of a program and illustrates the application of some key considerations in a specific context.
Observation tools allow researchers to rate practitioners’ use of a program or curriculum according to the time spent on its components and how they were implemented. Reflections on Methodology explains how researchers and model developers can collaborate to develop useful assessment tools and the benefits and challenges involved.
The new book Randomistas describes how randomized controlled trials (RCTs) have revolutionized many fields. RCTs are a uniquely powerful tool, but they are not the only way to build knowledge about effective programs for low-income people.
By combining prior beliefs about a program’s effectiveness with new data to produce a distribution of impacts, Bayesian statistics provides an alternative to classical methods that may be more useful for policymaking. Reflections on Methodology discusses some issues with this approach and some of its applications.
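In the simplest conjugate case, combining a prior with new data reduces to precision-weighted averaging. The sketch below assumes a normal prior and a normal likelihood; the prior mean, prior spread, and study estimate are made-up numbers chosen only to show the mechanics.

```python
# A minimal sketch of Bayesian updating for a program impact, using a
# normal prior and a normal likelihood (the conjugate normal-normal case).
# All values are illustrative assumptions, not from any evaluation.
prior_mean, prior_sd = 0.0, 0.20   # skeptical prior: impacts are usually near zero
est, se = 0.25, 0.10               # new study's impact estimate and standard error

# The posterior is a precision-weighted blend of the prior and the data.
prior_prec = 1 / prior_sd**2
data_prec = 1 / se**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * est)
post_sd = post_var**0.5
print(round(post_mean, 3), round(post_sd, 3))  # posterior mean 0.2, sd ~0.089
```

The output is a full distribution of impacts rather than a point estimate and p-value, which is what makes the approach attractive for decisions that weigh the probability of effects of different sizes.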
In the second of two posts on the research opportunities presented by school choice systems, Reflections on Methodology discusses a few issues common to lottery-based analyses — constrained statistical power, imperfect compliance, and restricted generalizability.
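One of those issues, imperfect compliance, is commonly handled by rescaling the lottery-based (intent-to-treat) impact by the difference in take-up between winners and losers, the Wald instrumental-variables estimator. The numbers below are illustrative assumptions, not results from any school choice study.

```python
# A minimal sketch of adjusting a lottery-based impact for imperfect
# compliance with the Wald (instrumental-variables) estimator.
# All values are illustrative assumptions, not from any study.
itt = 0.06           # intent-to-treat impact of winning the lottery
takeup_win = 0.80    # enrollment rate among lottery winners
takeup_lose = 0.15   # enrollment rate among lottery losers (crossovers)

# Effect of actual enrollment for compliers = ITT / difference in take-up.
late = itt / (takeup_win - takeup_lose)
print(round(late, 4))  # 0.0923
```

The rescaled estimate applies only to compliers, students whose enrollment was actually moved by the lottery, which is one source of the restricted generalizability the post describes.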
In a randomized controlled trial, measuring treatment contrast – the difference between the services received by the program group and those received under the counterfactual condition – is critical for understanding what a program’s effects suggest about the best ways to improve services. This paper explains why treatment contrast is important and offers guidance on how to measure it.
A two-stage study design can test a complex set of interventions, individually and in combination. Reflections on Methodology shows how this approach was used for a pair of programs, the first administered in preschools and the second implemented as a kindergarten follow-up for individual students.
Multisite randomized trials allow researchers to study both the average impact of an intervention and how the impact varies across settings, which can help guide decisions in policy, practice, and science. This Reflections on Methodology post distills some key considerations for research design and for reporting and interpreting such variation.