Multiple testing procedures reduce the likelihood of false-positive findings but can also reduce the probability of detecting true effects. This post introduces two open-source software tools from the Power Under Multiplicity Project that can help researchers plan analyses for randomized controlled trials that use multiple testing procedures.
Detecting Follow-Up Selection Bias in Studies of Postsecondary Education Programs
Meta-analyses pool results from multiple published studies to determine the likely effect of a type of intervention. This post discusses a kind of selection bias that typically leads meta-analyses to overestimate longer-term effects for a range of interventions.
Attempting to Correct for Follow-Up Selection Bias
A companion post discussed a kind of selection bias that typically leads meta-analyses to overestimate longer-term effects for a range of interventions. This post describes how information on short-term outcomes can be used to estimate how much the effects on long-term outcomes are overstated.
In a speech before the Association for Public Policy Analysis and Management Conference on November 7, 2008, Judith M. Gueron, President Emerita and Scholar in Residence at MDRC, accepted the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation.
This MDRC working paper on research methodology explores two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions.
This MDRC working paper on research methodology provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of interventions on children.
This MDRC research methodology working paper examines the core analytic elements of randomized experiments for social research. Its goal is to provide a compact discussion of the design and analysis of randomized experiments for measuring the impact of social or educational interventions.
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.