Multiple testing procedures reduce the likelihood of false positive findings, but can also reduce the probability of detecting true effects. This post introduces two open-source software tools from the Power Under Multiplicity Project that can help researchers plan analyses for randomized controlled trials using multiple testing procedures.
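As a rough illustration of the trade-off (not the Power Under Multiplicity tools themselves), the sketch below implements the Holm step-down procedure, a common multiple testing correction that controls the familywise error rate while giving up somewhat less power than a plain Bonferroni adjustment:

```python
# Illustrative sketch only: the Holm step-down adjustment for a family
# of p-values. Hypothetical inputs; this is not the project's software.

def holm_adjust(p_values):
    """Return Holm-adjusted p-values in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * p_values[idx])
        running_max = max(running_max, adj)  # keep adjusted p-values monotone
        adjusted[idx] = running_max
    return adjusted

# Four hypothetical outcome tests; compare adjusted values to 0.05.
print(holm_adjust([0.01, 0.04, 0.03, 0.20]))
```

Note that the smallest raw p-value (0.01) survives adjustment at the 0.05 level, while the others do not, showing how the correction trades away some detections to limit false positives.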
Detecting Follow-Up Selection Bias in Studies of Postsecondary Education Programs
Meta-analyses pool results from multiple published studies to determine the likely effect of a type of intervention. This post discusses a kind of selection bias that typically leads meta-analyses to overestimate longer-term effects for a range of interventions under consideration.
Attempting to Correct for Follow-Up Selection Bias
A companion post discussed a kind of selection bias that typically leads meta-analyses to overestimate longer-term effects for a range of interventions under consideration. This post describes a way to use information on short-term outcomes to estimate how much the effects on long-term outcomes are overstated.
Semistructured interviews involve an interviewer asking some prespecified, open-ended questions, with follow-up questions based on what the interviewee has to say. This Reflections on Methodology post describes a semistructured interview protocol recently used to explore how children who experience poverty perceive their situations, their economic status, and public benefit programs.
Several jurisdictions have instituted procedures meant to affect the use of bail. To determine whether those policies have had effects, a past trend can be used to extrapolate what would have happened had business continued as usual. This post discusses how researchers did such an extrapolation in Mecklenburg, North Carolina.
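The general idea of trend extrapolation can be sketched in a few lines (with entirely hypothetical data, not the Mecklenburg analysis itself): fit a trend to pre-policy periods, project it forward as a "business as usual" counterfactual, and compare post-policy observations to that projection:

```python
# Toy sketch of trend extrapolation; all numbers are hypothetical.

def fit_linear_trend(xs, ys):
    """Ordinary least squares fit of ys = a + b*xs; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Pre-policy outcome (e.g., a detention rate), measured over six quarters.
pre_quarters = [0, 1, 2, 3, 4, 5]
pre_rate = [0.50, 0.49, 0.47, 0.46, 0.44, 0.43]

a, b = fit_linear_trend(pre_quarters, pre_rate)

# Extrapolated "business as usual" benchmark for post-policy quarters 6-8.
counterfactual = [a + b * q for q in (6, 7, 8)]
observed = [0.35, 0.33, 0.32]  # hypothetical post-policy observations

# Estimated policy effect: observed minus extrapolated trend.
effects = [obs - cf for obs, cf in zip(observed, counterfactual)]
```

In this toy example the observed rates fall below the extrapolated trend, so the estimated effects are negative; the real analysis must also grapple with whether the pre-policy trend would truly have continued.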
An earlier post in this series discussed considerations for reporting and interpreting cross-site impact variation and for designing studies to investigate such cross-site variation. This post discusses how those ideas were applied to address two broad questions in the Mother and Infant Home Visiting Program Evaluation.
Part I of this two-part post discussed MDRC’s work with practitioners to construct valid and reliable measures of implementation fidelity to an early childhood curriculum. Part II examines how those data can reveal associations between levels of fidelity and gains in children’s academic skills.
Lessons from the Grameen America Evaluation
In any study, there is a tension between research and program needs. This program’s group-based microloan model presented particular challenges for random assignment. This Reflections on Methodology post looks at how the research design was adapted to allow a fair test of the program’s effectiveness without hampering its ability to operate.
As an alternative to random assignment, a regression discontinuity design takes advantage of situations where program eligibility is determined by whether a score exceeds a threshold. With careful attention to assumptions, analysis, and interpretation, this quasi-experimental design can provide rigorous estimates of program effects. Reflections on Methodology outlines some considerations.
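A minimal sketch of a sharp regression discontinuity estimate, under simplifying assumptions (a known cutoff, hypothetical data, and a plain linear fit on each side rather than the local-polynomial, bandwidth-selected fits used in practice): the estimated program effect is the jump between the two fitted lines at the threshold.

```python
# Illustrative sharp RD estimate; data and cutoff are hypothetical.

def ols(xs, ys):
    """Ordinary least squares: ys = a + b*xs; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rd_estimate(scores, outcomes, cutoff):
    """Jump between the two fitted lines' predictions at the cutoff."""
    left = [(s, y) for s, y in zip(scores, outcomes) if s < cutoff]
    right = [(s, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    a_l, b_l = ols([s for s, _ in left], [y for _, y in left])
    a_r, b_r = ols([s for s, _ in right], [y for _, y in right])
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)

# Hypothetical data: outcome rises with score, plus a jump of about 2
# at the eligibility cutoff of 50.
scores = [40, 42, 44, 46, 48, 52, 54, 56, 58, 60]
outcomes = [10.0, 10.2, 10.4, 10.6, 10.8, 13.2, 13.4, 13.6, 13.8, 14.0]
print(round(rd_estimate(scores, outcomes, 50), 2))  # jump of about 2
```

The validity of the estimate rests on the assumptions the post discusses: outcomes would have varied smoothly through the cutoff absent the program, and units cannot precisely manipulate their scores around it.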
Schools use individual screening tests to identify students at risk of falling behind in their reading levels. Could predictive analytics, incorporating multiple composite and subsection scores from a series of tests over time, do a better job of identifying at-risk students? Reflections on Methodology gives an example of this approach.
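A toy version of this predictive approach (hypothetical data and a deliberately simple model, not the post's actual analysis) fits a logistic regression that combines several screening scores into one risk prediction:

```python
# Illustrative only: logistic regression fit by gradient descent,
# combining several hypothetical screening scores to predict risk.
import math

def fit_logistic(features, labels, lr=0.1, steps=5000):
    """Fit weights (plus intercept) by gradient descent on log-loss."""
    n_feat = len(features[0])
    w = [0.0] * (n_feat + 1)  # w[0] is the intercept
    for _ in range(steps):
        grad = [0.0] * (n_feat + 1)
        for x, y in zip(features, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            grad[0] += err
            for j, xj in enumerate(x):
                grad[j + 1] += err * xj
        w = [wi - lr * g / len(labels) for wi, g in zip(w, grad)]
    return w

def predict(w, x):
    """Predicted probability of being at risk."""
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Each row: (fall composite, winter composite, fluency subscore), standardized.
X = [(-1.2, -1.0, -0.8), (-0.9, -1.1, -1.0), (-0.2, 0.1, 0.0),
     (0.5, 0.4, 0.6), (1.0, 1.1, 0.9), (1.3, 1.2, 1.4)]
y = [1, 1, 0, 0, 0, 0]  # 1 = later identified as falling behind

w = fit_logistic(X, y)
```

The point of the exercise is that a model weighting several scores jointly can, in principle, rank students' risk more accurately than a single test's cut score.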