
This paper, originally published in Evaluation Review, provides researchers with new information about the values of the key design parameters needed for planning randomized controlled trial evaluations of interventions in community colleges.
Lessons Learned from Career Pathways and Child First
Social services programs are increasingly looking to forecast which participants are likely to reach major milestones. Some explore advanced predictive modeling, but the Center for Data Insights (CDI) has found that such methods come with trade-offs. This post outlines CDI’s approach to predictive analytics, using illustrations from two studies.
Multiple testing procedures reduce the likelihood of false positive findings, but can also reduce the probability of detecting true effects. This post introduces two open-source software tools from the Power Under Multiplicity Project that can help researchers plan analyses for randomized controlled trials using multiple testing procedures.
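The post itself introduces the Project's two tools; purely as an illustration of what a multiple testing procedure does, here is a minimal sketch in Python (using statsmodels rather than the Project's own software, with hypothetical p-values) that applies two standard procedures to results from a trial with several outcomes.

```python
# Minimal sketch of multiple testing adjustment -- not the Power Under
# Multiplicity Project's tools. The p-values below are hypothetical.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.021, 0.049, 0.180, 0.740]  # five outcomes, one trial

for method in ("bonferroni", "fdr_bh"):  # Bonferroni and Benjamini-Hochberg
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], list(reject))
```

Both procedures reduce the chance of false positives, but an outcome significant at the 0.05 level before adjustment may no longer be rejected afterward: that is the loss of power these planning tools help researchers anticipate.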
Detecting Follow-Up Selection Bias in Studies of Postsecondary Education Programs
Meta-analyses pool results from multiple published studies to determine the likely effect of a type of intervention. This post discusses a kind of selection bias that typically leads meta-analyses to overestimate the longer-term effects of the interventions under consideration.
Attempting to Correct for Follow-Up Selection Bias
A companion post discussed a kind of selection bias that typically leads meta-analyses to overestimate the longer-term effects of the interventions under consideration. This post describes a way to use information on short-term outcomes to estimate how much the effects on long-term outcomes are overstated.
Semistructured interviews involve an interviewer asking some prespecified, open-ended questions, with follow-up questions based on what the interviewee has to say. This Reflections on Methodology post describes a semistructured interview protocol recently used to explore how children who experience poverty perceive their situations, their economic status, and public benefit programs.
Several jurisdictions have instituted procedures meant to affect the use of bail. To determine whether those policies have had effects, a past trend can be used to extrapolate what would have happened had business continued as usual. This post discusses how researchers did such an extrapolation in Mecklenburg County, North Carolina.
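The Mecklenburg County analysis is described in the post; purely to illustrate the extrapolation idea, here is a minimal sketch with made-up numbers (assuming Python with NumPy): fit a linear trend to the pre-policy period, project it forward as the business-as-usual counterfactual, and compare it with what was actually observed.

```python
# Minimal sketch of trend extrapolation with made-up monthly rates --
# not the Mecklenburg analysis itself.
import numpy as np

months = np.arange(12)          # months 0-7 pre-policy, 8-11 post-policy
rate = np.array([62, 61, 63, 60, 59, 60, 58, 57,   # pre-policy
                 48, 47, 45, 44])                   # post-policy
pre = months < 8

# Fit a linear trend to the pre-policy months only.
slope, intercept = np.polyfit(months[pre], rate[pre], deg=1)

# Extrapolate that trend as the "business as usual" counterfactual.
projected = intercept + slope * months[~pre]
estimated_effect = rate[~pre] - projected
print(np.round(estimated_effect, 1))
```

The gap between the observed post-policy values and the projected trend is the estimated effect; its credibility rests on how plausible the continued-trend assumption is.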
An earlier post in this series discussed considerations for reporting and interpreting cross-site impact variation and for designing studies to investigate such cross-site variation. This post discusses how those ideas were applied to address two broad questions in the Mother and Infant Home Visiting Program Evaluation.
Part I of this two-part post discussed MDRC’s work with practitioners to construct valid and reliable measures of implementation fidelity to an early childhood curriculum. Part II examines how those data can reveal associations between levels of fidelity and gains in children’s academic skills.
Lessons from the Grameen America Evaluation
In any study, there is a tension between research and program needs. Grameen America's group-based microloan model presented particular challenges for random assignment. This Reflections on Methodology post looks at how the research design was adapted to allow a fair test of the program's effectiveness without hampering its ability to operate.