Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Procedures that adjust for this risk can reduce statistical power, that is, the probability of detecting effects that do exist. This post in MDRC’s Reflections on Methodology series discusses how to estimate power when such adjustments are made, as well as alternative definitions of power.
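The power cost of an adjustment is easiest to see with a concrete procedure. Below is a minimal sketch of the Bonferroni correction, one common multiple-comparisons adjustment; the outcome names and p-values are hypothetical illustrations, not results from the post.

    # A minimal sketch of the Bonferroni correction. The outcome names and
    # p-values below are hypothetical illustrations, not MDRC results.
    alpha = 0.05  # desired familywise significance level
    p_values = {
        "earnings": 0.012,
        "employment": 0.030,
        "credential receipt": 0.048,
    }
    # Each individual test is held to a stricter threshold: alpha / number of tests.
    adjusted_alpha = alpha / len(p_values)
    for outcome, p in p_values.items():
        print(f"{outcome}: p = {p:.3f}, significant after adjustment? {p < adjusted_alpha}")

With three outcomes, each test is held to 0.05/3, roughly 0.017, so the latter two hypothetical results would no longer be declared significant; that stricter bar is precisely the loss of power the post addresses.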
To improve outcomes among borrowers who use high-interest loans, policymakers need to understand what drives that usage. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
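The post does not reproduce its analysis code, but a clustering step of the kind such a data discovery process might use can be sketched as follows; the borrower features and cluster count here are hypothetical, not the post’s actual specification.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical borrower-level features (e.g., loan amount, interest rate,
    # number of lenders used); the actual analysis drew on real loan records.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 3))

    # Standardize, then group borrowers into clusters with similar loan behavior.
    scaled = StandardScaler().fit_transform(features)
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
    print(np.bincount(kmeans.labels_))  # size of each discovered cluster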
A Literature Review
Examining the scholarly literature published since a seminal review in 2000, this working paper discusses the principles that underlie project-based learning, how it has been used in K-12 settings, the challenges teachers have confronted in implementing it, and what is known about its effectiveness in improving students’ learning outcomes.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
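As a rough illustration of what such a predictive modeling tool produces, here is a minimal risk-scoring sketch; the client features, outcome, and model choice are assumptions for illustration, not MDRC’s actual pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical client characteristics and a hypothetical adverse outcome.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + rng.normal(size=1000) > 1).astype(int)

    # Fit a simple model and produce graded risk estimates rather than yes/no flags.
    model = LogisticRegression().fit(X, y)
    risk_scores = model.predict_proba(X)[:, 1]
    print(risk_scores[:5].round(2))  # nuanced probabilities providers can act on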
Strategies for Interpreting and Reporting Intervention Effects on Subgroups
This revised paper examines strategies for interpreting and reporting estimates of intervention effects for subgroups of a study sample. Specifically, the paper considers why and how subgroup findings are important for applied research, the importance of prespecifying subgroups before analyses are conducted, and the value of using existing theory and prior research to distinguish subgroups for which study findings are confirmatory from those for which they are exploratory.
Howard Bloom’s Remarks on Accepting the Peter H. Rossi Award
In a speech before the Association for Public Policy Analysis and Management Conference on November 5, 2010, Howard Bloom, MDRC’s Chief Social Scientist, accepted the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation.
This paper is the first step in a study of how instrumental variables analysis can be used with randomized trials to estimate the effects of settings on individuals. The goal of the study is to examine the strengths and weaknesses of the approach and to present them in ways that are broadly accessible to applied quantitative social scientists.
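For readers new to the method, the canonical instrumental variables estimator in a randomized trial with imperfect compliance (the Wald estimator) can be written as follows; the notation is standard textbook notation, not drawn from the paper itself:

\[
\hat{\beta}_{IV} = \frac{E[Y \mid Z = 1] - E[Y \mid Z = 0]}{E[D \mid Z = 1] - E[D \mid Z = 0]},
\]

where Z is random assignment, D indicates the setting an individual actually experiences, and Y is the outcome. Random assignment serves as the instrument: it shifts exposure to the setting without directly affecting the outcome.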
In some experimental evaluations of classroom- or school-level interventions, random assignment is conducted at the student level while the program is delivered at the classroom or school level. This paper clarifies the correct causal interpretation of “program impacts” under this design and discusses its implications and limitations, using a real example to demonstrate the key points.
What Do We Know and What Do We Need to Know?
This working paper, prepared for a conference sponsored by the Institute for Research on Poverty at the University of Wisconsin-Madison, reviews evidence about the effectiveness of two strategies to strengthen family relationships and fathers’ involvement with their children: fatherhood programs aimed at disadvantaged noncustodial fathers and relationship skills programs for parents who are together.
The Policy and Practice of Assessing and Placing Students in Developmental Education Courses
This paper reports on case studies conducted at three community colleges to learn how the colleges assess students for placement in developmental education courses. The case studies identify several problems and challenges, including a lack of consensus about the standard for college-level work, the high-stakes nature of the assessments, and the minimal relationship between assessment for placement and diagnosis for instruction.