To improve outcomes among high-interest borrowers, policymakers need to understand what is driving the use of these loans. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
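A data discovery step like the one described above often involves grouping cases with a clustering algorithm. The sketch below shows that general idea with k-means; the file name, borrower features, and number of clusters are hypothetical placeholders, not the variables or methods MDRC actually used.

```python
# Minimal sketch of a data discovery step: grouping borrowers into clusters.
# Feature names, file name, and the cluster count are hypothetical, not MDRC's actual choices.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

borrowers = pd.read_csv("borrowers.csv")            # hypothetical input file
features = borrowers[["num_loans", "avg_apr", "months_in_debt"]]

scaled = StandardScaler().fit_transform(features)   # put features on a common scale
kmeans = KMeans(n_clusters=4, random_state=0, n_init=10).fit(scaled)

borrowers["cluster"] = kmeans.labels_
print(borrowers.groupby("cluster")[["avg_apr", "months_in_debt"]].mean())
```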
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
A Primer for Researchers Working with Education Data
Predictive modeling estimates individuals’ probabilities of future outcomes by building and testing a model using data on similar individuals whose outcomes are already known. The method offers benefits for continuous improvement efforts and efficient allocation of resources. This paper explains MDRC’s framework for using predictive modeling in education.
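As a rough illustration of the workflow described above (fit a model on records whose outcomes are known, check it on held-out records, then score new individuals), here is a minimal sketch. The data files, features, outcome variable, and choice of logistic regression are illustrative assumptions, not the framework's prescriptions.

```python
# Minimal predictive modeling sketch: fit on known outcomes, validate, then score new cases.
# File names, feature names, and the outcome column are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

history = pd.read_csv("prior_cohorts.csv")            # individuals whose outcomes are known
X = history[["gpa", "credits_attempted", "absences"]]
y = history["graduated"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Check out-of-sample performance before using the model to allocate resources.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Estimate probabilities for a new cohort whose outcomes are not yet known.
new_cohort = pd.read_csv("current_cohort.csv")
new_cohort["p_graduate"] = model.predict_proba(
    new_cohort[["gpa", "credits_attempted", "absences"]])[:, 1]
```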
A Guide for Researchers
Conducting multiple statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) counteract this problem but can substantially reduce statistical power. This paper presents methods for estimating several definitions of statistical power and reports empirical findings on how power is affected by the use of MTPs.
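To make the trade-off concrete, the sketch below applies one standard multiple testing procedure (Holm's step-down method, one of several options the literature offers) to a family of p-values; the p-values themselves are invented for illustration.

```python
# Sketch: adjusting a family of p-values with a multiple testing procedure (Holm's method).
# The p-values are invented for illustration only.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.030, 0.045, 0.20]   # one per outcome tested

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for p, p_adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.3f}  adjusted p = {p_adj:.3f}  significant after MTP: {rej}")
# Fewer hypotheses clear the 0.05 bar after adjustment; that loss is the power cost the paper quantifies.
```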
Beyond measuring average program impacts, it is important to understand how impacts vary. This paper gives a broad overview of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from variation in program effects across individuals, across subgroups of individuals, and across program sites.
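One common way to summarize cross-site variation in a multisite trial is a mixed model with a site-specific random treatment effect. The sketch below shows that general idea with hypothetical column names; it is one possible estimator, not necessarily the specific approach the paper develops.

```python
# Sketch: estimating cross-site variation in treatment impacts with a random-slope model.
# Column names (site, treat, outcome) and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("multisite_trial.csv")   # one row per individual

# Random intercept and random treatment slope by site:
model = smf.mixedlm("outcome ~ treat", data=df, groups=df["site"], re_formula="~treat")
result = model.fit()
print(result.summary())
# The variance of the random 'treat' slope summarizes how much impacts differ across sites.
```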
Lessons from a Simulation Study
This paper contributes to the literature on multiple-rating regression discontinuity designs (MRRDDs), offering concrete recommendations for choosing among existing MRRDD estimation methods, for implementing the chosen method using local linear regression, and for obtaining accurate statistical inferences.
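For a single rating variable, the local linear step can be sketched as a kernel-weighted regression within a bandwidth around the cutoff. The bandwidth, triangular kernel, and variable names below are illustrative choices, not the paper's specific recommendations.

```python
# Sketch: local linear regression discontinuity estimate for one rating variable at a known cutoff.
# Bandwidth, kernel, file name, and column names are illustrative, not the paper's prescriptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rd_sample.csv")      # columns: rating, outcome (hypothetical)
cutoff, bandwidth = 0.0, 5.0

sample = df[np.abs(df["rating"] - cutoff) <= bandwidth].copy()
sample["centered"] = sample["rating"] - cutoff
sample["above"] = (sample["centered"] >= 0).astype(int)
sample["weight"] = 1 - np.abs(sample["centered"]) / bandwidth   # triangular kernel

# Separate slopes on each side of the cutoff; the 'above' coefficient is the estimated jump.
X = sm.add_constant(sample[["above", "centered"]].assign(
    interact=sample["above"] * sample["centered"]))
fit = sm.WLS(sample["outcome"], X, weights=sample["weight"]).fit()
print("Estimated jump at the cutoff:", fit.params["above"])
```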
This report provides recommendations for an evaluation of coaching intended to improve teacher and classroom practices in Head Start and other early childhood settings, covering the research questions; the design of the impact study, implementation research, and cost analysis; and the logistical challenges of carrying out the design.
Design Options for an Evaluation of Head Start Coaching
Using a study of coaching in Head Start as an example, this report reviews potential experimental design options that get inside the “black box” of social interventions by estimating the effects of individual components. It concludes that factorial designs are usually most appropriate.
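Because a factorial design crosses each program component with the others, a single regression on component indicators (and, if desired, their interactions) recovers each component's effect. The sketch below illustrates this for a 2x2 design with hypothetical components and data.

```python
# Sketch: analyzing a 2x2 factorial experiment with two program components.
# Component names (coaching, feedback) and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("factorial_trial.csv")   # columns: coaching (0/1), feedback (0/1), outcome

# Main effects of each component plus their interaction:
fit = smf.ols("outcome ~ coaching * feedback", data=df).fit()
print(fit.params[["coaching", "feedback", "coaching:feedback"]])
```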
In many evaluations, individuals are randomly assigned to experimental arms and then grouped to receive services. In this situation, the grouping may need to be taken into account when estimating the standard error of the impact estimate. This paper demonstrates that nonrandom sorting of individuals into groups can bias the standard error reported by common estimation approaches.
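A common way to account for such grouping is a cluster-robust variance estimator. The sketch below contrasts conventional and cluster-robust standard errors on a simple impact regression; the column names and data file are hypothetical, and this illustrates the general adjustment rather than the paper's specific analysis.

```python
# Sketch: conventional vs. cluster-robust standard errors for an impact estimate.
# Column names (treat, outcome, service_group) and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluation_sample.csv")

ols = smf.ols("outcome ~ treat", data=df).fit()
clustered = smf.ols("outcome ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["service_group"]})

print("Conventional SE on impact:  ", ols.bse["treat"])
print("Cluster-robust SE on impact:", clustered.bse["treat"])
```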
This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. The paper looks at the internal validity and precision of these two designs, using the example of the federal Reading First program as implemented in a midwestern state.
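As a reference point for the DD design, the canonical two-group, two-period estimator is the coefficient on the interaction between a treated-group indicator and a post-period indicator. The sketch below is generic and uses hypothetical column names; it is not the Reading First analysis itself.

```python
# Sketch: a basic two-group, two-period difference-in-differences regression.
# Column names and the data file are hypothetical; this is not the Reading First analysis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("school_panel.csv")   # columns: score, treated (0/1), post (0/1)

fit = smf.ols("score ~ treated * post", data=df).fit()
print("DD impact estimate:", fit.params["treated:post"])
```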