Multisite randomized trials allow researchers to study both the average impact of an intervention and how the impact varies across settings, which can help guide decisions in policy, practice, and science. This Reflections on Methodology post distills some key considerations for research design and for reporting and interpreting such variation.
Social network analysis models the structure of relationships using “nodes” (such as organizations) and “edges” (or ties, such as contracts). This Reflections on Methodology post highlights what the method can analyze — strength and complexity of connections, an organization’s positional power — in the context of a community development study in Chicago.
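The node/edge framing above can be made concrete with a small sketch. This is a minimal pure-Python example with entirely hypothetical organizations and ties (the names, the degree-centrality measure, and the network itself are illustrative assumptions, not data from the Chicago study):

```python
from collections import defaultdict

# Hypothetical ties: each pair is a contracting relationship (an "edge")
# between two organizations (the "nodes").
edges = [
    ("Org A", "Funder X"),
    ("Org B", "Funder X"),
    ("Org A", "Org B"),
    ("Funder X", "City Agency"),
]

# Build an adjacency list, then compute degree centrality: the fraction
# of other organizations each one is directly tied to -- one simple
# indicator of positional power in a network.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

n = len(adjacency)
centrality = {org: len(ties) / (n - 1) for org, ties in adjacency.items()}
most_central = max(centrality, key=centrality.get)
print(most_central)  # the organization tied to the most others
```

Degree centrality is only one of many measures (betweenness or eigenvector centrality capture different notions of power); the point is that once relationships are encoded as nodes and edges, such quantities fall out mechanically.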
The proliferation of school choice systems offers researchers opportunities to study the effects of education reforms on a large scale, rigorously but relatively quickly. In the first of two posts on the subject, Reflections on Methodology discusses how to ensure that a school assignment process is truly random.
An important tenet of building reliable evidence is that study findings can be both reproduced and replicated, and that the methods and data used stand up under scrutiny. This post in the Reflections on Methodology series outlines several ways to ensure credibility in research design and practice.
Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Procedures that make adjustments to address this risk can reduce power, or the probability of detecting effects that do exist. MDRC’s Reflections on Methodology discusses how to estimate power when making such adjustments, as well as alternative definitions of power.
To improve outcomes among high-interest borrowers, policymakers need to understand what drives the use of such loans. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
A Primer for Researchers Working with Education Data
Predictive modeling estimates individuals’ probabilities of future outcomes by building and testing a model using data on similar individuals whose outcomes are already known. The method offers benefits for continuous improvement efforts and efficient allocation of resources. This paper explains MDRC’s framework for using predictive modeling in education.
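The core workflow described above (build a model on individuals whose outcomes are known, then score new individuals) can be sketched in a few lines. This is a toy illustration, not MDRC's framework: the student records, the two predictors, and the hand-rolled logistic regression are all hypothetical assumptions chosen to keep the example self-contained:

```python
import math

# Historical records (hypothetical): (prior GPA, attendance rate) paired
# with a known outcome -- graduated (1) or not (0).
train = [((2.0, 0.80), 0), ((3.5, 0.95), 1), ((3.0, 0.90), 1),
         ((1.8, 0.70), 0), ((3.8, 0.98), 1), ((2.2, 0.75), 0),
         ((2.9, 0.88), 1), ((1.5, 0.65), 0)]

def sigmoid(z):
    # Clamp extreme values to avoid overflow in math.exp.
    if z < -30.0:
        return 0.0
    if z > 30.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

# Fit a logistic model by plain stochastic gradient descent on the log-loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), y in train:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Score a new student whose outcome is not yet known.
p_new = sigmoid(w[0] * 3.2 + w[1] * 0.92 + b)
print(f"estimated graduation probability: {p_new:.2f}")
```

In practice a library estimator with regularization and a held-out test set would replace the hand-written loop; the structure (train on known outcomes, predict probabilities for new cases, validate) is the part that carries over.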
A Guide for Researchers
Conducting multiple statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) counteract this problem but can substantially change statistical power. This paper presents methods for estimating power under multiple definitions and reports empirical findings on how power is affected by the use of MTPs.
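The trade-off between false positives and power can be seen in a short Monte Carlo sketch. All the numbers here are hypothetical assumptions (sample size, effect size, number of outcomes), and the MTP shown is the simple Bonferroni correction, used only as an illustration of how an adjustment shifts the critical value and shrinks individual power:

```python
import math
import random

random.seed(1)

# Hypothetical design: n = 100 per arm, a true effect of 0.3 SD on one
# outcome, and 5 outcomes tested in total.
n, effect, reps = 100, 0.3, 4000
se = math.sqrt(2.0 / n)     # std. error of a difference in means (unit variance)
z_raw = 1.960               # two-sided critical value at alpha = .05
z_bonf = 2.576              # Bonferroni-adjusted critical value, alpha / 5 = .01

hits_raw = hits_bonf = 0
for _ in range(reps):
    # Simulate one estimated impact and its z statistic.
    z = random.gauss(effect, se) / se
    hits_raw += z > z_raw
    hits_bonf += z > z_bonf

power_raw = hits_raw / reps
power_bonf = hits_bonf / reps
print(f"unadjusted power: {power_raw:.2f}")
print(f"Bonferroni power: {power_bonf:.2f}")
```

The adjusted test detects the same true effect noticeably less often, which is the power cost the paper quantifies; it also distinguishes this "individual" power from other definitions, such as the power to detect at least one effect across outcomes.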
Beyond measuring average program impacts, it is important to understand how impacts vary. This paper gives a broad overview of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from variation in program effects across individuals, across subgroups of individuals, and across program sites.
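The distinction between an average impact and cross-site variation in impacts can be illustrated with simulated data. Everything below is a hypothetical assumption (number of sites, sample sizes, the true impact distribution), and the method-of-moments estimator shown is a simple stand-in for the multilevel models the paper discusses:

```python
import random
import statistics

random.seed(2)

# Hypothetical multisite trial: 20 sites, 50 treatment and 50 control
# observations per site, true impacts drawn from N(0.25, 0.15^2).
n_sites, n_per_arm = 20, 50
true_mean_impact, true_impact_sd = 0.25, 0.15

site_estimates, se2 = [], []
for _ in range(n_sites):
    site_impact = random.gauss(true_mean_impact, true_impact_sd)
    treat = [random.gauss(site_impact, 1.0) for _ in range(n_per_arm)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    site_estimates.append(statistics.mean(treat) - statistics.mean(ctrl))
    se2.append(statistics.variance(treat) / n_per_arm
               + statistics.variance(ctrl) / n_per_arm)

avg_impact = statistics.mean(site_estimates)
# Method of moments: the spread of site estimates reflects both true
# impact variation and estimation error, so subtract the average squared
# standard error to isolate the cross-site variance in true impacts.
cross_site_var = max(statistics.variance(site_estimates)
                     - statistics.mean(se2), 0.0)
print(f"average impact: {avg_impact:.2f}")
print(f"estimated cross-site impact SD: {cross_site_var ** 0.5:.2f}")
```

The key idea is that the raw spread of site-level estimates overstates true variation; separating the two is what makes multisite designs informative about where and for whom a program works.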