In New York City's P-TECH Grades 9-14 schools, students take an integrated sequence of high school and college courses, with the goal of completing both, while also gaining hands-on work experience. This infographic describes the model and introduces MDRC's evaluation of it.
A Primer for Researchers Working with Education Data
Predictive modeling estimates individuals' probabilities of future outcomes by building and testing a model with data on similar individuals whose outcomes are already known. The method can support continuous improvement efforts and help allocate resources efficiently. This paper explains MDRC's framework for using predictive modeling in education.
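The core idea above, fitting a model on individuals whose outcomes are known and then scoring individuals whose outcomes are not yet known, can be sketched in a few lines. This is a minimal illustration, not MDRC's method: the GPA values, graduation labels, and the single-feature logistic regression are all hypothetical.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression by plain gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Probability of the outcome (e.g., on-time graduation) given feature x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Historical cohort (hypothetical): 9th-grade GPA and on-time graduation (1/0).
gpa  = [1.8, 2.1, 2.4, 2.6, 2.9, 3.1, 3.4, 3.7]
grad = [0,   0,   0,   1,   0,   1,   1,   1]

w, b = fit_logistic(gpa, grad)
# Score a current student whose outcome is not yet known.
risk = 1.0 - predict(w, b, 2.2)   # estimated probability of NOT graduating on time
```

In practice a model like this would be validated on held-out data before its risk scores were used to target supports, which is the "testing" step the abstract refers to.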
An Empirical Assessment Based on Four Recent Evaluations
This reference report, prepared for the National Center for Education Evaluation and Regional Assistance of the Institute of Education Sciences (IES), uses data from four recent IES-funded experimental studies that measured student achievement with both state tests and a study-administered test.
No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention's effects. This working paper argues that effect sizes should be interpreted against empirical benchmarks, and it presents three types of benchmarks in the context of education research.
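A standardized effect size of the kind discussed above is typically the treatment-control difference in means divided by a pooled standard deviation (Cohen's d). A minimal sketch with hypothetical test-score data:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_T - mean_C) / pooled SD."""
    nt, nc = len(treatment), len(control)
    mt = sum(treatment) / nt
    mc = sum(control) / nc
    vt = sum((x - mt) ** 2 for x in treatment) / (nt - 1)  # sample variances
    vc = sum((x - mc) ** 2 for x in control) / (nc - 1)
    pooled_sd = math.sqrt(((nt - 1) * vt + (nc - 1) * vc) / (nt + nc - 2))
    return (mt - mc) / pooled_sd

# Hypothetical post-test scores for treatment and control students.
scores_t = [72, 75, 78, 80, 83]
scores_c = [70, 71, 74, 76, 79]
d = cohens_d(scores_t, scores_c)
```

The paper's point is that a value like `d` has no universal interpretation; whether it is "large" depends on empirical benchmarks such as typical year-to-year achievement growth or effects found in comparable studies.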
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
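The precision gain from covariate adjustment can be illustrated numerically: when a pretest explains a share R² of the variance in the outcome, roughly only the remaining (1 − R²) share contributes noise to the impact estimate, so the minimum detectable effect shrinks by about a factor of √(1 − R²). A small sketch with hypothetical pretest and posttest scores (not data from the paper):

```python
def r_squared(x, y):
    """R^2 from a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy ** 2) / (sxx * syy)

# Hypothetical school-mean pretest and posttest scores.
pretest  = [48, 52, 55, 60, 63, 67, 70, 74]
posttest = [50, 55, 57, 64, 66, 70, 74, 79]

r2 = r_squared(pretest, posttest)
residual_var_share = 1 - r2   # share of outcome variance left after adjustment
```

Because pretests are usually highly correlated with later achievement, `residual_var_share` tends to be small, which is why pretest adjustment is such a powerful design choice in school-randomized studies.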
Relying on 427 classroom observations conducted over a three-year period, this study traces changes in teachers’ instructional practices in the First Things First schools.