In NYC P-TECH Grades 9-14 schools, students take an integrated sequence of high school and college courses, with the goal of earning both a high school diploma and a college degree, while gaining hands-on work experience. This infographic describes the model and introduces MDRC’s evaluation of it.
How a District Might Find a Program That Meets Local Needs
For school districts striving to meet both ESSA requirements and specific educational needs, this infographic shows how evidence can guide decisions. The evaluation of Reading Partners, a one-on-one volunteer tutoring program, serves as an example.
A Primer for Researchers Working with Education Data
Predictive modeling estimates individuals’ probabilities of future outcomes by building and testing a model using data on similar individuals whose outcomes are already known. The method offers benefits for continuous improvement efforts and efficient allocation of resources. This paper explains MDRC’s framework for using predictive modeling in education.
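To make the “build and test” logic concrete, here is a minimal sketch (illustrative only: the feature names, data, and model choice are hypothetical and do not represent MDRC’s actual framework). A model is fit on students whose outcomes are already known, checked on held-out data, and then used to score new students whose outcomes are not yet known.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical features for 1,000 past students
# (e.g., prior GPA, attendance rate, credits earned).
X = rng.normal(size=(1000, 3))
# Hypothetical known outcome: 1 = on-time graduation.
y = (X @ np.array([0.8, 0.6, 0.4]) + rng.normal(size=1000) > 0).astype(int)

# Hold out data to test the model before relying on it,
# mirroring the "build and test" step described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Estimated probabilities for new students whose outcomes are unknown;
# these scores could inform resource allocation decisions.
new_students = rng.normal(size=(5, 3))
print(model.predict_proba(new_students)[:, 1])
```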
An Empirical Assessment Based on Four Recent Evaluations
This reference report, prepared for the National Center for Education Evaluation and Regional Assistance of the Institute of Education Sciences (IES), uses data from four recent IES-funded experimental studies that measured student achievement with both state tests and a study-administered test.
This paper provides practical guidance for researchers who are designing and analyzing studies that randomize schools, where the data have three levels of clustering (students within classrooms within schools), to measure intervention effects on student academic outcomes when information on the middle level (classrooms) is missing.
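As background on the design this paper addresses (an illustrative sketch, not necessarily the paper’s exact model), a three-level random-intercept model for this setting is

$$Y_{ijk} = \mu + \beta T_k + u_k + v_{jk} + \varepsilon_{ijk}, \qquad u_k \sim N(0, \tau_3^2),\quad v_{jk} \sim N(0, \tau_2^2),\quad \varepsilon_{ijk} \sim N(0, \sigma^2),$$

where student $i$ sits in classroom $j$ within school $k$, and $T_k$ indicates whether school $k$ was randomized to treatment. When classroom identifiers are unavailable, the analyst can fit only a two-level model, and the classroom effect $v_{jk}$ is absorbed into a combined student-level residual with variance $\tau_2^2 + \sigma^2$.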
In some experimental evaluations of classroom- or school-level interventions, random assignment is conducted at the student level while the program is delivered at a higher level. This paper clarifies the correct causal interpretation of “program impacts” under this design and discusses its implications and limitations, using a real example to demonstrate the key points.
This paper provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of educational interventions.
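For intuition about why group randomization requires special design care (a textbook result, not drawn from this paper itself): randomizing $J$ groups of $n$ individuals each inflates the variance of the impact estimate, relative to randomizing $Jn$ individuals directly, by the design effect

$$\mathrm{DEFF} = 1 + (n - 1)\,\rho,$$

where $\rho$ is the intraclass correlation, the share of outcome variance that lies between groups. Even a modest $\rho = 0.10$ with $n = 25$ students per school yields a design effect of 3.4, more than tripling the variance and sharply increasing the number of schools needed to detect a given effect.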
This MDRC working paper on research methodology explores two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions.
This MDRC working paper on research methodology provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of interventions on children.
No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention’s effects. This working paper argues that effect sizes should be interpreted using empirical benchmarks — and presents three types in the context of education research.
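As standard background for reading this paper (a conventional definition, not the paper’s own formulation), a standardized effect size is typically computed as

$$\mathrm{ES} = \frac{\bar{Y}_T - \bar{Y}_C}{\sigma},$$

the difference between treatment and control group mean outcomes divided by the (usually pooled within-group) standard deviation of the outcome. The benchmarking argument is that whether an ES of, say, 0.10 is “small” should be judged against empirical reference points, such as typical year-to-year student growth or effects found in similar studies, rather than against one-size-fits-all conventions.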