Multiple Measures Placement Using Data Analytics

An Implementation and Early Impacts Report

Elisabeth A. Barnett, Peter Bergman, Elizabeth Kopko, Vikash Reddy, Clive Belfield, Susha Roy

Many incoming college students are referred to remedial programs in math or English based on scores they earn on standardized placement tests. Yet research shows that some students assigned to remediation based on test scores would likely succeed in a college-level course in the same subject area without first taking a remedial course if given that opportunity. Research also suggests that other measures of student skills and performance, and, in particular, high school grade point average (GPA), may be useful in assessing college readiness.

The Center for the Analysis of Postsecondary Readiness (CAPR) — a project of the Community College Research Center and MDRC — is conducting a random assignment study of a multiple measures placement system based on data analytics to determine whether it yields placement determinations that lead to better student outcomes than a system based on test scores alone. Seven community colleges in the State University of New York (SUNY) system are participating in the study.

The alternative placement system we evaluate uses data on prior students to weight multiple measures — including both placement test scores and high school GPAs — in predictive algorithms developed at each college that are then used to place incoming students into remedial or college-level courses. Over 13,000 incoming students who arrived at these colleges in the fall 2016, spring 2017, and fall 2017 terms were randomly assigned to be placed using either the status quo placement system (the control group) or the alternative placement system (the program group). The three cohorts of students will be tracked through the fall 2018 term, resulting in the collection of three to five semesters of outcomes data, depending on the cohort.
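The placement rule described above can be sketched as a simple logistic model that weights high school GPA and a placement test score to predict success in the college-level course. The coefficients and probability cutoff below are illustrative placeholders only; each participating college estimated its own weights from data on its prior students, and those values are not reported here.

```python
import math

# Hypothetical weights for illustration; the colleges in the study
# estimated their own coefficients from prior-student records.
WEIGHTS = {"intercept": -6.0, "hs_gpa": 1.6, "test_score": 0.02}
CUTOFF = 0.5  # assumed probability threshold for college-level placement


def predicted_success(hs_gpa: float, test_score: float) -> float:
    """Logistic estimate of the chance of passing the college-level course."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["hs_gpa"] * hs_gpa
         + WEIGHTS["test_score"] * test_score)
    return 1.0 / (1.0 + math.exp(-z))


def place(hs_gpa: float, test_score: float) -> str:
    """Place a student based on the predicted probability of success."""
    p = predicted_success(hs_gpa, test_score)
    return "college-level" if p >= CUTOFF else "remedial"
```

Under this kind of rule, a student with a strong GPA can place into the college-level course despite a modest test score, which is how the program group's placements can diverge from those of a test-only system.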

This interim report, the first of two, examines implementation of the alternative placement system at the colleges and presents first-term impacts for 4,729 students in the fall 2016 cohort. The early findings are promising:

  • While implementing the alternative system was more complex than expected, every college developed the procedures that were required to make it work as intended.

  • Many program group students were placed differently than they would have been under the status quo placement system. In math, 14 percent of program group students placed higher than they would have under a test-only system (i.e., in college-level), while 7 percent placed lower (i.e., in remedial). In English, 41.5 percent placed higher, while 6.5 percent placed lower.

  • Program group students were more likely than control group students to both enroll in and complete (with a grade of C or higher) a college-level course in the first term: by 3.1 percentage points in math and 12.5 percentage points in English. (Among the control group, these enrollment-and-completion rates were 14.1 percent in math and 27.2 percent in English.)

  • Women appeared to benefit more than men from program group status in math on college-level course placement, enrollment, and completion (with a grade of C or higher). Black and Hispanic students appeared to benefit more than White students from program group status in English on college-level course placement and enrollment, but not on completion (with a grade of C or higher).

  • Implementation of the alternative system added roughly $110 per student to status quo fall-term costs for testing and placing students at the colleges; ongoing costs in the subsequent fall term were roughly $40 per student above status quo costs.

The final report, to be released in 2019, will examine a range of student outcomes for all three cohorts, including completion of introductory college-level courses, persistence, and the accumulation of college credits over the long term.