Policymakers talk about solutions, but which ones really work? MDRC’s Evidence First podcast features experts—program administrators, policymakers, and researchers—talking about the best evidence available on education and social programs that serve people with low incomes.

Latest Episode

Leigh Parise: Policymakers talk about solutions, but which ones really work? Welcome to Evidence First, a podcast from MDRC that explores the best evidence available on what works to improve the lives of people with low incomes. I’m your host, Leigh Parise.

This year, MDRC is celebrating its 50th anniversary. For those of you who don’t know, MDRC was founded in 1974 with just a handful of people in a tiny office in New York City. Since then, we’ve grown into an organization with more than 300 talented staff and four offices across the country. To commemorate our 50th, we’re having conversations with some of our long-standing partners, people we’ve been really lucky to work and grow with over the years. The Center for the Analysis of Postsecondary Readiness, or CAPR, for instance, was started in 2014 as a collaboration between MDRC and longtime partner, the Community College Research Center, or CCRC, to research the effectiveness of developmental education reforms and to understand their implications for equity.

Today, we’re joined by Nikki Edgecombe, a senior research scholar at CCRC who leads CAPR (and, I will say, is very awesome), and Mike Weiss, senior fellow in postsecondary education at MDRC (also very great), to discuss what we’ve learned about equity and developmental education reform. I’m especially excited to have you both here because CAPR is also celebrating its 10th anniversary. We’re just celebrating all around. Nikki, Mike, welcome.

Mike Weiss: Thank you, Leigh.

Nikki Edgecombe: Thank you.

Leigh Parise: A little bit of context for our listeners. Developmental education, also known as remedial education, refers to courses that some entering college students will have to take if they are deemed unprepared for college-level courses. The goal is to help students improve academically and help them succeed in college, but studies have shown that developmental education can actually hinder students’ progress in college.

There’s also the question of equity. Students of color, adults, first-generation students, and those from low-income backgrounds are disproportionately placed in developmental education programs, so there’s a lot of interest currently among policymakers, college practitioners, and researchers in reforming developmental education programs to address these challenges and really focus on supporting more equitable outcomes for students. Nikki, I would love to start with you. It would be great if you can tell me what you’ve learned over the past several years about developmental education and how reforms to developmental education can improve academic outcomes for students.

Nikki Edgecombe: There’s been a lot of knowledge about DevEd, as we like to call it for short, generated over the last couple of decades. First, I think it’s important to acknowledge that the groundswell of concern about developmental education really began at the practitioner level, so credit to these institutions that began to recognize that there was a problem. They saw that a relatively small number of students were progressing through these multicourse, multisemester sequences of remedial math, reading, and writing, and they began to try to better understand what was happening and develop solutions. Researchers then were able to learn from them and begin to study and evaluate these various reforms. I think it’s always important to give credit to the practitioners who really began this change movement.

Back to what we’ve learned. I think I frame it in a couple of ways. I think we’ve learned the nature of the problem. We had some research early on, around 2010-ish, that let us see that students were essentially exiting these multicourse, multisemester sequences before completing them. And that gave us a better understanding of what was going on. We also learned from some causal studies that were happening around the same time that the subsequent academic outcomes for students who were referred to developmental education were no better than, and in some cases worse than, those of similar students who were not assigned to DevEd. We learned the nature of the problem, and we learned more about the consequences of the traditional system.

I think it’s also important to acknowledge that we pay a lot of attention to the actual courses, but in 2012 Judy Scott-Clayton published an important paper that looked at our assessment and placement policies. It’s really important to think of DevEd as a system. Obviously students receive instruction, but they have to be placed into those courses one way or another. Judy’s work was very helpful in understanding the predictive validity of these tests and the extent to which they helped us predict student outcomes, particularly in the introductory college-level course.

Those early years, I think, really set the stage for a set of reforms, which CAPR, CCRC, [and] MDRC [have] been, in many cases, lucky enough to study. And I would say those reforms tended to try to accomplish a couple of things. One is to reduce the likelihood that students are exiting the pathway toward college-level coursework, so they might compress the sequence. We also saw an effort to accelerate students’ progress to the college-level courses. There were a lot of ways to think about “Let’s not give them any more coursework than we necessarily need to.” An example of that would be modularized math reforms, which became quite popular for a while. The results on evaluations of those reforms were mixed.

But where I think we saw the most pronounced reform was in the evaluations of the Accelerated Learning Program at the Community College of Baltimore County. That reform, which essentially allowed students to coenroll in the college-level course alongside a support structure—a support course, a lab, or something along those lines—showed strong promise in descriptive studies. And over the past 13 years or so, that reform has become known as the corequisite model. Even some recent survey research by MDRC suggests that upwards of 75 . . . 78 percent of colleges (community colleges, two-year colleges) are using that reform model. The ALP or corequisite model, I think, really shifted our vision toward understanding how we could improve and support students who were deemed academically underprepared.

Leigh Parise: Great. Thank you so much. I have to admit that I almost laughed when we decided to ask you to tell us about everything we’ve learned in this space. It’s not a small ask, but you’ve summarized it so well. I also, I have to say, really appreciate you giving credit to the practitioners themselves. Research is always in a better position to inform practice when it’s addressing real challenges that colleges and students are facing directly, so grounding us in that I think is really critical.

Let’s shift a little bit to the next question, which is really about issues of equity. There’s a lot of interest in issues of equity in education, and specifically in reducing racial inequality in academic outcomes. I hope nobody would disagree that that’s important, but conceptually, how can we think about reducing racial inequality in academic outcomes? And then I think the next question would be how you think about this relating to DevEd reform. Mike, do you want to kick us off there?

Mike Weiss: Sure. Yeah. Thanks, Leigh. I think there’s been a lot of great research in general out there about opportunity gaps—that is, the idea that different racial groups have different access to certain things that can be beneficial to them. And that happens at the college level, but also prior to even attending college, which can lead to these differential rates of participation in developmental education. It’s things like different access to rigorous high school curriculum, including access to AP courses. And then there’s also oftentimes differential take-up of interventions. Even when services are available, sometimes there are differences in the rates at which people take up interventions, in part because of how outreach is conducted or other factors. This can result in opportunity gaps where different groups are actually participating in programs, which hopefully are designed to help, at different rates. And so there are some people who have thought a lot about that, and then there are other groups of people (I tend to follow this group) who are oftentimes doing evaluations of the effectiveness of programs. And we’re trying to understand, Does this particular intervention work or not?

There’s been a lot of this great research about the effects of different interventions too, and especially in the last two decades or so, that research in community colleges and actually in postsecondary in general has grown dramatically. And I think an important step now for researchers, and then this will apply over to practitioners as well, is to sort-of begin to marry these two ways of thinking. . . . In order to understand interventions’ effects on racial inequality, we need to understand both the extent to which there are disparities in participation rates as well as the extent to which these interventions are having differential impacts on the students that are actually being served by them.

And I’ll just give you two examples that can maybe help think through this. One of them, MDRC did this study a while ago on performance-based scholarships. The details of the intervention in some ways aren’t super important, but they were offering a financial incentive for meeting certain academic benchmarks. And there was a bit of advising as part of this particular evaluation. When we did the evaluation, people were interested, I think, to some extent in understanding both the overall effects of the intervention—that was sort-of primary—and then also people wanted to make sure that there was either no harm or maybe even hopefully a reduction in racial disparities in outcomes.

So what we did, and a lot of researchers do this when they’re studying a particular program, is we said, “Well, let’s look at the impact of the program by race.” In this particular context, it was primarily White and Black students. We looked at the impact for White students, we looked at the impact for Black students. In this case, it was actually a pretty effective intervention in terms of at least increasing retention rates. It got, I think, both groups to be retained in school at a higher rate than their control group counterparts. This was an experiment, so there was a control group and a program group that got offered the services. And the program group was having better retention rates for both White students and also for Black students than they would’ve been expected to have in the absence of the program.

And in this case also, it was true that those effects were about the same size—about 10 percentage points. So if you just said, “Well, what was the gap without this intervention?” It was pretty big in terms of size. There was a lot of racial inequality in outcomes at this school. And then you said, “Well, if you look after the intervention, both groups did about 10 percentage points better,” so the gap remained exactly the same. So your conclusion might’ve been, “All right, I guess this thing was great, it helped everyone—but it didn’t actually do anything in terms of reducing racial inequality in outcomes.”

The thing that is oftentimes not done, which is what I’d encourage evaluation researchers to do more often, is to look a step further and ask, “Who is being served by this particular intervention?” It actually turned out that at this school, among all incoming students, the rates of participation were much higher among the Black students than the White students. In this case, if you looked just narrowly at the people who participated in the program, you’d say, “Actually, nothing happened in terms of reductions in racial inequality.” But because Black students were served disproportionately, for the full school there was actually a reduction in racial inequality. And so this points to the idea that we can’t focus only on impacts, because we’d miss that story. But we also can’t focus only on participation rates, because disproportionate participation doesn’t really matter if the intervention doesn’t work at all. You’ve got to know if it works, and you’ve got to know what the participation rates were.
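Mike’s arithmetic here can be sketched with hypothetical numbers. The baseline retention rates and participation rates below are illustrative only, not figures from the performance-based scholarship study:

```python
# Illustrative sketch, using made-up numbers. Among participants, both
# groups gain the same 10 percentage points, so the gap *among
# participants* is unchanged. But if Black students participate at a
# higher rate, the school-wide gap still narrows.

def school_wide_rate(participation, base_rate, impact):
    """Blend participants (who receive the impact) with non-participants."""
    return participation * (base_rate + impact) + (1 - participation) * base_rate

# Hypothetical baseline retention: White 70%, Black 50% (a 20-point gap).
# Hypothetical participation: 20% of White students, 40% of Black students.
white = school_wide_rate(participation=0.20, base_rate=0.70, impact=0.10)
black = school_wide_rate(participation=0.40, base_rate=0.50, impact=0.10)

gap_before = 0.70 - 0.50   # 0.20: the gap with no program at all
gap_after = white - black  # 0.18: same per-participant impact, smaller gap
```

With equal participation rates the school-wide gap would be unchanged; it narrows here only because participation is disproportionate, which is exactly why both numbers are needed.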

But then when you think of, as a second example, developmental education as a whole, as was alluded to at the beginning of this podcast, participation rates in DevEd are also disproportionate. If you look at all incoming students, racialized minorities are overrepresented in DevEd. And the current thinking now is actually that DevEd is harmful. In this case, you don’t want to be disproportionately represented in it, because it appears that you’re almost better off, or you may actually be better off, being referred directly to college-level classes and not having to go through this sequence at all.

These are two examples of where it’s important for us to consider both who ends up participating in interventions and how they demographically compare to some bigger population of interest, as well as what are the effects of the intervention, both overall and then also broken out in terms of racial groups, if we want to fully understand the effects on racial inequality in outcomes of these types of interventions.

Leigh Parise: Great. Thank you so much, Mike. I don’t know, Nikki, do you want to jump in and add anything to what Mike was just talking through?

Nikki Edgecombe: No, but can I ask him a question?

Leigh Parise: Yes. We love that.

Nikki Edgecombe: Mike, given your first example, what needs to change in terms of evaluation design?

Mike Weiss: Well, I think one thing that we could all do as a first-order thing is if you’re doing a study, an evaluation where you’re trying to consider the effects of an intervention on racial inequality in outcomes, a first-order analysis would be to simply look at the demographic characteristics of the students in the program under study. Now, that almost always happens. But then also consider how they relate to a broader population of interest.

Now, if your study is done at a college and every single student is served, then those will be the same thing probably. But most of the time, that’s not the case. Most of the time, some subset of students are served. I think it’s probably of interest to say, "Well, what did the whole college look like?" (Or maybe all incoming freshmen or something like this.) Then, “What did the subpopulation that were actually served by the intervention look like demographically?”

For example, let’s say you’re doing a study at Tyler Junior College and you find out that 20 percent of the students in some brand new exciting program are Hispanic. You then might want to say to yourself, "Okay, but the overall population at Tyler is 26 percent Hispanic, so the Hispanic students are underrepresented." Therefore, already the program is starting behind with respect to its potential to reduce racial inequality in outcomes among the fuller population at Tyler. In contrast, if you found out that . . . 40 percent of the students being served by the intervention were Hispanic, then it has a much greater potential to reduce racial inequality in outcomes among the full population at Tyler.

Relatedly, I think it’s also worth probing, when you look at this kind of thing, how this disproportional representation in the program comes about. Again, if you look at Tyler’s whole population, it’s 26 percent Hispanic. Most interventions have some eligibility criteria. Another thing you could look at is to say, "Okay, the college as a whole is 26 percent Hispanic. What percentage of the eligible population is Hispanic?" Because that might give you a hint. Is the reason that there’s disproportional representation, whether over- or underrepresentation, the criteria that you’ve put on to enter the program? Maybe you’ve got a GPA criterion or a financial aid eligibility criterion. Different eligibility criteria will result in different types of populations even being eligible in the first place.

Then after you move from everyone at the school to who is eligible, oftentimes there can be shifts between who’s eligible and who actually participates. It’d be great to have all of those numbers. Who’s at the school? Who’s eligible? Then who actually participates? And that has implications for practitioners, because part of what’s going to lead to a gap between who’s eligible and who participates might have something to do with how you’re reaching out to students in the first place. If you find that 20 percent of your students who are eligible are Hispanic, but participants are only 10 percent Hispanic, well, you’ve got a problem with your recruitment strategies. How are you getting people to actually join the program? That would definitely be a first-order thing I’d like to see a lot more evaluators doing.
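The funnel Mike describes (whole school, then eligible students, then actual participants) can be sketched as a quick check. The shares below are made up, echoing the hypothetical Tyler Junior College numbers:

```python
# Sketch of the school -> eligible -> participant funnel, with made-up
# shares. A representation ratio below 1.0 means the group is
# underrepresented at that stage relative to the whole college.

funnel = {
    "whole college": 0.26,  # Hispanic share of all students
    "eligible": 0.20,       # Hispanic share of program-eligible students
    "participants": 0.10,   # Hispanic share of actual participants
}

baseline = funnel["whole college"]
for stage, share in funnel.items():
    ratio = share / baseline
    print(f"{stage}: {share:.0%} Hispanic, representation ratio {ratio:.2f}")
```

A drop between the college and the eligible pool points at the eligibility criteria; a further drop between eligible students and participants points at outreach and recruitment.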

Then the thing that we already do is to continue doing our subgroup analyses to, first of all, just look at the effects of the program, because, again, none of this really matters if the program makes no difference, which is oftentimes a finding that we have. Unfortunately, we do a lot of these studies where we find that the interventions aren’t making much of a difference. But certainly, in terms of their potential to reduce disparities, interventions need both to make a difference and to have at least proportional, if not disproportionately higher, representation of the students you’re trying to bring up to the average outcome levels.

Leigh Parise: What you were just talking through was really about how researchers might think about their work. And I wonder when you think about practitioners (Nikki, to take us back to where you … very wisely started us with), if these are some of the challenges that practitioners on college campuses themselves are facing? What are the implications for practitioners themselves based on, Mike, some of what you just talked through or the things that we, Nikki and Mike, have been learning at CAPR over time? I wonder if you can speak to that a little bit.

Nikki Edgecombe: One of the things that’s always struck me about the developmental education interventions from some of their early forms to even the corequisite or various multiple measures assessment strategies is there’s nothing fundamental to the design or structure of the reforms that would suggest that they would generate differential impacts for a Black student versus a White student or a Latino student versus a Black student. That suggests to me that the fertile ground for us is going to be thinking about how are we going to complement, supplement, adapt these interventions in ways that direct us toward supports or essentially some change that’s likely to reduce those disparities? You can think about them at a variety of levels. Instructionally, obviously there’s curriculum, there’s pedagogy. We have examples around culturally sustaining approaches in both of those. There’s lots of interesting research around classroom environment. Are you creating an inclusive environment where students feel part [of] and engaged in that work?

And so I think we have the beginnings of ideas about what could supplement a corequisite model, or what might be incorporated into one, to potentially support a reduction in those disparities. I’m inspired by the fact that practitioners and policymakers are in fact talking about this. This is something that is on their radar. Compare that to where we were in 2005 or 2010, when colleges first started innovating in the developmental education space. I feel like, with the evidence that we have now, they’ve begun to see the virtues, the good news story, of what’s emerged in terms of developmental education reform but are also aware of where more work needs to be done. I think there’s energy.

I think directionally, there are places we can go. We can draw from the literature to figure out how we might approach reducing those disparities. We just need to get more experiments and innovation on the ground. It’s a hard time for community colleges. Enrollment’s down, we’re coming out of the pandemic, they’re balancing “What does all of this mean if a larger proportion of courses are offered online?” There’s a lot of complicating factors, and so I certainly understand that it’s not necessarily possible for these innovations to just pop up given all of these constraints. That said, I think there are a lot of people focused on this and thinking about ways that we can build on the evidence that we have to develop more equitable approaches.

Mike Weiss: If I can just briefly add to that, one thing I’ve been thinking about lately is the relative promise of these two ways that you can potentially reduce racial inequality in outcomes. The first is identifying interventions that are hopefully effective for everyone but are especially effective for racially minoritized students. And I think Nikki’s point is a good one. I don’t know that the kinds of reforms that we’ve studied in the past were really designed with that in mind in any way. They were almost race-neutral interventions, or some were just neutral in general. They’re just “What if we try to use a different placement system?” There’s nothing about that that seems specifically aimed at addressing issues of racial inequality in outcomes. For any population, it’s just a generic reform. And some of them are financial aid interventions. You could maybe argue that because of wealth disparities, there’s some chance those would have differential impacts, but they don’t strike me as having the culturally responsive approach that Nikki was describing. I think one promising path is to at least do more rigorous evaluation of those. I think that’s an understudied space.

And then of course, the second one is what I was talking about earlier, which is if we can identify interventions that are generally effective and offer those interventions primarily or exclusively to racially minoritized students, that’s another possibility, although that can sometimes be a harder sell politically. And there are different levels to think about that second option, too. If you are the operator of a program, if you’re the person that’s actually running it yourself, you presumably have some control over the outreach you’re doing for the students you’re getting to join your program. That’s definitely a space where you, as the program operator yourself, can look to say, "How can I myself do more?" And you might want to change your outreach approaches, or at least think hard about them and who you’re ending up attracting into your program. But at higher levels, policymakers might be thinking about which institutions are getting the dollars to use the most effective reforms. And depending on the demographics of the institutions that are receiving these funds, that’s going to have implications for what you might do about the very broad nationwide disparities that exist.

It seems like there are different levels to think about this, and then there are these two approaches. I think we can both be thinking about identifying these interventions that are ideally particularly effective or even more effective for racially minoritized students, and then also continue to think about who’s getting served. Any time there are new reforms coming about, thinking about who’s going to end up being the potential beneficiaries of those reforms.

Nikki Edgecombe: I’ve been thinking a lot about how this conversation gets better integrated into the policy sphere. And it is challenging, especially politically now. But on a practical level, I think about what the levers, let’s say, at the state system or state association, would be in the community college sector. And there’s certainly an opportunity to, from a system or state perspective, ask institutions in that state to take a closer look at, let’s say, their corequisite outcomes and to identify if disparities exist and between whom and to be able to understand the magnitude of them. And have they [changed] or do they change over time? There’s an important role that policy can play in helping institutions, encouraging institutions, creating incentives for institutions to look more closely at those outcomes.

And then they also tend to have a bit more convening power. With institutions, perhaps, noticing similar patterns in their outcome data, systems have the ability to bring institutions together, again, to leverage that local knowledge and that practitioner expertise and hopefully begin to develop solutions or potential alternatives together. I think it’s important to think through how these different levers at the institutional level can operate, [and] how these state levers can operate, to Mike’s point.

Leigh Parise: All right, wrapping us up, what do you think is next for the field? What are some of the promising approaches to addressing racial disparities in student outcomes that folks should try? For people who are listening to this and thinking, okay, Nikki, Mike, give me some suggestions, give me some ideas. What would you say to them?

Nikki Edgecombe: Before we go there, I think Mike threw a little shade on this earlier, but I think we both have a similar perspective on it, which is there’s this question around scaling corequisites and multiple measures and all of these wonderful interventions with proven evidence. And then there’s the alternative of just putting students in college-level courses and doing whatever you have to do, academically or nonacademically, to ensure their success. We oftentimes find ourselves talking about “How do we scale these proven reforms?” and we do in fact need to do that, but if the survey work that MDRC did last year is indicative, we’ve really made considerable progress on the expansion of multiple measures and corequisite design. And I think that’s important and laudable.

All of those, however, come with unintended consequences, so I think it’s always important for us to think more about “Are we requiring students who could be successful in just a plain vanilla introductory college-level math course to take extra credits and time via a corequisite course? Is the investment required to stand up a multiple measures assessment system worth the return you get in terms of how students are distributed or redistributed across your course offerings?”

I think there’s more work and thinking to do there, while simultaneously we need to keep innovating. And we know one of the challenges that we’re really struggling with is figuring out how to reduce these disparities. I do think it’s important to note that these disparities are everywhere. They’re not a function of developmental education alone; we see them across a variety of reforms in the K-12 and higher ed space. It’s likely that we can’t simply tinker with interventions; we’re going to have to be much more thoughtful and attentive to the systemic factors that are contributing to these outcomes, making progress simultaneously at the intervention level and at the systems level.

Lastly, I’ll just note that I want to continue to help practitioners feel like they are the innovation leaders in this space. They’re closest to the issues, closest to the students, closest to the curriculum and the course structures, and they know their institutions. How do we bring them into the conversation, as opposed to having disparate conversations that perhaps never quite sync up or align and allow us to build the critical mass required to energize new directions and new opportunities? So, really emphasizing the need to give our practitioners a strong voice and the room they need. Oftentimes in the developmental education reform process, we have heard from institutional leaders talking about how they make space for their faculty leaders to make change. Sometimes you have to give them cover to do that, and sometimes you have to give them space and time to do that, so just recognizing that all of those things have to be in place for us to make progress.

Leigh Parise: It makes me feel more optimistic, frankly, that the research that we are doing is actually going to help to address the challenges that they are facing, or help them figure out, “Okay, we’ve tried a bunch of different innovations—which ones are most effective, and which ones make sense to try to implement more broadly?” Thank you for bringing us back to that continued grounding in the people who are really on the front lines and most directly interacting with students, wanting to see them succeed, and thinking about what the barriers and the opportunities might be. Thanks for bringing us back to that, Nikki.

Nikki Edgecombe: I think that’s my broader takeaway. Mike, I want to hear what you have to say because I don’t want us to lose the role that research can play in helping us to make progress in this space. Mike, what say you?

Mike Weiss: First, I just want to reiterate some things that Nikki said that I think are really important, which is that solutions to all of this are going to require looking at broader systems. That’s good to say but also tricky sometimes, because if you’re a college, you have some control of your system, but some of this is just . . . the whole system. And that feels trickier, because I’m not sure that colleges can control so much about what happened in K-12 and early childhood and longstanding wealth disparities. There are all these issues that are hard, and it’s hard to think of a unified approach to solving them all. I agree, but from a practical standpoint, what you can do as a college might depend on what level of decision-maker we’re thinking of. If it’s broader policymakers, they could try to influence all of it, but if it’s college leaders or administrators, they have a more limited scope.

But I liked your summary of some of the points of what we’ve learned so far. I wonder if one summary of it is to try to get students out of DevEd as quickly as possible. When we’re thinking of DevEd, the multiple measures assessment story largely seems to have to do with trying to get as many people as you can just to be referred directly to college-level courses. It uses more than one measure. It uses GPA from your high school as well as some placement test. But I suspect that if you just referred more people up to college level, that would also be fairly successful. As you noted, a corequisite allows you to take that college-level course in your first semester at college. You do have additional supports, but part of what they seem to both be doing is just get[ting] people into those college-level courses as quickly as you can.

I do think it’s interesting to note, and this gets back to Nikki’s [point that] things should come from practitioners, the most effective reform I think we really know about, which was designed by practitioners at CUNY, was not even really a DevEd reform per se, but it targeted students referred to DevEd, and that’s CUNY’s ASAP, which is this super comprehensive program. And it is a reminder that (and you said this too, Nikki) DevEd is one part of the challenge that many students face in terms of graduating. But there are so many other barriers that I think the most consequential reform that only focuses on DevEd that you can even imagine will still never have as big of an effect as one that tries to address the broader array of issues that students are facing.

Improving the graduation outcomes that we probably care about most means addressing the whole student across their whole time in college. It's not just about this one course sequence, or even about eliminating that sequence; that's probably not the place to look if you're trying to have really large impacts. It's about addressing the financial challenges people have, informational barriers, time limitations, a whole array of things of which these specific extra courses are just one small part. And certainly, among the interventions that have been evaluated rigorously, even when they target students referred to DevEd, the ones with big impacts on outcomes like graduation are the ones that are really comprehensive and address so much more.

But it's nice to be reminded that these comprehensive reforms are coming from the field. Researchers didn't invent them, but people wanted to know how big a difference they're making. I think we can continue to play a role by studying these really clever ideas that come from the field and helping people understand their effects, how challenging they might be to implement and scale, and their costs. There's an important role for the research world in continuing to evaluate them. Sometimes we can actually develop our own clever ideas as well, but a lot of great ideas are going to come from the field, and we should keep being as helpful as we can by doing evaluations that help people understand what kind of a difference these ideas are making.

Leigh Parise: Thank you so much to both of you. This has been a really engaging discussion, and I appreciate your focus on what matters for students, practitioners, and policymakers, on what we've learned, and on what some of the opportunities might be. Thanks very much for joining me, Nikki and Mike. To learn more about CAPR and its work, visit postsecondaryreadiness.org. Did you enjoy this episode? Subscribe to the Evidence First podcast for more.

Developmental education, also known as remedial education, refers to courses that some entering college students will have to take if they are deemed unprepared for college-level courses. However, studies have shown that developmental education can actually hinder students’ progress in college. Additionally, students of color, adults, first-generation students, and those from low-income backgrounds are disproportionately placed in developmental education programs, so there’s a lot of interest among policymakers, college practitioners, and researchers in reforming developmental education programs to address these challenges and support more equitable outcomes for students.

As part of MDRC’s 50th anniversary celebration, this episode of Evidence First features MDRC’s longtime partner the Community College Research Center, or CCRC. In 2014, MDRC and CCRC launched the Center for the Analysis of Postsecondary Readiness, or CAPR, to research the effectiveness of developmental education reforms and to understand their implications for equity.

In this episode, Leigh Parise talks with Nikki Edgecombe, a senior research scholar at CCRC who leads CAPR, and Michael Weiss, a senior fellow in postsecondary education at MDRC, about what has been learned about promoting equity in developmental education reform.

All Episodes

Leigh Parise talks with Tamara Johnson and Shondra Tobler from Per Scholas and MDRC’s Donna Wharton-Fields. They discuss their long-term research partnership aimed at helping Per Scholas improve its program and expand its reach.

Cheryl Ohlson, deputy chief of early childhood education for District of Columbia Public Schools, and Michelle Maier, MDRC senior associate, discuss the district’s adoption of an evidence-based, domain-specific curriculum for pre-K classrooms (that is, one that focuses on specific areas such as math and literacy).

To learn more about skills-based hiring in Connecticut and non-degree programs in Virginia, Rachel Rosen talks with Kelli-Marie Vallieres, Connecticut’s Chief Workforce Officer, and Elizabeth Creamer, Vice President of Workforce Development for the Community College Workforce Alliance in Virginia.

In this episode, Leigh Parise talks with Matt Sigelman, President of the Burning Glass Institute, which studies economic and workforce trends. They discuss skills-based hiring, a labor market trend where employers hire with the understanding that degrees are not the only way to acquire competencies.

Join Leigh Parise as she talks with Dean Elson of Reading Partners and Robin Jacob at the University of Michigan. They discuss MDRC’s study of Reading Partners, how to get volunteers to teach reading effectively, and how technology will continue to play a role in tutoring.

Crystine Miller, Director of Student Affairs and Student Engagement in the Montana University System, and Alyssa Ratledge, a Research Associate in Postsecondary Education at MDRC, discuss the evaluation of Montana 10, a wraparound services program for students in the Montana University System.

In this episode, Leigh Parise talks with Christine Brongniart, the University Executive Director of CUNY ASAP, and Colleen Sommo, an MDRC senior research fellow, to learn more about the CUNY ASAP model, its replication across the country, and the latest findings from MDRC's study of the program in Ohio.

Leigh Parise talks with Paul Fain, a veteran higher education journalist, and Betsy Tessler, a senior researcher at MDRC, about nondegree credentials: their effectiveness, their challenges, and what the future holds for them.

Leigh Parise talks with MDRC President Virginia Knox and Naomi Goldstein, the former Deputy Assistant Secretary at the Office of Planning, Research and Evaluation (OPRE). They reflect on their experiences in evaluating programs and policies, the growth of the evidence-building movement, and future considerations for the field.

Ahmed Whitt from the Center for Employment Opportunities (CEO) and Alissa Stover, formerly of MDRC, discuss the partnership between CEO and MDRC’s Center for Data Insights and how data science tools can more fully capture participants’ lived experiences.