Implementation Research Incubator

Rekha Balu and Carolyn Hill

Researchers and funders often want to know not just whether a social program works, but how and why — the terrain of implementation research. Here, Rekha Balu and Carolyn Hill present posts from contributors inside and outside MDRC, offering lessons from past program evaluations and insights from ongoing studies that can improve research approaches.

Rekha is a Senior Research Associate and leads projects for MDRC’s Center for Applied Behavioral Science. Carolyn is a Senior Fellow at MDRC and coauthor of Public Management: Thinking and Acting in Three Dimensions. 

07/2018 | Gayle Hamilton, Susan Scrivener

Treatment contrast — the difference between what the program group and control group in an evaluation receive — is fundamental for understanding what evaluation findings about the effects of a program actually mean. There’s no strict recipe for measuring treatment contrast, because each study and setting will have nuances. And while “nuance” can’t be an excuse for ignoring treatment contrast or for an anything-goes approach to it, there’s surprisingly little detailed and practical guidance about how to conceptualize and measure it. We offer some pertinent considerations.

05/2018 | Susan Sepanik

Rural school districts in low-income communities face unique challenges in preparing and inspiring students to go to college. As part of a federal Investing in Innovation (i3) development grant, MDRC supported and evaluated an initiative that sought to improve college readiness in rural schools, in part by aligning teaching strategies across grade levels and between schools and the local college. Our implementation research allowed us to provide formative feedback that program leaders used to improve the process.

03/2018 | Nicole Leacock, Anne Kou, Michelle Maier, Shira Kolnik Mattera

As part of our study of Making Pre-K Count, an innovative preschool math program, we conducted in-depth qualitative research to better understand how the program produced its effects. The program has several instructional components that are directly measurable, but one key process-related ingredient in the program’s theory of change is less well understood — that is, teachers’ ability to differentiate their instruction to meet individual students’ needs. What can we learn about a main intervention component that is challenging both to implement and to document?

02/2018 | Carolyn Hill, Virginia Knox

How do we build evidence about effective policies and programs? The process is commonly depicted as a pipeline: from developing a new intervention to testing it on a small scale, to conducting impact studies in new locations, to expanding effective interventions. But an updated depiction of evidence building could better reflect realities of decision making and practice in the field. We describe a cyclical framework that encompasses implementation, adaptation, and continued evidence building. Implementation research takes on a central role in this updated model.

01/2018 | Michelle S. Manno, Jennifer Miller Gaubert

Implementation research in program evaluations plays a critical role in helping researchers and practitioners understand how programs operate, why programs did or did not produce impacts, what factors influenced the staff’s ability to operate the intervention, and how staff members and participants view the program. Often, implementation research does not directly address scale-up questions — whether, when, and how effective programs can be expanded — until decision makers and evaluators are on the cusp of considering this step. Yet implementation research from the early stages of evidence building can be harnessed to inform program scale-up later on.

12/2017 | Rekha Balu, Carolyn Hill

We’re getting in the “year-end wrap-up” spirit a bit early here at the Incubator. In this post, we highlight some implementation research activities at MDRC over the past year and preview what’s in store for 2018.

10/2017 | David M. Greenberg

For implementation research, measuring how community networks are deployed to deliver programs means taking seriously their specific characteristics and how these help or hinder local programs. In other words, the field should become better at reading networks. Our mixed-methods Chicago Community Networks study shows how we go about this.

09/2017 | Rekha Balu, Kristin Porter

A growing number of public, nonprofit, and for-profit organizations, in an effort to target resources effectively and efficiently, are turning to predictive analytics. The idea of predicting levels of risk to help organizations identify which staff members or clients need support appeals to program managers and evaluators alike. MDRC has been working with organizations to develop, apply, and learn from predictive analytic tools. A key insight from our application of predictive analytics is its power to jump-start conversations about program implementation and practice, suggesting that it will become increasingly important for implementation researchers to understand.

Insights from Qualitative and Quantitative Analyses of the PACE Center for Girls

07/2017 | Louisa Treskon

The culture of a program, also known as the program environment, is often of great interest in social services. Many researchers and practitioners view program culture or environment as a key aspect of service delivery and as a potential influence on participant outcomes. Staff members sometimes refer to their program environment as the “secret sauce” that no one knows exactly how to replicate. How, then, can researchers measure program environment and understand how it develops?

Interviews or Focus Groups?

06/2017 | Helen Lee, Leigh Parise

As implementation researchers, we often want to hear directly from program staff members and program participants in order to understand their perspectives and experiences. Gathering these perspectives can involve one-on-one interviews or focus groups — but which approach is more appropriate? We summarize a few key considerations for conducting interviews or focus groups for implementation research.

The Multistate Evaluation of Response to Intervention Practices

05/2017 | Rekha Balu

When schools or programs face challenges in delivering services — such as limited time — and researchers are not on-site to monitor their implementation, how can the researchers know what is happening and how it varies across sites? MDRC’s evaluation of the Response to Intervention reading framework highlights ways to document how schools use their time and which students receive services.

04/2017 | Michelle S. Manno, Louisa Treskon

How are evidence-based practices — approaches to organizing and delivering services that have been rigorously evaluated — implemented within organizations that deliver many services, some of which are not evidence-based? As part of our implementation study of the Children’s Institute, Inc., a multiservice organization in Los Angeles working with low-income children and families, we studied how staff members integrated evidence-based practices into the organization’s services, and the challenges that arose along the way.

03/2017 | Rekha Balu

Researchers studying education, youth development, or family support interventions sometimes encounter situations where the program staff is adjusting and adapting program components. Sometimes this is done to fit client needs or budget constraints. Adaptations also may be made during multiyear programs and studies, for reasons such as staff turnover and budget changes. Such adaptations are likely to occur whether or not sites in a randomized controlled trial are receiving specific implementation guidance.

Instead of viewing adaptation only as an impediment to treatment fidelity, a nuisance that must be managed, we’ve been thinking about how we can anticipate these adaptations in our implementation research.

02/2017 | Rekha Balu, Carolyn Hill

Welcome to MDRC’s Implementation Research Incubator! We’re glad you’re here. Our monthly posts aim to inform implementation research in social policy evaluations, through

  • sharing ideas about implementation research data, methods, analysis, and findings

  • fostering development of ideas and insights about implementation research

  • integrating understanding across policy domains and academic disciplines

  • advancing transparent, rigorous implementation research

  • informing implementation practice and scale-up of evidence-based programs