Issue Focus

The Central Role of Implementation Research in Understanding Treatment Contrast

07/2018

Treatment contrast — the difference between what the program group and the control group in an evaluation receive — is fundamental to understanding what evaluation findings about a program’s effects actually mean. There’s no strict recipe for measuring treatment contrast, because each study and setting has its own nuances. But while “nuance” can’t be an excuse for ignoring treatment contrast or for an anything-goes approach to measuring it, there’s surprisingly little detailed, practical guidance on how to conceptualize and measure it. Our recent working paper addresses these issues and highlights pertinent considerations for implementation research.

The importance of treatment contrast

When interpreting the results of a study, researchers and stakeholders often focus on the features of the new program or initiative being tested — that is, what the program or “treatment” group is offered or receives. But most social policy evaluations examine the effects of an initiative relative to a counterfactual, commonly referred to as a “business as usual” condition, which represents what would happen if the initiative didn’t exist.

In some evaluations, few if any individuals in the counterfactual condition have access to services similar to those being studied. But in many evaluations, the counterfactual condition does include similar services, such as services provided by an alternative program or services that are widely available in the community. Treatment contrast thus directly affects the research question that a study can answer.
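The point above can be illustrated numerically. In the minimal sketch below, treatment contrast on a single service dimension is the program group’s service-receipt rate minus the control group’s; all rates are invented for illustration, not drawn from any study discussed here.

```python
def treatment_contrast(program_rate, control_rate):
    """Contrast on one service dimension: program-group receipt rate
    minus control-group ("business as usual") receipt rate."""
    return program_rate - control_rate

# Scenario A: almost no similar services exist in the counterfactual.
strong = treatment_contrast(program_rate=0.90, control_rate=0.05)

# Scenario B: similar services are widely available in the community,
# so the same program produces a much smaller contrast.
weak = treatment_contrast(program_rate=0.90, control_rate=0.70)

print(f"Scenario A contrast: {strong:.2f}")
print(f"Scenario B contrast: {weak:.2f}")
```

The identical program yields very different contrasts in the two scenarios, which is why the same impact estimate can answer different research questions depending on the counterfactual.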

For example, the Making Pre-K Count evaluation examined the effects of a specialized math curriculum, supported by intensive professional development, on children’s prekindergarten (pre-K) experiences and subsequent outcomes on a large scale in New York City. The study initially sought to answer the question “What difference does Making Pre-K Count make in an environment with weak math instruction?” Over the course of the evaluation, however, New York City increasingly emphasized aligning math and literacy curricula with Common Core standards, and, during the study’s second year, the city began rolling out universal pre-K. As a result, the question the evaluation was actually answering became “What difference does Making Pre-K Count make in an environment with an increasing emphasis on math instruction and an increasing availability of pre-K?”

In multisite studies, treatment contrast may vary across sites and can contribute to our understanding of impact variation (for example, in the National Head Start Impact Study).

Guidelines for measuring treatment contrast

In our view, implementation researchers — by virtue of their emphasis on what an intervention consists of and how it is delivered — are uniquely suited to take the lead in specifying how to examine treatment contrast and coordinating evaluation team efforts for the treatment contrast analysis. Our working paper provides some specific ideas for measuring treatment contrast, focusing on differences between the program and the counterfactual in the dimensions of content, quantity, quality, and conveyance of services and other program aspects. Here, we describe some general guidelines:

  • Involve researchers from all major aspects of an evaluation in discussing how to measure treatment contrast. Research team members bring different strengths to an evaluation — different perspectives (such as impact, implementation, cost), substantive knowledge, and measurement expertise.

  • Start planning treatment contrast measurement early in a project. With early planning, the evaluation can embed treatment contrast measures in a wide array of data collection efforts. For example, site-selection or early technical assistance teams can gather information about counterfactual conditions. This information can be essential for refining the treatment contrast measurement plan.

  • Set priorities among all possible contrast measurement options. Project resources rarely permit the measurement of every aspect of the treatment contrast. To set priorities, it helps to ask two questions. First, which aspects of the intervention are most fundamental to the theory of change — that is, which are likely to be most valuable when interpreting the study’s findings? Second, how difficult will each of these aspects be to measure?

  • Prespecify exactly how to measure treatment contrast before impact results are known. Prespecification avoids the appearance (or actual occurrence) of fishing for explanations for impact findings and makes the treatment contrast analysis more credible.

  • Stay focused on treatment contrast throughout the study. Conditions in the program environment are seldom static, so continued attention throughout the study to the counterfactual conditions, not just the treatment conditions, is essential. Depending on the setting, researchers might need to make an explicit plan for changes in treatment contrast or adapt the original plan if conditions are changing.
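The guidelines above can be loosely sketched as a prespecified measurement plan organized by the four dimensions the working paper names (content, quantity, quality, and conveyance). The specific measures and priority labels below are invented for illustration only.

```python
# Hypothetical prespecified contrast-measurement plan, one entry per
# dimension. Measure names and priorities are illustrative assumptions.
plan = {
    "content":    {"measure": "curriculum topics covered",      "priority": "high"},
    "quantity":   {"measure": "hours of service received",      "priority": "high"},
    "quality":    {"measure": "observed instructional quality", "priority": "medium"},
    "conveyance": {"measure": "mode of service delivery",       "priority": "low"},
}

# Setting priorities: collect high-priority measures for BOTH the program
# and counterfactual groups, so the contrast itself -- not just the
# treatment -- is measured.
to_collect = [dim for dim, spec in plan.items() if spec["priority"] == "high"]
print(to_collect)
```

Writing such a plan down before impact results are known is one concrete way to follow the prespecification guideline above.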

The working paper ends by posing a few open questions about trade-offs in measuring treatment contrast and the interpretation and application of the findings. For example, should treatment contrasts for some types of measures be viewed as more consequential than contrasts for other measures? What are the pros and cons of “covering all the bases” for treatment contrast measurement, versus focusing on specific aspects? We hope that the paper provides useful guidance for evaluation team members and raises some thought-provoking issues for consideration and discussion.

Suggested citation for this post:

Hamilton, Gayle, and Susan Scrivener. 2018. “The Central Role of Implementation Research in Understanding Treatment Contrast.” Implementation Research Incubator (blog), July. https://www.mdrc.org/publication/central-role-implementation-research-understanding-treatment-contrast.