Measuring Treatment Contrast in Randomized Controlled Trials
Many social policy evaluations examine the effects of a new program or initiative relative to a counterfactual, commonly referred to as a "business as usual" condition, meant to represent what would happen to people if the new program being tested did not exist. In some evaluations, few, if any, individuals in the counterfactual condition can receive services similar to the ones being studied. But in many evaluations, the counterfactual condition includes services in the same realm as the ones being studied, such as services provided by an alternative program or services widely available in the community. The difference between what the program group receives and what those in the counterfactual condition receive is called the treatment contrast.

This working paper explains the importance of the treatment contrast for social policy and program evaluations and offers guidance on how to measure that contrast. It draws on the knowledge and experience built from the hundreds of diverse evaluations that MDRC, an education and social policy research organization, has conducted over the past 40 years.

The paper makes the case that assessing treatment contrast yields several benefits: it helps identify the specific questions that an impact evaluation will and will not answer, highlights the program components that might and might not be driving a program's effects, and suggests why a program's effects might differ across cohorts or subgroups within a site or across sites.

Procedurally, the working paper suggests planning for the treatment contrast analysis early in a study, having a treatment contrast measurement plan in place before program effect results are known, and remaining focused on the treatment contrast throughout a study. It also argues that measuring treatment contrast should receive as much attention from researchers as assessing the process of program implementation or measuring fidelity to the original planned model. In choosing which aspects of treatment contrast to measure, the working paper suggests that the theory of change or logic model for the studied program should play the central role.

While questions certainly remain about studying treatment contrast, this working paper provides ideas for researchers as they seek to understand what a program's measured effects do and do not suggest about the best ways to improve social programs and policies, a particularly important consideration when continuous improvement in program replication is a goal.