The Center for Applied Behavioral Science (CABS) combines MDRC's decades of experience tackling social policy issues with insights from behavioral science. This graphic explains CABS's approach to solving problems.
Too often, programs and policies do not consider the way people actually think and behave. Behavioral science demonstrates that even small hassles create barriers that prevent those in need of services from receiving them. This infographic provides a brief overview of how the Center for Applied Behavioral Science is improving social services by making use of behavioral insights.
The SIMPLER framework was developed for the Behavioral Interventions to Advance Self-Sufficiency (BIAS) project ― the first major effort to apply behavioral insights to human services programs in the United States. SIMPLER summarizes several key behavioral concepts that can guide practitioners interested in using behavioral insights to enhance service delivery.
MDRC has launched the first of a five-part web series from the Chicago Community Networks study — a mixed-methods initiative that combines formal social network analysis with in-depth field surveys of community practitioners. The study measures how community organizations collaborate on local improvement projects and how they come together to shape public policy.
As the first major effort to use a behavioral economics lens to examine human services programs that serve poor and vulnerable families in the United States, the BIAS project demonstrated the value of applying behavioral insights to improve the efficacy of human services programs.
Using Bayesian methods, an alternative to classical statistics, this paper reanalyzes results from three published studies of interventions to increase employment and reduce welfare dependency. The analysis formally incorporates prior beliefs about the interventions, characterizing the results in terms of the distribution of possible effects, and generally confirms the earlier published findings.
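The core idea of formally incorporating prior beliefs can be illustrated with a minimal sketch. Assuming a normal prior on the intervention effect and a normally distributed study estimate (the standard conjugate setup; the numbers below are hypothetical, not from the reanalyzed studies), the posterior is a precision-weighted average:

```python
import math

def posterior_normal(prior_mean, prior_sd, est, se):
    """Combine a prior belief about an intervention effect with a study
    estimate, both treated as normal, by weighting each by its precision
    (inverse variance)."""
    w_prior = 1.0 / prior_sd**2
    w_data = 1.0 / se**2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, math.sqrt(post_var)

# Hypothetical example: a skeptical prior centered at zero effect
# (SD = 2) combined with a study estimating a 2-point gain (SE = 1).
mean, sd = posterior_normal(prior_mean=0.0, prior_sd=2.0, est=2.0, se=1.0)
```

The posterior pulls the study estimate toward the prior, and the resulting distribution of possible effects is what such a reanalysis reports.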
This paper provides practical guidance for researchers designing and analyzing studies that randomize schools to measure intervention effects on student academic outcomes when the data have three levels of clustering (students in classrooms in schools) but information on the middle level (classrooms) is missing.
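To see why the middle level matters, consider the standard design effect for a three-level model: the variance inflation depends on separate intraclass correlations (ICCs) at the classroom and school levels, which cannot be distinguished if classroom membership is unobserved. A minimal sketch, with hypothetical ICC values:

```python
def design_effect(n_per_class, classes_per_school, icc_class, icc_school):
    """Variance inflation for school-level estimates relative to a simple
    random sample of the same total size, under a three-level model
    (students in classrooms in schools)."""
    total_n = n_per_class * classes_per_school
    return (1
            + (n_per_class - 1) * icc_class
            + (total_n - 1) * icc_school)

# Hypothetical values: 25 students per classroom, 3 classrooms per
# school, classroom ICC = 0.10, school ICC = 0.15.
deff = design_effect(n_per_class=25, classes_per_school=3,
                     icc_class=0.10, icc_school=0.15)
```

When classroom identifiers are missing, the classroom and school variance components are confounded, which is the analytic problem the paper addresses.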
Strategies for Interpreting and Reporting Intervention Effects on Subgroups
This revised paper examines strategies for interpreting and reporting estimates of intervention effects for subgroups of a study sample. Specifically, the paper considers why subgroup findings are important for applied research, why subgroups should be prespecified before analyses are conducted, and how existing theory and prior research can distinguish subgroups for which study findings are confirmatory from those for which they are exploratory.
This paper illustrates how to design an experimental sample for measuring the effects of educational programs when whole schools are randomized to a program group and a control group. It addresses such questions as how many schools should be randomized, how many students per school are needed, and what mix of program and control schools is best.
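The trade-off between the number of schools and the number of students per school can be sketched with the standard approximation for the minimum detectable effect size (MDES) of a cluster-randomized design. This is a generic illustration, not the paper's own calculation; the ICC and sample sizes below are hypothetical:

```python
import math

def mde_cluster(n_schools, students_per_school, icc,
                prop_treated=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size (in standard deviation
    units) for a school-randomized trial. A multiplier of about 2.8
    corresponds to 80 percent power at a two-sided 5 percent level with
    many schools."""
    j, n, p = n_schools, students_per_school, prop_treated
    variance = (icc + (1 - icc) / n) / (p * (1 - p) * j)
    return multiplier * math.sqrt(variance)

# Hypothetical design: 40 schools, 60 students each, ICC = 0.15.
mde = mde_cluster(n_schools=40, students_per_school=60, icc=0.15)
```

Because the school-level ICC enters the variance directly while the student term is divided by school size, adding schools typically buys far more precision than adding students within schools — the central design lesson such papers draw out.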