To help inform policy decisions, the effects of interventions should ideally be estimated with a reasonably small margin of error, that is, with low uncertainty. The margin of error is a function of the sample size: the larger the number of study participants, the smaller the margin of error. The sample size is known and under researchers' control. However, the margin of error also depends on factors that are not well known in advance, including the extent to which participants' outcomes vary and the degree to which those outcomes are associated with participants' baseline characteristics. When planning a study, researchers must make an educated guess about the values of these “study design parameters.” Unfortunately, in the context of community college interventions, there is limited guidance about what values researchers should assume when planning the size of their studies.
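As a rough illustration (not the paper's own formulas), the way the margin of error depends on sample size and on one such design parameter, here taken to be the share of outcome variance explained by baseline covariates (an assumed R-squared), can be sketched for a simple two-arm RCT with individual random assignment:

```python
import math

def margin_of_error(n_treat, n_control, outcome_sd, r_squared=0.0, z=1.96):
    """Approximate 95% margin of error for an estimated treatment effect
    (difference in mean outcomes) in a simple individual-level RCT.

    r_squared is the share of outcome variance explained by baseline
    covariates, one of the "study design parameters" that researchers
    must guess at the planning stage. All inputs here are hypothetical.
    """
    # Residual outcome variance after adjusting for covariates.
    residual_var = outcome_sd ** 2 * (1.0 - r_squared)
    # Standard error of the difference in group means.
    se = math.sqrt(residual_var * (1.0 / n_treat + 1.0 / n_control))
    return z * se

# A larger sample shrinks the margin of error...
small_sample = margin_of_error(200, 200, outcome_sd=1.0)
large_sample = margin_of_error(800, 800, outcome_sd=1.0)
# ...and so do more-predictive baseline covariates (a higher assumed R-squared).
with_covariates = margin_of_error(200, 200, outcome_sd=1.0, r_squared=0.4)
print(small_sample, large_sample, with_covariates)
```

The sketch shows why the unknown parameters matter: two studies with identical sample sizes can have quite different margins of error depending on the R-squared the researchers assumed when planning.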
The purpose of this paper, published in Evaluation Review, is to help postsecondary researchers by providing new information about the values of the key design parameters needed for study planning. Using data from 14 randomized controlled trials (RCTs) in community colleges conducted by MDRC, the paper presents ranges of parameter values for several outcomes (enrollment, credits earned, credential attainment, and grade point average) across several college semesters. The paper also discusses how the parameters vary across studies, outcomes, and follow-up periods. A notable finding is that, for most cumulative outcomes (like cumulative credits earned), the margin of error increases as more time passes between the start of the study and the point at which students’ outcomes are measured.
A public database of parameter values from 30 RCTs was created for this paper. Researchers can obtain a copy of the database by contacting Marie-Andrée Somers, [email protected].