Evaluation of the Content Literacy Continuum

Report on Program Impacts, Program Fidelity, and Contrast


By William Corrin, James J. Lindsay, Marie-Andrée Somers, Nathan E. Myers, Coby V. Meyers, Christopher A. Condon, Janell K. Smith

Large numbers of adolescents enter high school lacking the reading skills necessary for academic success. The demand for strong reading and writing skills increases as students advance through the high school grades. Not only do high school teachers rely more heavily on textbooks to convey critical course content, but the content of those textbooks also becomes more challenging. Moreover, by grade 9 the reading standards students are expected to meet increase in difficulty. High school students are expected not only to remember facts but also to infer themes, processes, and concepts from material and to relate those “higher order concepts” to new content. Adolescent students are expected to read and to produce complex texts whose structures and modes of presenting information vary by genre and content area. The Common Core State Standards, which have been adopted by almost 75 percent of the states, emphasize this variation by differentiating “college and career readiness” standards in grades 6-12 according to the reading and writing skills needed in history/social studies, science, and technical subjects.

School district leaders and high school administrators face the challenge of not only providing students with additional instruction focused on improving reading skills but also simultaneously helping students master the subject-area content for which schools are held accountable. These leaders need information on interventions that can be integrated into the high school curriculum to help struggling adolescent readers acquire the strategies necessary to read at proficient levels. This report presents the findings of a rigorous experimental impact evaluation and implementation study of one such intervention, the Content Literacy Continuum (CLC), developed by researchers at the University of Kansas Center for Research on Learning. The evaluation of CLC was conducted by three partnering organizations: REL Midwest, MDRC, and Survey Research Management. Thirty-three high schools in nine districts across four Midwestern states agreed to participate in the evaluation, and 28 of the 33 continued their participation throughout the entire study period. Full implementation of the intervention began in the 2008/09 school year and continued through the 2009/10 school year.

Given that CLC was designed to address high schools’ dual need to support both students’ literacy and their content learning, the evaluation focused on program impacts on reading comprehension test scores and on students’ accumulation of course credits in core content areas. To assess the impacts of CLC on these outcomes, the study team conducted a cluster randomized trial: participating high schools within each district were randomly assigned either to implement CLC (CLC schools) or to continue with “business as usual” (non-CLC schools). Impacts were estimated by comparing the outcomes of students at CLC schools with those of students at non-CLC schools. The evaluation’s primary research questions focused on the impact of CLC on students’ reading comprehension and course performance at the end of the second year of implementation. Secondary research questions compared first-year impacts with second-year impacts and investigated program impacts on other student outcomes.
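
For readers unfamiliar with how impacts are estimated in a cluster randomized trial, the sketch below illustrates the general logic with simulated data: because whole schools (not individual students) were randomized, the analysis must account for the clustering of students within schools, here via a school-level random intercept. This is a minimal illustration only; the variable names (score, treat, school) are hypothetical, and the study’s actual estimation model, which also adjusted for baseline covariates and the blocking of random assignment within districts, is more elaborate.

```python
# Minimal sketch of impact estimation in a cluster randomized trial.
# Simulated data; names and specification are illustrative assumptions,
# not the study's actual analysis model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n_schools, n_per = 28, 100
school = np.repeat(np.arange(n_schools), n_per)          # student -> school index
treat = (school % 2 == 0).astype(int)                    # half the schools "implement CLC"
school_effect = rng.normal(0.0, 0.2, n_schools)[school]  # shared school-level variation
score = 0.06 * treat + school_effect + rng.normal(0.0, 1.0, school.size)

df = pd.DataFrame({"score": score, "treat": treat, "school": school})

# Random-intercept model: students (level 1) nested within schools (level 2).
model = smf.mixedlm("score ~ treat", data=df, groups=df["school"])
result = model.fit()
print(result.params["treat"])  # estimated program impact, in outcome units
```

The school-level random intercept matters because students in the same school share influences beyond the intervention; ignoring that clustering would overstate the precision of the impact estimate.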

In addition, the evaluation examined the implementation of the CLC framework in the CLC schools. This report presents findings on the degree to which schools assigned to implement CLC set up the structures and organizational processes needed to support implementation (referred to in this report as structural fidelity) and the degree to which the pedagogical practices emphasized in CLC-related professional development were apparent in the instruction of core content teachers in participating schools (referred to as instructional fidelity). The structures and instruction at CLC schools and non-CLC schools also were compared, to characterize the contrast between CLC implementation and business as usual.

The key impact and implementation findings discussed in this report are as follows:

Findings from primary impact analyses

  • There were no statistically significant differences in reading comprehension scores between CLC schools and non-CLC schools (effect size = 0.06 standard deviations for grade 9 students and 0.10 standard deviations for grade 10 students; see the note on effect sizes after this list). Therefore, it cannot be concluded that the CLC framework improved students’ reading comprehension scores in the second year of the study at either grade level.
  • CLC also did not have a statistically significant impact on students’ accumulation of core credits (as a percentage of the total needed to graduate) in the second year at either grade level. The impact estimate was negative for grade 9 students and positive for grade 10 students (effect sizes = −0.17 and 0.02 standard deviations, respectively).
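
A note on the effect sizes reported above: this summary does not spell out the standardizer, but in evaluations of this kind an effect size is typically a standardized mean difference, that is, the (regression-adjusted) difference between the CLC and non-CLC means divided by a standard deviation of the outcome, often the control-group or pooled standard deviation:

\[
\text{effect size} \;=\; \frac{\bar{Y}_{\text{CLC}} - \bar{Y}_{\text{non-CLC}}}{SD_{\text{outcome}}}
\]

Under this convention, the grade 9 reading comprehension estimate of 0.06 means the average score in CLC schools was six hundredths of a standard deviation above the non-CLC average, a difference that could not be distinguished from zero at conventional significance levels.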

Findings from secondary impact analyses

  • The estimated impacts of CLC on the primary outcomes (reading comprehension test scores and core credit accumulation) in the first year do not differ statistically from the second-year impacts. In the first year, CLC did not have a statistically significant impact on students’ reading comprehension (effect size = 0.13 standard deviations) or on their credit accumulation in core content classes (effect size = −0.04 standard deviations).
  • In terms of secondary outcomes, CLC had a statistically significant, positive impact on grade 9 students’ reading vocabulary in the first year of the study. However, CLC did not have a statistically significant impact on students’ grade point average (GPA) at either grade level or in either study year.
  • The analyses that examined whether the intervention had stronger effects for some subgroups of students (for example, groups defined by grade 8 reading proficiency, being overage for grade, or eligibility for special education services) suggest similar results across the various groups (a sketch of this type of subgroup test follows this list). The data also do not suggest that CLC was more effective in some school districts than in others.
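
As a rough illustration of how such subgroup comparisons are commonly tested (the study’s exact specification is not given in this summary), one can add a treatment-by-subgroup interaction to the impact model; an interaction estimate near zero is consistent with “similar results for the various groups.” The sketch below uses simulated data and hypothetical names.

```python
# Illustrative sketch of a subgroup (moderator) test; simulated data,
# hypothetical names, and not the study's actual specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

n_schools, n_per = 28, 100
school = np.repeat(np.arange(n_schools), n_per)
treat = (school % 2 == 0).astype(int)
subgroup = rng.binomial(1, 0.15, school.size)  # e.g., special education eligibility
# Simulate an impact of 0.06 with no differential effect for the subgroup.
score = 0.06 * treat + 0.0 * (treat * subgroup) + rng.normal(0.0, 1.0, school.size)

df = pd.DataFrame({"score": score, "treat": treat,
                   "subgroup": subgroup, "school": school})

# 'treat:subgroup' estimates how the impact differs for the subgroup;
# a small, nonsignificant coefficient suggests similar impacts across groups.
result = smf.mixedlm("score ~ treat * subgroup", data=df, groups=df["school"]).fit()
print(result.params[["treat", "treat:subgroup"]])
```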

Implementation findings

  • Of the 28 schools that participated in the evaluation for two years, 15 had been randomly assigned to implement the CLC framework. It was rare, however, for the CLC schools to establish all the structural components necessary for CLC. In the first year, 11 of these 15 schools implemented five or fewer of the nine structural components at an adequate level or better. Implementation of these components was somewhat less successful in the second year, when all 15 schools implemented five or fewer of the components at an adequate level or better.
  • The percentages of observed core content teachers who explicitly used CLC-specific content enhancement routines or learning strategies were 22 percent in year 1 and 11 percent in year 2. Although these rates are lower than the program developer intended, in both years they were double the rate of use among core content teachers in non-CLC schools.
  • Observations of core content teachers’ instruction in CLC schools during year 1 indicated that one of the three pedagogical practices emphasized in CLC professional development was present in instruction at a level the program developer considered “adequate.” In year 2, the scores for all three practices, averaged across the CLC schools, fell below the developer’s cut point for “adequate.”
  • Among teachers of the CLC-specific Fusion Reading course in CLC schools, the observed rate of use of CLC-specific content enhancement routines and strategies was 62 percent in both years of implementation.

Document Details

Publication type: Report
Date: December 2012
Citation: Corrin, William, James Lindsay, Marie-Andrée Somers, Nathan Myers, Coby Meyers, Christopher Condon, and Janell Smith. 2012. Evaluation of the Content Literacy Continuum. New York: MDRC.