Publications

Issue Focus

MDRC College Promise Success Initiative Benchmark Template

04/2019

What Is a Benchmark?

Benchmarks are outcome measures that a program or organization uses over a defined time period to measure success against a prespecified target. Their purpose is to aid in program management, providing a standard of success on which team members can agree, and with which all team members can compare their work. Setting a benchmark for a particular outcome (for example, the percentage of students enrolled in 12 or more credits in their first term of college) gives staff members a collective understanding of the program’s expectations. Within the time period defined for the benchmark (in this example, during the enrollment period for that term), the program team can check on the actual outcome in comparison to the benchmark. In this enrollment example, outreach efforts could be developed based on the results to encourage more students to enroll. System adjustments could also be considered as enrollment processes are monitored. After the time period has ended, the program team could use the comparison of the end result and the benchmark to guide discussions about what happened, what the successes were, and where improvements are needed.

What Are the Main Features of a Strong Benchmark?

A strong benchmark is specific, easily tracked (both within and after the time period), accessible (that is, it can be added to reports without too much work from data analysts or information technology staff members), and important to the program’s work. The example above — the percentage of students enrolled in 12 or more credits in their first term of college — meets these criteria: It is specific to an individual term for an entering class of students (or “cohort”), it is a statistic that is usually very easy to track in a college’s administrative data systems, it is generally included on reports that are widely accessible, and it is important to the program staff members and managers in the example.

Capturing Your Program’s Efforts with Multiple Benchmarks

It is a good idea to create a series of benchmarks that provide a complete picture of your program’s support efforts. Preferably, these benchmarks would capture multiple time periods, such as weeks, months, and terms. For example, a program might use a weekly benchmark of sending at least one communication to all students, a monthly benchmark of having an in-person interaction with 80 percent of all students, and a term benchmark of having 60 percent of students enrolled in 12 or more credits after the census date of their first term (the last day students can add or drop courses). Each of these benchmarks feeds into the next. The successful completion of the weekly benchmark increases the likelihood of success in the monthly benchmark, which then increases the likelihood of success in the term benchmark.
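For teams that track benchmarks programmatically rather than in a spreadsheet, the tiered weekly/monthly/term structure described above could be recorded as simple data. This is only an illustrative sketch; the field names and target values are invented for this example, not part of the template.

```python
# Hypothetical sketch: the tiered benchmarks described above, recorded as
# simple entries with a time period, a measure, and a target rate.
# All names and values are illustrative examples only.
benchmarks = [
    {"period": "weekly",  "measure": "students receiving at least one communication", "target": 1.00},
    {"period": "monthly", "measure": "students with an in-person interaction",        "target": 0.80},
    {"period": "term",    "measure": "students enrolled in 12+ credits at census",    "target": 0.60},
]

# Print the series from shortest to longest cadence, showing how each
# level of effort feeds into the next.
for b in benchmarks:
    print(f"{b['period']:>7}: {b['target']:.0%} of {b['measure']}")
```

Keeping the cadence explicit in the data makes it easy to filter the list for whichever check-in meeting is at hand (weekly, monthly, or end-of-term).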

This spreadsheet provides a starting point for thinking about how to create your program’s or organization’s benchmarks. There are four steps:

1. Start by thinking about the benchmarks that you would like to use. The spreadsheet is a template to be used by a hypothetical program providing support services to college students in their first year. The benchmark tab (labeled “2. Benchmark”) provides an overview of the benchmarks the program would like to include. The program wants to increase full-time enrollment and increase the percentage of students who complete their developmental education requirements in math and English within their first year. And although the program’s support for students ends after their first academic year, it is the program’s hope that it will also increase the percentage of students who reenroll for their second year. To accomplish all of these goals, the program intends to meet with students during every term in their first year, work with students to create initial comprehensive education plans, and coordinate three success activities with students (such as new-student orientations or a first-year-experience course). This example is hypothetical and your program should decide which outcomes are relevant to your work. For example, programs that last longer than one year could include benchmarks related to longer-term outcomes such as graduation.

2. Gather historical data for the selected benchmarks. Before you can set meaningful benchmarks, you need to know what it is feasible to track. The attached workbook includes a tab with a historical report for the selected benchmarks. Filling out this report before setting your benchmarks will answer two questions: (1) Are they easily tracked? and (2) What are their baseline values? Finding out how much work it takes to gather the data to fill in each historical number can help you determine whether a given outcome will make a suitable benchmark. If the answer is “too much work,” then either you need to collaborate with the appropriate parties at your organization to update the tracking and reporting process or you need to select an alternate benchmark that focuses on a related area of your work. Using inaccessible benchmarks that are not easily tracked will make it less likely that anyone in the organization will be actively monitoring progress toward those goals during the relevant time period, and active monitoring during the period (and the changes in program behavior that result) is essential to reaching or surpassing a benchmark. Often the benchmarks that are difficult to track are shorter-term items such as success activities or meetings with advisers. If these items are central components of your program model, this is the opportunity to create a data-tracking process to ensure that the whole team has access to the data that show your work.

New programs without any historical data should still try to collect some relevant data to inform their benchmark levels for the reasons listed above: to confirm how much effort it takes to collect those data and to establish baseline values. For example, a starting point might be to collect historical data for the larger college population or for the subgroup of students whom the program will serve. However, it is important to consider when establishing your benchmarks how your population will differ from these groups and how those differences might affect your program’s outcomes (see the next step).

3. Review the historical baselines and set your benchmarks. In the attached workbook, the data entered in the Historical Data tab are copied over to the Benchmark tab to provide a starting point for the process of setting your benchmarks. Working from your baselines, and knowing some facts about the direction of your program and the larger environment, you should be able to create a series of benchmarks for the coming year’s cohort. These benchmarks may be higher or lower than the outcomes reported in the historical data. The best approach is either to draft the benchmarks during a team meeting or to draft them beforehand and discuss them in a team meeting. Giving the entire team a chance to react to and adjust the proposed benchmarks will make them more likely to endorse those benchmarks, making the benchmarks more useful as a tool for program improvement.

4. Report on program outcomes and compare results with benchmarks. The third tab in the attached workbook (labeled “3. Current Report”) is a template of the actual outcomes of the program during the current time period. The fourth tab (labeled “4. Benchmarks vs. Outcomes”) combines the data from the Benchmark tab and the Current Report tab to compare the benchmarks with the outcomes. It includes automatic highlighting (using Excel’s Conditional Formatting feature) that indicates in green any benchmarks that are met or surpassed and in dark yellow any outcome measures that are falling short of their benchmarks. This tab could be used as a starting point in team meetings during the term to check in on the program’s progress and brainstorm ways to reach the benchmarks that have not yet been met. It could also be used after the term has ended for a wrap-up discussion about the program’s successes and areas that need improvement. Setting the benchmarks is only part of the process. Using them to spur program updates is vital to continuous improvement.
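The met-versus-short comparison that the workbook performs with conditional formatting can also be sketched in a few lines of code, for teams that pull their data from an administrative system instead of entering it by hand. The measure names and all numeric values below are invented examples, not figures from the template.

```python
# Hypothetical sketch of the check the "4. Benchmarks vs. Outcomes" tab
# performs: flag each measure as met (shown green in the workbook) or
# short (shown dark yellow). All values are invented for illustration.
benchmarks = {
    "full-time enrollment rate":       0.60,
    "dev-ed math completion rate":     0.50,
    "dev-ed English completion rate":  0.55,
}
outcomes = {
    "full-time enrollment rate":       0.63,
    "dev-ed math completion rate":     0.44,
    "dev-ed English completion rate":  0.55,
}

# A measure is "met" when the actual outcome reaches or exceeds its target.
status = {
    measure: "met" if outcomes[measure] >= target else "short"
    for measure, target in benchmarks.items()
}

# Report each measure with its gap above (+) or below (-) the target.
for measure, flag in status.items():
    gap = outcomes[measure] - benchmarks[measure]
    print(f"{measure}: {flag} ({gap:+.0%})")
```

A listing like this can serve the same purpose in a team meeting as the highlighted tab: the "short" rows are the starting point for brainstorming mid-term adjustments.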

The benchmark template linked above is formatted for a specific program model: a one-year college program with student service components. However, the process discussed above and the flow of the data through the workbook can be applied to nearly any program model, and the workbook can be edited as needed to match the model. For example, if your program offers support for more than one year, you can add columns to include additional semesters. The outcomes can be edited or changed as needed as well. The template includes a tab called “Additional Outcomes” that provides some additional examples of outcomes you might consider including. The key is to establish a strong combination of benchmarks that work over multiple time periods and present a full picture of your program’s support services and goals, and to use them in combination with actual data to monitor your program’s progress and make adjustments.

Download the MDRC College Promise Success Initiative Benchmark Template here.