Success factors: What is the role of research and evaluation?

Lead Authors: Bridget Timmeney and Denisa Gándara

Research and evaluation can help Promise stakeholders improve program implementation and determine whether program goals are being met.

Evaluation efforts need not be technical or expensive, and they can be carried out in a variety of ways, but their purpose is the same—to generate findings that can be used by stakeholders to make their program more effective. Research and evaluation can help stakeholders track progress toward goals, provide insights that lead to program improvements, and help build support for a program.

Policy Considerations

  • Promise stakeholders should plan for evaluation during the program design phase, and evaluators, whether internal or external, should be engaged early on.

  • Baseline data should be collected before a Promise program is announced so that pre- and post-program outcomes can be compared.

  • Consent forms for evaluation and research should be integrated into the program application process to facilitate data tracking without extra steps.

  • A dissemination strategy for evaluation findings should be developed, with different mechanisms for internal and external audiences.

What We Know

The Promise movement has given rise to a range of research and evaluation efforts that can help stakeholders understand whether programs are achieving their intended goals and build a base of knowledge about what works. Sometimes these efforts are carried out by external evaluators hired by Promise programs, sometimes they are carried out by Promise staff, and sometimes they are the products of independent researchers. Evaluation need not be costly and technical, or conducted by outside experts, but it should be an integral part of any Promise initiative from the beginning.

Research and evaluation take different forms depending on program type. Statewide Promise programs created by legislatures generally require state agencies to track progress and the use of resources. In Tennessee, for example, the comptroller’s office produces full evaluations every four years and annual updates.1 The state’s higher education commission also produces annual reports2 that track enrollment and other statistics.

Community college–based programs usually rely on their own institutional research or enrollment management personnel to assess the impact of their tuition-free initiatives. Some cross-institutional efforts also support the community college sector by tracking legislation and promoting best practices.3

Community-based programs have the most diverse array of evaluation efforts. Most carry out their own data tracking and may post a data dashboard;4 others go further, creating a formal evaluation plan, hiring outside evaluators,5 or partnering with academics,6 especially those at local universities, to conduct more formal evaluations.

Information generated through research and evaluation can inform an array of stakeholders, including program administrators and staff, funders, policymakers, and community partners. Such information can reveal the impact a program is having on its target population and generate insights to help improve program delivery. It also can be used to identify effective, high-quality practices that should be scaled up or replicated.

Evaluations also produce data that can help build support for a program. In addition to providing feedback on implementation and program rules, Promise evaluation results have been used to demonstrate student impacts, such as institutional enrollment increases and stronger student and family engagement in higher education. These findings have been leveraged to solicit funding from donors, to build support in the business sector for investing in sector pathways programs or hosting internships, and to garner political support at the state level.

Types of evaluations
Evaluations take different forms depending on their purpose. Some evaluation efforts provide feedback to program administrators, allowing them to improve programming or implementation efforts (these are sometimes known as process evaluations). Others assess the outcomes of a Promise program and may address issues such as who is being served, how students are progressing through higher education, and ultimately what impact the Promise program has on individuals and their communities (these are sometimes known as impact evaluations).

Not all evaluations shed light on the effects of a Promise program. To assess causal impact (whether the Promise program itself produced the changes observed), a comparison group or counterfactual is required to answer the question, “What would the situation be if this initiative had not occurred?” The gold standard in evaluation is a randomized controlled trial (RCT), in which outcomes for a randomly assigned control group are compared with those of program participants. RCTs are difficult in the Promise arena, where programs are designed to reach large cohorts of students; however, when resources are limited and Promise programs are being rolled out slowly (in a pilot phase or at a limited number of schools), randomization is a possibility. Evaluators have also used quasi-experimental strategies to assess the causal impact of Promise programs. Causal research designs can isolate cause and effect and thus support predictions about outcomes. Such rigorous approaches are not always needed, however, to produce useful feedback and demonstrate effectiveness. Sometimes it makes sense simply to track changes in the number of students served or the number of services delivered. Other times, interviews and focus groups can be useful in understanding how implementation is proceeding and how it can be improved.
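One widely used quasi-experimental strategy is difference-in-differences, which compares the change in an outcome for students covered by a Promise program with the change for a similar comparison group over the same period. The minimal sketch below illustrates the idea; the data, column names, and use of the pandas and statsmodels libraries are illustrative assumptions, not drawn from any actual Promise evaluation.

```python
# Minimal difference-in-differences sketch with hypothetical data.
# All values and column names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical enrollment rates for Promise and comparison districts,
# before (post = 0) and after (post = 1) the program's announcement.
data = pd.DataFrame({
    "enroll_rate": [0.52, 0.54, 0.51, 0.53, 0.55, 0.63, 0.52, 0.54],
    "promise":     [1, 1, 0, 0, 1, 1, 0, 0],  # 1 = Promise district
    "post":        [0, 0, 0, 0, 1, 1, 1, 1],  # 1 = after announcement
})

# The coefficient on the interaction term estimates the program's effect,
# assuming the two groups would otherwise have followed parallel trends.
model = smf.ols("enroll_rate ~ promise * post", data=data).fit()
print(f"Estimated program effect: {model.params['promise:post']:.3f}")
```

The estimate is only as credible as the parallel-trends assumption behind it, which is one reason collecting baseline data for both groups before a program is announced matters so much.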

Launching an evaluation
Evaluation is not something to bolt on at the end of an initiative to reveal how it performed. Rather, evaluation is a tool through which stakeholders can better understand their work and create, review, and modify interventions in real time to best meet program goals.

Ideally, planning for evaluation will begin during the design phase of a Promise program. Evaluators and researchers can assist stakeholders in identifying goals, metrics, and timelines and in establishing data collection procedures that are implemented from the start. (For example, because of federal privacy protections, students and families must consent to having their data used for evaluation purposes, and such consents are easiest to obtain if built into the Promise application process.) While stakeholders may benefit from consulting or contracting with a third-party evaluator or researcher outside the Promise organization, evaluation efforts can be carried out by program staff members themselves. Any evaluation effort will be most successful if stakeholders understand the value of tracking data and examining processes and outcomes, and if they buy into the evaluation process from the beginning.

Knowing your starting point is essential. Evaluation must reflect a shared understanding of program goals: What is the need the program is trying to meet, and how is the initiative expected to meet that need? Evaluators and program administrators must also understand the population they are serving: What kinds of interventions are likely to be successful in which contexts? The broader ecosystem should also inform goal setting: a scan of existing providers helps ensure that services (e.g., success coaching, mentoring, pathway supports) are not duplicated. Establishing a system for collecting baseline data is also helpful so that evaluators can conduct pre- and post-intervention analyses if needed.
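As a sketch of what baseline tracking might look like in practice, the snippet below records hypothetical indicator values before launch so that post-launch values can be compared against them. The indicator names, dates, and figures are invented for illustration, not drawn from any real Promise program.

```python
# A minimal sketch of baseline data tracking; indicator names and values
# are hypothetical, not from any real Promise program.
from dataclasses import dataclass
from datetime import date

@dataclass
class IndicatorRecord:
    name: str     # e.g., "college_enrollment_rate"
    value: float  # proportion between 0 and 1
    as_of: date
    cohort: str   # e.g., "Class of 2023"

# Baseline values captured before the program launch...
baseline = [
    IndicatorRecord("college_enrollment_rate", 0.55, date(2023, 6, 1), "Class of 2023"),
    IndicatorRecord("fafsa_completion_rate", 0.61, date(2023, 6, 1), "Class of 2023"),
]

# ...can later be set against post-launch values for the same indicators.
post_launch = [
    IndicatorRecord("college_enrollment_rate", 0.62, date(2025, 6, 1), "Class of 2025"),
    IndicatorRecord("fafsa_completion_rate", 0.66, date(2025, 6, 1), "Class of 2025"),
]

for before, after in zip(baseline, post_launch):
    change = after.value - before.value
    print(f"{before.name}: {before.value:.2f} -> {after.value:.2f} ({change:+.2f})")
```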

Recommended Reading

Iriti, J., & Miller-Adams, M. (2015). Promise monitoring and evaluation framework. W.E. Upjohn Institute for Employment Research. 

This tool, developed with support from Lumina Foundation, proposes a theory of change for how Promise programs affect outcomes in a variety of areas and suggests potential indicators for program stakeholders to track. Indicators span three spheres, including community and economic development. A downloadable list of indicators accompanies the framework.

For examples of evaluation studies, see the Promise research bibliography compiled by the Upjohn Institute. 

Case Study

Evaluations can be used to scale pilot programs into larger initiatives. Lake Michigan College launched its Promise program as a one-year pilot. The college then tracked data to assess the impact on enrollment, student financial aid, and the college’s bottom line. These findings became the basis for building support for a longer-term program.

Evaluations can be used to generate programmatic changes. In Pittsburgh, evaluators showed that the sliding scale rewarding long-term attachment to the school district disproportionately benefited middle-income students; low-income families with more frequent job and housing changes were losing out on the higher benefits related to long-term enrollment. As a result, the Pittsburgh Promise replaced its sliding scale with a four-year minimum (high school) enrollment requirement.

In Kalamazoo, data analysis showed that some students were not completing bachelor’s degrees within the program’s 130-credit limit, and that these students were disproportionately African American. To strengthen the racial equity impact of the program, stakeholders increased the maximum number of credits covered by the program from 130 to 145 (or a bachelor’s degree, whichever comes first).

Evaluations can be used to identify and catalyze system changes. The Detroit Promise contracted with a national evaluator, MDRC, to carry out an RCT of a program that provides coaching to Promise students at community colleges. Early positive results from the RCT led to the program’s expansion to all Detroit Promise community college students. MDRC has continued to evaluate the impact of these coaching supports and other components of the Detroit Promise Path on retention, progression, and completion.

Footnotes

  1. Tennessee Comptroller of the Treasury. (2020–2022). Tennessee Promise evaluation.

  2. Tennessee Higher Education Commission. (2021). Tennessee Promise annual report.

  3. WestEd. (n.d.). College Promise Project in California.

  4. Pittsburgh Promise. (n.d.). The impact dashboard.

  5. MDRC. (n.d.). Detroit Promise Path.

  6. Bell, E., & Gándara, D. (2021). Can free community college close racial disparities in postsecondary attainment? How Tulsa Achieves affects racially minoritized student outcomes. American Educational Research Journal, 58(6), 1142–1177.