Tennessee taps evidence-based budgeting for funding decisions
Using clearinghouse data, an evidence framework and internal analytic tools, the state's Office of Evidence and Impact evaluates programs to help decision-makers see whether more funding is warranted.
Evidence-based budgeting is driving decisions in Tennessee about increases and decreases to funding for state programs and services.
Evidence-based budgeting involves three steps, according to the Office of Evidence and Impact (OEI), which leads the effort that began in the spring of 2019. The first is creating a list of funded programs to study their outcomes against available research. The second is requiring agencies requesting additional funds or cuts to submit forms with data showing the program’s effectiveness. The third step is sharing data among departments.
“We work with our state departments to create inventories of the programs that comprise their base budgets,” OEI Director Christin Lotz said. “We do this using an evidence framework that we’ve established for the state of Tennessee, and as we create the list of the programs that they offer, we’re able to identify the level of evidence using this framework for each of those programs.”
OEI developed the Tennessee Evidence Framework to standardize the language all state agencies use to classify programs by the level of available evidence. The framework has five steps that build on one another. It starts with a logic model, or theory of action, and moves to outputs: process measures that indicate a program's impact over time. Third is outcomes, which describe change in participants; fourth is evaluation, which requires at least one rigorous evaluation with a comparison group; and last is causal evidence, which requires at least two rigorous evaluations with a comparison group.
OEI defines evidence as “a rigorous body of research that speaks to the efficacy of existing programs or proposed pilots in Tennessee.” Evaluations that qualify as evidence answer three questions: Are programs based on strong models in other states, promising theories of change or something else? Are desired outcomes positive, negative or neutral based on research? And how do existing programs compare to alternatives? Evaluations must be conducted using a systematic review, randomized controlled trial design or quasi-experimental design.
To find evidence that lets state agencies show information for proposed and existing programs alike, Tennessee uses two main resources: Pew Charitable Trusts’ National Results First Clearinghouse, which aggregates research from nine national clearinghouses, and the Washington State Institute for Public Policy (WSIPP).
“We rely a lot on clearinghouses currently, but we want to make sure that we’re providing the ability to do some of our own analysis, some of our own evaluation for Tennessee programs,” Lotz said. “We’re just about to kick off a pilot evaluation with one of our departments in partnership with one of our state universities … and we’re hoping to do more of that.”
OEI reviews the forms that agencies submit and provides reports to senior state leaders that outline each department’s request, including information about the program or service, the level of evidence and how it ties to the framework.
“If outcome or output data has been provided, we will have that information that is in a simple spreadsheet format provided so that when our decision-makers are reading that report, they can quickly see [it] at a glance,” Lotz said. That information essentially makes an evidence-based argument to grant the request for more or less funding.
For example, the Mental Health Services Program Inventory file shows that the Behavioral Health Safety Net, which provides services to uninsured residents, earned a rating of “promising,” which means it “has some research demonstrating effectiveness for at least one outcome of interest.” The document states that WSIPP was used as a source of evidence in arriving at outcomes, including decreased hospitalization and increased employment.
Agencies do not need additional technology to submit evidence-based budgeting forms, thanks to the state’s Enterprise Analytics project, which aims to centralize data analytics and facilitate information sharing.
“We will have some built-in tools that will make analytics much more accessible, especially to those that may not have a tremendous amount of internal capacity or the right technological tools to do their own analytics,” Lotz said. “While analytics certainly happens across state government, we’re looking to really build upon that and give our departments some really solid tools to be able to analyze not just their own data, but analyze that data in conjunction with data from other departments so that we can get that more complete picture of how our programs are working.”
OEI’s evidence-based budgeting work began with pilot inventory projects with the Mental Health and Substance Abuse Services and Correction departments. Since then, it has finished work with two more departments and has a third nearing completion. The results of work with the Department of Children’s Services will be online the week of Jan. 17, Lotz said.
Initially, OEI required a form only for requests for additional funding, but in 2020 it added a second form for proposed reductions, which became necessary amid pandemic-related budget shortfalls.
The Pew-MacArthur Results First Initiative, which works with states to implement evidence-based policymaking, was an impetus for OEI’s effort. Results for America, an organization helping governments use evidence and data, named Tennessee one of the top states in the country for using data to make decisions in both 2020 and 2021.
Stephanie Kanowitz is a freelance writer based in Northern Virginia.