What Works? Evidence and Evaluation Key as States and Localities Spend Aid
The federal government is urging states and localities to study the results of their American Rescue Plan Act spending and to adopt programs with proven track records.
As state and local governments get started spending billions in federal recovery dollars, public officials are making bold predictions about how far the money will go to help their communities with issues like homelessness, upgrading infrastructure and job training.
But to know how well these investments are paying off, it will be necessary to have systems in place to assess whether they’re meeting their goals. With this in mind, the federal government is pushing states and localities receiving American Rescue Plan Act funds to think about ways to evaluate the results of their spending, and also to pursue “evidence-based” programs.
“Evaluation lets us understand if something is working as intended, why and for who,” Diana Epstein, who leads a team focused on evidence-based policymaking at the White House Office of Management and Budget, said during an online seminar the Treasury Department held this week to discuss program evaluation and other issues tied to ARPA’s state and local aid funding.
Epstein emphasized that governments wanting to evaluate a program should plan for how to do so early on, before implementation is fully underway, to ensure they’re collecting the right data. She added that evaluation is not just an accountability tool; it’s also crucial for learning from policy initiatives and improving them.
That said, it can be daunting, particularly for smaller local governments, to design and carry out program evaluations, especially ones that adhere to the strongest research standards by allowing for comparisons between similar, randomly selected groups of people. Even meeting ARPA's reporting requirements promises to be a heavy lift for many jurisdictions.
Rohit Naimpally, a senior research and policy manager at the Abdul Latif Jameel Poverty Action Lab, described the point of evaluations as measuring what would have happened in the absence of a program. There are a number of methods that can be used to test this.
He noted that governments looking to conduct program evaluations have places they can turn to for help, including state management and budget offices, universities and academic groups like the California Policy Lab, and private firms like Mathematica.
Rather than starting from scratch with an untested initiative, communities may choose to adopt programs backed by strong evidence from past implementation and studies showing that they work. These are often referred to as "evidence-based interventions."
The main thing is that there’s evidence, rooted in research that meets certain standards, to show that the policy in question, not some other factor, caused a desired outcome or change.
The Pew Charitable Trusts Results First Initiative has a searchable online database with information on the effectiveness of social policy programs. Treasury flagged similar evidence-based policy clearinghouses in a June document.
"Finding and implementing evidence-based interventions that help improve the outcomes you're hoping to improve is easier said than done, but there's a lot of great resources available to you," said Sara Dube, director of the Results First Initiative.
The reporting guidelines for ARPA's $350 billion state and local aid program require that states, as well as cities and counties with more than 250,000 residents, complete a "recovery plan performance report." This is due Aug. 31, with annual updates due in July thereafter.
In their recovery plans, funding recipients are supposed to include dollar amounts that show how much they are spending on evidence-based policy interventions in certain categories. States and localities are exempt from the reporting requirement on evidence-based spending in cases where they’re conducting program evaluations.
Importantly, as Treasury explains in a document released on Aug. 9, while the department encourages using ARPA funding on evidence-based programs, there are no requirements to do so, and no targets set in the law or in guidance for this type of spending.
Also notable is that states and localities can use their funding allotments to pay for evaluation-related costs.
To illustrate how program evaluation can help to identify policies that work successfully and to build additional support for them, Naimpally highlighted the Becoming a Man program in Chicago, which is geared toward reducing violent crime rates among young people.
An evaluation of the program showed reductions in violent crime arrests and overall arrests among participants. Following the release of the results, he explained, the city invested more money in the program and it attracted support from then-President Barack Obama.
“This is sort of the long view, what an evaluation can do,” Naimpally said.
Bill Lucia is a senior editor for Route Fifty and is based in Olympia, Washington.