Performance Management vs. Data Analytics: An Interview With Mike Flowers
New York City’s first chief analytics officer: “Don’t lose the service delivery forest for the quantification tree.”
This article was originally published in Route Fifty’s recently released ebook, “Preparing for Tomorrow’s State and Local Government Workforce.”
During last year’s Code for America Summit in Oakland, California, Oliver Wise, the director of the Office of Performance and Accountability for the city of New Orleans, was part of a panel discussion about what it takes to implement a municipal data analytics program.
During the session, Wise noted a key difference between performance management and data analytics.
To Wise, performance management is about “ratcheting up tension” and implementing a data analytics program is about “ratcheting down tension.” Those two intertwined forces are shaping the ways public-sector managers are examining the effectiveness of their operations.
More and more, smart governments are re-examining their performance management structures and letting data analytics take on more of a starring role when it comes to operations.
During a recent interview with Mike Flowers, New York City’s first chief analytics officer under then-Mayor Michael Bloomberg, I wanted to explore Wise’s observation about the natural tensions that come with performance management and how data analytics can help ease them.
Here is part of my discussion with Flowers, who is now chief analytics officer for New York City-based startup Enigma.io and a senior fellow at What Works Cities, an initiative funded by Bloomberg Philanthropies that is helping city governments better leverage their data resources and implement data-driven decision-making processes.
Mike Flowers: I actually hated the performance management people, for what it’s worth. Who loves their auditor, you know? “Yeah, hello IRS man, I am so happy to see you crawl through my checkbook.”
The reason I did not like them was not that they were scrutinizing me but rather that I thought their metrics were ham-fisted and led to these perverse outcomes.
In the early days, I get why it was the way it was, for what it’s worth. The reality is that when all you’re measuring is volume, which is how all these things started to a certain degree—How much time did it take you? How many widgets did you deliver? How many customers did you see?—it’s a really bad stand-in for whether or not the agency is doing what it’s supposed to be doing.
I’ve been bumping into performance management programs—I’ve been in public service for 25 years—. . . I’ve been seeing that stuff since the ’80s and it seems like some McKinsey guy came into government and just started giving them crap that somebody adopted. And they never revisited these metrics.
These metrics became sacrosanct. And God forbid you change the metrics, because that means you’re cheating, right?
And I thought what they ended up doing, and this was confirmed once I started getting into the analytics work, was incentivizing against fixing the problem.
As an example, if a key performance indicator for a given agency is how many inspections they did in a given time period and there’s a program that comes along and says, “OK, we’re going to do smarter inspections. We’re going to actually care more about the outcomes from those inspections and not the inspections themselves,” then there’s going to be pushback at the agency because their key performance indicator is about volume and not quality.
I actually saw that happen a couple of times. But rather than join them in fury, . . . I said: “OK, let’s add something to this. That’s an important piece of the puzzle: How much are you doing? And that certainly speaks to one thing. But we’re going to add a new one and it’s going to be equally key: And that’s going to be, what are we getting for this?”
To Oliver’s point about how it ratchets up tension, it ratchets up tension just like that, right?
You have this: “We’re now looking at you. We’re now measuring you.”
It’s a classic Bloomberg statement: “If you can’t measure it, you can’t manage it.” Now they’re all being measured, which creates pressure.
Then, to really manage it, that means you have to make sure you are measuring the right thing and those metrics need to be constantly looked at and perhaps tweaked and revised to make sure that the end game is not lost.
Don’t lose the service delivery forest for the quantification tree, if that makes sense.
So what I’ve seen now happen is that people are starting to look at those metrics in the first iteration of performance management and say: “OK, let’s think about what we’re trying to measure here against what our goals are as an organization, both statutorily and politically.” I think that’s great.
For what it’s worth, I don’t think analytics could really have happened but for that pressure because the very fact that these guys that I was dealing with were so upset about their KPIs gave me the opportunity to come in and help release that pressure.
I could actually be the good guy, which was great. It was very helpful to me, frankly, on a cultural level, to be able to go into an agency and say: “We’re going to take care of this. We’re going to surface the fact that these metrics aren’t in fact what they should be, we’re going to add a few and we’re going to champion your cause with the people that allocate your budget line so that they realize that the city, at the end of the day, is in fact in a better place by allocating its resources to your agency.”
Michael Grass is Executive Editor of Government Executive’s Route Fifty.