Best practices for performance metrics
When possible, government agencies should report a variety of critical measures -- including both leading and lagging measures -- to present a balanced view of the facts.
Are government agencies using the best measures to guide policy? In many cases, no. The world is changing, and challenges like international trade and cybercrime are dynamic. Often, before new policies or legislation can be adopted, risks, threats and technology have evolved.
Often, government agencies use historical performance data with no predictive power, also known as lagging indicators. Peter Drucker, often considered the founder of modern management, compared this to flying a plane while looking backwards. By continually looking at the past, an agency may be unable to see what lies ahead.
So why do government organizations use these lagging indicators? Because they are available, seem accurate and can be compared year-over-year. This is often called the “streetlight effect,” after the old joke about a man searching for a lost quarter under a streetlight. He searches there because the light is better, even though he dropped the coin two blocks away.
Like everyone else, government agencies tend to use data they have, even when it is a poor choice. Once Congress or the public accepts the use of this data, the streetlight effect becomes entrenched. Everyone agrees to look at the wrong measures, while wondering why it is not working out very well.
Best practices benchmarking
In an effort to promote better understanding of analytics, Lone Star Analysis led a three-year international benchmarking effort on best practices. Participants included corporations, academia and government entities. Here are some of the findings that apply to performance metrics:
The best practitioners (in both the private and public sector) constantly sought to adjust their metrics and methods. Even when it was necessary to preserve historical methods for purposes of comparison, high-performing organizations ran parallel efforts. They did not use the need to generate historical comparisons as an excuse to defend the status quo.
The best practitioners found ways to submit to peer reviews. Government agencies seeking peer reviews face several barriers. Data is often very sensitive, and the topics are abstract with few potential peers to conduct reviews. Additionally, large-scale data and analytics can be expensive to review, requiring staff and computing platforms. Finally, those most motivated to do peer reviews may be biased or least qualified. Despite these barriers, the best practitioners found a way to include other voices to check their work.
Accountability and transparency were the strongest predictors of good analytic practice. The benchmarking showed that all the worst practitioners in government lacked accountability for their actions and metrics. Some went to great lengths to ensure their work was not transparent and to maintain a monopoly on their projections.
Balanced scorecards
Balanced scorecards have proved to be the best way to approach metrics. This idea seeks to use measures that suggest future performance, or leading measures, in addition to lagging metrics. Since no single number can inform wise action and too many are confusing, executives should limit the number of measures on the scorecard to between four and six. A few good measures used to report to stakeholders promote a useful dialogue, shared understanding and good government.
A few rules help define good metrics:
- They should have clear definitions, based on transparent calculations.
- Important measures, even if imprecise, should be preferred over precise but questionable metrics.
- Static measures are prone to fail over time because society and threats are dynamic. In many cases, trends in the metrics are likely to be as important as absolute measures.
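The rules above can be expressed as a simple sketch: each measure carries a transparent definition, the scorecard enforces a limit of four to six measures mixing leading and lagging types, and trends are reported alongside absolute values. This is an illustrative example only; the class names and the sample metrics are hypothetical, not any agency's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One scorecard measure with a clear, published definition."""
    name: str
    kind: str        # "leading" or "lagging"
    definition: str  # plain-language description of the calculation
    history: list = field(default_factory=list)  # periodic values, oldest first

    def trend(self):
        """Change over the last two periods; trends can matter as much as levels."""
        if len(self.history) < 2:
            return 0.0
        return self.history[-1] - self.history[-2]

class BalancedScorecard:
    """A small, balanced set of measures: four to six, leading and lagging."""
    MIN_METRICS, MAX_METRICS = 4, 6

    def __init__(self, metrics):
        if not (self.MIN_METRICS <= len(metrics) <= self.MAX_METRICS):
            raise ValueError("keep the scorecard to four to six measures")
        if {m.kind for m in metrics} != {"leading", "lagging"}:
            raise ValueError("include both leading and lagging measures")
        self.metrics = metrics

    def report(self):
        """Latest value and trend per measure, for stakeholder dialogue."""
        return {m.name: {"kind": m.kind,
                         "latest": m.history[-1],
                         "trend": m.trend()}
                for m in self.metrics}
```

For example, a permitting office might pair lagging measures (permit backlog, average processing days) with leading ones (applications filed, inspector training hours); the scorecard rejects a set that is too small, too large or all of one type.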
When possible, government agencies should report a variety of critical measures -- including both leading and lagging measures -- for major policy topics to present a balanced view of the facts. Agencies should actively request feedback on metrics through interested parties or peer reviews.
Additionally, agencies should explore new metrics and analytics. Alternative measures should be generated and understood for possible adoption. In some cases, analytics professionals can help agencies improve their metrics by assisting with the creation of a balanced scorecard.
Leading government organizations are adopting both predictive and prescriptive analytics. The same partners who provide these solutions are good candidates for helping with management metrics.