Slow approach to law enforcement AI could combat errors and bias, experts say
Several police agencies across the U.S. are tapping AI to help draft police reports, but a new white paper from the American Civil Liberties Union points to the risks of its rapid adoption.
Several law enforcement agencies across the U.S. have started the new year by experimenting with artificial intelligence tools. Local police departments in California, Maine, Illinois and other states are leveraging AI to draft police reports, a move proponents say will speed up the process and reduce the administrative burden on officers so they can focus on higher-priority tasks.
The use of AI in police work has grown in recent years as agencies have increasingly adopted tech solutions for use cases like facial recognition, gunshot detection and analyzing body-camera footage. Drafting police reports has emerged as another task for which understaffed, under-resourced departments are looking to AI for assistance.
But while the technology has the potential to help law enforcement find and solve more crimes faster, a recent white paper from the American Civil Liberties Union underscores the risks of relying on AI to inform police narratives.
“Police reports, which are often the only official account of what took place during a particular incident, are central to the criminal proceedings that determine people’s innocence, guilt and punishment,” wrote Jay Stanley, senior policy analyst at the ACLU.
That’s why law enforcement agencies should consider a slower approach to AI tools like large language models, Stanley told Route Fifty in an interview, as such systems can, for instance, produce errors or fabricated content, a phenomenon known as AI hallucination.
In Washington state, the King County Prosecuting Attorney’s Office issued a memo late last year instructing Seattle-area police agencies not to use AI to draft reports, citing concerns that the tools could introduce errors into police narratives and negatively impact investigations.
“In one example we have seen, an otherwise excellent report included a reference to an officer who was not even at the scene,” wrote Daniel J. Clark, chief deputy of the office’s Mainstream Criminal Division, in the memo. A police report with false or otherwise inaccurate information, he said, “will be devastating for the case, the community and the officer.”
The ACLU white paper also highlights the risk of bias creeping into AI-enabled police reports: “because LLMs are trained on something close to the entire Internet, they inevitably absorb the racism, sexism and other biases that permeate our culture.”
Another risk of relying on AI to expedite report writing is that it could diminish the “conscious effort” officers put into thoroughly recounting incidents and reviewing their reports, Stanley said.
For instance, “the act of writing reports functions as a form of internal mental discipline for police that continually reminds them of limits on their power,” the white paper states. “A shift to AI-drafted police reports would sweep away these important internal roles that reports play within police departments and within the minds of officers.”
Some AI tools include mechanisms that nudge officers to manually review and edit content. Draft One, a tool launched last year by Axon and used by many law enforcement agencies to supplement report writing, can be configured to insert nonsensical statements into drafts as a way to determine whether officers are adequately checking AI-generated text before submitting reports.
However, Rick Smith, the founder and CEO of Axon, said during a webinar announcing the product that “we’re generally getting not-great feedback I would say on that — most agencies are saying no, they wouldn’t use it.”
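The concept behind that safeguard is simple enough to prototype. Below is a minimal sketch, in Python, of how a canary-sentence check might work in the abstract; the function names, sample sentences and random-placement logic are assumptions for illustration, not Axon's actual implementation.

```python
import secrets

# Hypothetical sketch of a "canary sentence" check: inject an obviously
# false statement into an AI draft, then confirm the officer removed it
# before the report is filed. All names here are illustrative assumptions.

CANARY_TEMPLATES = [
    "The suspect fled the scene riding a giraffe.",
    "Officers recovered a talking parrot as evidence.",
]

def insert_canary(draft: str) -> tuple[str, str]:
    """Insert a randomly chosen nonsensical sentence into an AI draft.

    Returns the modified draft and the canary so it can be checked later.
    """
    canary = secrets.choice(CANARY_TEMPLATES)
    paragraphs = draft.split("\n\n")
    # Place the canary after a random paragraph so its location is unpredictable.
    position = secrets.randbelow(len(paragraphs)) + 1
    paragraphs.insert(position, canary)
    return "\n\n".join(paragraphs), canary

def passed_review(submitted: str, canary: str) -> bool:
    """A report passes only if the officer deleted the canary while editing."""
    return canary not in submitted

# Usage: an attentive officer deletes the nonsense line before filing.
draft, canary = insert_canary(
    "Unit 12 responded to a noise complaint.\n\nThe resident was advised."
)
edited = draft.replace(canary, "")
assert passed_review(edited, canary)
```

The design relies on the canary being unmistakably false, so that a report containing it proves the officer never read that passage, while a report without it offers at least weak evidence of human review.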
There is potential for AI and other innovative tech solutions to increase efficiencies among police departments, said Ash Johnson, senior policy manager at the Information Technology and Innovation Foundation, in an email to Route Fifty.
But any new technology should undergo rigorous testing, such as an initial pilot period, before full implementation to ensure “it works as intended and is more effective and cost efficient than the previous method,” she said. Regular audits of an AI tool, and limiting it to use cases “within the developer’s parameters [and] specifications,” can help make sure this remains the case, she added.
Stanley also highlighted the importance of vetting tech tools, such as confirming their data privacy and security terms, before rushing into adopting novel technologies that are “untested and poorly understood.”
While budgetary and staffing shortages may drive agencies to tap into tech to make up for workflow gaps, Stanley said law enforcement officers “have an obligation to … not do their work in ways that create injustices or other wrongdoings.”