A new take on RoboCop? Georgia lawmakers look into ways AI can improve public safety
This story was originally published by Georgia Recorder.
One of the most fundamental roles of any government is ensuring the safety of its citizens, whether from others who would do them harm, from disasters like fires and floods, or from injuries and sickness.
But with artificial intelligence rapidly advancing, public safety jobs are changing quickly, sometimes in a seriously sci-fi kind of way – picture a team of firefighters searching for survivors in a smoke-filled warehouse with up-to-the-second information beamed directly onto their visors like in a first-person video game.
“I could see firefighters going through a building, having some type of visual display, being able to understand based on pre-planned information about where certain hazardous materials are stored in a facility, or seeing real-time temperature information in a fire that would help make them respond better,” said John Chiaramonte, president of consulting services at Mission Critical Partners, a Pennsylvania-based consulting firm, during a recent joint meeting of the Georgia House and Senate committees on artificial intelligence.
AI can also be used for less flashy but equally vital tasks like helping police officers fill out paperwork more quickly and get back out on patrol or determining the best spots to place ambulances based on where emergencies are likely to happen.
“I see AI as a tool that holds great promise for 911 and emergency responders, augmenting, enhancing their abilities, allowing them really to focus in on the most important tasks at hand, allowing them to make better and more informed decisions,” Chiaramonte said.
In some places around the country, calling 911 could connect you with an AI operator whose job is to weed out non-emergency calls like complaints about neighbors or questions for municipal agencies, piping those calls to the appropriate numbers while routing genuine emergencies to human call takers.
That can be a big help in areas dealing with a shortage of trained dispatchers and too many residents with quick 911 trigger fingers, and it can mean faster response times, said Brad Dispensa, a security specialist at Amazon Web Services, which provides AI services to emergency call centers.
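How such triage might work is easiest to see in miniature. The sketch below is a purely hypothetical, keyword-based illustration of the routing idea, not any vendor's actual system; the categories, keywords and transfer destinations are invented for illustration.

```python
# A minimal sketch of AI-assisted 911 call triage, assuming calls arrive as
# transcribed text. The keywords, categories and transfer destinations here
# are hypothetical illustrations, not any vendor's actual system.

EMERGENCY_TERMS = {"fire", "gun", "bleeding", "unconscious", "crash", "help"}
NON_EMERGENCY_ROUTES = {
    "noise": "311 city services",      # hypothetical routing table
    "pothole": "311 city services",
    "permit": "municipal clerk",
}

def triage(transcript: str) -> str:
    """Route a transcribed call: emergencies go to a human dispatcher,
    everything else is piped to the appropriate non-emergency line."""
    words = set(transcript.lower().split())
    if words & EMERGENCY_TERMS:
        return "human dispatcher"       # never automate the real emergencies
    for topic, destination in NON_EMERGENCY_ROUTES.items():
        if topic in words:
            return destination
    return "human dispatcher"           # when unsure, default to a person

if __name__ == "__main__":
    print(triage("My neighbor's music is too loud, it's a noise complaint"))
    print(triage("There's a fire in my kitchen, please help"))
```

Note the two defaults: anything that looks like an emergency, and anything the system cannot classify, both land with a human. That conservative fallback is the design choice experts like Chiaramonte keep returning to.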
“I think most citizens can appreciate that calling in to call centers can be challenging,” he said. “In one case, in the county of Los Angeles, they were looking at a peak hold time of between 40 and 50 minutes, and that’s something that they worked with AWS closely on and trying to reduce that burden to their citizenry.”
Dispensa said implementing that technology and others helped reduce hold times to less than four minutes on average.
LA County went a step further, implementing a program within its public defender’s office to use AI to help manage its caseload.
“They’re getting case data from up to, I think it’s 99 different possible law enforcement agencies or folks that have the ability to run into crime,” Dispensa said. “And the challenge with that is that the public defenders have to basically take all of those different sources of data and they have to do manual data entry to take all those sources of information, combine them, search for information that could help provide their client with possibly significant capability to address a legal case against them.”
The technology automatically pulls out data relevant to a particular client and compiles it into a file for the public defender to use, which allows them to focus on defending their client rather than spending time searching through multiple paper and PDF files, Dispensa said.
Dispensa said the system includes a failsafe allowing users to check the source of all the data it produces, which is meant to reduce the risk of a lawyer inadvertently presenting bogus material in court.
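In outline, that failsafe amounts to never separating a piece of evidence from its citation. The sketch below illustrates the idea under that assumption; the record format and matching logic are invented for illustration and do not reflect the actual LA County system.

```python
# A minimal sketch of the source-attribution failsafe described above: every
# snippet compiled into the defender's case file carries a pointer back to
# the document it came from, so nothing unverifiable reaches the courtroom.
# The record format and matching logic are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source: str   # e.g., the originating agency report and page

def compile_case_file(client: str, records: list[Snippet]) -> list[Snippet]:
    """Pull out only the records that mention the client, keeping each
    snippet paired with its source citation."""
    return [s for s in records if client.lower() in s.text.lower()]

records = [
    Snippet("Arrest report names J. Doe at the scene.", "PD report #4411, p. 2"),
    Snippet("Unrelated traffic stop.", "Sheriff log #90, p. 7"),
]

for snippet in compile_case_file("J. Doe", records):
    # A lawyer can trace every line back to its origin before citing it.
    print(f"{snippet.text}  [source: {snippet.source}]")
```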
Georgia lawmakers could consider policies to encourage AI development in public safety and other realms during next year’s legislative session, set to begin Jan. 13. The House and Senate AI committees are scheduled to present recommendations on AI-based legislation early next month.
Some worry about the entry of AI into life-or-death matters. One potential risk has to do with the computer science principle of GIGO – garbage in, garbage out. If the material fed into an AI system contains errors or bias, its output will reproduce those errors and biases.
In a worst-case scenario, that could mean a system does not recognize a 911 caller is in peril and fails to route the call properly, or, as a group of U.S. members of Congress expressed in a January letter to Attorney General Merrick Garland, that systems designed to serve the public without bias could instead amplify that bias.
“Predictive policing systems rely on historical data distorted by falsified crime reports and disproportionate arrests of people of color,” the letter writers say. “As a result, they are prone to over-predicting crime rates in Black and Latino neighborhoods while under-predicting crime in white neighborhoods. The continued use of such systems creates a dangerous feedback loop: biased predictions are used to justify disproportionate stops and arrests in minority neighborhoods, which further biases statistics on where crimes are happening.”
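The dynamic the letter describes can be reproduced in a few lines of arithmetic. The toy simulation below assumes two neighborhoods with identical true crime rates but a distorted starting record, and assumes patrols are over-concentrated wherever recorded crime is higher (modeled here by squaring the recorded shares). All numbers are invented; this is an illustration of the feedback loop, not a model of any deployed system.

```python
# A toy simulation of the feedback loop described in the letter. Two
# neighborhoods have identical true crime rates, but the historical record
# starts out biased. Patrols are over-concentrated where the record is
# highest (modeled, as an assumption, by squaring the recorded shares), and
# crime is only recorded where patrols go. All numbers are invented.

TRUE_RATE = {"A": 10.0, "B": 10.0}   # identical underlying crime
recorded = {"A": 12.0, "B": 8.0}     # distorted starting statistics

for year in range(1, 6):
    # Concentrate patrols in the neighborhood with more recorded crime.
    weights = {n: recorded[n] ** 2 for n in recorded}
    total = sum(weights.values())
    patrols = {n: weights[n] / total for n in recorded}
    # Crime is only recorded where officers are present to record it.
    recorded = {n: TRUE_RATE[n] * patrols[n] * 2 for n in recorded}
    print(f"year {year}: " + ", ".join(f"{n}={recorded[n]:.1f}" for n in sorted(recorded)))
```

Run it and the recorded gap between the two identical neighborhoods widens every year, which is the "further biases statistics" half of the loop the lawmakers warned about.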
Others argue that in sensitive areas like medical or legal advice, or talking someone through an emergency, human qualities like empathy, understanding and genuine connection are immeasurably important, and overreliance on AI could cut them out of the equation, eroding public trust.
“The top priority for the policy must be ensuring that public trust is central to any AI development,” Chiaramonte said. “In most cases, this will mean that AI systems will support and not replace human decision-making.
“We don’t want to replace the human,” he added. “We want to ensure that AI can certainly flag these issues, they could identify, suggest solutions perhaps, but the human really needs to make that final decision. And, yeah, that probably might slow down some processes, but that’s ultimately the best way to guarantee that the public is going to be protected from the risks of unchecked automation.”
Speaking to the joint meeting of Georgia’s two legislative AI committees last week, Amazon lobbyist Maria Saab said in the absence of federal policies, states are taking the lead on AI-related regulations.
She said the technology should be used responsibly, but urged lawmakers to embrace AI.
“With anything, whether it’s artificial intelligence or machine learning, the best way to ensure better outcomes is to actually use the technology,” she said. “Like humans, practice makes perfect, and we need to enable the technology used for it to get smarter. And then with machine learning, it is really about the ability for using large data sets to create patterns. And so one thing in my work supporting policymakers and governments is helping them to embrace use, and even in small cases, to learn and progress on larger enablement.”
Georgia Recorder is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Georgia Recorder maintains editorial independence. Contact Editor John McCosh for questions: info@georgiarecorder.com. Follow Georgia Recorder on Facebook and X.