AI on the line: Monitoring prisoners’ phone calls for criminal intent
Speech-recognition technology, semantic analytics and machine learning can flag, in near real time, phone calls containing conversations that point to violence or criminal behavior.
Some correctional agencies are turning to artificial intelligence to monitor inmates’ phone calls for signs of violence.
Phone calls to and from inmates are routinely recorded and monitored, but some companies are using AI speech-recognition technology, semantic analytics and machine learning to flag, in near real time, calls containing conversations that indicate violence or criminal behavior.
LEO Technologies, a firm that offers AI services to U.S. prisons and jails, uses cloud-based natural language processing to build a customized lexicon of keywords, code words and local slang. Its software identifies discussions between inmates and their outside conversation partners about weapons, contraband, threats to inmates, gangs, homicides, assaults or suicide.
LEO investigators notify law enforcement when the system picks up suspicious language or phrases that signal criminal intent, enabling officers to take action before a problem erupts.
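LEO's actual lexicons and scoring logic are proprietary, but the general technique the company describes — matching call transcripts against a curated list of keywords, code words and slang, then routing hits to a human investigator — can be illustrated with a minimal sketch. The categories, terms and function names below are hypothetical examples, not the company's real lists or code.

```python
# Illustrative sketch only: a minimal lexicon-based flagger for call transcripts.
# The lexicon terms, categories and helper names are hypothetical examples,
# not LEO Technologies' actual keyword lists or scoring logic.

from dataclasses import dataclass

# Hypothetical lexicon mapping categories of concern to example keywords and slang.
LEXICON = {
    "weapons": {"gun", "piece", "shank"},
    "contraband": {"smuggle", "drop", "package"},
    "threats": {"get him", "take care of", "hit"},
}

@dataclass
class Flag:
    category: str
    term: str
    snippet: str

def flag_transcript(transcript: str, window: int = 40) -> list[Flag]:
    """Scan a call transcript and return lexicon matches, each with a short
    snippet of surrounding context for a human reviewer."""
    text = transcript.lower()
    flags = []
    for category, terms in LEXICON.items():
        for term in terms:
            start = text.find(term)
            while start != -1:
                snippet = transcript[max(0, start - window): start + len(term) + window]
                flags.append(Flag(category, term, snippet.strip()))
                start = text.find(term, start + 1)
    return flags

# Any matches would be routed to an investigator for review, not acted on automatically.
for f in flag_transcript("He said he'd drop the package by the east fence tomorrow."):
    print(f.category, "->", f.term, "|", f.snippet)
```

In practice a production system would work on streaming speech-to-text output and weigh context rather than raw keyword hits, but the flag-then-review workflow is the same idea the company describes.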
The company recently signed a contract with the Georgia Department of Corrections for its Verus phone monitoring and transcription service, which is hosted on the Amazon Web Services platform.
Verus supports “non-biased phone call analysis and transcription, enabling keyword-based searches and alerts,” company officials said. It uses AWS Translation so corrections officers can easily toggle between Spanish transcripts and English translations.
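The article does not describe how that toggle is implemented, but the underlying AWS pattern — sending a Spanish transcript to a translation service such as Amazon Translate and keeping both versions available for review — can be sketched with boto3. The region, sample text and function name below are illustrative assumptions, not details of the Verus product.

```python
# Illustrative sketch only: translating a Spanish call transcript to English
# with Amazon Translate via boto3. The region, function name and sample text
# are assumptions for demonstration, not details of the Verus product.

import boto3

translate = boto3.client("translate", region_name="us-east-1")

def english_version(spanish_transcript: str) -> str:
    """Return an English translation of a Spanish-language call transcript."""
    response = translate.translate_text(
        Text=spanish_transcript,
        SourceLanguageCode="es",
        TargetLanguageCode="en",
    )
    return response["TranslatedText"]

# A review interface could store both strings and let officers toggle between them.
spanish = "Dijo que dejaría el paquete cerca de la cerca este mañana."
print(english_version(spanish))
```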
“Investigators leverage the information Verus collects and help prison systems shut down criminal activity that threatens inmates, staff, and surrounding communities,” LEO CEO Scott Kernan said in announcing the contract with Georgia.
A House panel recently asked the Justice Department for a report on the use of AI to monitor prisoners’ calls, with an eye toward deploying the technology in the federal prison system.
Bill Partridge, chief of police in Oxford, Ala., told Reuters that local law enforcement officers were able to solve cold homicide cases after the system flagged prisoners talking on the phone about committing the murders. The Verus technology also helped prevent suicides, he said. “I think if the federal government starts using it, they’re going to prevent a lot of inmate deaths,” he said.
However, inmates, their families and advocates say “relying on AI to interpret communications opens up the system to mistakes, misunderstandings and racial bias,” according to the Aug. 9 Reuters article.
The advocacy group Surveillance Technology Oversight Project reported last year that Securus Technologies’ AI platform, used by the New York State Department of Corrections and Community Supervision, had the potential to automate racial profiling.
Reuters also pointed to a 2020 paper by researchers at Stanford University and Georgetown University evaluating five leading speech-to-text systems. The researchers found that the “technology that transcribes voice conversations is flawed and has a particularly high error rate when applied to the voices of Black people.” The article added that the Sentencing Project has found that Black men in the United States are six times likelier to be imprisoned than white men.
Several state and local correctional facilities, including in Alabama and Georgia, already use the technology.