Chattanooga taps AI to identify hate speech
To better understand the factors behind the spread of violent extremism and intolerance, the city is using Hatebase's natural language processing tool to analyze reports of inappropriate speech and gauge the likelihood that a term was used in a hateful context.
To get a better handle on racially insensitive and bias-motivated slurs spoken or written in the region, the city of Chattanooga has implemented an online form that residents can use to anonymously report hate speech that they hear or see.
The form is a component of Chattanooga Mayor Andy Berke’s Council Against Hate, an initiative launched last April to better understand and act on factors that often precede violent extremism and intolerance in the city. Examples of hate speech that might be reported via the form include graffiti, vandalism and derogatory comments spoken or overheard.
“The world of people being radicalized into violent movements is incredibly tenacious and clever and fluid,” said Kerry Hayes, Berke’s deputy chief of staff. “We as a small city don’t begin to have the resources to necessarily even understand it, and so this to us is one sliver of the larger pie to make sure we’re doing everything we can to wrap our heads around the scale and scope of the problem as it exists in Hamilton County.”
The form is a widget from Hatebase, a company whose algorithms analyze public conversations using a repository of more than 3,600 terms in about 100 languages. Hatebase provided a single line of code that the city added to its website to create a form with four fields asking residents what term they saw or heard, whether it was used in a personal attack, how they would define the term and what language it was in. Reports are used for informational purposes only and are not routed to the Chattanooga Police Department, according to the city's website.
The data is timestamped, geotagged to Chattanooga, automatically added to the company's database of hate-speech words from more than 200 countries and immediately available to all Hatebase users through an application programming interface. For instance, Chattanooga officials can retrieve the raw data by authenticating to the API and specifying the type of information they want: hurtful vocabulary, or all sightings of one or more terms in a given area and time frame.
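As a rough illustration of that workflow, the Python sketch below pulls sightings from a Hatebase-style REST API. The endpoint paths, parameter names and response structure here are assumptions made for illustration, not Hatebase's documented interface.

```python
# Hypothetical sketch of querying a Hatebase-style API for sightings.
# Endpoint paths, parameters and response fields are assumptions for
# illustration only; consult Hatebase's own API documentation.
import requests

API_ROOT = "https://api.hatebase.org"  # base URL; version path omitted

def authenticate(api_key: str) -> str:
    """Exchange an API key for a session token (assumed auth flow)."""
    resp = requests.post(f"{API_ROOT}/authenticate", data={"api_key": api_key})
    resp.raise_for_status()
    return resp.json()["result"]["token"]

def get_sightings(token: str, country: str, year: str) -> list:
    """Fetch timestamped, geotagged sightings for a given area and time frame."""
    resp = requests.post(
        f"{API_ROOT}/get_sightings",
        data={"token": token, "country": country, "year": year},
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    token = authenticate("YOUR_API_KEY")
    for sighting in get_sightings(token, country="US", year="2019"):
        print(sighting)
```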
Hatebase features a natural language processing engine called Hatebrain that examines a chunk of content containing a flagged term and determines whether the term was in fact used in a hateful context. Hatebrain runs sets of rules against the language. For instance, it checks whether someone used the term in a clinical way, such as in a speech about solving hate crimes.
“We look for these things we call pilot fish,” Hatebase CEO and Founder Timothy Quinn said. “Pilot fish are the little fish that swim alongside sharks. These are things that are not hate speech but tend to accompany hate speech…. Vulgarity is one of them. If we see that, it increases the odds that something is being used in a hateful context.”
It also looks for targeting language such as “he’s a” or “they’re a” and xenophobic slurs such as “off the boat.” Hatebrain returns an assessment of the probability that a given passage is hate speech.
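To make the approach concrete, here is a toy Python version of that kind of rule stack: "pilot fish" signals and targeting phrases push the probability up, while clinical framing pushes it down. The phrase lists, weights and scoring scheme are invented for illustration and are not Hatebrain's actual rules.

```python
# Toy rule-based context scorer in the spirit of the rules described above.
# All term lists and weights are made up for illustration.
TARGETING_PHRASES = ("he's a", "she's a", "they're a", "you people")
PILOT_FISH = ("idiot", "stupid", "!!!")  # stand-ins for vulgarity markers
CLINICAL_MARKERS = ("hate crime", "research", "policy", "dictionary")

def hate_context_probability(text: str) -> float:
    """Estimate the probability that a flagged term was used hatefully."""
    t = text.lower()
    score = 0.5  # neutral prior before any rule fires
    if any(p in t for p in TARGETING_PHRASES):
        score += 0.25  # targeting language raises the odds
    if any(p in t for p in PILOT_FISH):
        score += 0.15  # "pilot fish" signals tend to accompany hate speech
    if any(m in t for m in CLINICAL_MARKERS):
        score -= 0.30  # clinical or analytical framing lowers the odds
    return max(0.0, min(1.0, score))

print(hate_context_probability("Our research on hate crime trends..."))  # low
print(hate_context_probability("they're a bunch of idiots!!!"))          # high
```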
“Being able to identify hate speech is really, really challenging for machines,” Quinn said. “It’s challenging for people, [but] it’s especially challenging for machines because language is a really fungible thing, and it can take lots of different forms. It can be misleading, especially when you get into multilingualism, which we do -- we do almost 100 languages. You can’t just do character-set matching; you have to get some idea of the context in which a given term is being used.”
Chattanooga started using the intake form in November 2019. Potential uses for the intelligence include outreach to affected communities and policy changes to promote inclusivity, Hayes said.
“Long term, I think it would be certainly something we would want our office and our law enforcement folks analyzing and looking at and making sure that if there are what appear to be troublesome spots or neighborhoods or ZIP codes, we’re aware of that, and we can take programmatic steps to intervene,” Hayes said.
Hatebase started as a pilot with the Sentinel Project, a Canadian nonprofit working to reduce mass atrocities worldwide. The premise was that because hate speech often precedes violence, quantifiable data points on such speech could serve as early indicators of brewing trouble.
“Cities are really unique entities in that they tend to be petri dishes of potential simmering conflict,” Quinn said. “It’s where a lot of different people aggregate, often in situations of tension. Where there’s different ethnic groups, there’s different social strata, different economic strata, there’s heightened conflict and crimes. A city is really like a pressure cooker for a lot of things that can result in hate speech and ultimately hate crime.”
Efforts to control hate speech are gaining traction worldwide. New Zealand’s Department of Internal Affairs has a team working on a virtual reality tool to study the cause and effect of online hate speech. Researchers at the University of Cambridge in England, meanwhile, are looking to stem the spread of hate speech online by building algorithms that score the likelihood that an online message contains hate speech and quarantine it, giving social media users a choice about whether to read the post, according to their paper.
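The quarantine idea can be sketched in a few lines: a classifier scores each message, and anything above a threshold is hidden behind an opt-in prompt rather than deleted. The classifier, threshold and prompt below are placeholders, not the Cambridge team's implementation.

```python
# Minimal sketch of message quarantine: high-scoring posts are hidden
# behind an opt-in prompt instead of being shown or deleted outright.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    hate_score: float  # probability from some upstream classifier

QUARANTINE_THRESHOLD = 0.8  # placeholder cutoff

def render(msg: Message, user_opted_in: bool = False) -> str:
    """Show the post, or a warning the user can click through."""
    if msg.hate_score >= QUARANTINE_THRESHOLD and not user_opted_in:
        return "[This post may contain hate speech. View anyway?]"
    return msg.text
```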