Human-AI teaming for DC-based emergency response
Members of Montgomery County, Md.’s Community Emergency Response Team are training an artificial intelligence-based system to find COVID-related items in Twitter posts that public safety officials should know about.
Members of Montgomery County, Md.’s Community Emergency Response Team (CERT) are using an artificial intelligence-based system to find COVID-related items in Twitter posts that may call for action.
As part of research by a team from Brigham Young University (BYU), George Mason University (GMU), the University of Texas (UT) at Austin and Virginia Tech, CERT volunteers -- dubbed the Virtual Emergency Response Team (VERT) -- are tagging tweets by relevance to help train the AI to recognize data that indicates a situation emergency responders may want to know about.
Under a National Science Foundation grant to study human-AI teaming in emergency response, the researchers had been examining how the AI platform, called CitizenHelper, would work during natural disasters. Last fall, though, they shifted their focus to COVID-19 and spent six months studying tweets posted in the National Capital Region that could indicate COVID-risky behavior, such as indoor venues without strict mask enforcement.
“The initiative was a proof of concept that was able to show it could -- keyword could -- be implemented within an [emergency operations center] environment should a local, state or federal [emergency management] agency [or] department be interested in attaining situational awareness,” VERT lead Steve Peterson said.
From Feb. 25 to early April, VERT volunteers identified 3,300 keywords related to vaccines and vaccine hesitancy that went into CitizenHelper. They then used a team-built interface to label those terms as relevant or irrelevant and for positive or negative sentiment before feeding the data back through the platform to help it learn to identify those categories automatically, said Amanda Hughes, assistant professor of IT and cybersecurity at BYU and a principal investigator (PI) on the project.
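The article does not describe CitizenHelper's internal models, but the workflow Hughes outlines -- volunteers label examples, and the labels are fed back so the platform learns to categorize new posts on its own -- is a standard supervised text-classification loop. The sketch below is purely illustrative; the example posts, labels and scikit-learn pipeline are assumptions, not details of the actual system.

```python
# Illustrative only: a minimal supervised text-classification loop of the
# kind described above. The data, labels and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical volunteer-labeled examples: post text plus a relevance label.
# A second classifier for positive/negative sentiment could be trained the same way.
labeled_posts = [
    ("Long line at the Silver Spring vaccine clinic, no appointments left", "relevant"),
    ("Crowded bar downtown last night, almost nobody wearing masks", "relevant"),
    ("Heard the second dose knocks you out for a day", "relevant"),
    ("Can't wait for baseball season to start", "irrelevant"),
    ("Traffic on the Beltway is terrible again", "irrelevant"),
]
texts, labels = zip(*labeled_posts)

# TF-IDF features plus a linear classifier stand in for whatever model
# the platform actually uses.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# New, unlabeled posts can then be scored automatically.
print(model.predict(["Is the Wheaton clinic walk-in or appointment only?"]))
```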
“We’re looking at it from an emergency manager perspective,” trying to understand from the tweets whether people are upset and, if so, what they are upset about, Hughes said. “Can we do something? Is there info they are not getting? Are there rumors we need to correct?”
After CitizenHelper processes the data, the results are curated and presented through a dashboard to emergency managers, who decide whether to take action.
Since last year, CERT has run 12 COVID missions, called virtual activations, with 89 volunteers participating. Peterson expects about 20 to commit to helping with the next phase of the vaccine hesitancy study.
“Because they’re local to the National Capital Region, they can recognize landmarks, and in Twitter, a lot of times things are abbreviated,” said PI Keri Stephens, a professor at UT’s Moody College of Communication. “They have that local knowledge that they can add that computers aren’t necessarily going to know.”
CitizenHelper is the result of an NSF-funded, GMU-coordinated project that mined social media for human behaviors relevant to disaster response, said Hemant Purohit, a co-PI on the project and an assistant professor at the university’s Department of Information Sciences and Technology.
“How can computer algorithms find out these things rapidly and quickly in real time? That’s the very challenging task,” Purohit said, because the algorithms need humans to help them improve their accuracy. “CitizenHelper allows us to create this very seamless interactive mechanism for humans and computers. The humans can provide feedback to the machine on what the machine has predicted.”
He likens the process to a child learning about colors by putting red balls into a red bin. “In the same way, we are trying to teach the AI algorithms to be more specific,” Purohit said. “Let’s say they have learned with the data that humans provided how to classify. When a new data point comes in, the algorithm is unsure of what to do with it, so a human user can provide feedback,” he explained. “This is a specific type of activity called active learning in the world of machine learning and AI.”
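Purohit's description matches what the machine-learning literature calls uncertainty sampling, a common form of active learning: score incoming posts and route only the ones the model is least confident about to human annotators. The following is an assumption-laden illustration of that loop, not CitizenHelper's actual code; the data, threshold and function name are made up for the example.

```python
# Illustrative uncertainty-sampling ("active learning") loop: the model asks
# humans to label only the posts it is least sure about. Everything here is
# an assumption for the sketch, not a detail of CitizenHelper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set labeled by volunteers.
labeled = [("no masks at the indoor concert last night", "relevant"),
           ("lovely weather in Bethesda today", "irrelevant"),
           ("clinic ran out of doses before noon", "relevant"),
           ("new coffee shop opened on Main Street", "irrelevant")]
texts, labels = zip(*labeled)
model = make_pipeline(TfidfVectorizer(),
                      LogisticRegression(max_iter=1000)).fit(texts, labels)

def needs_human_review(model, posts, threshold=0.65):
    # Keep posts whose top class probability falls below the threshold --
    # the examples the classifier is least confident about.
    confidence = model.predict_proba(posts).max(axis=1)
    return [p for p, c in zip(posts, confidence) if c < threshold]

# Volunteers label the uncertain posts, the new labels are added to the
# training set, and the model is refit -- closing the loop Purohit describes.
print(needs_human_review(model, ["pop-up vaccine clinic at the mall today?",
                                 "great sunset over the Potomac"]))
```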
Chris Zobel, another co-PI and professor at Virginia Tech’s Pamplin College of Business, said the goal is to determine whether this human-AI interaction can make a community more resilient. Volunteers trained to recognize problems can better understand what is happening in the community and what is being done -- or not -- about it.
“It really is a team between the humans and the artificial intelligence,” Zobel said. “The AI can’t learn what the important information is without being taught … by the humans.” He said humans will always have a role because of the context associated with emergency response and how it varies by place and time.
The researchers used Twitter because its data is the most open and easiest to collect and analyze, Hughes said, and Peterson added that GMU has access to a Twitter API that lets the researchers crawl and pull data. The team could use other social media platforms such as Facebook, Hughes said, but that would require getting permission from every user whose data is collected, which is impractical.
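The article doesn't specify which Twitter endpoints or tools the GMU team uses. Purely as an illustration of keyword-based collection, the sketch below uses the tweepy library and Twitter's v2 recent-search endpoint; the query terms, fields and credential placeholder are assumptions.

```python
# Illustrative only: keyword-based tweet collection via tweepy and the
# Twitter API v2 recent-search endpoint. The article does not say which
# endpoints or tools the researchers actually use.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

# Hypothetical query built from volunteer-identified keywords, limited to
# original English-language tweets.
query = '("vaccine appointment" OR "second dose" OR "no masks") lang:en -is:retweet'

response = client.search_recent_tweets(
    query=query,
    tweet_fields=["created_at", "geo", "public_metrics"],
    max_results=100,
)

for tweet in response.data or []:
    print(tweet.created_at, tweet.text[:80])
```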
The Montgomery County CERT isn't new to pairing volunteers with technology. It has been gleaning situational awareness from social media since 2013. Using commercial products, volunteers performed hashtag searches but couldn't home in much further.
“We were at the mercy of Hootsuite, TweetDeck -- those platforms that are not customizable,” Peterson said. “You get what they tell you. You have to utilize the results of their filtering mechanism with no knowledge of how they concluded getting something.”
CitizenHelper can automatically pull metadata and geolocation information related to COVID prevention and risk, delivering results that are specific to the D.C. area and therefore more useful. Still, many emergency managers are hesitant to use such a tool, Peterson said.
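The article doesn't detail how CitizenHelper scopes results to the region. One common approach, shown here only as a hedged sketch, is to keep posts whose attached coordinates fall inside a bounding box roughly covering the National Capital Region; the box, record format and example data are assumptions.

```python
# Illustrative geographic filter: keep only posts whose coordinates fall
# inside a rough bounding box around the National Capital Region. The box
# and record format are assumptions, not CitizenHelper internals.
DC_REGION = {"min_lat": 38.6, "max_lat": 39.4, "min_lon": -77.6, "max_lon": -76.7}

def in_dc_region(lat: float, lon: float, box: dict = DC_REGION) -> bool:
    return (box["min_lat"] <= lat <= box["max_lat"]
            and box["min_lon"] <= lon <= box["max_lon"])

# Hypothetical records with lat/lon metadata attached upstream.
posts = [
    {"text": "Long vaccine line in Rockville", "lat": 39.08, "lon": -77.15},
    {"text": "Beach day in Ocean City", "lat": 38.34, "lon": -75.08},
]
local_posts = [p for p in posts if in_dc_region(p["lat"], p["lon"])]
print(local_posts)  # only the Rockville post falls inside the box
```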
“The tool is readily available, but like most promising technological advancements that can be helpful in a community, agencies are reserved and will not commit to more than a briefing of the capabilities,” Peterson said. “The work has been presented to local and federal officials who liked the progress, and importantly, liked the research that was conducted, but we need more commitment.”
To get it, the team wants to expand its research, most likely to a Texas CERT to study hurricanes, Stephens said. She added that a long-term goal is creating a training module that CERT could use at a national level.
The team is applying for the NSF Civic Innovation Challenge, a competition for smart and connected communities that carries awards of up to $1 million. In January, the team was selected as a Stage 1 challenge awardee and won about $50,000 to refine the project.