Text analytics: Reading between the lines of terabytes of data
With natural language processing and statistical pattern analysis, agencies can read the tea leaves of public sentiment to detect signs of terrorism, fraud and any number of other activities. They just don’t always want to talk about it.
This is the second of a four-part series about text analytics.
It’s powerful enough to read minds. It can sort through terabytes of unstructured data to pull out hidden troves of information. It’s one of the hottest new software tools that almost nobody in government wants to talk about – it’s text analytics.
The Homeland Security Department reportedly has been using text analytics to scan social media for signs of terrorists. NASA has been using it to cull log reports made by pilots and mechanics to make airliners safer. The IRS and the Federal Reserve also have recently put out requests for proposals for text analysis tools, but both organizations declined interviews about their plans.
And increasingly, agencies and departments are scanning social media to check on how they’re doing with the public and to get ideas for improving service.
“They need to find hidden relationships between the words and sentences that people are using to spot emerging trends and to identify public concerns so they can provide intelligence to all of the departments in the government that they serve,” said Fiona McNeill, SAS text analytics product marketing manager. “It’s decoding the message of the public and understanding the voice of the people.”
Most federal agencies – with the exception, as we shall see, of those involved in public health and safety – are reluctant to talk about their text analytics projects.
According to McNeill, that reluctance results from a number of concerns. Agencies that are using text analytics to detect fraud or other questionable activity don’t want to tip their hands to anyone engaged in that activity.
“Smart [criminal] organizations are doing purposeful activity to get around government,” McNeill said, “so government talking about how they are actually finding them only provides them with ammunition.”
Jamie Popkin, managing vice president at Gartner Group, agrees. “When you look at the history of where this started, it came out of the military and intelligence-gathering wings of government, so I’m not surprised many people don’t want to talk about it,” he said.
Whether it’s being used for rule-making efforts, litigation discovery, detecting compliance or fraud, or monitoring activity for trade negotiations, the specific data being monitored and the tools and techniques being employed “may not be something that you want so clearly publicized,” Popkin said.
Monitoring and analysis of social media is even more sensitive. “I would separate social media as something of a special case,” he said. “If the government is trolling social media, having it be in the headlines certainly wouldn’t help the government’s position with the public.”
While the processing power and fast storage retrieval that have driven other advances in big data management were required for text analytics to take off, the analytic tools themselves are strictly software. And what separates the products – whether it is the major offerings from IBM or SAS, or any of the dozens of applications tailored to serve narrower markets – is how the algorithms are tuned to filter, sort and analyze massive amounts of unstructured text.
The devil, as they say, is in the details.
“You need to look at each of the use cases and understand what the complexities are,” Popkin said. “If you’re just trying to identify entities within a document set, that can be handled fairly easily without too much tuning. If you’re looking for nuanced sentiment around a highly technical set of questions, then you’re either going to have to build the linguistic models and build custom taxonomies to support those models, or you may have to do a lot of careful training of a machine learning algorithm on a set of documents to be able to get results that can be trusted.”
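To make the simpler end of that spectrum concrete, here is a minimal sketch of entity identification in Python using the open-source spaCy library; the library choice and the sample sentence are our illustration, not a tool any agency has confirmed using.

```python
# A minimal sketch of the "easy" case Popkin describes: pulling named
# entities out of text with an off-the-shelf pretrained model.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline

doc = nlp("The IRS and the Federal Reserve issued requests for "
          "proposals for text analysis tools.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g., "IRS" ORG, "the Federal Reserve" ORG
```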
In general terms, text analytics involves imposing structure on text from unstructured sources, using techniques for parsing words or phrases and for detecting patterns and connections in the text. In short, the algorithms contain rules for manipulating the input text. The rules may instruct the program in how to accomplish a variety of analytic tasks, including categorizing documents, creating summaries, detecting relevance between documents, extracting relationships and analyzing the sentiment of those who created the text.
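One of those tasks, detecting relevance between documents, can be sketched in a few lines: turn each document into a numeric vector, then compare the vectors. A minimal illustration with scikit-learn follows; the sample documents are invented.

```python
# A minimal sketch of one listed task: detecting relevance between
# documents by comparing TF-IDF vectors.
# Assumes: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "pilot reported engine vibration on approach",
    "mechanic logged engine vibration during inspection",
    "agency seeks public comments on a proposed rule",
]

vectors = TfidfVectorizer().fit_transform(docs)  # one row per document
sim = cosine_similarity(vectors)                 # pairwise relevance scores
print(round(sim[0, 1], 2), round(sim[0, 2], 2))  # first pair scores higher
```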
Analyzing unstructured text is a much more challenging feat than other types of data analytics because it is open-ended. With structured data, analysts know what to expect and can write rules accordingly: For example, “If the number in column 10 is greater than 50, send the record to collections.” Simple enough. But how does an analyst at DHS write a rule that can tell whether the tweet “I bombed last night” is from a terrorist or a self-critical performer?
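The contrast is easy to show in code. The structured rule takes one line, while the same style of rule applied to free text flags both readings of the tweet; the rule and sample strings below are hypothetical.

```python
# Structured data: the rule is unambiguous.
def route_record(record: dict) -> str:
    return "collections" if record["column_10"] > 50 else "hold"

# Unstructured text: the same style of rule cannot tell intent apart.
SUSPECT_TERMS = {"bomb", "bombed"}

def naive_flag(tweet: str) -> bool:
    return any(term in tweet.lower() for term in SUSPECT_TERMS)

print(naive_flag("I bombed last night"))           # True -- a comedian's bad set
print(naive_flag("planning to bomb the station"))  # True -- a genuine threat
# Both match: keyword rules alone cannot recover the writer's intent.
```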
In general, there are two basic kinds of text analytics: natural language processing and statistical pattern analysis.
With natural language processing, the software uses complex sets of “if-then” rules written specifically to model language as humans understand it. Increasingly, however, NLP is being supplemented with machine learning methods that apply statistical techniques to bodies of text.
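As a sketch of that statistical side, the toy classifier below learns cue words from labeled examples rather than from hand-written “if-then” rules; the training data is invented, and a real system would train on far larger corpora.

```python
# A toy statistical text classifier: instead of hand-coded "if-then"
# language rules, it learns word statistics from labeled examples.
# Assumes: pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy training data; real systems use thousands of documents.
texts = [
    "great service, thank you",  # positive
    "very helpful and fast",     # positive
    "terrible wait times",       # negative
    "lost my paperwork again",   # negative
]
labels = ["pos", "pos", "neg", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["fast and helpful staff"]))  # likely ['pos']
```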
The most sophisticated version of natural language processing – at least among those publicly acknowledged – was the one IBM’s Watson supercomputer used to easily defeat the top two all-time “Jeopardy!” champions in a highly publicized contest two years ago.
Watson’s performance was impressive. Though the computer had to “understand” complex and often tricky questions before it could search its data stores for the right answer, the machine managed to earn more than three times as much in winnings as its two human competitors.
According to Frank Stein, director of IBM’s Analytics Solution Center, the company is repurposing the same technology and refining it for use in other sectors. “The first industry that we are going after is the health care industry,” he said.
“The great thing about using a system like Watson is that it has all the knowledge you put into it but it doesn’t tend to be biased,” Stein explained. “There was a professor at Columbia medical school who was talking about how [doctors] would miss things because of their bias. They might assume that the person is living in New York when they diagnose a symptom and they don’t realize that the person was recently in Mozambique or some other place.”
“What’s powerful about text analytics and sentiment analysis is that it is essentially saying to a computer system, ‘Go read these books for me and relay the ones that have this in it or that answer this question,’” Gartner’s Popkin said.
But he warned that text analytics technologies aren’t bulletproof. “If you have a pool of unmined text, you need to start with a hypothesis,” he explained. “What do you think you’re going to find? What is it that you are specifically looking for in that text?”
In many cases, depending on the type of text and whose text it is, even the hypothesis can be difficult to shape accurately. “Understanding the demographics of the audience that you are mining, understanding the language of the demographic, understanding any underlying biases, understanding the phrases — there is a whole set of things you need to take into account when you’re doing those analyses,” Popkin said. “It’s easy to do this wrong. I think you need to be careful in using these tools.”
Accordingly, vendors and analysts agree that agencies will want to enlist information or library scientists, whether in-house or outside, in designing and refining their text analytics efforts.