Eye tracking probes how intel analysts find needles in the haystack
Sandia Labs researchers are working on a way to track what analysts are looking at in live visual feeds and other real-world security applications that involve more than static images.
Eye-tracking technology has been used to study how people reason and the differences between the ways experts and novices use information. Now Sandia National Laboratories wants to apply the technology to the real world, where it could help intelligence analysts working to identify security threats in war zones, airports or elsewhere.
Eye-tracking technology measures the eyes’ activity by recording where viewers are looking on a computer screen, what they ignore and the frequency with which they blink. The technology works well when the subject is analyzing static images or relatively stable video, where researchers can anticipate where content of interest will appear.
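To make those measurements concrete, here is a minimal sketch of how raw gaze samples are typically reduced to fixations and blink counts. The sample format, thresholds and function names are illustrative assumptions for this article, not any vendor's actual API.

```python
# Illustrative sketch: turning raw gaze samples into fixations and a blink
# rate. Thresholds and the GazeSample format are assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t_ms: float       # timestamp in milliseconds
    x: float | None   # screen x in pixels; None while the eye is closed
    y: float | None   # screen y in pixels

def detect_fixations(samples, max_dispersion_px=35.0, min_duration_ms=100.0):
    """Dispersion-threshold (I-DT style) fixation detection."""
    fixations, window = [], []
    for s in samples:
        if s.x is None:           # a blink or tracking loss breaks the window
            window = []
            continue
        window.append(s)
        xs = [p.x for p in window]
        ys = [p.y for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion_px:
            # The newest sample broke the cluster; the fixation is window[:-1].
            if window[-2].t_ms - window[0].t_ms >= min_duration_ms:
                fixations.append((window[0].t_ms, window[-2].t_ms,
                                  sum(xs[:-1]) / (len(xs) - 1),
                                  sum(ys[:-1]) / (len(ys) - 1)))
            window = [s]
    return fixations

def blink_rate_per_min(samples):
    """Count runs of missing samples as blinks."""
    blinks, in_blink = 0, False
    for s in samples:
        if s.x is None and not in_blink:
            blinks, in_blink = blinks + 1, True
        elif s.x is not None:
            in_blink = False
    span_min = (samples[-1].t_ms - samples[0].t_ms) / 60000.0
    return blinks / span_min if span_min > 0 else 0.0
```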
But when analysts toggle rapidly between images, pan and zoom, or view fast-moving video and other dynamic records, current eye-tracking technology can’t keep up.
To address this need, Sandia is working with EyeTracking Inc., a San Diego small business that specializes in eye-tracking data collection and analysis, to develop eye tracking that can follow what analysts are looking at in live visual feeds and other real-world security applications that involve more than static images.
Under a cooperative research and development agreement, researchers are working out how to capture, within tens of milliseconds, the content beneath the point on the screen where a viewer is looking. The goal is a better handle on what might trigger an analyst to look elsewhere in an image.
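A rough sketch of what that capture step could look like, under stated assumptions: the gaze stream (its tuple format and its clock, assumed to match `time.monotonic`) is hypothetical, while the screen grab uses the real `mss` Python library. The point is pairing each gaze point with the pixels under it and checking whether the grab landed inside a tens-of-milliseconds budget.

```python
# Sketch only: grab the patch of pixels under each gaze point and record
# how long the grab took, so lag between fixation and content is auditable.
# Assumes gaze timestamps share the time.monotonic() clock, in milliseconds.
import time
import mss

PATCH = 128  # half-width of the captured square, in pixels (illustrative)

def capture_under_gaze(gaze_stream, budget_ms=50.0):
    with mss.mss() as sct:
        for t_gaze_ms, x, y in gaze_stream:
            region = {"left": max(0, int(x) - PATCH),
                      "top": max(0, int(y) - PATCH),
                      "width": 2 * PATCH, "height": 2 * PATCH}
            shot = sct.grab(region)               # raw BGRA pixels
            lag_ms = time.monotonic() * 1000.0 - t_gaze_ms
            # On a fast pan or zoom, a capture that misses the budget may
            # show content that has already moved out from under the gaze.
            yield shot, lag_ms, lag_ms <= budget_ms
```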
Until now, Sandia said, eye-tracking research has shown how viewers react to stimuli on the screen. For example, a bare, black tree against a snow-covered scene will naturally attract attention. This type of bottom-up visual attention, where the viewer is reacting to stimuli, is well understood, according to the labs’ researchers.
They want to see how viewers look at a particular scene with a task in mind, like finding a golf ball in the snow. Viewers might glance at the tree quickly, according to the lab, but then their gaze goes to the snow to search for the golf ball. This type of top-down visual cognition is not well understood, and Sandia researchers said they hope to develop models that better predict where analysts will look.
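For contrast, here is a toy illustration of the well-understood bottom-up side: a simple center-surround saliency map in which a dark tree on bright snow stands out. It is purely illustrative and says nothing about the top-down, task-driven models Sandia hopes to build.

```python
# Toy bottom-up saliency: score each pixel by how far its intensity sits
# from the local average, so a bare black tree on white snow lights up.
import numpy as np
from scipy.ndimage import uniform_filter

def bottom_up_saliency(gray: np.ndarray, window: int = 31) -> np.ndarray:
    """Center-surround contrast: |pixel - local mean|, normalized to [0, 1]."""
    local_mean = uniform_filter(gray.astype(float), size=window)
    saliency = np.abs(gray - local_mean)
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency

# A dark "tree" on a bright "snow" field dominates the map:
scene = np.full((200, 200), 0.9)   # snow
scene[40:160, 95:105] = 0.05       # tree trunk
sal = bottom_up_saliency(scene)
print(np.unravel_index(sal.argmax(), sal.shape))  # lands inside the trunk
```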
Researchers want to anticipate analysts’ decisions in real-world environments to create a model of top-down visual decision-making. “We want to understand how fixation on something leads to analyst decisions, such as detouring to get information from a different source,” said Sandia’s Laura McNamara, an applied anthropologist who has studied how certain analysts perform their jobs.
For intelligence analysts who are “facing this firehose of information,” the research could help ensure that new software doesn’t increase their cognitive load, she said.