With video mash-up app, first responders set up virtual ops center
Winner of National Science Foundation-funded tech challenge offers tools to help emergency responders filter video and Twitter feeds from bystanders at the scene.
The hurricane has passed, but for the people of Breezy Point, N.Y., it is only the beginning of their ordeal. The National Guard has started to arrive, but emergency services and communications networks have been crippled.
Emergency planners are relying on Twitter feeds and video streaming from the cell phones of first responders and citizens to prioritize operations. The massive amount of scattered data is being vetted by trained volunteers – virtual operations support teams, or VOST – who pass the most important data on to the regional emergency response center.
That's just one scenario envisioned by the developers of rtER, a prototype Real-time Emergency Response application that allows users to manipulate multiple feeds of streaming video and other data. The project was selected as a winner in the Mozilla Ignite Challenge – a National Science Foundation-funded program for the development of applications that take advantage of gigabit-per-second Internet speeds.
The push for gigabit-optimized applications is being spurred by the advent of Google Fiber, one of the first gigabit-per-second consumer services, currently offered only in Kansas City, Mo. Google Fiber achieves its speed by running fiber-optic cable all the way to the customer's premises. Google is also preparing to deliver the service in Provo, Utah, and Austin, Texas.
According to project leader Jeremy Cooperstock of the Shared Reality Lab at McGill University, the idea of rtER is to use gigabit-per-second Internet connections to integrate live, high-quality video from multiple feeds with sensor data from a variety of sources and cached or archival information. "Imagine an emergency operations center in which the operations coordinator needed to get a rich sense of the environment where the crisis is taking place," said Cooperstock. "And in the case of a massive disaster such as an earthquake or major fire, where the location may be unrecognizable in its present state, wouldn't it be nice if you could have a previous view of what that site looked like just a few weeks or months ago so as to be able to effectively guide the first responders?"
The main hurdle to achieving such a goal is clearly bandwidth. But Cooperstock's team was working on the assumption of gigabit-per-second pipelines.
"If you have hundreds of people at the scene with their smartphones, why not give them the tools to stream that video to the emergency operation center where decision-makers could use that video for real-time situational assessment?" asked Cooperstock. "In the initial round of the competition, we built a prototype visualization environment and presented it in parallel with that an immersive view that was pulled from Google Street View images."
Even assuming the emergency site has gigabit-per-second bandwidth for handling massive data streams, the team soon faced a problem. "What do you do when you accept hundreds of users' streaming video?" asked Cooperstock. In the visualization environment, the video displayed as tiny thumbnails, and there was no way to filter the data. "We need some means of culling the mountains of video data to focus the coordinator's attention on what is most important," Cooperstock said.
That's when the team thought to loop in volunteer analysts. The VOST monitors data feeds, including social media such as Facebook and Twitter, looking for information salient to the needs of the response community during a crisis.
To help the analysts, Cooperstock's team added an interactive feedback loop. The rtER operator can click a button to send a message to the video-stream initiator in the field, asking that person to change the orientation of the camera.
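Such a feedback channel could be built in many ways; the sketch below is a hypothetical illustration, assuming a WebSocket connection between the operations center and each streaming phone. The message shape and names here are assumptions for illustration, not rtER's documented design.

```typescript
// Hypothetical sketch of the operator-to-streamer feedback loop, assuming a
// WebSocket connection between the ops center and each streaming phone.
// Names (ReorientRequest, streamSockets, requestReorientation) are
// illustrative, not rtER's actual API.
import WebSocket from "ws";

interface ReorientRequest {
  type: "reorient";
  streamId: string;
  heading: number;  // desired compass bearing in degrees, 0 = north
  note?: string;    // optional free-text instruction for the person filming
}

// Active stream IDs mapped to the open socket of the phone producing them.
const streamSockets = new Map<string, WebSocket>();

// Invoked when the operator clicks the "reorient" button next to a video tile.
function requestReorientation(streamId: string, heading: number, note?: string): void {
  const socket = streamSockets.get(streamId);
  if (!socket || socket.readyState !== WebSocket.OPEN) {
    console.warn(`stream ${streamId} is no longer connected`);
    return;
  }
  const msg: ReorientRequest = { type: "reorient", streamId, heading, note };
  socket.send(JSON.stringify(msg)); // the phone app can render this as an on-screen arrow
}
```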
The team also added an activation feature that lets a 911 dispatcher ask volunteers in a certain area to start conveying information to the police or emergency response community. "If you have the rtER app installed in your smartphone, we can send a beacon to you and say, ‘Hey, there's something going on there, and we need you to start sending video,’" said Cooperstock.
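The activation feature amounts to a geofenced push: find registered devices near the incident and send them a start-streaming request. A minimal sketch, assuming a simple in-memory registry and an abstract push transport (all names here are hypothetical):

```typescript
// Illustrative sketch of the dispatcher "activation beacon": locate registered
// rtER volunteers within a radius of an incident and push a start-streaming
// request. The registry and push transport are assumptions, not rtER's design.
interface Volunteer {
  deviceId: string;
  lat: number;
  lon: number;
}

// Great-circle distance in kilometers between two lat/lon points (haversine).
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a)); // 6371 km = mean Earth radius
}

// Send a start-streaming beacon to every volunteer within radiusKm of the incident.
function activateNearby(
  volunteers: Volunteer[],
  incidentLat: number,
  incidentLon: number,
  radiusKm: number,
  push: (deviceId: string, payload: object) => void,
): void {
  for (const v of volunteers) {
    if (distanceKm(v.lat, v.lon, incidentLat, incidentLon) <= radiusKm) {
      push(v.deviceId, { type: "activate", lat: incidentLat, lon: incidentLon });
    }
  }
}
```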
The most critical technical issue Cooperstock's team faced was the selection of a video protocol. They settled on Apple's HTTP Live Streaming (HLS) protocol. "The drawback of this protocol is that it writes data as a set of files rather than as a network stream," said Cooperstock. "That means there is an inherent delay of about five seconds before the receiver can actually see the video." On the plus side, said Cooperstock, the structure of HLS data means that users can move around in the video stream seamlessly, without any need to leave the page.
Another critical advantage of HLS, according to Shared Reality Lab research associate Alexander Eichhorn, is that the protocol's code is fully accessible, unlike that of commercial protocols. "Since we wanted to cut latency down to the bare minimum, we needed full access to the code," said Eichhorn. With the two-second segments rtER employs, there is a delay of approximately six seconds, since HLS players typically buffer about three segments before starting playback. "This is something we need to tweak a little further," he said. "I've also seen two to three seconds end-to-end latency."
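To make the file-based structure concrete, a minimal live HLS media playlist for a stream with two-second segments might look like the following (segment file names are illustrative). The player fetches the playlist and each .ts segment over ordinary HTTP, and because it buffers roughly three segments before playback, the latency floor is about 3 × 2 s ≈ 6 s:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:142
#EXTINF:2.0,
segment142.ts
#EXTINF:2.0,
segment143.ts
#EXTINF:2.0,
segment144.ts
```

Because the playlist is just a text file the client re-fetches, older segments can remain listed, which is one way a viewer can scrub backward through the stream without leaving the page.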
The application is coded in HTML5 for broad browser compatibility, and the team has developed companion apps for Android and iOS.
To date, the prototype application has been tested on a limited basis with emergency services in Red Wing, Minn., and Quebec City, Quebec. "Now we're thinking about next steps for additional funding that would help extend the system," said Cooperstock. "We are eager to hear from the emergency communities. What do they see as the one core strength that they require? Different communities have different answers."