Take a picture, and 4D app gives you the details
Hybrid 4-Dimensional Augmented Reality, developed at Virginia Tech, can give responders situational awareness, detail what's inside a building or tell how something works.
Second of four parts
Imagine taking a picture of an unfamiliar building or a piece of machinery and then having the picture tell you what’s inside the building or how that machine works. That’s part of the idea behind Hybrid 4-Dimensional Augmented Reality (HD4AR), a project of the MAGNUM Group at Virginia Tech.
The project, led by Jules White, a professor of electrical and computer engineering at the university, can use a smart phone’s sensors, geotagging features, camera, and audio and video recorders to augment situational awareness for first responders, construction crews or the public.
When it receives an image, HD4AR draws on a database of reference images to deliver annotations matched to the photo on the phone. A user might, for example, take a photo of a piece of equipment. HD4AR would locate a similar image in its database and then deliver the attached data — such as labels for the dials and levers on the equipment and perhaps a link to a user manual — to be superimposed on the image on the user’s cell phone. Or send HD4AR a photo of a downtown street, and it would return the image with buildings and stores identified.
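The article does not describe HD4AR’s internals, but the match-then-annotate flow it outlines can be sketched with off-the-shelf computer vision tools. The Python snippet below is a minimal illustration of that general pattern, not HD4AR’s actual pipeline; it leans on the OpenCV library, and the function name, parameters and thresholds are all assumptions.

```python
# A minimal illustration of a match-then-annotate flow, not HD4AR's actual
# pipeline: match a new photo against a stored reference image, then carry
# the reference's labeled points across into the new photo.
import cv2
import numpy as np

def transfer_annotations(query_img, reference_img, annotations):
    """annotations: list of (x, y, label) points in reference-image pixels."""
    orb = cv2.ORB_create(2000)
    kq, dq = orb.detectAndCompute(query_img, None)
    kr, dr = orb.detectAndCompute(reference_img, None)

    # Brute-force Hamming matching with a ratio test to drop ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(dr, dq, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:
        return []  # not enough agreement to place the annotations

    # Fit a reference-to-photo homography; RANSAC discards stray matches such
    # as those caused by people walking through the frame.
    src = np.float32([kr[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kq[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return []

    # Reproject each stored annotation into the new photo's pixel coordinates.
    pts = np.float32([(x, y) for x, y, _ in annotations]).reshape(-1, 1, 2)
    moved = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return [(float(u), float(v), label)
            for (u, v), (_, _, label) in zip(moved, annotations)]
```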
“The idea behind this project is to create a framework where, when there was a disaster, people who were trapped in different areas could be using their smart phones to essentially provide situational awareness data to first responders or other citizen scientists in the area,” White said. “It can be image data, taking pictures of things, capturing audio, video, accelerometer data, these types of things. From our perspective it was more about capturing that data, geo-tagging it all, and having it centralized in a location that first responders could look through.”
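In essence, that capture side reduces to bundling each photo or sensor reading with a geotag and a timestamp and pushing it to a central collection point. The sketch below illustrates the idea; the metadata fields and the endpoint are hypothetical, since the article does not describe HD4AR’s actual interface.

```python
# Hypothetical sketch of the crowd-capture side: one geotagged, timestamped
# capture pushed to a central collection point. Field names and the endpoint
# are assumptions, not part of any published HD4AR interface.
import json
import time
import urllib.request

def submit_report(media_path, media_type, lat, lon, endpoint):
    """Send one geotagged capture (image, audio, video or sensor data)."""
    with open(media_path, "rb") as f:
        payload = f.read()
    metadata = {
        "type": media_type,        # e.g. "image", "audio", "accelerometer"
        "latitude": lat,
        "longitude": lon,
        "timestamp": time.time(),  # when the capture was made
    }
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/octet-stream",
            "X-Capture-Metadata": json.dumps(metadata),
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```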
HD4AR is also designed as a tool for construction sites, taking the place of all those design drawings, but the framework holds value for consumers as well. Suppose you go out in the morning and find your car battery needs a jump-start — and, as luck would have it, jump-starting a car is not something you know how to do. “So you take a photo of your engine and then on your photographs we will figure out where the positive and negative terminals are on the battery and we will annotate your photograph,” White said.
The information flow goes both ways, too. “Anybody can add to the database using their phone,” he said. “From those photos we will build a crude 3-D model. So when the user goes into the photo and begins annotating it, drawing in information, we then figure out where on the 3-D model those notes go.
“When that information is saved in the database and when a new photo is taken — with a completely different angle and orientation — we can figure out which of those annotations that the first person created should be visible in the second person's photograph and then render them into that place in the photograph.”
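That last step, deciding where a note anchored to the 3-D model should land in a photo taken from a different position, corresponds to standard camera projection. The sketch below assumes an ordinary pinhole camera model, not anything disclosed about HD4AR, and maps a single 3-D annotation point into pixel coordinates for a new camera pose.

```python
# Reprojecting a 3-D annotation into a new photo, assuming a standard pinhole
# camera model; a sketch of the general technique, not HD4AR's pipeline.
import numpy as np

def project_annotation(point_3d, R, t, K):
    """Map one 3-D annotation point into pixel coordinates for a new photo.

    point_3d : (3,) world coordinates of the annotation on the 3-D model
    R, t     : the new camera's rotation (3x3) and translation (3,)
    K        : 3x3 intrinsic matrix (focal lengths, principal point)
    Returns (u, v) in pixels, or None if the point is behind the camera.
    """
    cam = R @ np.asarray(point_3d) + t  # world -> camera coordinates
    if cam[2] <= 0:                     # behind the camera: not visible
        return None
    uv = K @ (cam / cam[2])             # perspective divide, then intrinsics
    return float(uv[0]), float(uv[1])

# Example: a note pinned 10 meters in front of an identity-pose camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(project_annotation([0.5, -0.2, 10.0], np.eye(3), np.zeros(3), K))
# -> (360.0, 224.0)
```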
The biggest challenge was matching photos taken from different angles, at different times of day and after physical changes to a scene over time. “We designed all the algorithms to be able to handle change and ambiguity in the images,” White said, citing as examples obstructions such as people walking in front of the camera or walls that change over time.
“We can tolerate a large amount of change before things start giving us trouble,” he said. “A wall may double in height and we can still often recognize that wall based on the original imagery that we have of it.”
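The article does not give the algorithms behind that tolerance, but one standard approach, assumed here purely for illustration, is to require only that enough feature correspondences agree geometrically; features on changed or blocked regions are then discarded as outliers rather than breaking the match. A decision rule along those lines, built on OpenCV’s RANSAC-based homography fit, might look like this:

```python
# A hypothetical decision rule for tolerating occlusion and partial change:
# accept a match if enough correspondences agree geometrically, so a wall
# that doubled in height can still match on its unchanged lower half.
import cv2
import numpy as np

def still_recognized(src_pts, dst_pts, min_inliers=15, min_ratio=0.3):
    """src_pts/dst_pts: Nx2 arrays of matched feature locations."""
    if len(src_pts) < 4:
        return False  # too few matches to even fit a homography
    src = np.float32(src_pts).reshape(-1, 1, 2)
    dst = np.float32(dst_pts).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return False
    inliers = int(mask.sum())
    # Recognize the scene if enough matches survive, even when most features
    # on changed or occluded regions were rejected as outliers.
    return inliers >= min_inliers and inliers / len(src_pts) >= min_ratio
```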
White said the technology, which received an Innovation Award at January’s Consumer Electronics Show in Las Vegas, has been licensed to a startup company, PAR Works Inc.
The Virginia Tech group has been working on other innovative ways to manage mobile devices, including a modified version of the Android operating system that lets administrators set rules for when users can access data or run certain apps, based on factors such as their location or the time of day.
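Such a rule reduces to a geofence plus a time window. The toy policy check below illustrates the idea; the actual mechanism is a modification of the operating system itself, and the Rule fields, distance math and thresholds here are all assumptions for illustration.

```python
# A toy context-based access rule: allow an app only within a geofence and
# a time window. Illustrative only; the real system modifies Android itself.
from dataclasses import dataclass
from datetime import time as clock
import math

@dataclass
class Rule:
    app: str
    site_lat: float    # center of the allowed area
    site_lon: float
    radius_m: float    # allowed distance from the site, in meters
    start: clock       # allowed time window
    end: clock

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance, adequate for geofence-sized areas."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(dx, dy)

def allowed(rule, app, lat, lon, now):
    """Permit the app only inside the geofence and the time window."""
    if app != rule.app:
        return True  # this rule does not govern the requested app
    in_area = distance_m(lat, lon, rule.site_lat, rule.site_lon) <= rule.radius_m
    in_hours = rule.start <= now <= rule.end
    return in_area and in_hours

# Example: site documents readable only on-site, during working hours.
r = Rule("site_docs", 37.23, -80.42, 500.0, clock(7, 0), clock(18, 0))
print(allowed(r, "site_docs", 37.2301, -80.4203, clock(9, 30)))  # True
```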
PREVIOUS: Smart phones as sensors: locating snipers, or parking spots
NEXT: When smart-phone technology hits the wall