How 3D maps can get more accurate
A research team solves a common problem with robotic 3D mapping, leading to clearer, more detailed maps.
People have been using maps since the dawn of time as a way to represent and navigate the world. Prehistoric hunters probably sketched plans of attack in the dirt. Much of the colonial period was spent trying to map different routes around the world. And today Google and MapQuest do their best to get you to the store and back.
But all maps have flaws, rooted either in the techniques used to create them or in their display constraints. The first world map most children encounter in school is probably a Mercator projection. Created in 1569 by Gerardus Mercator, it was renowned for keeping the linear scale consistent in every direction around any point, and it remains one of the most widely used projections today. However, it distorts the size of landmasses near the poles, so Greenland looks bigger than Australia when it is actually only about one-third Australia's size.
Moving to a 3D map (a globe) solved the problems of scale and size when representing the Earth. But government organizations these days are much less interested in the general position of China or Hawaii relative to the rest of the planet than in the detailed physical characteristics of the insides of buildings or the surface of potential battlefields.
The Defense Advanced Research Projects Agency not long ago completed a five-year program, the Urban Photonic Sandtable Display, that produces a real-time, color, 360-degree 3D holographic display to assist battle planners. Now military planners can view 3D maps of battlefields without even having to put on special glasses. The 3D map can be rotated and zoomed, giving maximum control to those tasked with planning dangerous operations.
However, creating a detailed 3D map of an area, especially an indoor structure, requires special tools. Without some way to record what a building looks like, even DARPA's UPSD hologram would remain blank.
One of the best ways to create a 3D map is simply to send a human or a robot through an area taking pictures, then use software to stitch the images together into a model that others can explore virtually. That was the concept behind a Massachusetts Institute of Technology project last year that took the depth-sensing camera from Microsoft's Kinect, an accessory for the Xbox video game console, and paired it with positional sensors and mapping software.
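To give a rough sense of how that stitching works, here is a minimal sketch in Python. It assumes each Kinect-style depth frame arrives with an estimated camera pose and known camera intrinsics; the function names and parameters are illustrative, not part of MIT's actual software.

```python
import numpy as np

def depth_to_world(depth, pose, fx, fy, cx, cy):
    """Back-project a depth image into 3D points in the world frame.

    depth: HxW array of depth readings in meters (0 = no reading)
    pose:  4x4 camera-to-world transform estimated for this frame
    fx, fy, cx, cy: camera intrinsics (focal lengths and principal point)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0

    # Pinhole back-projection: pixel coordinates plus depth give a 3D point
    # in the camera's own frame of reference.
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)

    # Move the points into the shared world frame using the camera pose.
    return (pose @ points_cam.T).T[:, :3]

def stitch_frames(frames, poses, intrinsics):
    """Accumulate every frame's points into one combined cloud."""
    fx, fy, cx, cy = intrinsics
    clouds = [depth_to_world(d, p, fx, fy, cx, cy) for d, p in zip(frames, poses)]
    return np.vstack(clouds)
```

In a real pipeline the per-frame poses come from the positional sensors and visual tracking, and the merged points are typically turned into a surface model rather than left as a raw cloud.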
The idea is that firefighters entering a burning building to find survivors, or soldiers trying to clear a structure of enemies, could benefit from knowing the terrain, so long as someone, or something, had gone in before them and recorded the data into a 3D map.
However, MIT's 3D mapping software shares a drawback with all robotic 3D mapping programs, even those that use very precise robots to take measurements. Known as the loop closure problem, or drift, it occurs when a robot-mounted camera returns to ground it has already covered. Because of slight discrepancies between the path the robot was supposed to take and the path it actually traveled, the software has trouble closing the loop and accurately modeling the complete picture. Doors may come out slightly larger or smaller on the map than in reality. Stairway entrances might be placed too far to the left or right of their actual positions. Depending on the circumstances, those errors can be either troublesome or deadly if the map is the sole source of information.
For smaller maps, loop errors are relatively minor, nothing like turning Greenland into the eighth continent of the world as the Mercator projection does. But loop errors are cumulative: the farther a robot travels, the more tiny positional errors creep into its estimated path. After traversing a lot of ground, the resulting map can become wildly inaccurate, perhaps even duplicating some terrain features.
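A toy simulation makes the problem concrete. The snippet below is purely illustrative and not drawn from the MIT software: it dead-reckons a robot around a square loop while adding a tiny random heading error at each one-meter step. Even though the true path closes exactly, the estimated endpoint lands measurably far from the starting point, and the gap grows with the length of the path.

```python
import numpy as np

rng = np.random.default_rng(0)

def traverse_square(side_steps, heading_noise_deg):
    """Dead-reckon around a square loop with a small heading error per step."""
    heading = 0.0
    position = np.zeros(2)
    path = [position.copy()]
    for leg in range(4):                      # four sides of the square
        for _ in range(side_steps):
            # Each one-meter step picks up a tiny, random heading error,
            # standing in for odometry and sensor noise.
            heading += np.radians(rng.normal(0.0, heading_noise_deg))
            position = position + np.array([np.cos(heading), np.sin(heading)])
            path.append(position.copy())
        heading += np.pi / 2                  # turn the corner
    return np.array(path)

path = traverse_square(side_steps=100, heading_noise_deg=0.5)
gap = np.linalg.norm(path[-1] - path[0])
print(f"End-of-loop gap after {len(path) - 1} one-meter steps: {gap:.1f} m")
```

Lengthen the sides or raise the heading noise and the gap widens, which is exactly why large maps suffer more from drift than small ones.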
So the scientists at MIT and the National University of Ireland Maynooth went back and found a way to eliminate loop errors altogether. The key is tracking the position of the camera in space as the robot moves. When the camera returns to a place it has already seen, an algorithm compares the path the robot actually traveled with the projected path and adjusts the model accordingly.
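The researchers' technique warps the 3D model itself, as Whelan explains below, but the underlying idea of folding the loop error back into the trajectory can be sketched in a few lines. The toy function here is a simplified illustration, not the team's algorithm: it spreads the mismatch measured at the end of the loop back along the estimated path, so early poses barely move and the final pose snaps onto the spot the camera recognized.

```python
import numpy as np

def correct_loop(estimated_path, loop_error):
    """Distribute the end-of-loop error back along an estimated 2D trajectory.

    estimated_path: (N, 2) array of estimated positions
    loop_error:     2-vector, estimated endpoint minus the recognized true position
    """
    n = len(estimated_path)
    # Weights run from 0 at the start to 1 at the end, so the first pose is
    # left alone and the last pose absorbs the full correction.
    weights = np.linspace(0.0, 1.0, n)[:, None]
    return estimated_path - weights * loop_error

# Demo: a straight 10-meter run whose estimate has drifted 0.5 m sideways.
drifted = np.stack([np.arange(11.0), np.linspace(0.0, 0.5, 11)], axis=1)
corrected = correct_loop(drifted, drifted[-1] - np.array([10.0, 0.0]))
print(corrected[-1])  # back at [10, 0], the position the camera recognized
```

Full systems apply the same idea in six degrees of freedom and then, as the quote below describes, bend the map so that it matches the corrected trajectory.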
“Before the map has been corrected, it’s sort of all tangled up in itself,” Thomas Whelan, a Ph.D. student at NUI told MIT News. “We use knowledge of where the camera’s been to untangle it. The technique we developed allows you to shift the map, so it warps and bends into place.”
A video of a map being recorded and then seamlessly stitched together shows how accurate the new maps are, with no looping or positional errors. I'm pretty sure Mercator would be impressed, but more importantly, these 3D maps can now be far more accurate.