This year’s drones will soon look like kid stuff
A quick sampling of recent advances shows that the pace of innovation is picking up steam.
While drones might be the hottest gift this holiday shopping season, the research labs have been busy building robots with capabilities that will make those drones seem like…well…kids’ stuff.
Taken individually, the advances achieved may seem of only moderate significance -- and a few of them are quirky enough to inspire some head scratching. But taken together, it’s apparent that the pace of improvements is picking up steam.
Thought-controlled navigation
One of the most dramatic developments was announced in June by the Ecole Polytechnique Federale de Lausanne (EPFL), where a team of researchers at the Defitech Foundation Chair in Brain-Machine Interface is creating a robot that can be remotely controlled by a person's thoughts.
In tests, nine disabled people and 10 other individuals in three countries were trained to remotely control a robot at the CNBI lab in Switzerland with their thoughts. The testers wore an electrode-equipped hat that analyzed their brain signals and relayed instructions over the Internet. Just as important, the robot was designed to avoid obstacles regardless of instructions from the remote pilot.
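For the curious, that control scheme can be boiled down to a few lines of Python: the decoded intent steers the robot, but an onboard obstacle check gets the final say. The names and thresholds below are invented for illustration and are not the EPFL team's actual code.

```python
# Hypothetical sketch of the shared-control idea described above: the
# pilot's decoded brain signal suggests a direction, but the robot's own
# obstacle check can override it. All names and numbers are assumptions.

def decode_brain_signal(eeg_sample: dict) -> str:
    """Map an (already classified) EEG feature to a motion command."""
    # A real BMI runs a trained classifier here; we fake it with a lookup.
    return eeg_sample.get("intent", "stop")  # "left", "right", "forward", "stop"

def safe_command(intent: str, range_sensors: dict) -> str:
    """Override the pilot's intent whenever an obstacle is too close."""
    MIN_CLEARANCE = 0.5  # meters; assumed safety threshold
    if intent == "forward" and range_sensors["front"] < MIN_CLEARANCE:
        # The robot refuses to drive into the obstacle, whatever the pilot thinks.
        return "right" if range_sensors["right"] > range_sensors["left"] else "left"
    return intent

# One control tick: the remote pilot "thinks" forward, but a wall is 0.3 m ahead.
sensors = {"front": 0.3, "left": 1.2, "right": 2.0}
print(safe_command(decode_brain_signal({"intent": "forward"}), sensors))  # -> "right"
```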
While the testing only included navigating the robot rather than manipulating objects, the technology offers promise for eventually giving the disabled opportunities to physically interact with the world.
Developing an eye for detail
Some of the most difficult abilities to bestow on robots are those that come naturally to humans, such as recognizing hand-drawn objects. Researchers at Queen Mary University of London announced in June that they had created the first program that can beat humans at recognizing drawn objects. Their Sketch-a-Net program correctly identified drawn objects -- distinguishing a seagull from a pigeon, a flying bird from a standing bird -- 74.9 percent of the time. Humans topped out at 73.1 percent.
While immediate applications for Sketch-a-Net aren't clear, the program -- which employs a deep neural network architecture -- does mark a significant step forward in developing machines with human-like perception.
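For a feel of what such a program involves, here is a generic convolutional neural network for classifying sketches, written in PyTorch. The layer sizes here are assumptions, and the 250-category output simply mirrors the scale of common sketch-recognition benchmarks; this is an illustration, not the published Sketch-a-Net architecture.

```python
# A minimal sketch classifier: stacked convolutions extract stroke and
# shape features, then a linear layer scores the candidate categories.
import torch
import torch.nn as nn

class TinySketchNet(nn.Module):
    def __init__(self, num_classes: int = 250):  # assumed 250-category benchmark
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),  # grayscale input
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One 128x128 grayscale sketch in, scores over the candidate categories out.
logits = TinySketchNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 250])
```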
Robot test kitchen
Another simple human task that has been difficult for machines is flexible, on-the-fly grasping of objects. Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) have trained robots to cook by watching online cooking videos. The robots determine the best combination of motions for, say, grasping a spatula and frying pan to cook eggs. That task -- simple for humans, at least -- required teaching the robots to accurately identify shapes and movement, to employ natural language skills and to process instructions from the video. The robots had to recognize each distinct step in a video, assign it a "rule" and then perform the steps in the proper order.
"We are trying to create a technology so that robots eventually can interact with humans," Cornelia Fermüller, an associate research scientist at UMIACS, told a reporter. "So they need to understand what humans are doing. For that, we need tools so that the robots can pick up a human's actions and track them in real time.”
Ocean locomotion
Aerial drones get much of the public attention, but researchers are teaching robots how to get around in all sorts of interesting ways.
Last February, researchers at MIT, the University of Southampton in England and the Singapore-MIT Alliance for Research and Technology introduced a robotic octopus that moves through water at a rate of 10 body lengths per second. The robot -- with a 3D-printed skeleton -- propels itself by inflating with water and then rapidly deflating, shooting water out through its base.
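The physics behind that burst is ordinary conservation of momentum, and rough numbers show how a single deflation can produce such speed. Apart from the reported 10 body lengths per second, every figure below is an assumed value for illustration.

```python
# Back-of-the-envelope numbers for the octopus robot's water-jet burst.
body_length_m = 0.25          # assumed ~25 cm robot
speed = 10 * body_length_m    # reported 10 body lengths per second
print(f"top speed ~ {speed:.1f} m/s")

# Impulse-momentum balance for one deflation, ignoring drag (all assumed):
water_mass_kg = 0.4   # water expelled from the inflated membrane
jet_speed = 5.0       # m/s, exit velocity of the jet relative to the robot
robot_mass_kg = 0.8   # skeleton plus remaining water
delta_v = water_mass_kg * jet_speed / robot_mass_kg
print(f"speed gained from one burst ~ {delta_v:.1f} m/s")
```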
Knowledge from the clouds
Stuck with your head under the hood, trying to figure out how to remove the carburetor from an old Buick? Just put on your video glasses, let Gabriel see what you're working on and he can deliver visual instructions. In December, researchers at Carnegie Mellon University previewed Gabriel, a cloud-connected program that works with wearable vision systems, letting users tap information in the cloud to complete the task at hand.
Gabriel -- the result of a five-year, $2.8 million grant from the National Science Foundation -- isn’t on the market yet, but it may be soon. “Ten years ago, people thought of this as science fiction," Mahadev Satyanarayanan, professor of computer science and the principal investigator for the Gabriel project, told a reporter. "But now it's on the verge of reality."
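Conceptually, systems like Gabriel run a tight loop: capture what the camera sees, ship it to the cloud and render whatever guidance comes back before the user's attention moves on. The Python sketch below illustrates that loop; the endpoint, message format and camera object are invented stand-ins, not Gabriel's actual protocol.

```python
# Hypothetical wearable-to-cloud assistance loop (all names assumed).
import json
import time
import urllib.request

CLOUD_ENDPOINT = "http://example.com/assist"  # placeholder, not a real service

def send_frame(jpeg_bytes: bytes) -> dict:
    """POST one camera frame to the cognitive-assistance service."""
    req = urllib.request.Request(CLOUD_ENDPOINT, data=jpeg_bytes,
                                 headers={"Content-Type": "image/jpeg"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.loads(resp.read())  # e.g. {"instruction": "loosen the left bolt"}

def assist_loop(camera):
    """Stream frames; display each instruction as it arrives.

    `camera` is a hypothetical object exposing capture_jpeg().
    """
    while True:
        guidance = send_frame(camera.capture_jpeg())
        print(guidance["instruction"])  # a headset would render or speak this
        time.sleep(0.1)  # latency matters: guidance must keep pace with the user
```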
I suspect it won't be long until the quadcopter under the tree this year ends up in the attic with my remote-controlled cars.