Making AI-based language learning easy as child's play
Although language recognition software in call centers, translation apps and personal assistants has radically improved over the last decade, the machine learning algorithms used to train language recognition systems suffer from a lack of annotated training data, which limits the technology's accuracy and applicability. The technology is also "brittle," meaning it often cannot handle new data sources, topics and vocabulary.
To overcome this limitation, the Defense Advanced Research Projects Agency wants to build an automated language acquisition system that learns language the way children do -- extracting meaning from hearing sounds while observing the environment.
The Grounded Artificial Intelligence Language Acquisition (GAILA) program aims to develop a prototype that can associate text or spoken input with images, video or virtual visual scenes of previously unseen entities and actions, and then produce English descriptions of events and relationships. A system that sees a black table, a white table and a black chair, for example, should be able to describe a previously unseen object as a white chair, DARPA said in its solicitation.
Similar to how children learn about variations of word forms, the GAILA software would learn to describe events or actions (verbs), the entities that participate in those events (nouns) and the relationships among those entities and events (adjectives and phrases). It would also be able to distinguish between "pushing" and "throwing" and know that "rolling" can only apply to certain objects. It would understand that objects have functions (chairs are for sitting) and capabilities (containers hold things) and grasp indefinite and imprecise concepts such as near, tall or heavy.
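GAILA is a research solicitation, not a published system, so no reference implementation exists. As a purely illustrative toy sketch of the compositional idea behind DARPA's "white chair" example, the snippet below (all names and feature values are hypothetical) learns adjective and noun groundings from separate feature channels, then composes them to describe a pairing it never saw in training:

```python
# Toy sketch (NOT DARPA's system): compositional grounding.
# Objects are stand-in feature vectors; the learner fits separate
# groundings for color words and object words, so a never-seen
# pairing like "white chair" can still be described.
from dataclasses import dataclass

@dataclass
class Observation:
    color_feature: float   # stand-in for pixels: 0.0 = dark, 1.0 = light
    shape_feature: float   # stand-in for geometry: 0.0 = flat, 1.0 = has backrest
    description: str       # caption heard alongside the scene

# Training scenes: black table, white table, black chair -- never a white chair.
training = [
    Observation(0.1, 0.1, "black table"),
    Observation(0.9, 0.1, "white table"),
    Observation(0.1, 0.9, "black chair"),
]

def learn_word_groundings(observations):
    """Average the feature values heard with each word, per channel."""
    color_words, shape_words = {}, {}
    for obs in observations:
        adjective, noun = obs.description.split()
        color_words.setdefault(adjective, []).append(obs.color_feature)
        shape_words.setdefault(noun, []).append(obs.shape_feature)
    mean = lambda xs: sum(xs) / len(xs)
    return ({w: mean(v) for w, v in color_words.items()},
            {w: mean(v) for w, v in shape_words.items()})

def describe(obs, color_words, shape_words):
    """Pick the nearest learned word in each channel independently."""
    adjective = min(color_words, key=lambda w: abs(color_words[w] - obs.color_feature))
    noun = min(shape_words, key=lambda w: abs(shape_words[w] - obs.shape_feature))
    return f"{adjective} {noun}"

colors, shapes = learn_word_groundings(training)
novel = Observation(0.9, 0.9, description="")   # a light object with a backrest
print(describe(novel, colors, shapes))          # -> "white chair"
```

Because the adjective and the noun are grounded independently, the learner generalizes to the unseen combination; a real grounded-language system would face the same factorization problem with raw pixels and audio rather than hand-built features.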
DARPA sees several options for conducting this research. A prototype learning platform could leverage a 3D vision system, work in a custom-built virtual world, observe annotated movies and TV shows or tap into the Situations with Adversarial Generations dataset used to evaluate commonsense natural language inference tools.
Awards for the two-phase, 18-month project are limited to $500,000 for each phase.
GAILA is part of DARPA's Artificial Intelligence Exploration program, which researches and develops "third wave" AI theory and applications that make it possible for machines to contextually adapt to changing situations.
Responses are due April 26. Read the full solicitation here.