Advancing AI with grand challenges, greater security
The last of three hearings on artificial intelligence examined challenges to the technology and government's role in driving AI forward.
In the history of driverless cars, the Defense Advanced Research Projects Agency’s 2004 and 2005 Grand Challenges helped push the technology to the point of viability. The government could play a similar role in the development of artificial intelligence, said Jack Clark, the director of OpenAI, at an April 18 congressional hearing on government’s role in artificial intelligence.
DARPA’s 2016 Cyber Grand Challenge concentrated on autonomous systems and is a model other agencies can adopt, Clark said during a hearing of the House Oversight Committee's Subcommittee on Information Technology.
“Every single agency has … problems it's going to encounter, and it has competitions that it can create to spur innovation, so it’s not one single moonshot, it’s a whole bunch of them," he told lawmakers during the hearing. "I think every part of government can contribute here.”
This is the third hearing this subcommittee has held on AI. The first two looked at the general state of AI technology and how government could potentially benefit from its use.
Ranking Member Rep. Robin Kelly (D-Ill.) said securing the large amounts of data AI needs is critical. With recent news about a political research firm gaining access to information on Facebook users and massive data breaches at companies like Equifax, what can be done to secure the data used to inform these AI systems? Kelly asked.
“If a company or government organization cannot protect the data, they should not collect the data,” said Ben Buchanan, a postdoctoral fellow at Harvard Kennedy School's Belfer Center who focuses on technology deployment and government.
However, there are some technologies that could help ensure privacy while still meeting the data needs of artificial intelligence, Buchanan said. One of these is a mathematical technique called differential privacy.
“Differential privacy is the notion that before we put [an individual's] data into a big database, we add a little bit of statistical noise to that data, and that obscures what data comes from which person," he said. "In fact, it obscures the records of any individual person. But it preserves the validity of the data in the aggregate."
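The noise-adding idea Buchanan describes can be sketched in a few lines of code. Below is a minimal illustration of one common differential-privacy approach, the Laplace mechanism, applied to a simple counting query; the function names, the epsilon value, and the use of a count query are illustrative choices, not anything specified at the hearing.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample from Laplace(0, scale): an exponential draw with a random sign.
    magnitude = -scale * math.log(1.0 - random.random())
    return random.choice([-1.0, 1.0]) * magnitude

def private_count(records, predicate, epsilon: float = 0.1) -> float:
    """Count the records matching `predicate`, then add Laplace noise.

    A count query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so noise with scale
    1/epsilon obscures any individual's contribution while keeping
    the aggregate statistically useful.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

A single noisy answer may be off by a fair amount (smaller epsilon means more noise and stronger privacy), but averaged over many queries or large datasets the signal survives, which is exactly the "validity in the aggregate" trade-off Buchanan describes.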
Another technology that could help increase user privacy is on-device processing, Buchanan said.
“If we’re going to have [users] interact with an AI system, it is better to bring the AI system to them rather than bring their information to some central repository,” Buchanan said. “So if an AI system is going to be on your cell phone, you can interact with the system on your own device rather than at a central server where the data is aggregated.”
Subcommittee Chairman Rep. Will Hurd (R-Texas) said a summary of the three hearings will be released in the coming weeks, covering the lessons learned and the steps the committee expects to take going forward.