A call for global action on cyberattacks
The unprecedented security risks posed by artificial intelligence and machine learning demand an array of technological, legal and policy countermeasures, a new report says.
The exploding growth of mobile and internet-of-things devices, coupled with the increasing sophistication of artificial intelligence, has created unprecedented cybersecurity risk. That’s the conclusion of a just-released report by the Institute of Electrical and Electronics Engineers and Syntegrity, a Canadian consulting firm.
The report calls for global cooperation by governments and the private sector on an array of technological, legal and policy countermeasures, including -- and this is likely to be very controversial -- strict regulation of artificial intelligence and machine-learning programs.
“The potential impact of an intrusion has increased substantially,” the report’s authors wrote. “Globally connected devices and people mean that attacks affect not only the digital world as in the past but also the physical world through the Internet of Things (IoT) and the social world through ubiquitous social media platforms." To address the issue head on, "our entire community needs to respond and develop the technology, and data structures, and the legal, ethical, legislative, and corporate governance mechanisms needed to secure an environment that is increasingly under siege,” they said.
The mushrooming of connected devices, according to the report, “will render human security personnel incapable of defending the entire system.” As a result, the authors recommended fighting fire with fire: using artificial intelligence and machine learning to combat malware that itself relies on those technologies. As an example of such adaptive software, the authors cited the Volkswagen code that could detect when a car was being tested for diesel emissions compliance and reconfigure the engine’s behavior accordingly.
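The report does not specify techniques, but the kind of automated defense it envisions can be illustrated with a minimal, hypothetical sketch: a statistical baseline of normal device behavior that flags outliers without human monitoring. (The function names and traffic figures here are illustrative assumptions, not drawn from the report.)

```python
# Hypothetical sketch: flagging anomalous device behavior against a
# learned statistical baseline -- the sort of automated, scalable
# defense the report argues must replace manual monitoring at IoT scale.
import statistics

def build_baseline(samples):
    """Learn normal behavior (mean and spread) from historical readings."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Illustrative outbound traffic (KB/min) observed from one device
history = [12, 14, 13, 15, 11, 14, 12, 13]
mean, stdev = build_baseline(history)

print(is_anomalous(13, mean, stdev))   # ordinary reading -> False
print(is_anomalous(900, mean, stdev))  # sudden spike, possible exfiltration -> True
```

A production system would use far richer features and learned models, but the design point stands: the device itself, not a human analyst, makes the first call.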
What’s most interesting about “Artificial Intelligence and Machine Learning” is its warning about the dangers of AI/ML even when being used for legitimate purposes.
“Creators and users of AI/ML should not be financially rewarded for shipping or implementing code prematurely without a thorough analysis and testing,” the authors wrote. “We should start preparing now for the AI/ML-caused disasters that will inevitably occur.”
The report also calls for requirements that security be embedded in the hardware of connected devices. “Because IoT and mobile devices usually lack the computational power needed to run advanced security software,” the authors wrote, “security must be embedded within the hardware of the devices themselves. The devices must become the front line of defense, or they will be used to enable attacks.”
Col. Barry Shoop, a co-sponsor of the report and professor of electrical engineering at West Point, recommends a global clearinghouse on cyberthreats. He acknowledged that private-sector companies might have to be forced to participate in such an effort. “In the for-profit sector … they are less willing and in some cases not willing at all to share data for the common good of everybody,” he told BRI, a risk intelligence organization. “They’re not willing to share what has transpired, what the attacks against them were, what their defense was.”
Some experts, however, said that a rapid increase in cyberattacks and the resulting damage may convince the private sector to join the effort voluntarily. “What we’ve seen over the last five years is increasingly larger, deeper, broader attacks,” Brian David Johnson, futurist in residence at Arizona State University and a contributor to the report, told IEEE Spectrum. “Not only is it raising this to the attention of people, it’s also becoming bad for business -- and bad for the business of government.”