How hackers outwit facial ID
Deepfakes, spoofed metadata and digital injection attacks are some of the most prevalent biometric hacks, and governments could lose billions if they are not addressed.
A “hurricane” of biometric threats is coming, and governments and businesses could lose billions of dollars to fraud if they are not prepared for digital injection attacks, metadata spoofing and deepfakes, a security expert warned.
Digital injection attacks, in which hackers bypass the cameras used for identity verification and feed synthetic images, deepfakes or even prerecorded video directly into authentication systems, now occur five times more often than persistent presentation attacks, in which bad actors attempt to verify an identity by presenting a photo or a mask to a camera.
Meanwhile, attackers are increasingly spoofing devices and metadata like IP addresses to get around enterprise security, obscuring hack origins and making them more difficult to defend against, according to a new report from face biometric verification and authentication technology company iProov.
Deepfakes, meanwhile, have become a common tool in online fraud, including a newer, more sophisticated iteration known as face swapping, which first emerged last year. It combines a victim’s face image with a synthetic video to spoof both liveness checks and passive authentication software.
Other researchers have previously suggested that the various forms of synthetic fraud could be stamped out in a matter of years with the right education and investments, but iProov argued that the threats from synthetic fraud are continuously evolving.
“The speed of development in synthetic imagery technology has been extraordinarily rapid,” iProov CEO Andrew Bud said. “So you're seeing kits with exceptionally high performance being available to pretty much everyone.”
While mobile platforms were once considered easier to defend, thanks to developers’ investments in mobile application and software security, iProov’s report suggests that is no longer the case. Researchers found a 149% increase in attacks targeting mobile devices and platforms in the second half of 2022 compared with the first half, growth driven in part by the expanding role of mobile devices in biometric security. The company believes the use of emulators that pose as mobile devices and mimic their behavior is also a key factor in that growth, Bud said.
All of these biometric attacks have become more sophisticated and easier to execute, Bud said, due to the wide availability of the necessary tools on the dark web and the rise of “crime as a service,” in which hacking enterprises operate like businesses. While the report found that only about 2% to 3% of threat actors have advanced coding skills, bad actors can buy the malware needed for these kinds of attacks for between $10 and $20.
And those threats will only evolve, Bud said, especially as ChatGPT and similar artificial intelligence tools improve. With the latest version of ChatGPT experimenting with image prompts, Bud said the use of generative technology to produce human-looking avatars for synthetic identities that can get around biometric screens is “inevitable.”
He warned that a “hurricane is coming” if governments and businesses do not properly protect themselves from these evolving threats. They risk repeating the experience of the COVID-19 pandemic, when governments lost billions of dollars to fraudulent unemployment insurance claims, Bud said.
As the federal government and states increasingly look to technology to verify people’s identities online, Bud said he is concerned that too few take the risk of biometric threats “seriously enough.” Rather, he said, they “are using [verification] technologies that are increasingly not fit for purpose.”
He said those agencies and businesses that fail to invest in effective biometric verification technology will face a “moment of reckoning” in the future.