For smartphone makers, security is a matter of economics
Only when average smartphone users start to value security as much as they do the new bells and whistles will manufacturers be forced to treat security like the life-or-death issue it currently is for many targets of Pegasus.
The Pegasus Project, a recent reporting effort to go behind the scenes of NSO Group’s infamous mobile spyware, has opened many people’s eyes to the potential for smartphones to be compromised and weaponized against their users. Reports have confirmed that individuals within government, from heads of state to diplomats, are particularly vulnerable to this threat given the value they represent to spies. In the wake of the Pegasus Project, much of the attention has turned to Apple, whose sterling security reputation is seemingly at odds with the ability of Pegasus operators to remotely and surreptitiously take total control of a targeted individual’s iPhone -- in many cases without any interaction required from the victim.
Smartphone makers provide adequate security for the majority of users, yet they struggle to contain the advanced threats that nation-state actors and cyber-arms dealers like NSO Group direct at government users and other high-risk individuals. To understand why, it’s important to realize that smartphones are primarily commercial products. With any commercial device, manufacturers weigh security decisions against factors like usability, user preferences, implementation costs and reputational risk. In other words, security is viewed through an economic lens.
This tension can be illustrated by considering how smartphones frustrate common security practices.
Limiting smartphone features
An axiom of software development is that more features mean more code, and more code means a greater likelihood of vulnerabilities. If security were the prime consideration, smartphone makers like Apple and Samsung would limit the number of features and focus more on system stability and security.
In the real world, proprietary new features and services not only attract new customers but also increase customer lock-in within the vendor’s ecosystem. Apple’s iMessage service, for example, was originally designed to share text messages and photos, but over the years it has come to offer features like GIFs, emojis and third-party app integrations. Each of these extensions and interconnections increases the chances that skilled hackers will find and exploit a security gap. Yet, to many iPhone users, these features make iMessage an indispensable tool.
Slowing down the release schedule
Properly vetting code before it’s released is tedious and time-consuming, but critical for maintaining system security. In the push to release features to market as quickly as possible, this process tends to get shortchanged.
At Apple’s annual Worldwide Developers Conference, a litany of new capabilities is introduced both to maximize user interest and to attract development efforts around these features. Such feature-heavy iOS releases create a harrowing schedule that leaves Apple developers with little time to vet new features for security flaws. Importantly, each new iOS release must be tested on every supported iPhone model (iOS 14 supports a whopping 19 models). It’s no surprise that Apple has come under fire in recent years for the multitude of bugs that accompany each major iOS release. So far this year, the company has already had to patch 13 zero-day vulnerabilities.
Developing a security-first architecture
If Android or iOS were engineered for security above all else, the user experience would be drastically different. Since many of the exploit chains affecting smartphones stem from the challenges of parsing complex data, smartphone makers could abandon such parsing altogether. But imagine iMessage with just text -- no links, no images, no app integrations -- and it’s clear why this option is a nonstarter.
Developing a security-first architecture also requires the use of specialized, isolated hardware, which is difficult and expensive to implement. And, given the space and power constraints of modern phones, focusing on hardware security may mean compromising on other areas such as camera size/quality and battery life, which happen to be two of the most important user considerations in purchasing a new smartphone.
Offering deep analysis to users
With an advanced threat like Pegasus, victims typically have no idea that they’ve been attacked. This situation is exacerbated by the fact that smartphones offer only limited security analysis tools to users. To effectively combat Pegasus, users would need greater visibility into their device’s filesystem, processes and system logs.
It’s understandable why, in addition to legitimate security reasons, Apple would want to limit such deep analysis. The company’s focus on the customer experience (“It just works”) is an incredibly valuable brand asset, and forcing users to deal with security notifications could be an unnerving and distracting experience that would go against this philosophy. Apple also doesn’t want any of the bad press or social media buzz that would result from users broadcasting that they suspect they’ve been hacked based on such analysis.
Increasing the size of the security team
Apple, for its part, has attracted some of the most skilled security talent on the planet and has increased its investment in its security team over the years. To match the offensive hacking skills of intelligence agencies and commercial surveillance providers, however, the company would effectively need to fund its own offensive hacking unit. Consider that NSO Group alone reported $243 million in revenue in 2020, and it becomes clear how much Apple would have to invest to credibly defend against Pegasus and other advanced mobile threats.
According to Apple Security Engineering and Architecture head Ivan Krstić: “Attacks like [Pegasus] are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals.” This quote provides a good window into the cost-benefit analysis Apple uses to prioritize securing the iPhone against threats that the vast majority of users can expect to encounter over threats faced by government users and other individuals targeted by nation-state actors.
Beefing up the bug bounty program
While Apple has made strides in recent years in both opening up its bug bounty program and increasing the bounties paid for each type of exploit, the company has nonetheless received criticism for being overly stingy with its payouts and for not publicly championing researchers who have brought exploits forward. Apple may want to downplay the severity of any bugs that are found lest they tarnish the company’s security reputation, but doing so saps the motivation of those who might otherwise be inclined to find and report vulnerabilities.
Bug hunters may instead choose to sell to an exploit broker like Zerodium (which will then sell the exploit to the highest bidder), a commercial hacking company like NSO Group (which will build it into its hacking tools) or even a government buyer directly. Because many of these end customers have virtually unlimited budgets for surveillance, a bug hunter can make more money by selling an exploit than by reporting it to Apple. Apple must offer significantly more money or other perks to swing the balance back in its favor.
The economics of the smartphone market aren’t changing any time soon. At the same time, as smartphones grow increasingly important in our day-to-day lives, malicious actors will be even more motivated to look for and exploit security gaps. Only when average smartphone users start to value security as much as they do the new bells and whistles will manufacturers be forced to treat security like the life-or-death issue it currently is for many targets of Pegasus.