What gives? Shellshock fails to shock
What a difference a few months can make. Shortly after the Heartbleed bug caused a panic in security circles, along comes something that could be even more serious, and the reaction seems to be one big yawn.
The so-called Shellshock vulnerability is in the GNU Bourne-Again Shell (Bash), which is the command-line shell used in Linux and Unix operating systems as well as Apple’s Unix-based Mac OS X. A flaw in the way Bash handles function definitions passed through environment variables could allow an attacker to execute arbitrary shell commands and insert malware into systems.
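For readers who want to see the mechanism, the short Python sketch below is only an illustration, not an official detection tool; the bash_is_vulnerable helper and the /bin/bash path are assumptions. It runs the widely published CVE-2014-6271 test string against a local copy of Bash, which executes the command appended after the function definition if it is unpatched.

```python
# A minimal sketch, assuming Bash lives at /bin/bash. A vulnerable Bash
# executes the command appended after the exported function definition;
# a patched Bash ignores it.
import subprocess

def bash_is_vulnerable(bash_path="/bin/bash"):
    # Craft an environment variable that looks like an exported function
    # definition with a trailing command -- the core of CVE-2014-6271.
    env = {"TESTVAR": "() { :; }; echo SHELLSHOCKED"}
    result = subprocess.run(
        [bash_path, "-c", "echo probe"],
        env=env, capture_output=True, text=True
    )
    # If the marker appears, Bash ran the trailing command at startup.
    return "SHELLSHOCKED" in result.stdout

if __name__ == "__main__":
    print("vulnerable" if bash_is_vulnerable() else "patched or not affected")
```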
This is not a vulnerability in concept only. Trend Micro, which has been looking for threats based on Shellshock, has already identified a slew of them and says exploit attempts can arrive over common communications protocols such as HTTP, SMTP, SSH and FTP.
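Why a shell bug exposes web servers may not be obvious. The rough sketch below, which assumes a hypothetical CGI endpoint at test.example that you control, shows how the same published test string can ride in an ordinary HTTP header: servers that hand headers to Bash-based CGI scripts as environment variables end up executing the appended command. Probe only systems you own or are authorized to test.

```python
# A minimal sketch against a hypothetical, administrator-controlled CGI
# endpoint (http://test.example/cgi-bin/status.sh -- not a real URL).
import urllib.request
import urllib.error

PAYLOAD = "() { :; }; echo X-Shellshock-Check: triggered"
URL = "http://test.example/cgi-bin/status.sh"   # hypothetical test endpoint

# The payload travels in a normal request header, which the web server
# copies into the CGI script's environment.
req = urllib.request.Request(URL, headers={"User-Agent": PAYLOAD})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        # On a vulnerable server the injected echo runs while the CGI
        # environment is built; its output can surface as an extra header.
        print(resp.status, dict(resp.getheaders()).get("X-Shellshock-Check"))
except urllib.error.HTTPError as err:
    # A 500 error on this probe (but not on a normal request) is another
    # common symptom of a vulnerable CGI script.
    print("HTTP error:", err.code)
```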
Shellshock compounds the threat posed by the OpenSSL Heartbleed bug in various ways. Servers that host OpenVPN, a widely used application, are reportedly vulnerable to Shellshock just as they were to Heartbleed, and other security researchers have reported exploits in the wild.

Heartbleed and Shellshock also come from similar development stock. Both are flaws introduced when the code was originally written that went unnoticed for a long time, apparently for more than 20 years in the case of Shellshock. Developers at the time simply didn’t anticipate the kinds of attacks today’s threat environment produces, and the discoveries have called the rigor of open source development into question.
Patches are quickly being rolled out to cope with Shellshock, just as they were for Heartbleed, though security organizations have warned that the initial fixes don’t completely resolve the vulnerability. And in any case, patches only help if people apply them: months after the Heartbleed bug was trumpeted in the headlines, critical systems around the world were still at risk.
Not all vulnerabilities are equal
Then again, perhaps organizations aren’t as vulnerable to Heartbleed, Shellshock and similar code-driven bugs as people think. University and industry researchers argue in a recent paper that existing security metrics don’t capture how many disclosed vulnerabilities are actually exploited.
The researchers developed several new metrics derived from actual field data and evaluated them against some 300 million intrusion reports from more than 6 million hosts. They found that no product in the study had more than 35 percent of its disclosed vulnerabilities exploited in the wild and that, across all the products combined, only 15 percent of vulnerabilities were exploited.
“Furthermore,” the authors wrote, “the exploitation ratio and the exercised attack surface tend to decrease with newer product releases [and that] hosts that quickly upgrade to newer product versions tend to have reduced exercised attack surfaces.”
In all, they propose four new metrics that they claim, when added to existing metrics, provide a necessary measure for systems that are already deployed and working in real-world environments:
- A count of a product’s vulnerabilities exploited in the wild.
- The exploitation ratio, or the share of a product’s disclosed vulnerabilities that are exploited over time.
- A product’s attack volume, or how frequently it’s attacked.
- The exercised attack surface, or the portion of a product’s vulnerabilities that are attacked in a given month.
These metrics, they say, could be used as part of a quantitative assessment of cyber risks and could inform the design of future security technologies.
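The article doesn’t reproduce the paper’s formulas, but a rough sketch of how such field-data metrics might be computed could look like the following. The ProductFieldData structure and the sample numbers are illustrative assumptions, not the researchers’ dataset or code.

```python
# Illustrative calculations of the four proposed metrics from
# hypothetical field data -- not the paper's actual methodology.
from dataclasses import dataclass, field

@dataclass
class ProductFieldData:
    disclosed_vulns: set          # vulnerability IDs disclosed for the product
    exploited_vulns: set          # subset actually seen exploited in the wild
    attacks_per_month: dict = field(default_factory=dict)        # month -> attack count
    vulns_attacked_by_month: dict = field(default_factory=dict)  # month -> IDs attacked

def count_exploited_in_wild(d: ProductFieldData) -> int:
    # Metric 1: vulnerabilities exploited in the wild.
    return len(d.exploited_vulns)

def exploitation_ratio(d: ProductFieldData) -> float:
    # Metric 2: share of disclosed vulnerabilities that get exploited.
    return len(d.exploited_vulns) / max(len(d.disclosed_vulns), 1)

def attack_volume(d: ProductFieldData, month: str) -> int:
    # Metric 3: how frequently the product is attacked in a given month.
    return d.attacks_per_month.get(month, 0)

def exercised_attack_surface(d: ProductFieldData, month: str) -> float:
    # Metric 4: portion of the product's vulnerabilities attacked that month.
    attacked = d.vulns_attacked_by_month.get(month, set())
    return len(attacked) / max(len(d.disclosed_vulns), 1)

# Made-up numbers echoing the article's 35 percent upper bound.
data = ProductFieldData(
    disclosed_vulns={f"CVE-{i}" for i in range(100)},
    exploited_vulns={f"CVE-{i}" for i in range(35)},
    attacks_per_month={"2014-09": 12000},
    vulns_attacked_by_month={"2014-09": {f"CVE-{i}" for i in range(10)}},
)
print(exploitation_ratio(data))                    # 0.35
print(exercised_attack_surface(data, "2014-09"))   # 0.10
```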
Don’t forget the hardware
Then again, what’s the use of vulnerability announcements and security metrics, all aimed at revealing software bugs and fixes, if the hardware that hosts the software is compromised?
In times past, when chips and the systems that use them were all manufactured in the United States or by trusted allies, that wasn’t such a concern. But globalization has dispersed manufacturing to China and other countries, raising fears that adversaries could tamper with hardware components to make U.S. systems easier to attack.
That’s been the impetus behind several trusted computing initiatives in the past few years. Most recently, the National Institute of Standards and Technology developed its Systems Security Engineering initiative to guide the building of trustworthy systems.
The National Science Foundation is now in the game through the government’s Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS) program. One approach, in concert with the Semiconductor Research Corporation (SRC), is to develop tools and techniques to assure the security of components from the design stage through manufacturing.
Nine initial research awards were recently made for this program, which is a part of the NSF’s $75 million Secure and Trustworthy Cyberspace “game changing” program.
While all of this is pretty broad-based, the ultimate result for government agencies could be that, in just a few years, they will be able to specify in their procurements exactly what assured hardware the systems they buy must contain.