How secure are your open source-based systems?
Software developers often assume that open source components in their supply chain are reliable – but assumptions like that were behind the Heartbleed exploit. Here are ways to lock down your open source projects.
Responsibility for secure open source software is, well, complicated.
Some believe open source is more secure than proprietary software because, as Linus’s Law says, “Given enough eyeballs, all bugs are shallow.” The reasoning goes that the more widely available open source software is, the more scrutiny it receives, the more flaws are surfaced and the stronger the code becomes.
That would be true if components that make up open source code were regularly reviewed and if developers verified the security of components before incorporating them into their work.
But that’s not always the case. Like automobile assembly plants that build cars with independently manufactured airbag and brake components, software developers often assume that open source components in their supply chain are reliable, patched and up to date.
Unfortunately, assumptions like that allow for vulnerabilities like those that were behind the Heartbleed bug.
Flaws exist in open source software for a variety of reasons: components might be old, or they might not have been mature when first used. They might never have been audited or adequately tested. And often, once an open source component makes it into a widely used application, it is assumed to be secure and demand for testing diminishes.
It’s not just open source code that’s vulnerable. Much proprietary software uses open source components. According to Gartner, 95 percent of all mainstream IT organizations will leverage some element of open source software – directly or indirectly – within their mission-critical IT systems in 2015.
And in an analysis of more than 5,300 enterprise applications uploaded to its platform in the fall of 2014, Veracode, a security firm that runs a cloud-based vulnerability scanning service, found that third-party components introduce an average of 24 known vulnerabilities into each web application.
To address this escalating risk in the software supply chain, industry groups such as The Open Web Application Security Project, PCI Security Standards Council and Financial Services Information Sharing and Analysis Center now require explicit policies and controls to govern the use of components, according to Veracode.
The use of open source in federal systems is also attracting scrutiny. In December, House Committee on Foreign Affairs Chairman Ed Royce (R-Calif.) and Rep. Lynn Jenkins (R-Kan.) introduced the Cyber Supply Chain Management and Transparency Act of 2014 (H.R. 5793), which would have required any supplier of software to the federal government to identify the third-party and open source components used and to verify that they contain no known vulnerabilities for which a less vulnerable alternative is available.
The bill also would have required the Office of Management and Budget to issue guidance on setting up an inventory of vulnerable software and on replacing or repairing known or discovered vulnerabilities. Agencies would have had to report annually on the security of projects using open source components and on their suppliers, for reference by other agencies.
The bill is important because, as Rep. Royce said in his introductory remarks, much of the nation’s economy relies on software with open source components.
“It is precisely because of the importance of open source components to modern software development that we need to ensure integrity in the open source supply chain, so vulnerabilities are not populated throughout the hundreds of thousands of software applications that use open source components,” Royce said.
But not everyone thought the proposed bill was necessary. Trey Hodgkins, senior vice president for public sector at the IT Alliance for Public Sector, told Government Technology that he thought H.R. 5793 duplicated security measures many companies already use.
Do you know what’s in your software?
“We cannot afford to include known exploitable software in our government infrastructure,” said Wayne Jackson, CEO of Sonatype Inc., a software supply chain service provider that is the steward of the Central Repository, the largest source of Java components, as well as creator of the Apache Maven project and distributor of the Nexus open source repository manager.
Today, 90 percent of a typical application is composed of open source and third-party components, Jackson wrote in a blog post. The Central Repository logged 17.2 billion downloads in 2014 – more than 47 million components every day.
That makes the inventory of open source components critical, Jackson said, because without it, IT managers can’t know if their systems contain compromised components.
One way to check is with Sonatype’s Application Health Check, which provides a free breakdown of every component in an application and alerts IT managers to potential security and licensing problems.
“When open source is found to be defective, it’s disclosed, but if you don’t know what’s in your software, that disclosure tips off adversaries who can use it to exploit vulnerabilities,” Jackson said. And hackers get the biggest bang for the buck by going after the components that are widely used, as the OpenSSL/Heartbleed attack demonstrated.
And it’s not just enterprise business software that’s vulnerable, Jackson said. The problem affects the security of any system with digital components, from websites to cars to insulin pumps. The whole Internet of Things is vulnerable to exploits because it is based, in part, on components that have no upgrade path once deployed.
So how can agencies ensure that their systems use a software supply chain that’s been secured?
Use the best ingredients. Agencies should first make sure the components used come directly from a trusted repository. Look for software that is officially compatible with CVE (Common Vulnerabilities and Exposures), the set of standard identifiers for publicly known security vulnerabilities and exposures, said Red Hat’s Dave Egts.
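A first, low-cost check on provenance is to verify a downloaded component against the checksum its repository publishes alongside it. The Python sketch below does that for a hypothetical component from the Central Repository; the coordinates are examples only, and the URL layout follows Maven Central’s conventions.

```python
# Minimal sketch: verify that a component downloaded from the Central Repository
# matches the SHA-1 checksum the repository publishes alongside it. The component
# coordinates are hypothetical examples; the URL layout follows Maven Central's
# conventions (https://repo1.maven.org/maven2).
import hashlib
import urllib.request

REPO = "https://repo1.maven.org/maven2"
group, artifact, version = "commons-collections", "commons-collections", "3.2.2"

jar_url = (f"{REPO}/{group.replace('.', '/')}/{artifact}/{version}/"
           f"{artifact}-{version}.jar")

with urllib.request.urlopen(jar_url, timeout=60) as resp:
    jar_bytes = resp.read()

# Central publishes a ".sha1" file next to every artifact.
with urllib.request.urlopen(jar_url + ".sha1", timeout=30) as resp:
    expected = resp.read().decode().split()[0]

actual = hashlib.sha1(jar_bytes).hexdigest()
print("checksum OK" if actual == expected else f"MISMATCH: {actual} != {expected}")
```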
On the flip side, don’t use components with known security (or other) defects, especially when newer, fixed versions are available. Although this sounds like a no-brainer, it’s not yet a mainstream best practice, Jackson said.
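A rough illustration of that practice is to compare an application’s component inventory against advisory data that records the first fixed release of each component. In the Python sketch below, both the inventory and the fixed-version data are hypothetical stand-ins for what would normally come from a vulnerability feed or a repository manager.

```python
# Minimal sketch: flag components that are older than the first release known to
# fix a disclosed flaw. All names and versions below are hypothetical stand-ins
# for data that would come from a vulnerability feed or repository manager.

def parse_version(version):
    """Split '1.0.1g'-style strings into a tuple that sorts roughly in release order."""
    return tuple(int(p) if p.isdigit() else p
                 for p in version.replace("-", ".").split("."))

# Advisory data: component -> first version without the known flaw (hypothetical).
FIXED_VERSIONS = {
    "openssl": "1.0.1g",              # the release that closed Heartbleed
    "commons-collections": "3.2.2",
}

# What the application actually ships (hypothetical inventory).
INSTALLED = {
    "openssl": "1.0.1f",
    "commons-collections": "3.2.2",
}

for name, version in INSTALLED.items():
    fixed = FIXED_VERSIONS.get(name)
    # Simplified comparison; real tools understand each ecosystem's version rules.
    if fixed and parse_version(version) < parse_version(fixed):
        print(f"UPGRADE {name}: {version} has a known flaw, {fixed} or later is available")
```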
Make a list. IT managers should create and preserve a bill of materials, or a list of ingredients, for the components used in a given piece of software.
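For a Maven-based Java project built from Central Repository components, a starter bill of materials can be generated from the project’s own build file. The short Python sketch below reads a pom.xml and lists each declared dependency; it covers only direct dependencies, and real tooling would resolve the full transitive graph.

```python
# Minimal sketch: build a simple bill of materials from a Maven project's pom.xml.
# Only direct dependencies are listed; full tooling (mvn dependency:tree, a
# repository manager) would resolve the complete transitive graph. Assumes the
# pom.xml declares the standard POM 4.0.0 namespace.
import xml.etree.ElementTree as ET

POM_NS = "{http://maven.apache.org/POM/4.0.0}"

def bill_of_materials(pom_path):
    """Return (groupId, artifactId, version) for every declared dependency."""
    root = ET.parse(pom_path).getroot()
    return [(dep.findtext(POM_NS + "groupId", default="?"),
             dep.findtext(POM_NS + "artifactId", default="?"),
             dep.findtext(POM_NS + "version", default="unspecified"))
            for dep in root.iter(POM_NS + "dependency")]

if __name__ == "__main__":
    for group, artifact, version in bill_of_materials("pom.xml"):
        print(f"{group}:{artifact}:{version}")
```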
Scan the code. Agencies should use automated code scanners compatible with the Security Content Automation Protocol (SCAP). Open source tools like OpenSCAP are free, built into many operating systems and certified by the National Institute of Standards and Technology.
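Scans are most useful when they run on a schedule rather than once, and the oscap command line tool can be wrapped in a script for that purpose. In the Python sketch below, the SCAP content path and profile name are assumptions that vary by distribution and by the installed SCAP Security Guide version.

```python
# Minimal sketch: run an OpenSCAP evaluation from a script so it can be scheduled.
# The SCAP content path and profile id are assumptions; they vary by distribution
# and by the version of the SCAP Security Guide that is installed.
import subprocess

DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"   # assumed path
PROFILE = "xccdf_org.ssgproject.content_profile_standard"         # assumed profile id

result = subprocess.run(
    ["oscap", "xccdf", "eval",
     "--profile", PROFILE,
     "--results", "scan-results.xml",
     "--report", "scan-report.html",
     DATASTREAM],
    capture_output=True, text=True,
)

# oscap exits non-zero when rules fail, so inspect the output rather than
# treating a non-zero return code as a crash.
print(result.stdout)
```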
Use government-certified software. Using FIPS-certified cryptography libraries to write encryption applications, for example, eliminates the need to obtain additional FIPS certification.
Monitor security information sites. Check the NIST National Vulnerability Database for new disclosures that might affect the components in critical systems.
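Monitoring can also be scripted against the NVD’s public JSON interface. The Python sketch below polls for disclosures that mention a component of interest; the endpoint and response fields follow the NVD’s CVE API and may change, and heavy use should request an API key and respect the service’s rate limits.

```python
# Minimal sketch: poll the NIST National Vulnerability Database for disclosures
# that mention a component of interest. The endpoint and response fields follow
# the NVD's public CVE JSON API and may change; heavy use should request an API
# key and respect the service's rate limits.
import json
import urllib.parse
import urllib.request

COMPONENT = "openssl"  # hypothetical component being tracked

url = ("https://services.nvd.nist.gov/rest/json/cves/2.0?"
       + urllib.parse.urlencode({"keywordSearch": COMPONENT, "resultsPerPage": 20}))

with urllib.request.urlopen(url, timeout=30) as response:
    data = json.load(response)

for item in data.get("vulnerabilities", []):
    cve = item.get("cve", {})
    description = cve.get("descriptions", [{}])[0].get("value", "")
    print(cve.get("id"), "-", description[:100])
```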
There may be no way to completely protect the government’s critical systems from determined adversaries, but ensuring that the basic building blocks are secure is a good place to start.