Faster networks, closer inspection fend off agile threats
Securing IT systems now means continuously monitoring the status of those systems on the back end and closely inspecting the traffic moving through networks on the front end to provide real-time situational awareness.
Requirements for securing government information systems are evolving away from the static, check-box compliance that has been the norm for eight years under the Federal Information Security Management Act. The focus is shifting toward real-time situational awareness.
“There are changes afoot,” said Mike Lloyd, chief scientist at RedSeal Systems, a network security company.
The National Institute of Standards and Technology is outlining the new approach in a series of revised publications that specify how agencies should comply with FISMA and other information technology security requirements.
“One of the big changes is that agencies are going to have to develop a continuous monitoring strategy as part of their security plan,” said Ron Ross, who leads FISMA implementation efforts at NIST. An annual snapshot of system configuration and status is no longer enough to ensure that a system is adequately secured.
Many in the security industry applaud the shift.
“These are good changes,” Lloyd said. “They are very much welcomed.”
In addition to the attention to IT systems' status, more information is being gathered about the traffic that is running over those systems and networks. Deep packet inspection (DPI), the ability to look past the address information in a packet header and examine the payload itself, is becoming more practical in real time.
The evolving security requirements have ratcheted up the demand and increased the market for DPI in the past 18 to 24 months, said Elan Amir, chief executive officer of Bivio Networks. “Everything interesting happening in networking today has deep packet inspection at its core,” he said.
The idea of continuous monitoring of systems is not new. It is mentioned in earlier versions of NIST’s “Recommended Security Controls for Federal Information Systems” (Special Publication 800-53), which specifies the baseline security controls needed to meet the mandatory requirements in the Federal Information Processing Standards.
“An effective continuous monitoring program results in ongoing updates to the security plan, the security assessment report, and the plan of action and milestones” required under FISMA, the publication states. But the schedule specified for this monitoring produced only isolated snapshots of system status. “Those security controls that are volatile or critical to protecting the information system are assessed at least annually,” the publication states. “All other controls are assessed at least once during the information system’s three-year accreditation cycle.”
In Revision 3 of SP 800-53, released in July 2009, the streamlined specifications eliminate the snapshots. “Continuous monitoring of security controls using automated support tools facilitates near real-time risk management and promotes organizational situational awareness with regard to the security state of the information system,” the revised guidelines state.
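In practice, that kind of automated support often boils down to repeatedly comparing the live state of a system against an approved baseline and flagging any drift. The short Python sketch below illustrates the idea; the baseline file format, monitored paths and reporting are hypothetical placeholders rather than any particular agency tool.

# Minimal sketch of an automated configuration-drift check (hypothetical
# baseline format and file paths; illustrative only).
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("approved_baseline.json")  # assumed format: {"path": "sha256 digest"}

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of a monitored configuration file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_controls(baseline: dict) -> list:
    """Compare each monitored file against its approved digest and report drift."""
    findings = []
    for path, approved in baseline.items():
        try:
            current = hash_file(path)
        except FileNotFoundError:
            findings.append(f"MISSING: {path}")
            continue
        if current != approved:
            findings.append(f"DRIFT: {path} no longer matches the approved baseline")
    return findings

if __name__ == "__main__":
    baseline = json.loads(BASELINE_FILE.read_text())
    for finding in check_controls(baseline):
        print(finding)  # in practice, this would feed a dashboard or security operations center

Run on a schedule, a loop like this produces the near-real-time picture the guidelines describe instead of an annual snapshot.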
Although the latest version is Revision 3, this is the first major update of the guidelines since initial publication in December 2005. The shift in security focus is appearing in other documents, too, including Revision 1 of 800-37, “Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach,” which is expected to be released in its final version this month.
“It used to be more focused on the certification and accreditation” process required for IT systems under FISMA, Ross said. “The new SP 800-37 is putting the focus on a more balanced approach. You are going to see continuous monitoring take on a greater importance over time.”
Ross said the emphasis on continuous monitoring is one of degree, reflecting the more dynamic systems that now characterize IT.
“It’s an evolution,” he said. “There is a lot more churn in the environment, and our adversaries are taking advantage of that churn.”
“Agencies have to deal with the fact that they have sophisticated adversaries who understand the stock defenses,” said Eddie Schwartz, chief security officer of Netwitness.
As a result, defenses must be monitored more closely to see when they have failed or have been circumvented. The baseline requirements that NIST has raised reflect not only the change in the threat environment but also advances in technology available to monitor IT systems and organizations' process maturity, Schwartz said.
“You can’t go from zero to 60 that quickly,” he said. But both the technology and the processes are in place to enable the automated, continuous monitoring that can produce near-real-time awareness. “The state of the art has come pretty far.”
All of this is what Ross calls back-end security, adding that “everything that happens on the back end is a result of what happens on the front end.” The front end is where the packets come into a network and IT systems, possibly carrying malicious payloads or looking for ways to circumvent defenses. “I think that 2010 will see a fundamental shift toward front-end security.”
That is where technologies such as DPI come in. It’s a discipline that allows fine-grained processing of network traffic and can be applied to a variety of applications, including intrusion detection and prevention systems and network-monitoring tools.
Address data in the packet header has been the focus of traditional monitoring and filtering. “Once you go beyond that into the payload, you are into DPI,” Amir said.
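To make the header-versus-payload distinction concrete, the sketch below steps past the IPv4 and TCP headers of a captured packet and pattern-matches what remains. The signature list and alert text are illustrative; a production DPI engine does this work in specialized hardware or highly optimized software so it can keep up with line rates.

# Conceptual sketch of deep packet inspection on a single captured IPv4/TCP
# packet (signatures and alerting are illustrative placeholders).
SIGNATURES = [b"cmd.exe", b"/etc/passwd"]  # example payload patterns

def inspect(packet: bytes) -> str:
    # Shallow inspection stops at the header: version, length, addresses.
    ihl = (packet[0] & 0x0F) * 4          # IPv4 header length in bytes
    protocol = packet[9]
    src = packet[12:16]                   # source address, four raw bytes
    if protocol != 6:                     # not TCP; ignore for this sketch
        return "pass"

    tcp = packet[ihl:]
    data_offset = (tcp[12] >> 4) * 4      # TCP header length in bytes
    payload = tcp[data_offset:]           # deep inspection begins here

    for signature in SIGNATURES:
        if signature in payload:
            return f"alert: payload matched {signature!r} from {'.'.join(map(str, src))}"
    return "pass"

In real deployments the packet bytes would come from a network tap or capture library, and the matching would use far richer rules than a simple substring search.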
The advantage of knowing the content of your network traffic is obvious, but it comes at a cost. Traditional networking has favored high speed and low computational overhead: Interfere with the packets as little as possible to keep them moving to their destinations as quickly as possible. DPI, by contrast, is computationally demanding, and to be practical it must be done without slowing traffic. The ability to do deep inspection at adequately high speeds is a relatively new development.
“We are at the beginning of the road in deep packet inspection,” Amir said. “The capabilities are pretty significant in terms of raw processing, but we are just at the beginning of learning how to use it.”
Besides technical issues related to DPI, there are legal, political and social issues that come with the ability to examine the content of network traffic. Technology defines what is possible, but privacy rules that define what should be done remain incomplete.
“All of that stuff is in the future,” Amir said. “It has not at all been solved.”
Development of the technical capability for DPI has been evolutionary, Amir said. “There probably was no breakthrough,” he said. “What happened was that there was a recognition that the discipline of deep packet inspection was worthwhile.”
During the past decade, the quest for network speed has moved to the 10 gigabits/sec range, with 40 gigabits/sec and 100 gigabits/sec speeds now appearing. “That problem has been solved,” Amir said. But it has been in the past two years that the industry has seen increased demand for information about what is on the network, forcing the development of DPI.
DPI’s evolution has been aided by improvements in semiconductors, which have produced the chips needed to perform the high-speed computations it requires. That is part of a broader trend of building stronger security capabilities directly into the chips themselves.
“The underlying hardware is the foundation for securing what sits on top of the platform,” said Steve Grobman, director of cybersecurity technology and initiatives at chip-maker Intel.
One recent innovation in hardware security is Trusted Execution Technology, or TXT, which allows verification that a computing platform is executing in a known state, providing a level of trust for the computer’s user or for a remote platform interacting with it. The chip measures the launched configuration of a computer and stores it as a baseline in the computer’s Trusted Platform Module. That baseline can then be compared with a fresh measurement when software is executed to verify that the configuration has not been altered.
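Conceptually, that measure-and-compare cycle looks like the short sketch below; the hashing and register extension are a simplified stand-in for what the hardware does, not Intel’s TXT or TPM interfaces, and the component names are placeholders.

# Simplified illustration of measured launch: hash each boot component,
# fold the measurements into a register, and compare against a stored
# baseline on later launches. Real TXT/TPM work happens in hardware.
import hashlib

def measure(component: bytes) -> str:
    """Hash a launch component, standing in for a platform measurement."""
    return hashlib.sha256(component).hexdigest()

def extend(register: str, measurement: str) -> str:
    """Fold a new measurement into a running register, PCR-style."""
    return hashlib.sha256(bytes.fromhex(register) + bytes.fromhex(measurement)).hexdigest()

boot_chain = [b"firmware-image", b"bootloader-image", b"os-kernel-image"]  # placeholders

# First, trusted launch: record the baseline of the boot chain.
baseline = "00" * 32
for component in boot_chain:
    baseline = extend(baseline, measure(component))

# Later launch: re-measure and compare before trusting the platform.
current = "00" * 32
for component in boot_chain:
    current = extend(current, measure(component))

print("platform state verified" if current == baseline else "configuration has been altered")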
TXT was made available on Intel vPro client processors in 2007 on platforms intended for government, the financial industry and other environments that require high levels of trust. It was made available for mobile processors in 2008 and will be introduced for servers this year. Software developers are working with it, and applications using the technology should begin appearing on the market this year, Grobman said.
“The lower in the stack the technology is, the longer it takes to make it to market,” he said. “Given that in 2007, the technology was not in mobile, they waited for that before they started to ramp up.” Grobman said he expects TXT development for servers to occur more quickly and that the technology will see rapid expansion into virtual environments.
Improvements in the ability to monitor systems, inspect traffic and evaluate platforms all have contributed to agencies' ability to perform real-time forensics and maintain awareness of what is happening on their systems.
“The state of the art has come pretty far” in intelligent analysis of data that agencies feed to their security operations centers, Schwartz said. Combined with real-time threat intelligence, analysts will be able to evaluate data to see if systems are operating as expected and what has gone wrong if they are not.
“People are going to be compromised,” Schwartz said. “You can’t avoid that.” But with effective monitoring and analysis, compromises can be minimized, recognized and responded to.