Does current security leave agencies vulnerable to state-sponsored attacks?
What can agencies do to get the full network visibility they need to comply with guidance from OMB and the Cyberspace Policy Review? Sourcefire's John Negron writes that active scanners and passive discovery tools that understand the network environment and its traffic in real time are key.
In his keynote address at the RSA Conference earlier this year, Army Lt. Gen. Keith B. Alexander, director of the National Security Agency and chief of the Central Security Service, said increasing situational awareness is imperative for improving cybersecurity. “We don’t have a way of sharing and seeing the networks today in a timely manner,” he said. “We’ve got to build that situational awareness.”
In addition, the recently released Cyberspace Policy Review and the Office of Management and Budget’s Fiscal Year 2008 Report to Congress on Implementation of the Federal Information Security Management Act (FISMA) both mandate technologies and processes to protect cyberspace.
Unfortunately, most network security isn’t good enough anymore. Today’s threats, like today’s networks, are dynamic. A proliferation of laptop PCs and advanced handheld devices supports a growing mobile workforce. And threats to our networks are faster, smarter, more prevalent and more elusive than ever. Hackers can break into most technology environments at will, placing not just our information and infrastructure at risk, but also our national security. These sophisticated attackers are talented, motivated and well-resourced, using methods the security community has not yet seen. With cyber threats by nation-states against U.S. government agencies on the rise, most information technology organizations are struggling to keep pace.
The fact is, most security solutions offered to date have been static, leaving IT organizations blind to the network and the changes that are happening every second. To give one prime example, typical intrusion detection systems (IDSes) were built on the set-and-forget principle, so rules are rarely updated for the latest threats. How can agencies truly protect the network if they can’t see what is running on it, don’t know what to protect, and can’t identify the threats they are facing?
Because many solutions, including firewalls, IDSes, intrusion prevention systems and vulnerability management systems, are based on outdated assumptions formulated during a less dynamic time, they simply can’t get the job done.
What are some of these assumptions and what can government agencies do to overcome them?
Assumption 1: Agencies know what to protect and what threats they face
OMB’s report on implementation of FISMA states as its first recommendation that agencies should develop and maintain an inventory of major information systems -- including national security systems -- to support monitoring, testing and evaluation of information security controls. And the Cyberspace Policy Review states, “The government needs a reliable, consistent mechanism for bringing all appropriate information together to form a common operating picture.”
Sounds pretty straightforward, right? The reality is that it isn’t. Most agencies don’t know everything or everyone on their network, or where the vulnerabilities are. The Verizon Business RISK Team’s “2008 Data Breach Investigations Report,” which draws on four years of forensic investigations covering more than 500 cases, found that 90 percent of breaches involved systems that were unknown or had unknown network accessibility.
So what can agencies do to get the full network visibility they need to comply with OMB’s and the Cyberspace Policy Review’s guidance? How can they know the assets and people on their networks before it’s too late? Active scanners and passive discovery tools that understand the network environment and its traffic in real time are key. These systems watch what is happening on the network, profile the devices they see, and figure out who is using which device. They have the context to make inferences about devices and their potential vulnerabilities, as well as about users and possible unfriendly network behavior, and the intelligence to identify the appropriate action to protect the IT environment.
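To make the idea concrete, here is a minimal sketch of passive discovery in Python, assuming the Scapy packet library is available and the script has capture privileges. Commercial discovery tools do far more (operating system fingerprinting, user-to-host mapping); this only illustrates the principle of building an asset inventory by listening rather than probing.

```python
# Minimal sketch of passive asset discovery: build a host/service inventory
# by listening to traffic, without sending a single probe packet.
# Assumes Scapy is installed and the script runs with capture privileges.
from collections import defaultdict
from scapy.all import sniff, IP, TCP

inventory = defaultdict(set)  # inventory[host_ip] -> ports it answers on

def observe(pkt):
    """Record every host seen talking, and any port it answers from."""
    if IP in pkt and TCP in pkt:
        host = pkt[IP].src
        inventory[host]  # note the host even if nothing else is learned yet
        flags = pkt[TCP].flags
        # A SYN/ACK reply reveals a listening service on its source port
        if flags.S and flags.A:
            inventory[host].add(pkt[TCP].sport)

# Watch the wire for five minutes, then report what was learned passively
sniff(prn=observe, store=False, timeout=300)
for host, ports in sorted(inventory.items()):
    print(f"{host}: services observed on ports {sorted(ports)}")
```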
Context also provides the essential level of intelligence when it comes to combating the next assumption.
Assumption 2: People are well-trained and able to manage a growing number of technologies
Let’s face it, it’s extremely difficult for most IT departments to deal with all the available technology and understand how to make it work for them. With a large enough security staff and significant human intervention, two to four technologies can be manageable. However, the reality is that most agencies don’t have a large enough staff or are trying to manage even more technologies.
According to the Cyberspace Policy Review, “the U.S. Government should invest in processes, technologies, and infrastructure that will help prevent cyber incidents. Options include increased security testing, investment in systems that automate or centralize network management, and more restricted connectivity to the Internet for some unclassified systems.”
Automation is key. Most available security technologies require fine-tuning by hand, and they require people who understand where the assets on the network are in order to protect them effectively and keep up with continuous change. But we can’t expect people to fine-tune systems endlessly. What we need are automated solutions that require no human intervention and that go beyond simply detecting possible threats to identifying meaningful ones.
Tools with the ability to apply a set of recommended rules for a given environment and to conduct impact assessments to further qualify threats have been shown to dramatically reduce the number of events requiring human attention, from as many as 20 million events per month to 2,000 events per month.
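A rough sketch of that kind of impact assessment follows, with invented data structures and placeholder addresses and identifiers purely for illustration (this is not any vendor's API): an alert is escalated only when the targeted host actually runs the affected service and still carries the exploited flaw.

```python
# Illustrative impact-assessment pass (hypothetical data shapes): an alert
# demands human attention only when the targeted host runs the affected
# service and still carries the exploited vulnerability.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Asset:
    ip: str
    services: set = field(default_factory=set)   # e.g. {"http", "ssh"}
    open_cves: set = field(default_factory=set)  # known unpatched vulnerabilities

@dataclass
class Alert:
    dest_ip: str
    service: str        # service the signature targets
    cve: Optional[str]  # vulnerability the signature exploits, if known

def assess(alert: Alert, inventory: dict) -> str:
    """Qualify an alert against what is actually running on the target."""
    asset = inventory.get(alert.dest_ip)
    if asset is None:
        return "unknown target -- investigate the inventory gap"
    if alert.service not in asset.services:
        return "irrelevant -- targeted service not present"
    if alert.cve and alert.cve not in asset.open_cves:
        return "patched -- low priority"
    return "actionable -- host is exposed"

# Example: raw events collapse to the handful that matter
# ("CVE-0000-0000" and the addresses are placeholders)
inventory = {"10.1.2.3": Asset("10.1.2.3", {"http"}, {"CVE-0000-0000"})}
alerts = [Alert("10.1.2.3", "http", "CVE-0000-0000"),  # actionable
          Alert("10.1.2.3", "smb", "CVE-0000-0000"),   # service not running
          Alert("10.9.9.9", "http", None)]             # host not in inventory
for a in alerts:
    print(a.dest_ip, a.service, "->", assess(a, inventory))
```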
Assumption 3: People are vigilant and responsive
The Cyberspace Policy Review states: “Federal cybersecurity centers often share their information, but no single entity combines all information available from these centers and other sources to provide a continuously updated, comprehensive picture of cyber threat and network status, to provide indication and warning of imminent incidents and to support a coordinated incident response.”
Most agencies place extremely high expectations on IT staff members to tap into the wealth of information about the network and apply that knowledge, on an ongoing basis, to protect constantly evolving networks and users. Unfortunately, in the real world, something will eventually get through. Agencies have to defend their networks and users on all fronts, while criminals need only find one weakness. And damage happens fast, so IT organizations must be ready to respond just as quickly.
People can’t be as vigilant as they need to be to watch for policy violations or flag abnormal network behavior. Organizations need automated, passive discovery tools that are always on and always watching the network. Whenever suspicious behavior is identified, surgical scanning makes it possible to home in immediately on the source of a policy violation, attack or network vulnerability. Used in combination, these tools give agencies the ability to enforce compliance with security policies and to take remediation actions based on user identity.
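The sketch below illustrates that pattern under simple assumptions: a stand-in anomaly rule (an unusual burst of outbound connections) triggers a targeted nmap scan of just the offending host, rather than a sweep of the whole network.

```python
# Simplified sketch of "surgical" scanning: rather than sweeping the whole
# network, kick off a targeted scan of only the host that just tripped an
# anomaly check. Assumes the nmap command-line tool is installed; the
# threshold rule is a stand-in for a real behavioral baseline.
import subprocess
from collections import Counter

OUTBOUND_THRESHOLD = 500  # connections per interval before taking a closer look

def review_interval(connection_log):
    """connection_log holds (source_ip, dest_ip) pairs seen in the last interval."""
    talkers = Counter(src for src, _ in connection_log)
    for host, count in talkers.items():
        if count > OUTBOUND_THRESHOLD:
            print(f"{host} opened {count} outbound connections -- scanning it now")
            # Fast, host-specific scan with service/version detection
            subprocess.run(["nmap", "-sV", "-F", host], check=False)
```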
Outdated assumptions are making it virtually impossible for IT organizations at government agencies to successfully protect their networks from cyberthreats. To avoid the traps these assumptions create, network security solutions must be intelligent to be effective. Only by knowing what assets are running on the network all the time, if those assets are vulnerable, and if they’ve been attacked and/or compromised, will government agencies have the situational awareness needed to secure our dynamic networks in the face of today’s dynamic threats.