Why network resiliency is so hard to get right
Both perimeter defense and effective elimination of threats that have breached the network are key components of a cybersecurity strategy. But what's the right mix?
The new chairman of the Joint Chiefs of Staff thinks the July hack of his organization’s unclassified email network showed a deficiency in the Pentagon’s cybersecurity investment and a worrying lack of “resiliency” in cybersecurity in general.
It was an embarrassing event, for sure. The hackers, suspected to be Russian, got into the network through a phishing campaign and, once in, reportedly took advantage of the fact that outgoing encrypted traffic was not being decrypted and inspected. Gen. Joseph Dunford, who took command Oct. 1, said the hack highlighted that cyber investments to date “have not gotten us to where we need to be.”
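That detail about uninspected outbound encryption points to a general gap: if no one decrypts, or at least accounts for, encrypted egress traffic, exfiltration leaves no trace. The sketch below is a minimal illustration of the idea in Python, not a description of the Pentagon's actual setup; the flow-record fields, proxy addresses and threshold are all assumptions.

```python
# Minimal sketch: flag outbound TLS flows that bypassed a (hypothetical)
# egress inspection proxy, or that moved unusually large volumes of data.
# Field names, proxy IPs and the threshold are illustrative assumptions.
from dataclasses import dataclass

INSPECTION_PROXIES = {"10.0.0.5"}   # assumed TLS-inspection proxy addresses
TLS_PORTS = {443, 8443}

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_out: int

def suspicious_egress(flows, byte_threshold=10_000_000):
    """Return flows worth an analyst's attention: encrypted traffic that
    never passed through the inspection proxy, or oversized transfers."""
    suspects = []
    for f in flows:
        bypassed_proxy = f.dst_port in TLS_PORTS and f.src_ip not in INSPECTION_PROXIES
        oversized = f.bytes_out > byte_threshold
        if bypassed_proxy or oversized:
            suspects.append(f)
    return suspects

# Example: an internal host talking TLS directly to the internet.
for f in suspicious_egress([Flow("10.1.2.3", "203.0.113.7", 443, 250_000_000)]):
    print(f"review: {f.src_ip} -> {f.dst_ip}:{f.dst_port}, {f.bytes_out} bytes out")
```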
As a goal, resiliency is a fuzzy concept. If it means keeping hackers out completely, then Dunford is right: the Defense Department has a problem. If it means limiting or negating the effects of a hack once attackers do get in, then he’s off the mark.
Best practice in the security industry is now to expect that even the best cyber defenses will be breached at some point. The effectiveness, or resiliency, of an organization’s security will ultimately be judged on how it deals with that breach and how efficiently it can mitigate its effects.
The government’s cybersecurity low point of 2015 had to be the hack of the Office of Personnel Management’s systems, which compromised the personal data of millions of government workers. Attackers had apparently gained access to OPM’s networks months before the hack was discovered, giving them plenty of time to wander through the agency’s systems and find and exfiltrate the data.
That experience prompted plenty of heartache and soul searching. It seemed that, even after years of increasingly sophisticated hacks, both public and private organizations were still not paying enough attention to internal security, fixating instead on defending the network’s edge.
In that sense, the Joint Chiefs email attack could be seen as a success, at least in terms of the reaction to it. Security personnel quickly detected the attack, closed down the email network and then set about investigating possible damage and systematically eradicating any malware that attackers had left behind.
In the end, the email network was down for around two weeks, with the Pentagon declaring it a learning experience and claiming confidence in the integrity of DOD networks.
Learning experiences are great, but the fact is that most government organizations are still more vulnerable than they should be. And some agencies still place more faith than is warranted in their networks’ perimeter defenses.
Even the best perimeter defenses will prove vulnerable at some point, however. Google’s Project Zero recently reported a vulnerability in security appliances produced by FireEye, one of the leaders in the field, that could give an attacker access to a network through a single malicious email. (FireEye quickly patched the vulnerability.)
Government assertions that agencies are raising employee awareness of email security hazards have also come into question, given that phishing remains such a successful way for hackers to get network access credentials. According to Verizon, a phishing campaign of just 10 emails has a 90 percent chance of snaring at least one victim.
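The arithmetic behind that figure is simple: if each recipient independently falls for the email with probability p, the chance that at least one of n recipients does is 1 - (1 - p)^n. A quick check (the independence assumption is ours, not necessarily Verizon's methodology):

```python
# Back-of-the-envelope check of the 90 percent claim: with n recipients,
# each independently fooled with probability p,
#   P(at least one victim) = 1 - (1 - p)**n
n = 10
p = 1 - 0.10 ** (1 / n)      # solve 1 - (1 - p)**n = 0.90 for p
print(round(p, 3))           # ~0.206: about a 21% per-recipient rate suffices
print(1 - (1 - 0.20) ** n)   # ~0.893: p = 20% already yields roughly 90%
```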
A basic problem in all of this is that assessing cybersecurity strength, like resiliency, is still much more a qualitative exercise than a quantitative one. You know you’ve got a good system in place if you can deter attacks or catch and mitigate them quickly once they happen. But there’s no way to know with any certainty whether that’s the case until a serious breach is attempted.
To move the needle on that, the National Institute of Standards and Technology will hold a two-day technical workshop in January on applying measurement science to determine the strength of the various solutions that now exist for assuring identities in cyberspace.
To that end, NIST has released three whitepapers ahead of the workshop that look at ways to measure the strength of identity proofing and authentication, and at how attribute metadata can be used to score confidence in authorization decisions.
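To make that last idea concrete, here is a minimal hypothetical sketch of attribute-metadata confidence scoring; the metadata fields, weights and threshold are illustrative assumptions, not drawn from the NIST whitepapers.

```python
# Hypothetical sketch of attribute-metadata confidence scoring. The
# fields, weights and threshold are illustrative, not NIST's scheme.
ATTRIBUTE_WEIGHTS = {"provenance": 0.4, "verification": 0.4, "freshness": 0.2}

def attribute_confidence(metadata):
    """Combine per-attribute metadata scores (each in [0, 1]) into a
    single confidence value for the authorization decision."""
    return sum(ATTRIBUTE_WEIGHTS[k] * metadata.get(k, 0.0)
               for k in ATTRIBUTE_WEIGHTS)

def authorize(metadata, threshold=0.75):
    """Grant access only when confidence in the asserted attributes
    clears the policy threshold."""
    return attribute_confidence(metadata) >= threshold

# An attribute verified against an authoritative source, somewhat stale:
print(authorize({"provenance": 0.9, "verification": 1.0, "freshness": 0.6}))  # True
```

The point of such a scheme is that the authorization decision consumes a graded confidence value rather than a binary identity assertion, which is what makes the decision process measurable in the first place.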