How 'indicators of behavior' deliver left-of-breach security
With more context-aware and automated protection of users and data, agencies can identify high-risk behavior that could indicate a malicious insider or compromised account.
The federal government is taking unprecedented steps to move beyond traditional cybersecurity methods and adopt innovative solutions to protect our nation’s interests. One example is the recent formation of the Cyberspace Solarium Commission -- a collection of representatives from science, academia, business and other sectors -- who have come together to make recommendations on how the government can better combat today’s rapidly evolving cyber threats. The message is clear: The nation needs a more proactive, outside-the-box approach to cybersecurity.
In this new era, traditional methods of detecting a cyberattack, such as indicators of compromise, are not enough. IoCs are evidence that a cyberattack is taking place or, worse, has already occurred. They encompass a wide range of data points: a virus signature, suspicious URLs, email phishing campaigns, abnormal computer operations, network traffic on little-used ports or via tunneling, and so on. But while IoCs are useful, they have shortcomings.
Usually, an IoC represents a single event, data point or piece of code. It offers hints about what’s happening, but lacks sufficient context. It’s often up to a security analyst to string together a large number of IoCs to fully understand, from a forensics point of view, what happened. Responding to IoCs often means blocking access based on the presence of a particular indicator, which can create friction.
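To see the shortcoming concretely, consider a minimal sketch of single-indicator matching, assuming hypothetical feed entries and event fields: each check is a point-in-time lookup, and a match triggers the same block for every user, with no context about who acted or why.

```python
# Minimal sketch of IoC matching: each check is a point-in-time lookup
# against a threat feed, with no notion of who acted or why.
# The indicator values below are hypothetical placeholders.

KNOWN_BAD_HASHES = {"9f86d081884c7d65"}  # truncated example hash
KNOWN_BAD_URLS = {"http://malicious.example.com/payload"}

def check_event(event: dict) -> bool:
    """Return True if any single indicator matches -- then block.

    There is no context here: a match on one data point triggers
    the same response for every user and situation, which is
    exactly the friction described above.
    """
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        return True
    if event.get("url") in KNOWN_BAD_URLS:
        return True
    return False

event = {"user": "jdoe", "url": "http://malicious.example.com/payload"}
if check_event(event):
    print("Blocked: indicator matched, regardless of intent or history.")
```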
In short, IoCs are table stakes. They represent surface-level security, but they won’t enable IT pros to identify an insider threat, an employee going rogue or a highly advanced attacker.
To reach the next level of security, agencies must move left of the breach and adopt more of an indicators-of-behavior approach. IoBs are essentially a top-down approach; they focus on events generated by users interacting with data and applications. By understanding how an employee or contractor typically behaves, it's possible to identify high-risk behavior that could indicate a malicious insider or compromised account. This allows IT security pros to automatically contextualize anomalies, understand changes over time, see which direction the risk is trending and have the system react accordingly.
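To make this concrete, here is a minimal sketch of behavioral baselining, with all metric names, values and thresholds as illustrative assumptions: a user's current activity is scored against that same user's history, and the direction of the score suggests how the risk is trending.

```python
# Illustrative IoB sketch: score a user's activity against that user's
# own historical baseline, then look at which way the risk is trending.
# The metric (daily file downloads) and thresholds are hypothetical.
from statistics import mean, stdev

def anomaly_score(history: list[float], observed: float) -> float:
    """Z-score of observed activity against the user's own baseline."""
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else abs(observed - mu) / sigma

# Daily counts of files downloaded by one user over the past two weeks.
baseline = [12, 9, 15, 11, 10, 14, 13, 12, 10, 11, 9, 13, 12, 10]
yesterday, today = 11, 240  # sudden bulk download today

score = anomaly_score(baseline, today)
trend = score - anomaly_score(baseline, yesterday)
print(f"anomaly score: {score:.1f}, trend: {'rising' if trend > 0 else 'stable'}")
if score > 3:  # more than three standard deviations from normal
    print("High-risk behavior: contextualize and review, don't just block.")
```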
How can agencies effectively go from depending on IoCs to relying on IoBs? Successful implementation requires transparency, employee coaching and cutting-edge data science. Let’s take a closer look at each.
Build trust: Many people falsely assume that being too transparent about security programs will undermine them, giving bad guys a blueprint they can follow to easily bypass security measures. However, a very sophisticated attacker will find out what monitoring an agency has in place anyway.
Security by obscurity is never a good option. Instead, it’s crucial to inform everyone about behavioral monitoring so they understand the agency is not keeping tabs on employees, but creating a safety net. Much as most people welcome fraud monitoring on their credit cards, employees will likely welcome such programs too -- especially if the agency has a “privacy by design” principle, as it should. Privacy by design means proactively embedding privacy into the design and operation of IT and understanding where the boundaries are in terms of data collection.
For instance, government agencies may have knowledge of a person's financial data, especially for those with top-secret clearance, because financial difficulties can be exploited by hackers. In the enterprise, though, monitoring such data is a privacy infringement. Privacy by design is essential and involves understanding -- from the beginning -- what information can and cannot be monitored. Drawing those lines clearly, in addition to maintaining transparency, will build trust and employee buy-in.
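One hedged way to picture those boundaries in practice -- the data categories below are purely illustrative -- is a declarative collection policy that is defined up front and enforced in a single place:

```python
# Illustrative privacy-by-design guardrail: what the monitoring program
# may and may not collect is declared up front and enforced centrally.
# The category names here are hypothetical.

ALLOWED = {"login_events", "file_access", "email_metadata"}
OFF_LIMITS = {"personal_finances", "medical_records", "private_messages"}

def collect(category: str, record: dict) -> dict | None:
    """Admit a record only if its category is explicitly allowed."""
    if category in OFF_LIMITS:
        raise PermissionError(f"'{category}' is off-limits by policy.")
    if category not in ALLOWED:
        return None  # default-deny: unknown categories are never collected
    return record

print(collect("login_events", {"user": "jdoe", "time": "09:02"}))
```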
Coach employees: Another way to build trust is by speaking with (rather than to) staff members. Employee training usually consists of annual offerings that teach workers what spam and phishing are or help them understand specific policies. Coaching, on the other hand, is contextualized and timely. It signifies a willingness to work with employees on an ongoing basis to empower them as the first line of security. Thus, it is a crucial part of preventing insider threats.
Continuous coaching keeps security top of mind with employees. For example, if an employee attaches a document with personally identifiable information of customers to an email addressed outside the organization, a pop-up window can appear to confirm this is intentional. The pop-up explains why the email was flagged and asks the employee to select criteria that confirm the business case for sharing such information externally. It’s one additional step that doesn’t block productivity, but simply adds a layer of security to confirm that the intent behind the action is legitimate. Ongoing engagement in this way will quickly demonstrate a return on investment.
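A minimal sketch of that flag-and-confirm flow might look like the following, assuming a simplified PII pattern and a console prompt standing in for the pop-up:

```python
# Sketch of the coach-don't-block flow described above: an outbound email
# containing PII is paused for a confirmation of intent, not silently
# dropped. The PII pattern and prompt wording are simplified placeholders.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude PII check

def send_email(recipient: str, body: str, org_domain: str = "agency.gov") -> str:
    external = not recipient.endswith("@" + org_domain)
    if external and SSN_PATTERN.search(body):
        # In production this would be a pop-up; a console prompt stands in.
        answer = input("This message appears to contain PII and is addressed "
                       "externally. Confirm there is a business need (y/n): ")
        if answer.strip().lower() != "y":
            return "held"  # the employee reconsidered; nothing blocked by fiat
    return "sent"  # intent confirmed (or no PII): productivity is not blocked
```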
Leverage behavioral science: With IoBs, some behaviors, like the clearly good and clearly bad ones, are easy to define. Agencies should note that they can’t track only bad behaviors; doing so would only drive risk scores upward and never give a user or entity the chance to return to good standing. The challenging part is contextualizing all the in-betweens, which can happen only by considering how actions and events intersect.
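One plausible way to give an entity time to become good again -- a sketch assuming an exponential decay with a seven-day half-life, not any particular product's formula -- is to let risk scores fall during quiet periods:

```python
# Sketch of a decaying risk score: bad events push the score up, but
# quiet time lets it fall back, so an entity can become "good" again.
# The half-life and event weight below are illustrative.
import math

HALF_LIFE_DAYS = 7.0

def decayed_score(score: float, days_elapsed: float) -> float:
    """Halve the residual risk every HALF_LIFE_DAYS of clean behavior."""
    return score * math.exp(-math.log(2) * days_elapsed / HALF_LIFE_DAYS)

score = 0.0
score += 40.0                                  # e.g., bulk download on day 0
score = decayed_score(score, days_elapsed=14)  # two quiet weeks follow
print(f"residual risk: {score:.1f}")           # 10.0 -- trending back to good
```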
To truly understand what drives a person to take a specific action, agencies must have a strong understanding of human behavior. Incorporating an element of cognitive science into security programs -- i.e., taking into consideration the human thought process -- can help draw the connection between cause and effect.
For instance, behavioral monitoring can include sentiment analysis: something as simple as being able to tell if someone is angry or stressed. An employee in this state may be more likely to cause a security incident, either intentionally or accidentally, than someone who is sanguine.
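As a deliberately simplistic illustration -- a real deployment would rely on a trained model, and the word list here is hypothetical -- a basic stress signal could be computed like this:

```python
# Toy sentiment signal: the share of words suggesting anger or stress.
# A real system would use a trained model; this word list is illustrative.

STRESS_WORDS = {"angry", "furious", "unfair", "frustrated", "quit"}

def stress_signal(text: str) -> float:
    """Return the fraction of words that suggest anger or stress (0.0-1.0)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w in STRESS_WORDS) / len(words)

msg = "I am furious about this unfair review and frustrated with the team"
print(f"stress signal: {stress_signal(msg):.2f}")  # elevated -> add risk context
```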
The good news is that the efficacy of IoBs will improve over time: New data sources will mean even more context and higher-fidelity risk assessment. However, it’s not too early for agencies to begin leveraging IoBs to augment traditional IoC approaches. Doing so will add a valuable layer of security in a world that needs more context-aware and automated protection of users and data.