Zero trust: Moving from concept to reality
Despite the daunting complexity of establishing zero trust, agency cybersecurity execs are finding ways to crack the problem.
In theory, zero trust security seems simple: Assume compromise and authenticate every action. In practice, though, zero trust can be maddeningly complicated.
A group of security specialists from across government recently met at a roundtable convened by FCW, a sibling site to GCN, to discuss what's needed to move zero trust into the mainstream. The full discussion was on the record but not for individual attribution, and the quotes have been edited for length and clarity.
One of the requirements for zero trust is establishing a baseline of normal application, user and network behavior – a task recently made much more difficult with so many government employees working from home.
"The new normal" has become an overused term since COVID-19 upended workplaces, but several participants said the surge in telework was indeed changing security conversations. "I think it's been a catalyst for people to think about how that strong network perimeter isn't what they thought it was," one said.
New or old, however, establishing what's normal in a network is essential to a zero-trust approach. Location data has changed dramatically in recent months, but multiple officials said defining a baseline is difficult even without maximum telework.
"What is normal will change over time," one said. "Certain changes, while deemed anomalous, could be quite normal in a network. And so this whole idea of understanding patterns and normalcy and looking for anomalies becomes an extremely challenging problem."
Thanks to the Continuous Diagnostics and Mitigation Program, the 2015 governmentwide "cyber sprint" and recent efforts by the Cybersecurity and Infrastructure Security Agency, federal agencies now have much better data on their users, devices and network traffic than they did just a few years ago. But understanding that data and using it to build a baseline are different matters entirely.
"People forget it's not always a user accessing the data system," one official said. "The systems also are sharing data all the time." Another pointed to the surge in robotic process automation initiatives and said artificial-intelligence-powered automation can conclude: "'Hey, this data and this data really work well together.' So we now have automation creating these streams in the background, which complicates things a little bit further."
Similarly, another added, "we always talk about access and the data as if data is always sitting still. What are we doing to protect it when it is in motion? That needs to be addressed, too. I don't hear a lot of that when I hear people talk about zero trust."
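Protecting data in motion is one of the few pieces of the puzzle that maps to a short, concrete control. The snippet below is a minimal illustration rather than a recommended architecture: it simply refuses any outbound connection that cannot negotiate modern TLS. The endpoint URL is a hypothetical placeholder.

```python
import ssl
import urllib.request

# Minimal illustration of protecting data in motion: build a TLS context that
# rejects legacy protocol versions, then require it on every outbound request.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older

# "https://records.example" stands in for an internal data service.
with urllib.request.urlopen("https://records.example/api/cases",
                            context=context) as resp:
    payload = resp.read()   # data arrives encrypted in transit
```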
"Some system owners don't really know how their data flows," another participant said. "It's going to make your life much more difficult if you cannot baseline that normal."
AI and analytics will be essential to making sense of all that data, the group agreed. The complexity is effectively forcing CIOs to become data analysts, one official said, because "you're going to have to use analytics in order to help manage your networks."
Given the many complexities involved, most of the participants were focused on finding practical starting points rather than perfecting the larger framework.
As one official said about adopting zero trust: "At the end of the day, I've got to be able to answer one question: Is my data still protected as a result? If I can answer that question, I'm good."
Another recommended focusing on specific use cases: "Can enterprises with satellite facilities connect without compromising the entire network? Can contractors get access without compromising the entire network? Can collaboration across enterprise boundaries happen without compromising the entire network? That's really what we're talking about from a zero trust perspective."
Similarly, other participants emphasized starting with clearly defined functional building blocks. "How do we tackle lateral movement?" one asked. "What degrees of trust do we implicitly give to your Common Access Card, to your Kerberos token? What is the exact level of lateral movement that can come from those different things? And then start attacking that."
Specific applications can also offer a starting point. "A lot of people focus on devices and protecting the device, but it's actually the application that facilitates the access to that data," another participant said. "So that should be hardened."
All admitted that the complexity can be daunting. "Nothing is going to make this simple," said one official who urged focusing on the data layer. "But if we can start to define policy at the layer that we care about, we can at least simplify the approach and reduce the number of layers we have to take into consideration."
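What "policy at the layer we care about" can look like is easiest to see in a sketch. The Python below evaluates an attribute-based rule on every request for a data object instead of trusting network location; the attributes, labels and allowed purposes are invented for illustration and do not reflect any agency's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject_clearance: str      # e.g., "public", "sensitive", "secret"
    device_compliant: bool      # device posture reported by an endpoint agent
    data_sensitivity: str       # label carried by the data object itself
    purpose: str                # declared reason for access

LEVELS = {"public": 0, "sensitive": 1, "secret": 2}

def authorize(req: Request) -> bool:
    """Grant access only when subject, device and purpose all satisfy the data label."""
    if not req.device_compliant:
        return False
    if LEVELS[req.subject_clearance] < LEVELS[req.data_sensitivity]:
        return False
    return req.purpose in {"casework", "audit"}   # allow-listed purposes only

# A compliant device, a cleared user and an approved purpose -> access granted.
print(authorize(Request("sensitive", True, "sensitive", "audit")))   # True
```

The point of the exercise is the one the official made: once the decision hinges on attributes of the data and the request, the layers underneath matter less to the policy itself.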
And while it's important to think about design principles at the enterprise level, there was strong consensus that implementations should start small.
"I'm of the opinion that the component technologies that enable something like zero trust can be small and have clear finish lines and run in parallel," one official said. "But I count myself personally fortunate that at my agency, nobody thus far has stepped up and said, 'We're going to have a [departmentwide] zero trust initiative' because that's intractable."
Read FCW’s longer account of the roundtable discussion and find the list of participants here.