7 zero-trust misconceptions that every agency should know
To ensure successful transitions to dynamic but controlled zero-trust environments, agencies should fully understand the complexities of the technologies involved and the key role of visibility.
Zero trust has rapidly become the most discussed security term of late, garnering even more attention amid the rise of ransomware and President Joe Biden's executive order on cybersecurity. The rush to zero trust has also generated confusion about its implementation. To help, the Cybersecurity and Infrastructure Security Agency released a draft Zero Trust Maturity Model, and the Office of Management and Budget published the federal Zero Trust Strategy to provide technical guidance for agencies. While these documents lay out a solid roadmap, there are still a number of misconceptions about zero trust that agencies should avoid if they want to ensure successful transitions.
Misconception #1: Zero trust is a product from a vendor that can be implemented in a single project.
Zero trust is a continuous, never-ending journey -- what modern computing calls a lifecycle. Achieving a zero-trust architecture (ZTA) is a long-term process implemented through a phased approach. Most ZTA frameworks and maturity models rely on the National Institute of Standards and Technology's Cybersecurity Framework for guidance -- but even then, many agencies do not know where to start. The task might feel overwhelming, as agencies might have only partial pieces of the puzzle.
Misconception #2: Zero trust is predominantly focused on end-user access privileges.
While end-user access privileges are one component of zero trust, it is equally important to understand and verify system, service and function identities -- including applications, workloads and devices (such as internet-of-things devices) -- that could be used to gain access to applications and data maliciously. Identity systems of record easily become outdated or inaccurate as users are onboarded, leave the organization or switch groups, accumulating inappropriate permissions along the way. Agencies must have a way to discover and validate the identities in use across the environment, verify them against the identity system of record and build appropriate zero-trust access policies.
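As a rough illustration of that discovery-and-validation step, the short Python sketch below cross-checks identities observed on the network against an exported identity system of record; the identity names and data structures are hypothetical stand-ins for whatever an agency's directory and telemetry actually provide.

# A minimal sketch, assuming two hypothetical inputs: identities observed on the
# network (e.g., extracted from authentication or flow logs) and the identity
# system of record exported as a simple mapping of identity -> group.

observed_identities = {"svc-payroll", "iot-sensor-12", "jdoe", "old-batch-job"}

system_of_record = {
    "jdoe": "finance-users",
    "svc-payroll": "payroll-services",
    "msmith": "hr-users",  # present in the directory but never seen on the wire
}

# Identities seen in traffic but missing from the system of record -- candidates
# for investigation before any zero-trust policy is written around them.
unknown = observed_identities - system_of_record.keys()

# Identities in the directory that never appear in traffic -- possibly stale
# accounts that should be reviewed or disabled.
stale = system_of_record.keys() - observed_identities

print("Unknown identities in use:", sorted(unknown))
print("Potentially stale identities:", sorted(stale))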
Misconception #3: Zero-trust security can be defined once assets are identified.
Asset identification is only one step in defining zero-trust security policies. Agencies must also understand the dependencies and relationships among entities and assets and be able to maintain that visibility dynamically, in near real time. When maintaining a ZTA, access policies become even more granular, evolving from a traditional flat network topology to macro-segmentation of networks with micro-perimeters around applications. Machine learning and real-time risk analysis and threat prevention will become a necessity. So will automation that continuously maps dependencies and interconnections among applications and assets, baselines interaction behavior, enforces security policies and detects anomalies, violations and incidents.
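The sketch below illustrates the baselining idea in miniature: a set of observed application-to-application flows serves as the baseline, and any connection outside it is flagged. The application names and flow records are hypothetical; in practice they would come from flow telemetry collected continuously across the environment.

# A minimal sketch of baselining application-to-application flows and flagging
# deviations from the learned dependency map.

baseline_flows = {
    ("web-frontend", "orders-api"),
    ("orders-api", "orders-db"),
    ("orders-api", "payments-service"),
}

def check_flow(src: str, dst: str) -> str:
    """Return 'allow' for flows seen in the baseline, 'alert' otherwise."""
    return "allow" if (src, dst) in baseline_flows else "alert"

# A previously unseen connection from the web tier straight to the database
# violates the baselined dependency map and should trigger an alert.
print(check_flow("web-frontend", "orders-db"))  # alert
print(check_flow("orders-api", "orders-db"))    # allow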
Misconception #4: Application environments are homogenous.
Large federal agencies with mission-critical functions often run multiple generations of software, from mainframes, through high-I/O systems delivering database operations, to virtualized and software-defined estates, as well as cloud-based infrastructure-, platform-, software- and function-as-a-service solutions. Applying zero-trust principles to this heterogeneous collection of services further increases the challenge. To approach it, agencies should first focus on cloud environments, where software functions are temporal and dynamic, or adopt a management plane that can discover and map flows across heterogeneous environments.
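As a sketch of what such a management plane has to do, the Python below normalizes flow records from two different source formats -- a cloud export and a legacy platform export -- into one common schema so dependencies can be mapped consistently. The field names in both record formats are hypothetical, not any particular vendor's schema.

# A minimal sketch of normalizing flow records from heterogeneous sources into a
# common schema. The input field names are hypothetical stand-ins for whatever
# each platform actually exports.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str
    destination: str
    port: int

def from_cloud_record(rec: dict) -> Flow:
    return Flow(rec["srcaddr"], rec["dstaddr"], int(rec["dstport"]))

def from_legacy_record(rec: dict) -> Flow:
    return Flow(rec["origin_host"], rec["target_host"], int(rec["service_port"]))

records = [
    from_cloud_record({"srcaddr": "10.0.1.5", "dstaddr": "10.0.2.9", "dstport": "443"}),
    from_legacy_record({"origin_host": "mainframe-a", "target_host": "db-cluster", "service_port": "1433"}),
]

for flow in records:
    print(flow)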
Misconception #5: Zero trust is an all-or-nothing approach.
Any zero-trust implementation requiring the broad migration to a new environment or universal deployment of a new set of agents is destined for failure -- or at least an incredibly expensive and complex path to value. Instead, successful implementation requires a well-defined and controlled use case at the start, followed by broader adoption via a phased approach. Critical applications inside the network perimeter can be one place to start.
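A narrow first use case can be as simple as a default-deny policy scoped to a single critical application, as in the sketch below; the application name, tiers, ports and rules are purely illustrative.

# A minimal sketch of a default-deny segmentation policy scoped to one
# critical application -- the kind of narrow first use case described above.

policy = {
    "app": "benefits-portal",
    "default": "deny",
    "allow": [
        {"from": "portal-web", "to": "portal-api", "port": 8443},
        {"from": "portal-api", "to": "portal-db", "port": 5432},
    ],
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    return any(
        r["from"] == src and r["to"] == dst and r["port"] == port
        for r in policy["allow"]
    )

print(is_allowed("portal-web", "portal-api", 8443))  # True
print(is_allowed("portal-web", "portal-db", 5432))   # False -- denied by default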
Misconception #6: Agencies should select the “best” and most established solutions for today’s needs.
Anticipating future needs is critical on the zero-trust journey. In a few years, the majority of workloads will likely be hosted in public cloud environments, and agencies will need to adapt accordingly -- without deploying new agents that introduce conflicting, duplicative policies, which can quickly lead to failure. Many existing security policy and analytics solutions that were created for on-premises perimeter defense may not be as relevant in a dynamic, cloud-native world.
Misconception #7: Zero-trust deployments have to be hard -- but at least they're "fire and forget."
Zero trust is challenging, and it is best achieved gradually -- and it is never "fire and forget." Agencies should adopt an agile approach that requires little up-front investment and delivers initial results fast, whether that is gaining better visibility by uncovering unknown identities or assets, or segmenting a critical application. This is best achieved with an agentless solution that leverages infrastructure already in place.
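As a hypothetical example of the agentless approach, the sketch below reuses flow records already exported by existing network infrastructure (represented here as a small CSV sample) to surface assets that are active on the network but missing from the asset inventory -- an early, low-investment visibility win. The addresses and inventory are made up for illustration.

# A minimal sketch of agentless asset discovery from flow records an agency's
# existing network gear already produces (shown here as an inline CSV sample).

import csv, io

flow_csv = """src,dst
10.0.1.5,10.0.2.9
10.0.3.7,10.0.2.9
"""

known_assets = {"10.0.1.5", "10.0.2.9"}  # hypothetical asset inventory export

seen = set()
for row in csv.DictReader(io.StringIO(flow_csv)):
    seen.update((row["src"], row["dst"]))

print("Previously unknown assets:", sorted(seen - known_assets))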
Zero trust is here to stay, and the initial deployment is only the first step in the journey. For many agencies, it will be a phased transition into a brave new, more dynamic but controlled world. Steering clear of the now-common misconceptions about ZTA and having a firm understanding of the critical role visibility plays are key. With government focus on security increasing, leveraging the zero-trust guidance and frameworks available is one of the best ways agencies can quickly realize success.