Maintaining zero trust over time: Why set-it-and-forget-it won’t work
Zero trust requires continuous revalidation of trustworthiness -- of the devices, services and identities connecting into an enterprise environment, as well as the systems to which they are connecting.
To maintain zero trust long term, it’s important to recognize that zero-trust architecture is a set of design principles, not merely a collection of security tools or permanent, static network modifications. According to the National Institute of Standards and Technology, “ZT is not a single architecture but a set of guiding principles for workflow, system design and operations.” NIST refers to these principles as “tenets.” In other words, ZTA guides its adherents to think about securing their environment in a new way, not simply to add new (or not-so-new) capabilities to the existing architecture. It’s a philosophy of security that, when implemented fully, will substantially reduce risk.
Given that only 13% of the 100+ federal agency security professionals recently surveyed described their progress toward zero-trust adoption as “mature and fully implemented,” it would be difficult at this stage to determine how federal adoptions are performing over time. But there is a way to ensure long-term success, and it centers on the principle of integrity.
Trustworthiness
According to NIST, integrity is one of the central tenets of ZTA: “the enterprise monitors and measures the integrity and security posture of all owned and associated assets.”
Adhering to this principle requires continuous revalidation of trustworthiness—of the devices, services and identities connecting into an enterprise environment, as well as the systems to which they are connecting. And one of the essential elements of trustworthiness is ensuring that nothing has changed from the original trustworthy state, which makes limiting unauthorized or malicious changes essential. This is why agencies can’t have zero trust without integrity monitoring.
One of the most important reasons for monitoring system integrity is that security tools will never catch everything. A recent study by Innovate Cybersecurity concluded: “Of 22 EPP (Endpoint Protection) and EDR (Endpoint Detection and Response) products in use today, nobody performed better than 50%. All but two vendors were below 40%. Our sense is that efficacy is frighteningly lower than what consumers expect when purchasing these products.” If the best-regarded and most popular EDR/EPP tools are not even catching 50% of the “bad,” it’s no wonder that agencies continue to struggle in the face of persistent, high-volume and sometimes advanced attacks.
So, detecting and mitigating the “bad,” while necessary, is insufficient. It’s certainly insufficient to achieve the high state of trustworthiness called for in a ZTA approach.
Determining a trusted state
As explained in a recent Tripwire whitepaper:
A best practice for determining a trusted state begins by creating a baseline of each component/device in the infrastructure (capturing what is there now), then applying hardening principles using set or specific standards to each while remediating failed configurations along the way. Updating the baseline with each remediated configuration provides you with a running baseline that’s up to date. … By creating and maintaining an updated baseline of your infrastructure, you will greatly reduce the attack surface and strengthen the “integrity” of your environment.
Once an organization has established a baseline, it becomes a matter of applying integrity controls in order to maintain that baseline and ensure a trusted state over time.
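To make that idea concrete, the sketch below (a minimal illustration in Python, not any particular vendor’s implementation) captures a baseline by hashing every file under a set of monitored paths and saving the result. The paths and output file name are assumptions for demonstration only.

```python
# Minimal baseline-capture sketch. MONITORED_PATHS and BASELINE_FILE are
# illustrative assumptions; a real deployment would cover far more of the
# infrastructure and protect the baseline itself from tampering.
import hashlib
import json
import os

MONITORED_PATHS = ["/etc", "/usr/local/bin"]   # assumed paths to watch
BASELINE_FILE = "baseline.json"                # assumed location of the trusted baseline

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline() -> dict:
    """Record a hash for every file under the monitored paths."""
    baseline = {}
    for root_dir in MONITORED_PATHS:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                full_path = os.path.join(dirpath, name)
                try:
                    baseline[full_path] = hash_file(full_path)
                except OSError:
                    continue  # skip unreadable files rather than abort the scan
    return baseline

if __name__ == "__main__":
    with open(BASELINE_FILE, "w") as f:
        json.dump(build_baseline(), f, indent=2)
```

After each authorized remediation or configuration change, the same capture would be rerun so the baseline stays current, as the whitepaper recommends.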
Ensuring a trusted state over time
In the context of zero trust, integrity controls are required to ensure ongoing trustworthiness whether on-premises, in the cloud or within hybrid infrastructures. This is not a set-it-and-forget-it approach, and existing tools can often be leveraged to implement these controls broadly. Integrity management organizes security controls to align with key elements of the architecture.
A mature approach to integrity management starts with ensuring the integrity of individual assets or systems. This means establishing a known-good, secure state by applying secure configuration management (SCM). Then those systems must be continuously monitored to ensure that unauthorized changes are not made to critical assets. Specific controls for this effort include file integrity monitoring (FIM), SCM, host-based intrusion detection systems (IDS), and vulnerability management and patching.
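As a rough illustration of the monitoring side (again, a simplified sketch rather than how any particular FIM product works, and assuming the baseline.json produced by the earlier sketch), a periodic rescan can flag anything added, removed or modified since the trusted state was captured:

```python
# Minimal drift-detection sketch: rescan the monitored paths and compare
# against the previously captured baseline. Paths and baseline location are
# the same illustrative assumptions used in the baseline-capture sketch.
import hashlib
import json
import os

MONITORED_PATHS = ["/etc", "/usr/local/bin"]
BASELINE_FILE = "baseline.json"

def scan() -> dict:
    """Hash every file under the monitored paths (the current state)."""
    state = {}
    for root_dir in MONITORED_PATHS:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        state[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    continue
    return state

if __name__ == "__main__":
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    current = scan()
    drift = {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in current
                           if p in baseline and current[p] != baseline[p]),
    }
    if any(drift.values()):
        print("Integrity drift detected:", json.dumps(drift, indent=2))
    else:
        print("No changes from the trusted baseline.")
```

In practice, detected drift would be reconciled against authorized change records, so that approved patches update the baseline while unexpected changes trigger investigation.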
In the context of ZTA, this approach applies not only to relatively static systems in the enterprise environment, such as servers, databases and network devices, but also to the many devices, services and identities interacting with those systems on an intermittent basis.
There must be a way to quickly assess the trustworthiness of those systems, then continuously recheck their state and detect changes that may cause them to become untrusted. Robust solutions exist with these capabilities, but taking advantage of them will require a shift from the typical approach of treating FIM and SCM as compliance-driven controls deployed in a “check-box” manner.
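One way to picture that shift, offered as a hypothetical sketch rather than a prescribed design, is an access decision that re-evaluates a device’s posture on every connection instead of trusting it permanently. The device identifiers, posture fields and patch-age threshold below are invented for illustration.

```python
# Hypothetical per-connection trust check: compare the device's reported
# posture against its recorded trusted state and deny access on any drift.
from dataclasses import dataclass

@dataclass
class Posture:
    config_hash: str     # hash of the device's security-relevant configuration
    patch_age_days: int  # days since the device was last patched

TRUSTED_BASELINE = {         # assumed store of known-good device state
    "laptop-042": Posture(config_hash="a3f1c0de", patch_age_days=0),
}
MAX_PATCH_AGE_DAYS = 30      # assumed policy threshold

def is_still_trusted(device_id: str, reported: Posture) -> bool:
    """Re-evaluate trust at every connection rather than granting it once."""
    baseline = TRUSTED_BASELINE.get(device_id)
    if baseline is None:
        return False  # unknown device: never trusted by default
    if reported.config_hash != baseline.config_hash:
        return False  # configuration drifted from the trusted state
    return reported.patch_age_days <= MAX_PATCH_AGE_DAYS

# A device whose configuration has changed is no longer trusted, even if it
# was trusted yesterday.
print(is_still_trusted("laptop-042", Posture("deadbeef", patch_age_days=3)))  # False
```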
Integrity as the foundation
Aligning security controls with an integrity platform builds and monitors trust in an organization’s people, processes and technology and is essential for establishing and maintaining a trusted state.
In our survey, 42% of federal respondents said they believed that integrity monitoring was “foundational” to a successful zero-trust strategy, while 58% believed it was somewhat important (although not required) or not important.
Hopefully, as more agencies come to understand integrity as the basis of trustworthiness, which in turn is essential to successful implementation of ZTA, we will see those numbers go up. While NIST refers to integrity as a “tenet” of zero trust, it is even more than that -- the very foundation of zero trust.