The information technology industry has always been enamored with breakthrough technologies bearing flashy names.
Looking back at the history of IT in general, and cybersecurity in particular, it is obvious that every few years a new technology takes the world by storm, promising to be the silver bullet that addresses all the problems and lets enterprises do business in an even more secure, convenient, and productive manner.
Earlier, it was, perhaps, “Big Data” or “The Cloud”, then came “Blockchain” and “Artificial Intelligence”.
Nowadays, especially after the COVID-19 pandemic forced the world to work from home for over a year, Zero Trust seems to be the biggest buzzword of every conference, webinar, or sales pitch.
We have learned so much about the potential benefits of Zero Trust solutions that seemingly the only question left for most people is: where do we buy ZT already?
Unfortunately, the reality is not quite that simple.
Even under the enormous pressure of the pandemic, the adoption rate of ZT remains very low, with most organizations still struggling not just to find the right solutions to buy, but to find the right people to deploy them, or even to grasp the basics of Zero Trust Architecture.
If these are the questions you’re looking for answers to, keep on reading!
Back in the early 2000s, before the era of the cloud and ubiquitous presence of smart mobile devices, organizations were already feeling the pressure to reorganize their networks for digital transformation – establishing communications to their partners, contractors, and even customers, building API-based public interfaces to their data siloes, and embracing the early ideas of Internet-scale computing.
In later years, the growing adoption of cloud and mobile technologies led to the erosion of the very notion of a corporate network perimeter; legacy security tools that depended on it are no longer able to keep up with modern security risks.
Continued growth of corporate networks, both in scale and complexity, as well as the rapidly growing number of complex targeted attacks for purposes of hacking, industrial espionage, or government surveillance, have led to a sharp increase in the number of data breaches.
In 2009, John Kindervag, back then with the analyst firm Forrester, came up with the idea that networks should be designed without implicit trust, enforcing strict identity verification and least-privilege access policies for every user, device, or application, regardless of whether they are located in the former local area network or somewhere on the Internet.
His ideas on eliminating the very concept of “trusted systems” from corporate networks helped popularize the catchy term “Zero Trust”.
Around the same time, Google suffered a sophisticated cyberattack on its internal infrastructure (Operation Aurora) and, in response, began building BeyondCorp, a security framework aimed at preventing similar breaches in the future.
BeyondCorp shifted access controls from the perimeter to individual devices and users, thus dramatically reducing the potential attack surface for future hackers and making it much more difficult for them to perform lateral movement, compromising one system after another.
In a sense, it was a reference implementation of Zero Trust that the company deployed internally. Now, everyone wanted one as well!
As a result of a massive marketing push by network and cybersecurity vendors, most organizations were looking for turn-key solutions that would completely replace their existing network infrastructures: they were planning on “buying Zero Trust”.
Perhaps, they are still looking…
Alas, Zero Trust is not something you can buy as an off-the-shelf product.
Zero Trust is not a product or even a technology – as a concept, it requires a major shift in many aspects of IT and even core business processes of an organization.
The Zero Trust paradigm focuses on eliminating implicit trust from IT architectures and enforcing strict identity verification and access controls for every user or device.
“Trust but verify” is a popular saying often quoted when talking about it, but the real motto of Zero Trust has always been “Don’t trust. Verify!”
Zero Trust helps to redesign your cybersecurity architecture to function consistently and holistically across multiple IT environments and systems – and thus implementing Zero Trust properly will affect multiple existing and new security controls within your organization.
Zero Trust is a journey that begins with a long-term business strategy and focuses on a step-by-step implementation, using existing or readily available tools and technologies, while maintaining the continuity of business processes and avoiding adding even more complexity to the existing architecture.
In August 2020, NIST, the US National Institute of Standards and Technology, published Special Publication 800-207, “Zero Trust Architecture”, which outlines the fundamental principles of Zero Trust Architectures, provides practical design advice, and even addresses the potential shortcomings of the model.
The document proposes the following definition: “Zero Trust (ZT) provides a collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised.
Zero trust architecture (ZTA) is an enterprise’s cybersecurity plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies.”
In summary, ZTA is a cybersecurity architecture designed according to zero trust principles with the goal of preventing data breaches and limiting internal lateral movement.
So, what are the basic principles of Zero Trust?
First of all, the concept is based on the assumption that any network is always hostile, and thus any IT system, application, or user is constantly exposed to potential external and internal threats.
There is no place for a “trusted network” in a Zero Trust architecture; this is radically different from the traditional approach, built on the notion of a security perimeter that separates a trusted local network from the rest of the world outside.
Perimeter-based security architectures assume that any internal actor is friendly, and their access should be implicitly allowed.
As we all know from reality, this assumption is profoundly incorrect: most modern cyberattacks – from credential hijacking and ransomware to phishing and social engineering – are in fact happening inside the corporate perimeter.
After the initial compromise, attackers are free to move within the trusted network, intercept traffic, and pivot to other systems.
Of course, most organizations try to mitigate this by introducing network segmentation, setting up a DMZ for “less trusted” external access, and so on.
However, all these measures fail to address the key flaw of traditional network security – the fact that the initial process of authentication and authorization is performed only once, and that access is usually granted to large network segments.
This is fundamentally different from the notions of least privilege and continuous risk-adaptive authentication that are postulated by ZT.
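To make the contrast concrete, here is a minimal sketch (in Python, with purely illustrative attribute names and weights) of how a risk-adaptive engine might re-evaluate every single request, rather than authenticating once at the perimeter:

```python
def risk_score(ctx: dict) -> int:
    """Toy risk scoring over context attributes; the weights are illustrative."""
    score = 0
    if ctx.get("unknown_device"):
        score += 40   # unmanaged or never-before-seen device
    if ctx.get("unusual_location"):
        score += 30   # request from an atypical geolocation
    if ctx.get("off_hours"):
        score += 10   # access outside normal working hours
    return score

def evaluate_request(ctx: dict) -> str:
    """Called on EVERY access attempt, not once per session."""
    score = risk_score(ctx)
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up"   # demand an additional authentication factor
    return "allow"

# A routine request passes; the same user on an unknown device
# from an unusual location is blocked outright.
evaluate_request({})                                                   # "allow"
evaluate_request({"unusual_location": True})                           # "step-up"
evaluate_request({"unknown_device": True, "unusual_location": True})   # "deny"
```

The key point is not the scoring formula, which any real product replaces with its own analytics, but that the decision is recomputed for each request from current context.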
Here are the primary tenets of Zero Trust Architectures, as formulated in NIST SP 800-207:
1. All data sources and computing services are considered resources.
2. All communication is secured regardless of network location.
3. Access to individual enterprise resources is granted on a per-session basis.
4. Access to resources is determined by dynamic policy, including the observable state of the client identity, the application, and the requesting asset.
5. The enterprise monitors and measures the integrity and security posture of all owned and associated assets.
6. All resource authentication and authorization are dynamic and strictly enforced before access is allowed.
7. The enterprise collects as much information as possible about the current state of assets, network infrastructure, and communications, and uses it to improve its security posture.
Looking at these principles, it should be immediately clear that very few of them rely on specific new technologies that your organization does not already have (or at least could have even before any ZT considerations).
They simply imply that many existing technologies, such as identity and access management (IAM), endpoint detection and response (EDR), security information and event management (SIEM), or even cloud security posture management (CSPM), can be configured to work in concert to facilitate the primary purpose of a ZT architecture: enforcing centrally managed policies for each access to every enterprise resource, regardless of where these assets are located or whether they even reside on enterprise-owned infrastructure.
Of course, many vendors would happily sell you their products with a “Now with Zero Trust” label on the box, but blindly trusting vendor claims is the biggest mistake an organization can make.
Remember: “Never trust, always verify!”
To be able to assess the functional capabilities of a proposed ZT solution, it is important to identify the core logic components of the architecture, and then to understand how they can be translated to the real world through existing technologies.
Again, it is worth stressing that there is more than one way to design a ZTA, and in a real-world, sufficiently complex deployment, you’ll probably end up using (and integrating) several approaches.
The control plane of a ZT architecture follows the typical policy-based access management model, where any access attempt from a subject to a resource must go through a single point of access (Policy Enforcement Point, PEP) that enforces a specific decision calculated from a policy and a set of context attributes by a dynamic engine (Policy Decision Point, PDP).
The NIST ZT architecture splits the PDP into two separate components: the policy engine, which makes the actual decisions, and the policy administrator, which configures the relevant PEPs; however, this distinction is not relevant for a high-level understanding.
Another key component is the central administrative console where policies are managed, tested and deployed (Policy Administration Point, PAP).
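The relationship between these components can be sketched in a few lines of Python. The policy language here, plain predicates over request attributes, is an assumption for illustration; real engines use dedicated policy languages such as XACML or Rego, and the resource and subject names are invented:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: str            # authenticated identity of the requester
    resource: str           # enterprise asset being accessed
    device_compliant: bool  # posture attribute supplied by an EDR/MDM tool

# Policy Administration Point (PAP): policies are authored and stored
# centrally. Here a policy is just a predicate over the request context.
POLICIES = {
    "payroll-app": lambda r: r.subject.endswith("@hr") and r.device_compliant,
    "wiki":        lambda r: r.device_compliant,
}

def pdp_decide(request: AccessRequest) -> bool:
    """Policy Decision Point (PDP): compute a fresh, per-request decision."""
    rule = POLICIES.get(request.resource)
    return bool(rule and rule(request))

def pep_handle(request: AccessRequest) -> str:
    """Policy Enforcement Point (PEP): the single gate in front of the asset."""
    return "ALLOW" if pdp_decide(request) else "DENY"

pep_handle(AccessRequest("alice@hr", "payroll-app", True))   # "ALLOW"
pep_handle(AccessRequest("bob@eng", "payroll-app", True))    # "DENY"
```

Note that the PEP itself holds no policy logic; it merely enforces whatever the PDP decides, which is what makes central administration through the PAP possible.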
Anyone familiar with policy-based access control systems will immediately recognize the similarity of this model to existing solutions that are widely used in providing fine-grained access to applications, APIs and microservices, databases, and so on.
The rest of the components also represent existing external IT systems and security tools that provide the attributes needed for policy decisions as well as the technologies for encrypting the traffic, managing user identities, and monitoring various parts of the infrastructure.
At this point, readers might be wondering how exactly ZTA differs from the solutions they already have in place, when all the components of the architecture are essentially based on existing IAM and security technologies.
If that is the case, congratulations! Your company has already started its journey towards Zero Trust and is perhaps much closer to the ultimate goal than many of its peers.
The only change you might need to make is to ensure that all these current and future developments around security and access management are aligned with the basic principles of Zero Trust, and to strive to implement them with consistent central management.
Certain technologies are more suitable for some use cases than others, and while all of them implement the basic principles, different organizations might be facing different challenges when implementing them.
In the end, any substantially large enterprise will have to combine all these approaches across heterogeneous network environments, public and private clouds, and legacy systems.
The first, identity-based approach utilizes the identities of users, devices, and applications as the key factor for access policies.
The primary input to any access decision is the set of access privileges granted to a given subject, while other attributes like device security posture, geolocation, or time of day are used to augment the final decision.
Policy enforcement points that protect specific assets must be able to authenticate the subjects before granting access.
Typically, identity-aware proxies are used in this architecture – the best-known example being Google’s BeyondCorp architecture.
This approach can be roughly compared to traditional web access management but with policy orchestration.
The identity-based Zero Trust model can function reasonably well even without any network-level access control, and thus can be easily deployed in SaaS cloud environments.
However, on its own it does not provide sufficient protection against network-level attacks and cannot prevent attackers’ lateral movement in a flat, open network.
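The enforcement logic of an identity-aware proxy can be sketched as follows. The header names, token store, and device inventory here are hypothetical stand-ins; in practice they are backed by an identity provider and an MDM/EDR system:

```python
# Hypothetical stores; in a real deployment these are an IdP's session
# store and an MDM/EDR device inventory, not in-memory dicts.
AUTHENTICATED_TOKENS = {"token-abc": "alice@example.com"}
COMPLIANT_DEVICES = {"laptop-42"}

def identity_aware_proxy(headers: dict, backend) -> tuple:
    """Admit a request to the backend only after verifying BOTH the
    user's identity and the posture of the device it comes from."""
    user = AUTHENTICATED_TOKENS.get(headers.get("Authorization"))
    if user is None:
        return (401, "authentication required")
    if headers.get("X-Device-Id") not in COMPLIANT_DEVICES:
        return (403, "unmanaged or non-compliant device")
    return (200, backend(user))   # forward to the protected application

identity_aware_proxy(
    {"Authorization": "token-abc", "X-Device-Id": "laptop-42"},
    lambda user: f"hello, {user}",
)   # (200, "hello, alice@example.com")
```

Because the proxy sits in front of the application at layer 7, it can make these checks without any assumptions about the underlying network, which is exactly why this model works in SaaS environments.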
The second approach, network-based microsegmentation, relies on placing individual assets or asset groups on isolated network segments and letting infrastructure devices such as switches, firewalls, or network gateways perform the function of policy enforcement points.
Some solutions additionally implement host-based micro-segmentation using software agents (or standard firewalls built into operating systems).
But managing such a large number of firewalls even in traditional, largely static deployments is a substantial challenge; orchestrating them in real time to enable truly dynamic policy decisions requires a sophisticated orchestration and automation platform.
A great advantage of this approach, however, is that for many organizations, such platforms can be deployed relatively easily, and many enterprises already have them in place.
The biggest downside is the potential difficulty of making this architecture identity-aware through integrating it with existing IAM and identity governance solutions.
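The core idea of such an orchestration platform can be sketched as compiling one central segment policy into per-host allow-lists. The rule syntax and segment names below are invented for illustration; real platforms push vendor-specific firewall or switch configurations:

```python
# Central policy: which asset belongs to which segment, and which
# segment-to-segment flows are permitted. Everything else is denied.
SEGMENTS = {"web-1": "web", "db-1": "db", "hr-pc": "clients"}
ALLOWED_FLOWS = {("clients", "web"), ("web", "db")}  # (source, destination)

def host_rules(host: str) -> list:
    """Generate a default-deny allow-list for a single host's firewall."""
    group = SEGMENTS[host]
    rules = [
        f"allow from {peer}"
        for peer, peer_group in SEGMENTS.items()
        if peer != host and (peer_group, group) in ALLOWED_FLOWS
    ]
    return rules + ["deny from any"]   # default deny is what implements ZT

host_rules("db-1")   # ["allow from web-1", "deny from any"]
```

The database host ends up accepting traffic only from the web tier; even a compromised HR workstation cannot reach it directly, which is precisely the lateral-movement limitation microsegmentation aims for.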
Software-defined perimeter (SDP) or “Black Cloud” is an overlay network that hides internet-connected infrastructure from external access, creating a virtual perimeter around a set of resources, which allows only authorized users to connect to it.
After initial authentication and device verification, the control plane of an SDP platform creates an encrypted, monitored network tunnel that connects a subject to a requested asset without exposing any other resources not covered by the policy.
In this approach, the SDP control plane acts as the policy decision point, while a software agent or a gateway performs the enforcement functions.
SDP is currently the most popular approach when implementing ZT architectures for remote access to corporate resources for users working from home.
In this regard, it is hailed as the modern, more secure, and fine-grained alternative to traditional virtual private networks (VPNs).
Since it is entirely software-based, it lends itself naturally to managed offerings delivered from the cloud.
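The control-plane logic described above can be sketched as follows. The entitlement store and the returned “tunnel descriptor” are illustrative assumptions, not any vendor’s actual API:

```python
# Which user may reach which internal resources (hypothetical entitlements).
ENTITLEMENTS = {
    "alice": {"crm.internal"},
    "bob": {"crm.internal", "git.internal"},
}

def authorize_tunnel(user: str, credential_ok: bool, resource: str):
    """SDP control plane: authenticate first, then connect the subject to
    the single requested asset. Unauthorized callers learn nothing -- the
    rest of the infrastructure stays 'dark' to them."""
    if not credential_ok:
        return None   # no error detail: do not even reveal the resource exists
    if resource not in ENTITLEMENTS.get(user, set()):
        return None   # authenticated, but not entitled to this asset
    # The data plane would now set up an encrypted, monitored tunnel.
    return {"user": user, "tunnel_to": resource, "encrypted": True}

authorize_tunnel("alice", True, "crm.internal")   # tunnel descriptor
authorize_tunnel("alice", True, "git.internal")   # None: not entitled
```

Contrast this with a traditional VPN, which, after a single authentication, typically drops the user onto a broad network segment; here each tunnel is scoped to exactly one authorized resource.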
Despite the difficulties everyone faces when trying to define what exactly we are supposed to be shopping for, the Zero Trust market is rapidly growing, vibrant, and extremely diverse, and it has not yet reached the peak of its hype cycle.
Fueled by a major increase in demand during the pandemic times, the Zero Trust market size is projected to at least double within the next five years, surpassing $50 billion by 2026.
It is also worth noting that the concept has greatly contributed to creating entirely new markets for innovative security, compliance, and IAM solutions.
Unfortunately, understanding the real capabilities behind the “Now with Zero Trust” label can be challenging: the abundance of new technologies, often marketed under multiple acronyms, makes navigating this market complicated and confusing.
The fact that products with supposedly distinct capabilities, like cloud access security brokers (CASB), secure web gateways (SWG), and zero trust network access (ZTNA) can now offer substantially overlapping functionality, makes architecting and budgeting new IT infrastructures even more difficult.
Customers have to deal with an entire alphabet soup of acronyms that they have to somehow combine to build an effective and efficient security architecture.
So, what kinds of solutions should you look for when shopping around for Zero Trust?
Obviously, you should start with the basic prerequisites:
These are just some of the questions that have to be addressed before you even start considering your priorities and choosing the right ZTA implementation from the three mentioned above.
Do you feel ready for an identity-based approach?
This will work well if most of your assets are in the cloud or otherwise out of your direct control.
However, it won’t be as effective at protecting your existing on-prem infrastructure from cyberattacks like ransomware.
Does your company own a large on-prem IT infrastructure?
Then the microsegmentation-based approach might be a good first choice.
Microsegmentation is a mature technology used for isolating workloads and limiting lateral movements in networks, and a multitude of vendors, both large and small, are adapting their existing products to align with ZT architectures.
Finally, if secure remote access to corporate assets for employees working from home is your top priority, then software-defined perimeter technology is the recommended approach.
Providing a much more fine-grained, secure, and auditable alternative to legacy VPNs is a perfect quick win on the road towards ZTA adoption, especially in these challenging times.
Your next step might be combining networking and security capabilities in the form of Secure Access Service Edge (SASE) – a unified cloud-native platform that incorporates ZTNA functions as a part of an integrated package.
A good place to start looking for a cloud-native security solution most fitting for your business requirements is KuppingerCole’s Market Compass on Cloud-delivered Security.
However, one should not focus on networking alone – the principles of Zero Trust must transcend network-layer thinking to be successful.
ZTA requires proper authentication and authorization for each session involving users, applications, networks (including clouds), and data.
Other key components of ZTA are continuous risk-adaptive authentication and dynamic policy evaluation.
Zero Trust Architecture is a set of basic principles organizations must follow to build a modern security architecture, which must be underpinned by adequate technical solutions.
Furthermore, Zero Trust is not primarily about networks, but about identities, devices, systems and applications, and, last but not least, data. Zero Trust is about ubiquitous and continuous verification of security.
That includes network security controls, but also authentication, device security, information protection, and other areas.
Zero Trust must begin with a clear definition of targets, a vision, and a strategy.
Once this is done, policies, processes, and organization must be defined.
The cybersecurity team must work with data security teams and IAM teams, as well as with many other people in and beyond the organization.
For technology, the essential best practice is to run a thorough portfolio assessment. Which technologies are needed for a comprehensive Zero Trust approach?
Which technologies are already in place?
Which must be added, which must be replaced, and which should be retired because they don’t meet the needs of the modern Zero Trust concept?
As another best practice, Zero Trust must be defined in a deployment-agnostic manner.
Zero Trust models must work for everything, from on-premises infrastructure to the public cloud.
They should enable and optimize business processes, not hinder them.
And Zero Trust must be extended to cover software security, for software of any form (embedded, COTS, as-a-service) and regardless of whether it’s home-grown or externally procured.
As we have just learned, all the prerequisites and necessary technologies to enable Zero Trust Architectures are already available, and the market for Zero Trust products is booming.
So, what exactly is preventing so many companies from adopting Zero Trust beyond just standalone “quick win” deployments? First and foremost, it is the requirement to make the model work in every layer of your IT, from devices and networks to applications and data.
Without a carefully planned strategy, such an initiative may end up wasting enormous effort and resources.
Another potential obstacle is the lack of standardization in Zero Trust Architectures.
They were designed to be as generalized and technology-agnostic as possible to make them compatible with businesses of various sizes and industries.
Unfortunately, this vagueness is also the biggest challenge for many companies.
The NIST special publication on ZTA correctly identifies the need for standardization, especially in the area of APIs.
Most identity and security products expose APIs for other applications to use, therefore securing these APIs must be a top priority.
The proprietary nature of APIs is itself a source of security concern.
Working on these standards will probably take a few years, but this does not mean that your best strategy would be to wait until some future, standard-based turn-key ZT solutions emerge.
Especially in the current challenging times, migrating from legacy IT architectures to ZTA might make the difference between surviving in a rapidly changing world of remote work while avoiding the harsh penalties of compliance regulations, and failing completely under the combined weight of various security threats.
The choice is ultimately yours. Start your Zero Trust journey today!