Adopting a Zero Trust security model of strict identity verification and access control for every user or device is widely accepted as a solution, but many organizations struggle to find the best route to implementing it. Join security experts from KuppingerCole Analysts and Zero Networks as they discuss network segmentation as a departure point and how microsegmentation is evolving to make it easier to use.
Alexei Balaganski, Lead Analyst at KuppingerCole Analysts, will explain the problems with network security, how microsegmentation addresses those problems, what it is, how it works, and why it has not previously been more widely recognized and adopted as a means of achieving ZTNA.
Nicholas DiCola, VP of Customers at Zero Networks, will outline and demonstrate the concept of intelligent microsegmentation. He will explain how this approach makes it easy for organizations to use microsegmentation to achieve least privilege networking automatically and in a scalable way for every user and device, without having to deploy agents or configure policies.
Well, hello and welcome to another KuppingerCole webinar. My name is Alexei Balaganski. I'm a Lead Analyst here at KuppingerCole, and I am joined today by a distinguished guest, Nicholas DiCola, who is a Security Jedi and VP of Customers at Zero Networks. I'm really curious, how can one get such a title as a Security Jedi? Maybe I could be a Sith Lord next time.
Anyway, before launching into the topic of our webinar today, where we are going to discuss Zero Trust network access and how it can be implemented with intelligent microsegmentation, we really have to spend a minute on the housekeeping rules and talk about how the webinar is actually run. So we are all muted centrally; you don't have to worry about that. We will be recording this webinar, and the recording as well as the slides will be posted on our website, and everyone will get a notification, probably over email, really soon.
We will run a couple of polls during the webinar, so whenever you see a popup asking for your response, please spend a second; it'll help us to understand the audience better. And finally, after the presentations we will have a dedicated Q&A session where we will be answering your questions. You are able to enter a question at any time using the corresponding box in the GoToWebinar control panel, somewhere on your right. And without further ado, let's just start with our webinar.
As usual, we have three parts planned. I will start with my introduction to this whole topic of Zero Trust and how it's related to microsegmentation, if at all. Then we will switch to Nicholas, who will be giving you a much more technical and in-depth view into actually designing and implementing a microsegmentation architecture within your company. He will also be doing a live demo of their solution. And finally, as I mentioned, we will have a joint question and answer session by the end of the webinar.
And again, before diving into the content, let's just run a really quick one-question poll: have you actually considered Zero Trust in your organization already? Can I please have the poll? You really have just probably 30 seconds to respond, so please be quick.
Okay, interesting. And we only have a few seconds left.
Okay, thanks. Can we have the results? As I can see, and hopefully you can see them as well, 86% of our attendees responded with a yes, which is of course great news for us as presenters and as an analyst house that has been covering Zero Trust for years. But of course it'll be interesting to compare your initial response with a second poll we will run by the end of this webinar, when we'll have dived a little bit deeper into the technicalities. But I would like to start with a slide I have probably used for almost 10 years, and it has never been more relevant than today.
So yes, we are living in a hyper-connected world, and every time we talk about cybersecurity, we are always remembering and longing for the good old days of castle-and-moat security. Unfortunately, we don't have that anymore. Those perimeters all but disappeared years ago, because our IT is no longer behind a wall. We have resources and assets on-prem, in multiple clouds, on manufacturing floors, or just somewhere in the world where our work-from-home employees or contractors or even customers are residing. And of course, for decades this was the priority for business.
Nobody cared about security, because it was much more important to open up and bring your data, your products, your digital services as close to as many people as possible. And of course, this has led to the emergence of numerous security risks: ransomware, data breaches, cloud breaches, social engineering, you name it. Every resource, no matter where it's located, is now basically open to a number of possible attacks.
So we have to kind of go back to the drawing board, as we had to 10 years ago or maybe even earlier, and decide what we are going to do with this new reality and how we bring at least some semblance of security back to our business. Obviously, it has only been growing worse and worse over the years.
Yeah, we know that cloud is the new normal, and a fully mobile workforce is basically a reality for the majority of businesses, especially after the Covid pandemic. We have cloud services, we have basically our data and our assets everywhere. So what do we do with it? How do we bring back at least some semblance of safety in these times, when we have espionage and political issues and cyber wars already fought with malware and ransomware, you name it?
Well, we've been promised multiple different solutions from different vendors, but of course the buzzword du jour, if you will, is Zero Trust. And every time we talk about Zero Trust, we have to remember that Zero Trust is not a product. It's not even a specific architecture; it is just a set of guiding principles for designing your IT and operations, which by design will be more secure and improve your security posture. This is the definition of the American NIST organization, and this is what we've been promoting in basically all of our own Zero Trust research.
It's a little bit odd to realize that Zero Trust has actually been with us for 30 years already; it's nowhere near as new as many people expect. The term was coined back in 1994, and of course in 2009 the well-known and respected researcher John Kindervag actually introduced the principles of Zero Trust as we know them now. Then we basically had a period of Cambrian explosion of Zero Trust related things, mostly labels and marketing above all, but of course also architectures and products and services and a lot of interesting developments.
I would say a major milestone was in 2020, when NIST finally codified the ideas of Zero Trust and came up with a reasonably detailed explanation of how to actually build a real-life implementation of those principles. And yet we are still struggling, and people are still coming and basically saying: take my money and give me some box with a Zero Trust label on it, I just want to have it deployed by tomorrow. And we have to say again and again: sorry, it just doesn't work like that. It's a journey. You have to work hard, you have to combine your existing tools, whatever.
So by now we actually have the first reports that people are finally getting tired of Zero Trust, or at least of the actual term. So is this the end of the hype? I would argue no, this is in fact the beginning of the real productive development period. As they say at Gartner, after the trough of disillusionment comes the plateau of productivity, and this is what we are talking about today. So again, just a quick reminder: Zero Trust is just an architectural concept. It's not just about technology; it basically covers all of your company's assets, users, and data.
And the goal of implementing Zero Trust is not just to put a label on your IT infrastructure but to actually reach some tangible business benefits: to make your IT more secure and less complicated, protect it from ransomware and hackers, boost user productivity, you name it. And the primary idea behind the Zero Trust concept is basically that you have to know at any time what's going on within your network. You have to track all your users, resources, and data. You have to assume that your network is unsafe by default.
So assume breach at any time, and implement the security controls in such a way that that breach, that implicit untrustworthiness if you will, does not affect user productivity. And for that, of course, you have to enforce strict security policies and restrict access for every, even the smallest, resource within your network, and you have to verify and monitor everything continuously. You've probably heard about the tenets of Zero Trust many times; I won't go through every one of those. I just want to highlight that basically those seven rules can be grouped into several categories and statements.
Basically, they say: first, you have to isolate each of your sensitive resources. You have to apply strong security controls at every layer, from the lowest one up: you have to protect your data, you have to protect your network, you have to protect your identities and authentication, you name it. And of course, you have to monitor and record all security-related activities within your network. Only by implementing all three pillars, if you will, can you combine those and ensure that your security policies are actually working consistently and securely, and dynamically for every access decision.
I tried to sketch a really high-level overview of how Zero Trust architectures are traditionally presented. Basically, you have your endpoint devices, you have your identities, and every time they need to access a resource, they have to go through a policy evaluation process. Basically, some kind of an engine decides: can this user from this endpoint access that particular resource or not?
And if yes, your access request will go through a set of security controls which classify, encrypt, protect, and monitor access to your data, your applications, your infrastructure. And this happens dynamically, in real time, every time you need access. And of course, this access is always strictly enforced to be the least-privileged one. And of course, there is always the feedback loop which provides context and information to optimize, improve, and secure those policies even further. So this is an endless process, if you will.
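To make the policy evaluation step a little more concrete, here is a minimal, purely illustrative sketch of such a decision engine in Python. The names, the rule checks, and the shape of the request are all assumptions for the sake of the example, not any real product's API: the point is only that every request is evaluated dynamically, with default deny and a step-up check for sensitive resources.

```python
# Hypothetical sketch of a Zero Trust policy decision point.
# All names and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device: str
    resource: str
    mfa_passed: bool

def evaluate(request: AccessRequest, trusted_devices: set,
             sensitive_resources: set) -> bool:
    """Assume breach: deny by default, re-evaluate on every access."""
    if request.device not in trusted_devices:
        return False  # unknown endpoint is never implicitly trusted
    if request.resource in sensitive_resources and not request.mfa_passed:
        return False  # sensitive asset requires step-up authentication
    return True  # least-privilege allow for this one request only

req = AccessRequest("alice", "laptop-42", "hr-database", mfa_passed=True)
print(evaluate(req, {"laptop-42"}, {"hr-database"}))  # True
```

In a real architecture this decision would of course draw on far more context (risk signals, device posture, the feedback loop mentioned above), but the control flow, evaluate every single request against policy, is the essence.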
And this picture actually doesn't go into details; you will see many more technical details in the second part of the presentation. I only want to highlight one really big takeaway from all this Zero Trust.
Again, it's not a specific set of tools or protocols or anything; it's not even a specific architecture. I call Zero Trust the feng shui of IT, because it is just like those principles of Chinese philosophy, if you will. You don't have to build a new house to live according to feng shui; you just need to rearrange your furniture or perhaps hang a picture on your wall. You follow a really easy set of basic rules and guidelines, and there is more than one way to achieve perfection, if you will. And this should be the bigger takeaway from our today's webinar.
Again, Zero Trust can be implemented in more than one way, and you probably already have all or most of the components you need for that. You just have to make sure that you know how to combine them together and how to make sure that they operate in a continuous circle. So you would ask me: how does this actually relate to segmentation, the main topic of our webinar? Oh, that's kind of easy, because segmentation is a foundational principle of security. It even predates cybersecurity, it predates computers, it even predates submarines, if you will.
Segmentation is a basic approach to securing anything, from medieval forts to modern multi-cloud application architectures. Without segmentation, the entirety of your IT will go down if hit in one place. And of course, it's a little bit ironic that for the last decade at least, or even more, we have been conditioned to think perimeters are bad and Zero Trust is a good alternative to those. But this is really a false dichotomy. The problem with perimeter-based security is not the perimeter itself but what's inside it.
If your perimeter is too big, if it only isolates a few key components of your IT but not the rest of them, you have too much implicit trust within a single section of your IT submarine, if you will, and thus you will have an increased risk of a breach. Segmentation is a well-known and proven remedy, but it has to be applied carefully and consciously; it has to be adapted for modern times. The biggest challenge of quote-unquote traditional segmentation is that it is static. Of course, you can vary the degree of your segmentation.
Basically, you can go and just set up a LAN and a DMZ and end with that. Now, that would be a really kind of legacy, old-school approach towards segmenting your network. But you can go smaller and smaller: you can segment your cloud environments, you can segment applications or even individual microservices, for example, or go even further and segment on the container or single-process level. Whether you call it micro or nano segmentation or anything else, you have a spectrum of approaches.
And the only challenge here is that the complexity and the management effort increase as you go smaller and smaller. And of course, ideally, you build a tiny micro-perimeter around every single asset within your network and around every single user and endpoint and database and application and whatever. But doesn't that really sound like Zero Trust? That's exactly what is stipulated among the Zero Trust tenets: you have to isolate your resources, you have to enforce access to every resource, you have to validate it every time.
So yes, microsegmentation is actually the workhorse, one of the tangible, proven, validated, and battle-tested approaches towards implementing Zero Trust. The only problem is: how do you make it manageable at scale? Because as soon as you start thinking about basically deploying tiny firewalls around every one of your assets, you have to think: okay, now how do I deal with hundreds of firewalls? Thousands, maybe? What if my firewalls are ephemeral, because they have to be instantiated for every container I am running?
How do I do it across multi-cloud environments, hybrid environments, and so on? Obviously, you need some kind of sophisticated orchestration and automation platform for that. And this is where we are coming to the idea of intelligent microsegmentation. How do we extend this technology to work at multi-cloud scale? Well, obviously, it has to be identity-driven. Static segmentation no longer works; we know that. So it has to be aware of the actual identity of a user or a device or even a resource you need to protect.
It has to be dynamic and always learning. It has to know what's going on with your resources, with your traffic flows, with your network configurations. It has to constantly understand what's changing and how to adapt your policies to those changes. Obviously, when you are thinking at large scale, you have to add some kind of automation, and ideally that automation has to be smarter than just if-then-else kind of rules. So maybe there is a really strong place for machine learning and AI here. And of course, it has to be everywhere.
It has to cover all of your IT footprint; otherwise it just won't be Zero Trust if you're only doing it halfway, if you will, only for parts of your infrastructure. So ubiquitous deployment is a must. And one added bonus, I would argue, would be zero footprint. So ideally, it should be implemented without actually throwing in additional expenses for security hardware or administrative effort for changing your infrastructure and so on; it has to just work with existing tools. How do you actually technically implement these requirements?
Now, that will be discussed in the second part of our presentation, and I would like to leave you with one quote from Lewis Carroll: it takes all the running you can do to keep in the same place; if you want to get somewhere else, you must run at least twice as fast as that. So act now. Actually, you probably had to start yesterday, but do at least start today; don't wait for tomorrow. Because implementing Zero Trust according to all those requirements is still a journey.
It still takes time, and you never know when the next ransomware or a breach will hit you. It could be tomorrow, it could be even today. But I guess with that, we can give the stage to Nicholas. You are very welcome. Hey Alexei, thank you. I appreciate the intro.
I think it's really good context, and here at Zero Networks we agree with everything you said around Zero Trust being an architecture; it's gonna take multiple components. But I think some things that, you know, over the last couple of years have not been a reality to actually implement, especially intelligent microsegmentation, are something we've really overcome here at Zero Networks, and we've spent a lot of time creating that solution to make it easy to intelligently microsegment.
Yeah, so very much like Alexei's slide, if you think about networks today, they look something like this in some way, shape, or form, right? Maybe some of those components are there, maybe all of them. But, you know, we have remote workers, we have some type of on-premise network or cloud networks that are all well connected, which really is a problem. Because if you look at it from an attacker's perspective, an attacker can get in, and they typically start at an endpoint, right?
And once they get to that endpoint, they're easily able to compromise all of the different assets in the environment through lateral movement, because these networks are well connected, in what I like to call network openness, right? Because of the lack of segmentation or microsegmentation, the attacker is just able to move to the resources that they need to, especially as soon as they can get a credential. And so a lot of tools out there are great for detection, but attackers are still getting in, getting past those, and able to laterally move and compromise.
And we've seen a lot of ransomware in the last few years, which can really quickly bring down a network by just stopping all of the capabilities of the clients and servers. And so when we thought about how to really microsegment the network, as Alexei said, you know, making it ubiquitous was really one of our big concerns. And so the way that we do it is by deploying a virtual appliance that we call a trust server. And that virtual appliance remotely controls every asset and its host-based firewall, right?
So the built-in operating system firewall that's available inside the asset and controlling that remotely without having to put an agent on every machine. And so once we can remotely connect and enable that built-in firewall, we can start collecting and monitoring events from that firewall. Well this allows us to then take the asset and put it in a learning period.
And again, back to Alexei's last slide with the bullet points: this is where we automate building all of the microsegmentation rules for the environment, right? So we look at what's going into that machine, and we build inbound rules to allow all the low-privileged traffic. So instead of ending up with, you know, 65,000 ports open, maybe not all listening on an asset, we get that down to only the few low-privileged ports that need to be. So think about, you know, clients accessing a web server.
We're gonna learn that, and we're gonna create a rule that allows clients to access that web server, right? Because if we break applications as part of microsegmentation, you'll never be able to implement it as part of your Zero Trust journey. For a web server talking to a backend database server, we'll create a rule that says that web server can talk to that backend database server. And now that database is not exposed to everything on the network, but only to the few servers that actually need to access it for storing data.
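The learning-period idea described above, deriving least-privilege allow rules from observed flows while setting privileged ports aside for MFA, can be sketched in a few lines. This is an illustrative toy, not Zero Networks' actual implementation: the flow format, the function names, and the port list are assumptions made for the example.

```python
# Toy sketch of a "learning period": observed flows become least-privilege
# inbound allow rules; privileged ports are never opened, only MFA-gated.
# Flow format and port list are illustrative assumptions.
from collections import defaultdict

PRIVILEGED_PORTS = {22, 3389, 5985, 5986}  # SSH, RDP, WinRM (HTTP/HTTPS)

def learn_rules(observed_flows):
    """observed_flows: iterable of (source, destination, port) tuples."""
    allow = defaultdict(set)      # learned low-privilege allow rules
    mfa_gated = defaultdict(set)  # privileged ports: MFA policy instead
    for src, dst, port in observed_flows:
        if port in PRIVILEGED_PORTS:
            mfa_gated[dst].add(port)
        else:
            allow[(src, dst)].add(port)
    return dict(allow), dict(mfa_gated)

flows = [("clients", "web-1", 443),   # clients accessing a web server
         ("web-1", "db-1", 1433),     # web server to backend database
         ("admin-pc", "web-1", 3389)] # RDP attempt: not opened outright
allow, mfa = learn_rules(flows)
print(allow)  # {('clients', 'web-1'): {443}, ('web-1', 'db-1'): {1433}}
print(mfa)    # {'web-1': {3389}}
```

After such a learning window, the destination ends up exposed only to the sources and ports actually seen in legitimate use, which is exactly the "database only reachable from the web server" outcome described above.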
One of the unique things that we do during this learning period is we don't open any privileged ports and protocols. So think RDP, SSH, WinRM, those types of ports and protocols; we actually create MFA policies for those. And that's important to really stop an attacker, because when attackers are in, they need a privileged port and protocol on that destination to perform lateral movement, right? They need to be able to connect with those credentials that they've stolen and actually execute something on the destination machine, right?
So they typically need to RDP, PsExec, or use some type of tool like that that allows them admin access. And so we don't open any of that up, and we create MFA policies that require users to self-service MFA before they're able to connect. And we'll talk a little bit more about that as well as demo it. So at the end of the learning period, again back to automation, we automatically move all of those assets that are in learning into protection.
And so this is when we turn the firewall on to block inbound by default and only allow the few low-privileged things that we've learned on the network. So again, this is really about making segmentation easy, using automation and an agentless model, so you don't have to deploy agents to get to that basis of microsegmentation, using intelligence really quickly without you having to do a lot of work, right? By using that automation to build all the rules, you don't have to analyze all of your traffic and build them yourself.
The automation will do it all for you; it makes it super easy. We also give you the ability to protect OT assets and IoT assets in a little bit different model, because think about OT and IoT assets: they don't have firewalls on them, right? You can't remotely control and restrict or microsegment such an asset by its firewall, because there's no firewall built into it. But what you can do with Zero Networks is segment those using IoT segmentation, and we will block access to those IoT devices from all of the assets where we do control the firewall.
So now even if an attacker gets on a machine, they can't laterally move to that OT device which has no firewall. And let's just say you happen to get a bad patch on the OT device with a malicious payload; that attacker is not able to spread to the rest of your environment, again because of this microsegmentation, starting with a block-inbound default and only allowing those low-privileged ports and protocols in the environment.
So just to back up on the OT segmentation really quickly, to recover this (I know you didn't have the visuals): again, you can segment OT assets by using Zero Networks, and we block outbound access to those OT devices. And because the firewalls on the host OS are blocking inbound by default, only allowing that low-privileged traffic, those OT devices aren't able to spread laterally to your endpoints.
And then lastly, on the architecture side, there is an agent-based model for certain use cases where maybe your machines can't be controlled by a virtual appliance; then you can use an agent model to manage those. But again, everything functions the same way: you're just remotely controlling that firewall in real time from the cloud by applying all the rules and policies that are in the environment.
So then, just to show a visual of how the MFA piece works: again, we do this for all the privileged ports and protocols. If a legitimate user wants to connect to a machine to administer it, again using that privileged port and protocol, once that connection is blocked, that triggers a matching of the MFA policies, and the user is prompted with a self-service MFA using an existing identity provider. So this is being able to, again, step up that authentication when they have to connect and use that privileged access.
Of course, once they MFA, they're allowed by a just-in-time rule that's temporary and allows them access to be able to do that administrative function that they need to do. Now, we built this specifically to really stop attackers from using privileged ports and protocols, but you are able to use this for really any port and protocol.
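The just-in-time mechanic described here, a temporary allow entry created after a successful MFA that expires on its own, can be sketched as follows. Again, this is a hypothetical illustration of the concept under assumed names and a four-hour default lifetime (matching the default mentioned later in the demo), not the vendor's actual rule engine.

```python
# Illustrative sketch of just-in-time rules: after a successful MFA,
# a temporary allow entry is created and expires automatically.
# Function names and rule shape are assumptions for illustration.
import time

def create_jit_rule(rules, src, dst, port, ttl_hours=4.0, now=None):
    """Append a temporary allow rule with an absolute expiry timestamp."""
    now = time.time() if now is None else now
    rules.append({"src": src, "dst": dst, "port": port,
                  "expires": now + ttl_hours * 3600})

def is_allowed(rules, src, dst, port, now=None):
    """A connection matches only while an unexpired rule covers it."""
    now = time.time() if now is None else now
    return any(r["src"] == src and r["dst"] == dst and r["port"] == port
               and r["expires"] > now for r in rules)

rules = []
create_jit_rule(rules, "nick-pc", "dc-1", 3389, now=0)       # MFA passed
print(is_allowed(rules, "nick-pc", "dc-1", 3389, now=3600))    # True
print(is_allowed(rules, "nick-pc", "dc-1", 3389, now=5 * 3600))  # False
```

The key design point is that the port is closed again once the rule expires, so stolen credentials alone never grant standing access to a privileged protocol.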
So, again back to Alexei's concepts and architectures: you can now apply this to any port or protocol, or if you have a sensitive application that you wanna make sure is MFA-protected, you can now apply this technology to do that as well, as part of microsegmentation in the environment. So now, if we go back and think about an attacker, and even take something like SolarWinds, which was a software update compromise or a supply chain compromise, right?
Once that attacker is in and they try to go look for these ports, because of this microsegmentation, which is kind of the underpinning of Zero Trust, nothing is open, right? Only the few things that are allowed for low-privileged, kind of normal user access. And now the attacker has nowhere they can go; they can't use any privileged access without an MFA. So now they're stuck.
And this is really the core concept of what Zero Trust is all about: stopping the attacker, because they will get to a machine at some point in time in the environment, but you don't want them to be able to spread to a thousand machines in the environment. You want to contain them to that first one. Hopefully the EDR will detect and contain that as well, and now, because of the network-level controls, they're not able to move anywhere inside of the environment. So now let's jump over and see what this looks like.
So here in the portal you can see a protection overview, just very quickly, of the environment: how many assets might be protected or in learning (remember, I talked about the assets going through a learning period to do that intelligent microsegmentation), you can see the rules, you might see how many MFAs are occurring across the environment. But most importantly, here in Access you can see all of the different rules that have been applied to your environment from microsegmentation.
First and foremost, you can see all of these were created through AI, or through that learning period, where we took the asset, we said, hey, it's time to learn, and once it went through learning, we automatically generated these rules based on those assets. And you can see we do a lot here to really make this human readable, or human digestible if you wanna call it that. Because, if you remember, firewalls operate on source and destination IP addresses.
And so here you can see we have things like system groups, like all protected clients and all protected assets and any and internal subnets and functional tags where we automatically discover server roles and functionally tag them for you. And once we take those assets through learning and discover these types of tags and groups, we make it super easy to read these rules. So for example, all of my internal subnets can talk to my domain controllers, all of my domain controllers on all of these ports and protocols that we learned.
And again, you would have to do a lot of human analysis to figure out all the ports and protocols that a domain controller uses, whether that's searching through docs.microsoft.com or literally watching a bunch of packets on the network. And we automate all of this for you through a 30-day learning window. But if you notice, there's no 3389, which is RDP, or 5985, which is WinRM; again, those privileged ports are protected with MFA.
And so we automatically build these inbound MFA rules, and you can see we've created quite a few to protect various different ports and protocols. But more importantly, we keep it simple. So the default policy that we give you will say: any user on any asset using any process trying to connect to something that's protected on RDP must MFA. And if they do MFA and pass that MFA, then they will get a rule created for four hours that will allow them that access. And of course, you can change how long you want the rule to be created for.
You can get very granular and say I only want certain users. So back to Alexei's point about being identity-driven: you can make decisions based on identity, assets, even the process level, the source process or destination process, on top of the port and protocol that you want to make sure the user is actually connecting to when they MFA. And so if I lock this down to administrators, and a non-administrator tries to RDP to any machine in the environment, they would never be prompted; it would just be blocked, and they would never have that access.
But here is what it looks like for the user with these MFA policies if they want to go connect. I'll try to connect to our domain controller, and I'm just gonna grab my phone. So here I just got a Duo prompt on my phone, I know it's hard to read. I'm also gonna get a browser prompt, which I will drag down to my screen here in a second. And in this browser prompt, we give context to the user. Remember, this is a self-service MFA for that, you know, privileged connection: Nicholas is trying to connect to the domain controller from his machine using rdcman.exe.
And this context is important, because if an attacker was on my machine trying to connect with cmd.exe, powershell.exe, maliciousprocess.exe, that context is very, very important. And so here I could sign in with my identity provider, which would require an MFA, and then I would be able to connect. Now, I just did approve on my phone in Duo for simplicity, and we can just quickly go back and look, and now we can see a rule has been deployed that says my machine can access that domain controller using 3389, and it'll expire in four hours based on the policy.
And that came from the MFA platform. And just to show you that the rule actually did get deployed to the endpoint: now when I connect, I will be prompted for credentials.
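The just-in-time rule described above, opened for one source, destination, and port after a successful MFA and expiring automatically, can be sketched roughly as follows. This is an illustrative model only; the names, fields, and four-hour default are assumptions drawn from the demo, not Zero Networks' actual implementation.

```python
# Sketch of a just-in-time (JIT) allow rule created after a successful MFA.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class JitRule:
    source: str        # machine the user authenticated from
    destination: str   # protected asset, e.g. a domain controller
    port: int          # e.g. 3389 for RDP
    expires_at: datetime

def create_jit_rule(source, destination, port, ttl_hours=4):
    """Create a time-bound allow rule after a successful MFA."""
    return JitRule(source, destination, port,
                   expires_at=datetime.now() + timedelta(hours=ttl_hours))

def is_allowed(rule, source, destination, port, now=None):
    """The connection is allowed only for the exact tuple, and only until expiry."""
    now = now or datetime.now()
    return (rule.source == source and rule.destination == destination
            and rule.port == port and now < rule.expires_at)

rule = create_jit_rule("nicholas-pc", "dc01", 3389)
print(is_allowed(rule, "nicholas-pc", "dc01", 3389))  # True while unexpired
```

The key property is that nothing is open by default: access exists only as a narrow, expiring exception created by the MFA event.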
Again, controlling that network-level access is very, very important, because then the attacker is not able to connect. And this is super important. If you think about it a little bit deeper: if that port and protocol is closed, the attacker is not able to launch even a vulnerability against it without an MFA, right? They must MFA first to even open the port and protocol.
So even if there's a vulnerability in RDP or SSH or any of these privileged ports and protocols, or really any protocol you want to protect, it is closed, and they have to MFA first to open it up, which means they can't even use their vulnerability. And we give you visibility into all this traffic in the environment in what we call activities. You can think of these as connection metadata. And so you can see everything from the user, the process, and the machine: outbound on the left side, inbound on the right.
We can see it was an allowed outbound from the source, but it was blocked inbound to that domain controller when I tried to connect on RDP. And if we click on the shield, we can see why it was blocked: there was no open rule allowing that inbound traffic.
Again, we keep it closed and require an MFA. And once I MFA, I was able to connect because of that MFA rule that had been created. And of course, you can see all of these things that happened, such as creating just-in-time rules, in an audit log here. So you can see I created a just-in-time rule for the domain controller, and we have all of the details, including which MFA method was used. You can jump to the rule, you can jump to the MFA.
So again, it's super important to be able to use this as part of your zero trust architecture, to really microsegment the network and only allow the few things that need to be allowed in the environment, and you can even review these. So if I go to that domain controller as an asset, I can see some rules, which is great, and I can also see any global rules that might be applied to that machine. But I can also visualize this. So here I can see my domain controller has some incoming allow rules and outgoing allows to six different entities.
And then, based on the rule type, I can expand this out and see, oh, here's Nicholas's access on 3389. There are also some permissive rules allowing various different things, like Red Hat has access to the domain controller. There are some groups, like internal subnets, that have a few ports with access into the domain controller. And I can also flip and look from an outgoing perspective: what does this domain controller have access to? And I can see it has some privileged access to the share server on port 443 and port 445.
And I can also analyze all of the traffic. Instead of just looking at activities in a list, which can be tough to summarize, you can actually summarize all of that traffic and ask, okay, what are the distinct ports, distinct users, assets, and processes, and see the counts over the last week. So here we can see 59 different assets, with 22 different users, and you can click on each of these and see who they are, across 49 processes, were accessing LDAP in our environment, and it occurred about 27,000 times in the last week.
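The kind of summary described above, distinct users, assets, and processes touching a given port over a window, is straightforward to sketch. The tuple layout and sample data below are illustrative assumptions, not the product's actual schema.

```python
# Sketch: aggregating raw connection "activities" (connection metadata)
# into distinct counts for one destination port, as shown in the demo.

activities = [  # (user, asset, process, dest_port) -- illustrative sample data
    ("alice", "pc-01", "chrome.exe",  389),
    ("alice", "pc-01", "outlook.exe", 389),
    ("bob",   "pc-02", "svchost.exe", 389),
    ("bob",   "pc-02", "svchost.exe", 443),
]

def summarize(activities, port):
    """Count total connections and distinct users/assets/processes on one port."""
    hits = [a for a in activities if a[3] == port]
    return {
        "connections": len(hits),
        "users":     len({u for u, _, _, _ in hits}),
        "assets":    len({a for _, a, _, _ in hits}),
        "processes": len({p for _, _, p, _ in hits}),
    }

print(summarize(activities, 389))
# {'connections': 3, 'users': 2, 'assets': 2, 'processes': 3}
```

At scale this is the same aggregation, just over millions of records, which is what turns a raw activity list into an understandable picture of who actually talks to what.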
So you can aggregate all of that activity traffic to really understand the network connectivity that's occurring in the environment. And then just one last thing: we talked about OT segmentation. So here we have an OT asset we've segmented. You can see it's protected with Zero Networks, and same thing: if I attempt to connect to it, it's blocked. I just got an MFA policy prompt as well, so I'm going to say approve.
I also got a browser prompt, and just that fast, I now have access to that device, which has no firewall that can be controlled; but now it's segmented, and I can apply MFA to even basic connections like web. This OT asset does not allow any type of MFA itself; it doesn't even allow us to change the admin username. The best we can do is put a complex password on it, but now we're able to protect it with what we call an outbound MFA, where, same thing, we're saying that if any user tries to connect, they have to pass an MFA.
And so you can see a temporary rule has been created that allows Nicholas access to that meeting room camera for a time-bound period, and that'll automatically expire, and I'll have to re-MFA if I want to connect again. So let me jump back over to the slides, and I think it's your turn to present the last slide, Alexei.
Okay, well, that was really impressive. The next part of our agenda is the Q&A session. And of course, again, I encourage all of our attendees to submit their questions through the Questions tool in the GoToWebinar control panel. I will just read them aloud, and we will talk about them, and in the meantime we will go through some additional useful content. But before we jump into that, let's quickly run our second poll of the day.
Again, it's just one question, but this time it's about your plans on microsegmentation. Are you already a fan, or are you still evaluating it? Or maybe you are a skeptic. Can you please run our second poll?
Okay, interesting. And in the meantime, let me give you my first feedback on your presentation, including your demo. I have to say, it's really impressive how it ticks all of the boxes I mentioned in my part: intelligence, automation, scalability, and even a zero-footprint approach, right? Because you are mostly reusing the firewalls which already exist on the host devices.
Yeah, I appreciate that. We spent a lot of time, a couple of years, really building the product to solve those challenges. When we talked to customers and got feedback about these challenges, the things you mentioned were things we really, really focused on solving for customers, right? Because no one has been able to implement microsegmentation. Let's say you could deploy agents and get past all of those hurdles.
Building all of the microsegmentation rules is still just super complex and very, very time consuming, right? And many customers have told us they've been trying for years and have not been successful. So we really focused on solving those challenges that you mentioned. So thank you, I appreciate it.
So, looking at the results of the poll, it's interesting to see that about half of the people are actually already implementing it, and I really hope they are implementing the right kind of microsegmentation, the one we have seen today, and not the old-school approaches. But I can also see that at least a few people were actually considering it, or even tried it before, but somehow it just didn't work out. So this is probably a good day to try it again, this time doing it the right way, aligned to the tenets of zero trust.
I will say it's good that nobody said "we don't need it." Yeah, absolutely. So everybody realizes they do need it in some way, shape, or form, which is good. Of course, we have a bias here, because people who don't need microsegmentation usually don't attend webinars on that topic, but at least I think we have a really committed bunch of people attending our webinar today, and this is great.
Okay, so let's start quickly with the first question that we have from the audience. This is actually a question I hear almost every time we discuss any kind of automation platform: how long does it take to teach your platform to do its job properly?
Yeah, so what we've found is that most business cycles run on a 30-day period, right? Think about things like finance: jobs run once a month so that they can report, and all of those types of things. So we recommend a 30-day learning window for servers, and for clients you can typically do two weeks. You can also extend that if you have critical assets and you're worried that, say, a batch job only runs once a quarter; you can extend the learning for multiple months and just let it go longer. But by default we recommend 30 days, and that's enough.
We haven't really seen any issues with the 30 days in the customers that we've worked with. Great question. And I guess a follow-up on that would be, and this is my own question if I may: how do you ensure that nothing breaks? Of course, you can extend your learning period more and more, but are there any alternative approaches? Do you have some kind of testing or simulation of policies? Or can you manually add something that a customer feels won't be caught by learning?
Yeah. Can you combine all those?
Yeah, so we have what we call blocks after learning. Let's say something goes into protection and all of a sudden there are a few blocks: we alert and say, hey, this might have been missed in the learning, and let them review it. Or let's say it's a couple of months later and something new is blocked: you can easily update the rule and add a port, and you saw how fast those rules are applied, boom, it's open. So you can very quickly respond and open that up if you need to, and you can just as quickly respond and close something down from an IR perspective if you need to.
For example, we had a customer that asked us about the new SMB vulnerability and how to block it, right? Because you shouldn't be talking to the internet on SMB. We helped them block SMB to all external IP addresses except their corporate ones, and that helped protect them from the vulnerability very, very quickly.
So yeah, in what we've seen, it's only in very, very rare cases that something gets missed in learning. It's very rare.
Again, with 30 days you're normally going to see everything that you need to. And one of the things we think about in the AI is favoring opening ports up so that an application doesn't break. So sometimes we might open some, you might say, extra ports that maybe aren't needed, because we don't want to break the application, but we know it's okay, because we've protected everything else that's administrative with the MFA.
So yeah, in favor of being more application-centric and not breaking applications, we're definitely very sensitive to that and make sure we open the right ports and protocols, and sometimes, one could argue, some extra ports, so that things don't break. Well, another follow-up question I hear a lot when we are talking about AI and machine learning and such: can you actually reuse the findings, the learnings, from one customer with another one? Like a crowd-wisdom approach, where your models, or whatever you call them, are shared between customers?
Is that something you can support? Yeah, so no, actually, we don't use a bunch of ML and AI underneath. I know it says AI, meaning artificial intelligence created the rule for you, but it wasn't through AI. One of the things about network traffic is that it's not actually great data for ML and AI models. So we use a very deterministic approach: hey, we know domain controllers need these ports, we know web servers need these ports, we know these types of things need these ports.
And then anything else falls into a bucket of, hey, what is actually connecting? Maybe it's something we're not aware of; go ahead and create a rule for that, again, as long as it's not a privileged port and protocol. And so it's more just smart intelligence that we've learned, and we can apply those things we've learned from all the customers to all customers, yes, but it's not an AI or ML model being shared across different customers. I guess that's just another point to strengthen the point I made earlier: don't look at the label.
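The deterministic approach described, known server roles map to known required ports, and anything else learned is opened only if it is not a privileged port, can be sketched like this. The role-to-port table and the privileged set are illustrative assumptions, not the vendor's actual rule base.

```python
# Sketch: deterministic rule generation. Known roles get known ports;
# observed traffic adds ports only when they are not privileged
# (privileged ports stay closed and MFA-gated).
PRIVILEGED_PORTS = {22, 3389, 5985, 5986}  # e.g. SSH, RDP, WinRM: MFA-gated

ROLE_PORTS = {
    "domain_controller": {53, 88, 135, 389, 445, 636},
    "web_server": {80, 443},
}

def rules_for(role, observed_ports):
    """Open known role ports plus any observed non-privileged ports."""
    allowed = set(ROLE_PORTS.get(role, set()))
    for port in observed_ports:
        if port not in PRIVILEGED_PORTS:  # privileged ports stay MFA-only
            allowed.add(port)
    return allowed

print(sorted(rules_for("web_server", {8080, 3389})))
# 3389 is excluded: it stays closed until an MFA opens it just-in-time
```

Because the mapping is an explicit table rather than a trained model, the "learnings" transfer trivially between customers: they are just shared knowledge about what each role needs.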
If it says AI, it doesn't mean it has to be machine learning, and if it says zero trust, don't trust it. Actually look behind it, ask the vendor questions about how it works and why it works. And you might actually be pleasantly surprised that it works much more easily and more deterministically than, quote unquote, traditional machine learning. That's right.
Okay, next question from the audience. How do you isolate the bad boys during the learning process?
Okay, I'm not sure what they mean by "bad boys." Yeah, I think they mean: if an attacker is in the environment, could we learn their behavior? So the answer is yes, potentially, right? Because if the attacker is very much acting like a user and connecting to a file share, right?
Yes, we could learn that, but again, we're going to open the file share up so users can access it, because that's a low-privileged operation that doesn't need to be super complex. So there's a potential, yes. But the reason I say it doesn't matter is that the attacker is going to try to laterally move in the environment, and that's going to require a privileged port and protocol. We never open those up unless it's something like an interactive service account, right? Because obviously service accounts can't MFA. Anything else automatically goes to an MFA policy, and that will stop the attacker.
And we've actually had a real customer that we learned on. We deployed the system, and they never knew the attacker was in the environment. They found the attacker afterwards, because they started getting weird MFA prompts with malicious process names, right? Remember when I said the context is very important? And so at the end of the day, I would argue it doesn't matter, because we're going to close down all of those privileged ports, and you'll start getting prompts for MFA, and that's what's going to stop the attacker.
So maybe if you learn that they connect to a server, it's not that big of a deal, because likely other users connect to that server too. I guess that's again showing the importance of the difference between anomaly detection in the traditional sense and actually knowing what's going on. You don't have to worry about mislearning something, because you know that these things just aren't anomalies; they are known malicious behaviors, which you can protect against from the start, right? That's right. Okay.
Okay, next question. Have you experienced breaches to renewable energy or wind farms? How would you protect remote locations like that?
Yeah, that's a tough one, right? And it's something we're evaluating and looking at, because there's physical access that you have to think about in that type of model, right? If it's a windmill sitting in the middle of someone's leased property, you may not be able to control physical access to that device in the environment. But ultimately, you can still segment it like we do, so that if an attacker were to come into the network in the traditional model, which is where we're seeing many, many attacks, right?
They're not traditionally, today, walking into buildings and plugging into networks, right? The nation-state actors just do it remotely; they don't need to fly over and try to break into the building. And so with that, you can use the OT segmentation and IoT segmentation for those types of devices to make sure the attacker can't move to them. So hopefully that helps answer the question.
It probably requires a little bit deeper conversation, but right, it's really interesting. So OT seems to be a really big use case for your solution. Do you face the kind of traditional opposition from the hardcore old-school OT security people, who would actually value process continuity much higher than cybersecurity? Are they afraid of segmenting too much and breaking the manufacturing process, for example?
No, I mean, that's the good thing about the learning, right? We're very deterministic, and again, when we learn, oh, this OT device needs to connect to the server to run this batch process daily, or upload metrics daily, or whatever it is it's doing connecting to that server, we're going to create a rule that allows that OT asset to talk to that server. But now it's nice and controlled; it can only talk to the destinations that it needs to.
And so regular clients and other things like that won't have those capabilities, so the attacker can't talk to that OT device; or even if there was a bad patch on the OT device, the attacker can't move from it to the other assets in the environment. So yeah, we absolutely learn that and make it easy, and I think people see how quickly the learning happens and how easy it is, because you don't have to analyze, and again, we favor making sure we don't break applications, because we know we're protecting the privileged ports and protocols.
You know, we haven't had a lot of pushback in that OT sense. Right, right. So that's again the major difference between the old-school segmentation and the intelligent kind. That's right.
Okay, great. Great. Next question.
Okay, how can we be sure that the principle of least privilege is used? My fear is that rules of the any-to-any type are enabled. So how do you prevent those types of rules from applying?
Yeah, so the AI will never create an any-to-any rule. There are certain cases where we might, because of what we learn in the environment, create something like a central source to any, right? So think about a vulnerability scanner: it needs to be able to talk to many assets, normally on all ports and protocols, right? So we'll create a rule that says that vulnerability scanner, or a set of vulnerability scanners, can access all machines on all of that.
So again, this is about being deterministic in the AI and the learning period when we create the rules, to make sure it's not any-to-any. Of course, we have administrators in the portal, and an administrator can always create a rule, including an any-to-any one, so you do have to do some rule review, and we have review approvals, right? So you can set those types of things up to make sure somebody doesn't create something like an any-to-any rule without it going through review.
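A review guardrail of the kind just described, rejecting any-to-any rules while still permitting broad scope for explicitly approved central sources like vulnerability scanners, might look roughly like this. The group names and the shape of the check are illustrative assumptions.

```python
# Sketch: least-privilege review of a proposed firewall rule.
# Any-to-any is always rejected; broad scope is tolerated only for
# an approved central source such as a vulnerability scanner group.
SCANNER_GROUPS = {"vuln-scanners"}  # assumed approved-exception group

def review_rule(source, destination, ports):
    """Return True if the proposed rule passes least-privilege review."""
    any_source = source == "any"
    any_dest = destination == "any"
    any_ports = ports == "any"
    if any_source and any_dest:
        return False  # any-to-any is never acceptable
    if any_dest or any_ports:
        # broad destination/port scope only for approved central sources
        return source in SCANNER_GROUPS
    return True  # narrowly scoped rules pass

print(review_rule("any", "any", "any"))            # False
print(review_rule("vuln-scanners", "any", "any"))  # True
```

In practice this check would sit in the approval workflow, so an administrator's overly broad rule is flagged before it is ever deployed.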
And again, let me stress, this is the right way of doing zero trust, because who is watching the watchmen? If you do not have this kind of zero trust governance, then you are not doing zero trust properly. Great. The next question is about multifactor authentication: can you only apply it to privileged ports, or can you configure any kind of protocol to support it?
Yeah, so it can support any port or protocol. During the learning period, the AI will only create MFA policies for privileged ports and protocols.
Again, our number one goal was to really stop attackers, because if you look at the attacks today, it's all lateral movement once they're inside the environment, and nothing is stopping them from getting in; they're always going to get in in some way, shape, or form. If you have that assumed-breach mentality, then it's about stopping them from laterally spreading. So that was the number one goal.
So a side effect of building it to protect privileged ports and protocols is that you can use it for any port and protocol, and it doesn't matter about the application, because this happens at layer four. So it doesn't matter what's higher in the app stack, whether it's a web app, a SQL database connection, or whatever: you can protect any port and protocol with the MFA capability.
Well, going back to that fear of breaking stuff: do you provide some kind of guidance or best practice on which ports should always be protected with MFA and which have some risks involved? I can imagine that for human identities it should probably be fine almost all the time, but can you extend this strong authentication to, I don't know, APIs, automated clients, and things like that, which could actually break some flows?
Yeah, that's a deeper conversation, but the short answer is it depends, right? We do actually have a method in our MFA methods called no-MFA. So you could have service accounts and non-interactive type things still do just-in-time. Of course, they wouldn't MFA, because they can't; they're non-interactive, right? So you can still use some just-in-time capability to protect those and ensure they're only connecting to the machines that they should be connecting to. But we're already going to learn that; we will already look at interactive service accounts.
For example, I didn't show it in our rules, but we have a rule for a server that manages another server. It's a service account, and it uses WinRM. And so we did allow WinRM, because we learned there's already an interactive service account there.
But again, we could convert that to a just-in-time capability if we wanted to. Again, though, we don't want to break applications; we want to make it easy.
So yeah, our guidance is: protect all your privileged ports and protocols for interactive human access; for service accounts, limit where they can actually use that network connection using the rules, which we will learn for you. And then you really won't have a problem with breaking applications if you follow that model.
Okay, great. There is one really interesting question from a totally different perspective. One thing I mentioned earlier is that it has to be a ubiquitous deployment. So in a way, you already do that, because you are reusing the existing host-based firewalls, but can you also incorporate other types of firewalls? What if your customer says, well, we already have our Fortinets or Palo Altos or anything like that, would you work with those as well?
No, because the problem, and the reason we haven't implemented it, is that yes, if you have a Fortinet, it's in the middle of your network and you might be routing traffic through it, but those two clients that are plugged into the same network switch on the same VLAN, because not every machine is on its own VLAN, right, can talk directly to each other without routing through the Fortinet.
So if you want to microsegment, you have to get to the endpoint. You can't just put firewalls in the network. You can have them, but we actually have most customers removing their middle network firewalls, not their perimeter ones, but the ones in the middle of the network, because now they're basically doing it at the endpoint. Why do you need to do it in the middle of the network and force routing traffic through it? It actually makes things easier, because now you can go back to a flat network, since you're controlling it at every endpoint, if you so choose, right?
It's really ultimately up to the customer, their architecture, and how they want to design it. But like you said, it should be easy, and you shouldn't have to change your architecture. So if they want to keep the network firewall in the middle, that's fine. The problem is that doing it at the Fortinet doesn't buy them protection against lateral movement between things in the same segment, because ultimately what you have with a Fortinet in the middle is segments, not microsegments. So you're not really competing with them. I can imagine you could, but I guess you can also coexist.
Yeah, we can coexist. And yeah, depending on the view, there might be some competition to it. But yes, we can absolutely coexist with them. It's just that you would manage the rules in two different places, right? You can get very granular all the way down at the endpoint, or do a centralized policy in the middle of your network.
But again, that doesn't give you the protections inside that segment. Okay, great. And we've actually just reached the top of the hour, so there is no more time left for questions. If someone still has questions, you are very welcome to contact us directly. You will find our contact information in our slides, which will be available for download anyway. Thank you very much, Nicholas. Thanks to all of the attendees and the future viewers of the recording. I hope to see you at another KuppingerCole webinar or event.
Thanks again, have a nice day. Thanks, Alexei. Thanks for having me.