Webinar Recording

Policy Based Access Control for Cloud-Native Applications


As companies shift to cloud-native applications, the complexity of a microservices framework can be daunting. When applications are built in a cloud-native stack, authorization is also infinitely more complex. Crucially, Open Policy Agent (OPA) decouples policy from code, enabling the release, analysis, and review of policies without impacting availability or performance.


Welcome to this webinar. My name is Graham Williamson. I am an analyst with KuppingerCole, and this session today is on OPA, the Open Policy Agent. I have with me Gustaf and Jeff Broberg from Styra, who are going to help us in understanding this very important facility of a cloud-native environment.
Just a couple of slides on KuppingerCole activity. The biggest one is right at the top there: the European Identity and Cloud Conference, coming up next month — more on that in a minute. Do have a look at the other activities. KCLive is coming up on June the 22nd; it's an opportunity to delve a bit deeper into enterprise cybersecurity, so do join us for that one. And then in November we have our next Cybersecurity Leadership Summit, which is a whole affair where we delve deeper into this concept of how we look after cybersecurity in our enterprises. Back to EIC: it's coming up in mid-May, and this year is the first year it is in Berlin. That's going to give us the opportunity to have a larger conference, and we really hope that you'll be able to join us in person.
If not, there is the opportunity to join virtually. The whole concept of EIC is, again, a time where we can come together and discuss identity and access management and cybersecurity topics, and delve a little bit deeper than we would normally be able to do in a smaller activity. So do join us in Berlin in May. Okay, for this webinar: unfortunately, you are all muted — as participants, there are just too many of you to have you participate interactively, so the audio controls are centrally managed. But we do want you to participate. There is an opportunity in the last section of the webinar for questions and answers. It is very important that we understand the issues that are on your mind, so that we are able to answer your questions; it helps us understand the audience, and it helps you in terms of getting specific questions answered. There are three polls during the webinar, and again, we would like you to participate in those to help us understand the sorts of organizations you're representing. This webinar is being recorded, and the slides are going to be made available as well.
The agenda is in three parts. I'm going to start with an introduction to the cloud-native market, to help us all come to a common level of understanding of where the development of cloud software is moving. Then Jeff and Gustaf are going to help us understand Styra's solution; they have some slides specifically addressing putting together an OPA environment for your organization. And the last part is the question-and-answer session. So, in terms of policy-based access control: it is Open Policy Agent that enables us to have a policy-based access control mechanism within our cloud environment. This model, which we're probably all familiar with, is a critical component of the cybersecurity stance within our company. It gives us the opportunity to have a centralized policy management approach — we no longer have silos within our organization that are doing their own policies for access control.
It allows us to have decisions in real time, so we are no longer constrained to some entitlement that was set up at some time in the past. And it enables us to have very fine-grained access control: we can put more attributes into that policy decision, and we can include context attributes as well — the time of day, the endpoint device being used, and those sorts of context variables. It then allows us to have an integration of our access control policies across our organization. Typically there are four components of a policy system. One is the policy decision point: an application will use a policy decision point to get a permit or deny, a true or false, and that decision is given to the application or to the resource being protected. We have an enforcement point, typically integrated within, or close to, the actual resource, using that decision. We have an information point, which provides the detail for the policy evaluation. And we have an administration point, which allows us to set up, control, and manage the policies that we have.
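The four components described above can be sketched in a few lines. This is an illustrative toy, not any product's API; the users, roles, and policy names are all invented for the example:

```python
# Illustrative sketch of a policy-based access control system:
# PIP (information), PAP (administration), PDP (decision), PEP (enforcement).

# Policy Information Point: supplies the attributes used in evaluation.
def pip_lookup(user):
    directory = {"alice": {"role": "admin"}, "bob": {"role": "viewer"}}
    return directory.get(user, {})

# Policy Administration Point: where policies are authored and managed.
POLICIES = {"delete_record": {"allowed_roles": {"admin"}}}

# Policy Decision Point: evaluates policy plus attributes into permit/deny.
def pdp_decide(user, action):
    attrs = pip_lookup(user)
    policy = POLICIES.get(action)
    if policy is None:
        return False  # default deny when no policy matches
    return attrs.get("role") in policy["allowed_roles"]

# Policy Enforcement Point: sits with the resource and enforces the decision.
def pep_handle_request(user, action):
    return "permit" if pdp_decide(user, action) else "deny"

print(pep_handle_request("alice", "delete_record"))  # permit
print(pep_handle_request("bob", "delete_record"))    # deny
```

Note the separation: the enforcement point never inspects roles itself — it only acts on the decision, which is exactly what lets the policy be managed centrally.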
I've just completed a research document that discusses this in more detail, and I was really amazed at what's happened over the last couple of years in this market segment. We've gone from a traditional on-premise approach — many companies still have one, perhaps with a private cloud or even a public cloud in a hybrid arrangement, with multiple applications spread across various cloud and on-premise environments, and we obviously need to keep supporting that environment — but there's been a lot of movement now to a cloud-native environment. Our application is no longer a monolithic VM running a monolithic application; we've moved to a containerized approach that allows us to segment our application, and that moves us to a microservices approach. Each of those containers now needs to have an authorization service attached to it.
So that has brought us to cloud-native development: how do we manage our cloud-native developer activity? Our first poll is to help us understand the participants in this webinar. Some of you will have large developer environments, and some of you won't have any software development activity. Could you please just give us an indication of which of those four options applies to your organization? Thanks — we'll look at the results in the Q&A session a little later in the webinar. In terms of this progression into the cloud-native environment, it's happened very quickly. As I mentioned, we've moved from the coarse-grained approach that we've had, with a sort of lift-and-shift deployment in cloud on a VM, to a containerized approach. As soon as you move to a containerized approach, we now have application program interfaces that we must manage.
Those APIs need to be managed in terms of the security capabilities that are built into them and the management of the API itself. But then we've moved into a microservices environment where we have fine-grained requirements, so we need to have a component of our decision making attached at a distributed point — at the container level within our application. And it goes a little bit further than that now. What we're seeing is that if we've developed an open solution like this for one application, there are probably other applications in our organization that need exactly the same thing. So we're moving into more of a service mesh environment that supports all of the resources within our organization that must be protected.
So, just recapping why cloud native: there are several benefits. The first one is the lower cost associated with containerization. Containerization allows us to scale a lot better than we could in a monolithic application. If we've got, say, an enrollment component within an application that might be used very heavily at certain times but very little at others, it really allows us to scale and minimize our costs as we do that scaling. It also allows us to have a very agile development approach. We no longer need to do a full dev-test-prod software development lifecycle, where we are constrained and it takes time; if a particular service needs to be modified, we can modify it and release it into the production environment. It gives us the opportunity to leverage platforms — I'll talk about this in a minute. It's important when we approach cloud native that we look at what platforms we're going to be using, because that's going to help our developers, and helping our developers is a very important part of a cloud-native environment. And lastly, it accelerates our deployment, because it means we can use an agile project management approach with stand-ups, and it allows us to very quickly make our changes and release those changes to our production environment.
OK. The issue, though, that we need to address when it comes to cloud native is increasing complexity. As we move into an environment where we have these multiple APIs, we need a policy and an approach to how we're going to manage those APIs: what tools are we going to use, and what sort of security procedures do we need to put across them? We cannot let developers decide that individually for different containers within our organization; we need a common approach. When it comes to our development staff, again, we need a management process where we provide the developers the tools they need to do their job. In a cloud-native environment, we can no longer expect a dev or an ops person to really understand where everything is. We need to give them tools so that when a change is made, the platform will deploy it to the places it's needed within that cloud-native environment.
In many cases we are going to automate that — we have to automate the CI/CD pipeline — to allow our developers to do the bits they're good at, and then have the platform look after how changes are actually managed and put into practice. There are lots of platforms now coming along. It's important that we choose the platform we are going to be using — and in some cases there will be multiple, because it depends on the functions you need. But once that decision is made, it's made; we don't want it to keep changing. Developers are great at saying, well, this is the latest thing that we need, and we need to move in this direction — you need, as an organization, to put some controls around that.
Okay. So, just to summarize why OPA: in many cases, this is really the only option you have to get control of an authorization service within a cloud-native environment. It provides us a common model for our policy decisions, and it offloads the access control logic that we've typically put in our applications to an external, open environment. What that does is provide us visibility into the policies. Those that have access to the management screen — and Styra are going to be showing you what the management screen looks like in a minute — are able to understand the policies that have been implemented, and that gives the organization visibility into how those policies are put together. Okay, we're going to move to the second poll, and then Styra will be taking over and showing us a little bit of the solution they have available in this space. So, poll two: if you could give us an indication of the cloud environment within your organization, that will allow us to understand at what point we are in the development, and what the interest is in moving to a cloud-native approach.
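What "offloading the access control logic" looks like from the application's side can be sketched as follows. OPA's documented REST API accepts a POST to /v1/data/&lt;policy-path&gt; with a JSON {"input": ...} body and returns a JSON {"result": ...} decision; everything else here — the field names, the user, the resource path — is invented, and no real network call is made in this sketch:

```python
import json

# Sketch: the application builds a JSON "input" document for OPA and then
# only interprets the JSON decision that comes back. The authorization
# logic itself lives outside the application, in the OPA policy.

def build_opa_query(user, action, resource):
    # Body shape of OPA's Data API: {"input": {...}} (illustrative values).
    return json.dumps({"input": {"user": user, "action": action,
                                 "resource": resource}})

def enforce(opa_response_body):
    # The app acts on the decision; it contains no permission logic itself.
    decision = json.loads(opa_response_body)
    return bool(decision.get("result", False))  # missing result -> deny

query = build_opa_query("alice", "read", "/reports/q3")
# A permit from OPA would come back as e.g. {"result": true}:
print(enforce('{"result": true}'))  # True
print(enforce('{}'))                # False
```

Because the application only sends context and reads back a decision, the policy can be released, reviewed, and changed without touching application code — which is the decoupling described above.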
Okay. That's great. Okay, Jeff, over to you.
Yes. Perfect. So, thanks. My name is Gustaf Kaiser, and I'm running Styra here in Europe. Together with me today I have Jeff Broberg, our Director of Product Management, who will be talking a little more about the specific use case and show you some slides about how our control plane looks. I will put it a little bit in perspective first. So, first of all, as Graham said, why did Styra create OPA? We saw the same thing that he already described: when people are moving from a monolith to cloud native, the points where you need to do authorization go from a dozen to twenty to hundreds or thousands, and simply writing authorization code in the application doesn't scale — the cost of change, the cost of development, the cost of audit will be too high. So the solution we came up with was the Open Policy Agent, combined with a central control plane; I'll come back to that a little later on.
So how is it developing? Well, we need to say that OPA has become a real success. When I joined as the first employee in Europe a year and a half ago, we had 7 million downloads, and I think the OPA Slack had 1,700-something members. If you're interested in open policy, please join the OPA Slack — there are around 5,700 happy people there now, talking about OPA and answering your questions. We also heard a week ago that we passed 130 million downloads, so the uptake of OPA has been really, really great. There isn't any other authorization solution that comes even close if you look at deployment and scalability today. We entered the Cloud Native Computing Foundation as an open source project in 2018 and graduated in 2021. So we are, in that way, a young but still mature and tested technology.
And why are people using OPA? Every year we do an OPA survey — we're doing it around this time again, and it should go out shortly — and this is from the 2021 edition. The main use case is that people want to do internal compliance and governance. They already have policies and guidelines in place, but what they want to do is capture that as policy as code and automate it — that's the main reason people use OPA. The other thing is operational excellence, and that really means that if you need to change your authorization logic, you can do it in a central way in the control plane and push it out to the applications that are impacted. The alternative is, of course, rewriting the authorization code in 10 or 20 or 50 microservices, but that will simply not scale. So this is a way to move faster in a cost-effective way.
The third one is actually implementing end-user identity and access management, and that's a little bit of what Jeff is going to talk about here in a few minutes. If you look at the use cases, the most common use case is Kubernetes admission control, followed by application microservice authorization, which we see growing fast. If you do your math quickly here, you'll see that this actually adds up to a lot more than 100%. The reason for that is that most users of Open Policy Agent have more than one use case: people usually start with one, and then it tends to grow over time. What we also see is that already a third of our users are in production, and another third are in what we call pre-production, which means they will go into production within six to twelve months. So yes, it's a young technology in one way, but it's already in production in large enterprises and large deployments. Short and sharp — here's a picture where I want to add a few things.
So OPA is really versatile. It's built with the purpose of handling all the authorization needs you have in a cloud-native environment. The service could be anything: it could be a Kubernetes admission controller, it could be Terraform, it could be a CI/CD pipeline, while the resource could be your microservice or a database connection. We see all of these as use cases, and the reason for that is that OPA is so flexible. You can ask OPA any JSON query — and this is rather important. You can send us any question you want; we will evaluate it with the set of policies we have, together with the data you send to us, that we have stored in memory, or that we fetch. So we are very, very flexible there as well. And we can also answer back with any JSON. OPA is very flexible, and that's why people and developers really, really like it.
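The "any JSON in, any JSON out" point can be illustrated with a toy evaluator standing in for an OPA policy: a decision does not have to be a plain allow/deny — here the answer is a list of visible columns. All names and data in this sketch are invented:

```python
# Toy stand-in for a policy evaluation: arbitrary JSON-like input in,
# arbitrary JSON-like answer out (not just a boolean).

def evaluate(policy_data, input_doc):
    # "Which columns of a record may this user see?"
    role = policy_data["user_roles"].get(input_doc["user"], "anonymous")
    visible = policy_data["visible_columns"].get(role, [])
    return {"columns": visible}

policy_data = {
    "user_roles": {"gustaf": "admin", "jeff": "support"},
    "visible_columns": {"admin": ["name", "email", "salary"],
                        "support": ["name", "email"]},
}

print(evaluate(policy_data, {"user": "jeff"}))
# {'columns': ['name', 'email']}
```

The same engine could just as easily answer an admission-control question for Kubernetes or a Terraform plan check — only the input document and the policy change, which is the versatility being described.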
And what are we doing as Styra? Of course, we still invest a lot in OPA and enhancements on the open source side, but, like every open source company, we also have a commercial offering on top of the open source product: Styra DAS, and that's what Jeff is going to show you in a few minutes. That's what we call the Styra Declarative Authorization Service. Normally, when we start talking to companies, they tell us: okay, we've been using OPA for three to six months in a tactical deployment, we really like it, and now we want to deploy it at scale — do you have best practices, do you have tooling? And that's what we have done: we have taken the best practices from all the other people we've seen using OPA and put them in a central control plane. That's really been the aim — to make it easy to deploy your OPAs and write policies.
So it's a single pane of glass. You can write your policies, version-control your policies, and test and impact-analyze your policies before you put them in production. You can make sure that the right set of OPAs — because you will not have one, you will not have ten, you will probably have a couple of hundred Open Policy Agents — gets the right set of policies and the right set of data. We also have a function to look at the decision logs of all your OPAs, your whole estate, which means you can do central audit and central governance. On top of that, we also have pre-built what we call system types, which means you can get up and running within a few hours to a few days for different use cases. We have pre-built systems with integrations, policies, and enforcement points ready out of the box for you today — for Kubernetes, Terraform, Envoy, Kong, and Kuma — and, as a recent add-on from three or four weeks ago, the Entitlements use case that Jeff is going to show you now. So now I hand over to you, Jeff.
Great. So, I'm Jeff Broberg, here to talk about OPA and cloud native, and really what we're starting to offer in this environment. Within cloud-native environments, we just talked about OPA — JSON in and JSON out — and we've talked about a lot of different things you can use this for: Terraform or Kubernetes, service meshes or gateways. But what we really see is a lot of organizations now starting to say, you know, how can I unleash my existing authorization information? They've spent a lot of time developing their entitlements. Maybe that information is stored within AD, maybe it's in some systems that they've built, but they really want to find a way to use that information. They don't want to recreate it; they want to build off of it. So one of the things we've done follows from noticing that, as you saw, OPA is JSON in and JSON out.
One of the big things that's really important is to be able to extend, or allow organizations to use, the information they already have today — those enterprise-grade data sources such as LDAP or AD, or using OpenAPI v3 or SCIM and things like that — and to use that information to help make policies. The other really big thing we've started to notice, which our customers are telling us in the context of developing entitlements or authorization for their applications, is that they have different types of personas inside their organizations. They normally have an IAM team that's responsible for setting up the general guidelines, and then they also have application teams that are responsible for developing the various applications that they have. And there needs to be some way in which these two teams can collaborate, so that they are moving forward and really understanding what needs to happen.
The other angle on this is that a lot of times IAM individuals are not coders. They don't write code; normally they just have their policy in maybe some sort of PDF or spreadsheet, and they need to be able to take that information and turn it into policy. Not being able to do that in code can be a real obstacle. So what we've done is provide a way to reduce the amount of code they need to write: what we've delivered is out-of-the-box policy snippets. Gustaf, forward, please. So really, if you think about it from the perspective of these two teams — the personas of the IAM team and the application team — there are a few things they'd like to be able to do, and these are optional as well.
You may not have this situation, where you have this sort of bifurcation of an IAM team that wants to define, these are the data sources, these are our users, these are our groups, and these are our roles, and make those data sources available to the application teams to use. Sometimes the IAM team might want to set up certain policies for the entire organization, regardless of the applications — one of those policies might be that every request that comes in has to have a valid user, right? That might be a type of policy they set up. And the application teams want to be able to live in their own world, but they have to understand that there are going to be some of these constraints: the policies the IAM team has placed in front of them, and the different types of data sources that have been made available to them.
And of course, this isn't a one-to-one relationship; it's an N-to-N relationship, where I can have as many of these overseeing governance teams as I need. I could have one providing just IAM information, another one just defining the policy elements, another team defining the data, or a compliance team — it really depends on what the organization needs. And the applications — you can have as many of those as you'd like: one, fifty, a hundred, or a thousand — and they can integrate into the cloud-native environment in various different ways, as service meshes or gateways or just normal microservices. Gustaf, next, please. So when you think about this whole notion of entitlements, and you think about OPA being that generalized agent that can make those decisions, you need to find a way to start to normalize the information you have inside your environment, where you might have a lot of different data sources that have accumulated over time.
And so, really, at the center of the whole way in which we attack this entitlements problem is an opinionated object model that supports both RBAC and ABAC situations. An opinionated model basically says that we have a perspective, or an opinion, on what a user looks like, what a group looks like, and things like that. And that allows us to do a lot of fun things. There are two basic sides of the equation related to this opinionated object model. The first one is that, to make good decisions, you need context, right? You need data, you need information. You need to understand who the user is, where they're going, what they're trying to do, what the time of day is — all these different types of information.
The more context you get, the better. So on the left-hand side, what we see here at the beginning of the journey is that a lot of our customers are saying: we have existing data sources, we already have things that are out there, and we want to be able to utilize that information. Now, one thing to understand about Styra DAS and the Entitlements system: we are not attempting to be an identity management system — we don't want to be an identity management system. We are an authorization system, right? So, in that vein, what we really want is to be able to utilize the information that already exists inside these environments. As I mentioned, that data could be coming from many different types of sources — LDAP, HTTP, SQL, XML, who knows where it may be coming from — but we need to be able to take those data sources and transform them, convert them from their original flavor, that original raw format, into this new opinionated model.
And those transformations, once again, follow this whole notion of policy as code, where you can version everything that you're doing, so nothing is hidden. Even these transformations are code — they are Rego — and that is what allows you to do these things. On the right-hand side, we're going to see that, besides having our data within this model, we want to be able to ask questions of that model. We want to make decisions: can Jeff access this resource or not? And to do that, what you normally do is write some Rego code, and that enables your policies — policy as code. On the other side, we have these snippets, which we'll talk about in a minute, and which really help people who are not developers understand what a policy is and how to configure it. Next.
So, as I mentioned, the opinionated object model supports both an RBAC perspective, where it knows about things such as users and groups and roles and resources, and, at the same time, an ABAC model. A lot of the companies that speak to us are going through this transformation, where they have something they're trying to move toward externalized authorization, and they normally attempt the RBAC model first. Other organizations have already gone through RBAC and want to get to ABAC — to do more attribute-based access control — and we support both cases here. One of the nice things about the environment is that we can actually support both at the same time: one system, one application, could be supporting RBAC and also start to evolve, or move over, into the ABAC model if you want to. One really interesting item to think about, whether you come at this as policy-as-data or policy-as-code: when you're more in an RBAC model, as we can see in the bottom diagram here, for the majority of the decisions you're getting — whether Jeff can access a resource, or whether Gustaf can do something — a lot of that inherent knowledge is actually embedded within the data, right?
So the actual policy that you have — the Rego code that you're actually developing — is very small, because analyzing an RBAC system to understand whether someone has access to something is really not that difficult: it's not a very complex set of rules, and it doesn't change that much. That's why we say that in a policy-as-data model, the majority of your decisions really come from the data that's in there, as the data changes over time. But on the other side of the equation, if you start to go to ABAC, and you start to say, I want to know whether Jeff has certain attributes that match what the resource needs, the amount of policy you're going to be writing is a little bit larger, because you're actually changing a little bit more based on the policy that you want, as opposed to the data itself.
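The policy-as-data versus policy-as-code contrast described here can be sketched side by side. Both functions are toy stand-ins for Rego rules; the users, roles, and attributes are invented for illustration:

```python
# RBAC: the "policy" is one small, generic lookup; the knowledge lives in
# the data (role assignments and role permissions), which changes over time.
rbac_data = {
    "user_roles": {"jeff": ["engineer"], "gustaf": ["sales"]},
    "role_permissions": {"engineer": {("read", "repo")}},
}

def rbac_allow(user, action, resource):
    return any((action, resource) in rbac_data["role_permissions"].get(r, set())
               for r in rbac_data["user_roles"].get(user, []))

# ABAC: the rule itself carries more logic, comparing user attributes
# against resource attributes directly.
def abac_allow(user_attrs, resource_attrs, action):
    return (action == "read"
            and user_attrs["department"] == resource_attrs["owning_department"]
            and user_attrs["clearance"] >= resource_attrs["required_clearance"])

print(rbac_allow("jeff", "read", "repo"))  # True
print(abac_allow({"department": "eng", "clearance": 2},
                 {"owning_department": "eng", "required_clearance": 1},
                 "read"))                  # True
```

Notice that changing RBAC behavior means editing data, while changing ABAC behavior usually means editing the rule — the spectrum the speaker is describing.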
And there is a real spectrum here in how these things go. Then, of course, on the very far right-hand side, you have the situation where most of the logic is in code: I just want basic input — I'm going to send it some context information, but the majority of the decision is going to come from the code. So this is a really interesting spectrum to think about when you try to understand how much visibility you have into how the system is changing. Next, please. To enable you to go through and get that data — how do we provide more and more context to you? — one of the things that we've done is that, instead of just having regular JSON, we've provided a few other types of data sources, based on what we've heard from our customers about where they normally store their identity information, or where they extract it from.
So the first one: of course, you can just create a JSON file and go out there and define your users and groups in this opinionated model, if you'd like to. Or you could store all of your users and groups and these types of information in a Git repository, in either a JSON, YAML, or XML format. I could do the same thing by putting my JSON, YAML, and XML in AWS S3 buckets as well. The other thing that we provide here: we've noticed that a lot of people want to be able to access the identity information they already have over HTTPS. So we deliver that — it allows you to access SCIM, or use OpenAPI v3, to get your resources.
And of course, one of the largest requests: a lot of identity information is stored within LDAP, so we provided an LDAP data source as well that allows you to go out and retrieve your users and your groups. All of these data sources have some unique characteristics. First of all, they allow you to set a refresh interval — a period of time after which we need to go back, get the new data, and then understand what has changed. You also have the opportunity to transform this information from its original format into the model we need — I'll talk about that in a second. And third, a lot of times you might see organizations that are working in a hybrid model, where some of their identity information is stored on premise, and they don't want to open up their ports to allow access to it.
So these data sources also support an on-premise data agent to provide that accessibility. And out of the box, the environment provides transformations from these different types of data sources — LDAP users and groups, SCIM users and groups, OpenAPI v3 — or you can actually create your own transformations if you'd like to. So when you think about data sources: there are a lot of different types of data sources you may have inside your organization, as I just mentioned. All the data sources that we have — LDAP and HTTP and the others — when they retrieve their information, they bring it back in a raw format, and that is always going to be JSON. So when we bring back LDAP, it's going to come in an LDAP format — I'm sorry, a JSON format. When we bring down SCIM, eventually it's going to be in a JSON format, and so on going forward.
You can have as many data sources as you would like for your systems. As you can see on the right-hand side over here, in this Acme car-info service, I've got a few different types of data sources that are defining my actions and the days of the week; I've actually retrieved my users from some sort of HTTP feed, or you can see I'm using JumpCloud to get some users down here, or S3. So I can actually have as many different data sources as I would like, and that's very useful. A lot of times you see organizations that have gone through acquisitions, or maybe they have their employees in one data source and their partners in another; this really allows you to bring all those different types of data sources together. And the notion of objects: you want to start to be able to treat this data as an object, so you can start to make decisions on it, evaluate it, and do different things with it. So, inside of the system, we now allow you to use an object representation on top of your data. If you do have this notion of multiple data sets per user or per group, you can actually join them together, do additional processing on top of them, and do other types of things that you may need to do to get that data exactly as you want it. All right, next, please.
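The idea of joining multiple data sets for the same user into one object can be sketched like this; the source names and fields are invented for illustration:

```python
# Two illustrative data sources holding different facts about the same user,
# e.g. an HR feed and a partner-directory feed after an acquisition.
hr_feed = {"jeff": {"name": "Jeff", "department": "product"}}
partner_feed = {"jeff": {"groups": ["beta-testers"]}}

def merged_user(user_id, *sources):
    # Build one object representation on top of the raw per-source records.
    user = {"id": user_id}
    for source in sources:
        user.update(source.get(user_id, {}))  # later sources win on conflicts
    return user

print(merged_user("jeff", hr_feed, partner_feed))
# {'id': 'jeff', 'name': 'Jeff', 'department': 'product',
#  'groups': ['beta-testers']}
```

Once the records are joined into one object, a policy can reason about the user as a whole instead of knowing which source each attribute came from.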
So the way this notion works is, if you think within your organization, on the very far left-hand side you have a lot of different types of data sources. I explained a few of the ones that are available right now, and additional ones are coming out over time. Basically, you use these data sources and you extract the raw information from them. Maybe one organization may say, for us, our users are coming from LDAP. So that raw extract of the LDAP user is going to bring that information back in JSON. They may do the same thing for groups, for roles, and for resources that are coming from HTTP. The transformation then allows you to take that information and convert it into an opinionated object model of what a user looks like, what a group looks like, roles and resources.
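As a rough sketch of that extract-and-transform step, the following Python snippet turns a raw LDAP-style record, already serialized to JSON, into a small opinionated user object. The record layout, field names, and target model here are invented purely for illustration, not the product's actual schema:

```python
import json

# A raw LDAP-style extract, as it might arrive from the data source in JSON.
raw_ldap_user = json.loads("""
{
  "dn": "cn=alice,ou=people,dc=example,dc=com",
  "cn": "alice",
  "mail": "alice@example.com",
  "memberOf": ["cn=engineering,ou=groups,dc=example,dc=com"]
}
""")

def transform_user(raw: dict) -> dict:
    """Map a raw LDAP record into a hypothetical opinionated user model."""
    return {
        "id": raw["cn"],
        "email": raw["mail"],
        # Keep only each group's common name, dropping the LDAP DN noise.
        "groups": [g.split(",")[0].removeprefix("cn=")
                   for g in raw.get("memberOf", [])],
    }

user = transform_user(raw_ldap_user)
print(user)
```

The same pattern would apply to groups, roles, and resources: extract raw JSON, then run a transform per record type.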
And at the end of the day, if you think about it, maybe let's say your IAM team has defined for your organization that your users and roles are coming from AD, and they give those to you and you're able to use them. You as the application team now want to be able to link those roles that your IAM team has given you to your resources. So there's this whole notion of going through extract, transform, link. But even more importantly, in the context of OPA and distributed authorization, you want to make sure that when you send out that information to all these various endpoints, you are sending only the information that's needed. You don't want to send too much, right? There are a lot of different reasons why you want to shrink that information down as much as possible: the latency, the size of the machines that you need. There are a lot of reasons to make sure you are really reducing it down to just what you need for that particular use case. Next, please.
So here's just a quick example of a transformation. Showing on the left-hand side here is an HTTP data source that was bringing in some users. You can see a pretty complex user record that has a lot of information, as far as their address and things like that. And in the middle, what you see here is actually some Rego code. When you talk about OPA, the language that's used for OPA is Rego. Rego is a declarative language that allows you to define exactly what we're seeing right here. And the nice thing about having Rego and things like this is that you can actually have it stored in Git. You can put it under version management, and you can understand exactly when someone has changed something inside the environment, if things do change. And on the right-hand side, we can see the newly converted, or transformed, information, from the raw input format out to the opinionated object model. Okay, next please.
So there are really two sides of the equation, right? I mean, to make a decision, you need two things. You need to have some context, right? And we just talked about data sources, how you can take your enterprise data sources that already exist and transform those into this new format. And the question is, well, why would I want to transform that into this new format? What does that help me do? Because normally I would just go up there and start writing Rego. So, next, please. We came up with this notion of snippets, because what we realized when we started to talk to everyone is that there were different personas, right? There was this IAM team that wanted to set up the guardrails and the data sources and things like that. And they had compliance people that wanted to come in and sort of look at the rules.
You had other people that were not like that, and maybe wanted to go a little more in depth; they wanted to look at different things. So the notion here, with the snippets we see on the left-hand side, is that there's a snippet that says: RBAC, role allows subject access to a resource. Here's one rule that is implementing RBAC. Now, what happens behind the scenes here is unknown, right? Because RBAC uses the data model to sort of figure out if things are true. And remember, that's really what I was talking about with policy as data versus policy as code. But then you can see there's another snippet here that's just doing a match on the day of the week, and someone has entered Saturday and Sunday. So those are very easy to understand for an auditor, or a person who's trying to understand, what are my policies?
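To make those two snippet styles concrete, here is a hedged Python sketch (not Rego, and not the product's actual data model; all names and data shapes are invented) of an RBAC check driven purely by data, next to a simple day-of-week match:

```python
# Policy as data: the rule logic is fixed; the bindings below drive the outcome.
role_bindings = {"viewer": {"alice", "bob"}}          # who holds which role
role_permissions = {"viewer": {("GET", "/cars")}}     # what each role may do

def rbac_allows(subject: str, action: str, resource: str) -> bool:
    """Allow if any role held by the subject grants (action, resource)."""
    return any(subject in role_bindings.get(role, set())
               and (action, resource) in perms
               for role, perms in role_permissions.items())

# The day-of-week snippet: here someone has "entered Saturday and Sunday".
WEEKEND = {"Saturday", "Sunday"}

def day_matches(day: str) -> bool:
    return day in WEEKEND

print(rbac_allows("alice", "GET", "/cars"))  # True
print(day_matches("Monday"))                 # False
```

Changing who may do what means editing the data (`role_bindings`, `role_permissions`), not the rule itself, which is the point of the policy-as-data framing above.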
I can look at these snippets and I can understand them. If that person wants to go down to the next level of understanding, they have an opportunity to view that exact same rule or snippet in the policy builder. And the policy builder allows you to look at things in more of an if-this-then-that metaphor. So people that want to really start to understand the mechanics of how things are running can do that. But then again, at the last level, you may say: I am a developer and I really want to understand the code that's going on. I want to see that Rego code. I want to understand that if I have a decision that is checking for the day of the week, how is that being done? I want to understand what some of those best practices are. Okay, next please.
So here's what you would do with a snippet. Now, we talked about OPA, and we talked about really trying to take OPA to enterprise grade, and not just having people writing code and things like that. So this is Styra DAS, a Declarative Authorization Service. What I'm seeing right now is that I am in a system, the Acme car info system, and I am looking at one of the ways in which I can define policies. This particular way gives me what we call the swim lanes, and it allows me to instantiate rules and to place them into one of three different modes. Now, one of the things that you want to do when you're developing your policies in identity and access management is to be able to test out your policies before you put them into play, so that you know exactly what they're going to do.
And sometimes I may be in the process of defining those rules, and they're not quite completed yet, or I haven't set all my parameters, and I might want to take those rules and put them into an unused state. That's the left-hand column that we see here. Another case is, I might have rules that I'm defining and I want to see how they're going to run, what's going to happen. So I want to put them into the pipeline. I want them to be evaluated, but I don't want them to contribute to the decision. I just want to monitor them. I just want to understand, if I put this rule in there, what may occur, because that would allow me to review decisions over the next week or two, look at this particular monitored rule, and find out how it behaved.
And then the last one, over on the right-hand side, is that I can actually come in and say: these are the rules that I want to enforce. How are they going to be doing that, and what are the specific things that are available for them? Okay. So this really allows you as a developer, or as the IAM team that's going to be putting these things together, to understand what I'm monitoring, what I'm working on, and what rules are currently in play at this point in time. And of course it's a lot easier, because you can configure these things right on the screen, as opposed to going into the code. Next, please.
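A minimal way to picture the three swim lanes (unused, monitored, enforced) is a rule pipeline like the toy one below; the rule names and the structure are assumptions made for the sketch, not the product's actual implementation:

```python
# "unused" rules are skipped entirely; "monitored" rules are evaluated and
# logged but never change the outcome; only "enforced" rules contribute.
def evaluate(rules, request):
    decision, monitor_log = True, []
    for rule in rules:
        if rule["mode"] == "unused":
            continue
        result = rule["check"](request)
        if rule["mode"] == "monitored":
            monitor_log.append((rule["name"], result))   # observe only
        elif rule["mode"] == "enforced":
            decision = decision and result
    return decision, monitor_log

rules = [
    {"name": "weekday-only", "mode": "monitored",
     "check": lambda r: r["day"] not in ("Saturday", "Sunday")},
    {"name": "valid-user", "mode": "enforced",
     "check": lambda r: bool(r.get("user"))},
]
decision, log = evaluate(rules, {"user": "alice", "day": "Saturday"})
print(decision, log)
```

Reviewing `monitor_log` over a week or two, as described above, would show how the monitored rule behaves before anyone promotes it to enforced.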
One of the things that we deliver is an out-of-the-box set of libraries, or snippets, that allow you to do things. For example, we mentioned that we support both ABAC and RBAC. So here you can see some of these snippets that allow you to check if the attributes coming in on the request are going to match what's on the resource: matching if a user and a resource have similar attributes. You can see some generic calendaring-type snippets here as well. And down below this, there's the RBAC snippet that allows you to basically go in and just say, with one thing, that I'm implementing RBAC. But once again, you could set up RBAC and also set up ABAC at the same time. You can have one system running under both constructs. Okay. The way I think about it is that these libraries, these rules that we're seeing, are nothing more than class definitions. I can instantiate each one of these rules as many times as I like, and I can parameterize each one to make it act very differently. Next, please.
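That "snippets are class definitions" idea can be sketched like this; the class and its parameters are hypothetical, but they show how one snippet definition, instantiated twice with different parameters, acts very differently:

```python
class DayOfWeekSnippet:
    """One reusable rule definition, parameterized per instance."""
    def __init__(self, allowed_days):
        self.allowed_days = set(allowed_days)

    def evaluate(self, request: dict) -> bool:
        return request["day"] in self.allowed_days

# Two instances of the same "class definition", parameterized differently.
weekend_rule = DayOfWeekSnippet(["Saturday", "Sunday"])
weekday_rule = DayOfWeekSnippet(["Monday", "Tuesday", "Wednesday",
                                 "Thursday", "Friday"])

print(weekend_rule.evaluate({"day": "Sunday"}))  # True
print(weekday_rule.evaluate({"day": "Sunday"}))  # False
```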
One of the other things about snippets, besides being able to visually show what a person is doing and allowing them to enter values (and a lot of those values could be coming from data sources; you can point to those things), is that you have the option to filter snippets, to make them much more specific to a certain circumstance. So here we see that I have a snippet that's talking about Monday through Friday, and we can see down at the bottom that this is going to be an allow operation. So right now, if this snippet were to evaluate and it was Monday through Friday, it would set an allow operation. But maybe I want to become a little more specific. In the middle, we see that I've decided to filter this particular snippet by my subjects. I want to basically go in and say there's going to be a list of users, or a single user, that I want this rule to become specific for. So in this scenario, I select the subjects and then I'd be able to select Alexei, and now this rule, or this snippet, has become very specific for Alexei: allow her to work on Monday through Friday. Next, please.
So, all those things I just spoke about a moment ago: we spoke about the whole notion of DAS coming in and giving you an environment in which your IAM teams can go out and develop (if they want to; remember, it's optional) the existing enterprise data sources that host your identity and access management entitlement information that you want to make available. One of the very first things that IAM team wants to do is go through and define which data sources they want their application teams to use. So they may go through the process of defining users and groups and specifying that, say, my users are coming from AD and maybe my groups are coming from another location; really just setting these things up and then making them available for the application teams to use. Besides setting up the data sources, remember, the IAM teams are also setting up the various types of policies that they want all of the applications to honor.
In the case here, we see this IAM team has specified that they're monitoring, for some reason, whether any access is coming in between two and four o'clock, which theoretically would be a deny operation. Maybe they have something that happens during those periods of time, and they want to understand, if they place this rule into the pipeline, what's going to occur. But we see that the IAM team has already placed into the enforced lane the things they are checking on: certain IP addresses that are coming in, and making sure that every single request coming in across the entire environment needs a valid user. And from here, all this information is given, or made available, to the application teams. And those application teams are able to go in, take the information that was provided by that IAM organization, and augment it. They can say, okay, this is great.
I understand the data that you gave me, but I also need to go get some data for some other things as well. So they can take the data that the corporation has given them, use it, and also make it more specific for their own needs. And they can do the same thing with their policies. They understand the policies that the company has placed in front of them, but they can come in and say: here are some of the policies that we need for our application. Now, if you think about it, through this whole process there's a situation that may occur where the IAM team has placed a rule or a policy in front of all of these application teams, and one of those rules may cause a problem for one of the teams and would not allow their application to work.
So one of the other things that you need in this collaboration and coordination between the IAM team and the application team is the opportunity for the application team to ask for an exception to a policy and say: this particular policy that you've given me is going to cause me a problem for the next month or two, so could you please remove it for me? In this case, the IAM team can review that and get it recorded, so that later down the road, when the auditors come in and say, hey, why wasn't application X adhering to this policy, you have some record of exactly why that request was made, who authorized it, and who did it. So it's a really nice flow for how all of these things go together, with different mechanisms so you can have developers, people looking at the policies, and collaboration. Next, please.
So, as Gustav mentioned, there are these notions of: how do you take OPA to scale? How do you bring it out into the enterprise? How do you start to build entitlements that are going to go across your entire organization? How are you going to be injecting identity when you're doing your gateway work, or when you're doing Terraform or CI/CD? That's really what Styra DAS is intending to do: across the entire cloud-native stack, taking identity information from the existing entitlement systems that you have and making it available across that spectrum. It really allows you to author these particular types of policies. How do you build them, evaluate them? How do you do your impact analysis? How do I come in and review, if I make a change to a policy, what's going to happen?
Because I want to know that before I do it. How can I take a particular decision that's already occurred and understand what happened, so I can do some sort of forensic analysis, or an auditor can come in and verify that everything's true? How do I go out and distribute my hundreds or thousands of OPAs to all these various endpoints, so I have the level of granularity to really support zero trust? And how can I monitor all of these things, so that I understand how my system is behaving and what's going wrong? Am I starting to find out that things are thrashing? Do I need to act? This really gives you a better perspective, from a central control plane, of how all this works. And then at the very end of the day, you need to justify this and have your auditors come in and look at these logs, going forward.
So this is really the whole picture of how all these things come together. It's a new way, and a lot of IAM teams are starting to notice the benefits that companies are seeing as they've gone through this cloud-native transformation and are getting value from policy as code and OPA, down at Terraform, at Kubernetes, and now at the gateways and service meshes. It's just a logical transformation for the IAM teams to say: hey, we would love to come in and start to help in the DevSecOps environment. So with that, next, please. That's it from me, and I think this goes back to you, Graham.
Okay, we are into the Q&A session then. And maybe what we'll look at first is the result of the polls. So, Oscar, if you could help us with the results of poll one. In terms of developers, we have, oh, nearly 60% of you in that 10 to a hundred developers range. So a lot of you are in the development world, and nearly 20% in the large developer world. That's really good, thank you. Slide two: 30% are multi-cloud, very good, and 36%, nearly 40%, are oriented to containerized apps. So it's obvious that this is well entrenched now, and the solution is probably something that's going to be of great use to you, with a good percentage in a service environment. Excellent. Thank you. And the third one?
Okay, so 40% are using APIs, which would make sense, and 40% other. Okay, good. That gives us a really good indication of the audience, so I'm very pleased to get that information. In terms of your questions now, if you could enter your questions into the question dialogue box. One question I did have, Jeff: you mentioned the ETL approach. You're obviously supporting a large number of very disparate data sources with the out-of-the-box transforms that you provide. Is this some configuration step you must go through in order to normalize that data?
Good question. So the transformations are Rego code, as I mentioned, and we provide these out of the box for the very simple case of taking an LDAP user or LDAP group and transforming it into that opinionated model by default. But if you have a situation where you decide, I would like to bring in some additional attributes, let's say the employee's address, or whatever other information it might be, you're able to configure the data source query to bring in those additional attributes. You're also given the opportunity to modify the out-of-the-box transform to take the information that you brought into the raw format and place it into the object model. So it's normally a step where a person says: I'm going to define my raw data source, then I'm going to apply the transform and look at what that did. If I'm happy with that, that's fine; I can just go forward. If not, I can then go in and modify that transformation to my needs.
Could we have a question on reference models? Do you have a reference model that talks about how to pull that together? Is there a reference model approach that you take?
So, because identity and access management entitlement is so varied in how people have implemented it, the pattern or approach we've taken is really to have this reference model of an RBAC object model and an ABAC object model, providing these various transformations that are going to take your data and fit it into that, and then having these snippets able to execute on top. That was really the intent of the snippets: we wanted to have non-developers looking at these things, and we wanted to enable all these different organizations to work together, and the only way we could get snippets to work was if we knew how the data was structured. So from a reference model perspective, we like going this path, but we have noticed that some people are more into policy as code and some are more into policy as data, and there's just something they need to understand around that perspective. So from a reference model standpoint, it's a little difficult, because almost every single instance we've spoken to people about is different, right? So we have to give them general guidance and these basic building blocks, so they can use those to move forward.
Thank you. Another question here on resource attributes. So if we want to pass attributes back to an application, what are our options there?
So the entitlement system is OPA underneath, right? So your result structure actually gives you a yes or no: allowed or denied, true or false. But it also gives you the opportunity to respond with all the reasons you got denied, inline, and it gives you the option to return arbitrary data back to the caller. So, for example, we have different snippets like: return the roles for a user. If you would like to just get that information back out of the model, you can use a snippet for that as well. You can also, within your own rules or policies, send back arbitrary information through what we call the NCE object, and that's how you can transfer that information back to an application so it can do additional processing on its end.
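As an illustration of a decision structure that carries more than allow/deny, here is a hedged Python sketch; the field names (`allowed`, `reasons`, `data`) and the role lookup are invented for the example and are not the actual response schema:

```python
def decide(subject: str) -> dict:
    """Return a decision plus denial reasons and arbitrary extra data."""
    roles = {"alice": ["viewer", "editor"]}.get(subject, [])
    allowed = "editor" in roles
    return {
        "allowed": allowed,
        "reasons": [] if allowed else ["subject lacks the 'editor' role"],
        "data": {"roles": roles},   # arbitrary payload handed back to the app
    }

print(decide("alice"))
print(decide("mallory")["reasons"])
```

The calling application can act on `allowed`, log `reasons`, and use `data` (here, the subject's roles) for further processing on its end.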
Okay, a question on deployment models. What deployment models are supported by DAS, in terms of sidecar or whatever the range of models would be? And I guess there are still legacy applications; can we still support the legacy applications that we have, in terms of providing data to them to enable them to externalize their authorization? What models do you support?
You can. So if you think about it, there are many places at which you may want to integrate entitlements, right? You may have written a Go application that already exposes some microservices, and you would like to make some calls at that level. So you basically have, inside of your own applications, a call out to OPA, and you can do that via HTTP; in the case of Go, there's an SDK that you can use for it. We've seen a lot of companies that want to put it inside of a centralized Kubernetes environment by having it as a sidecar, and we've seen a lot of organizations that are going to go for a DaemonSet as well, and use it in that mode. There are a lot of different ways, but the one that a lot of people are interested in is: how do entitlements, or how does my identity information, interact with my gateway?
How do I take that information, and how do I get it into my mesh? So there's a lot of interest in how entitlements would actually contribute to the decisions being made inside Envoy, or Envoy-based environments. It's very exciting to see how a lot of organizations are bringing the cloud-native stack together and making sure that I'm not only using identity information at each layer of the stack, but am truly able to get my identity information utilized across the entire stack, all the way down to CI/CD.
Okay, great. So it's very versatile. In terms of this question: is the agent-based model the only one that's supported, e.g. reverse proxy? And how do you simplify upgrades? So there are two questions there. In terms of an agent-based approach, maybe just talk about that, and then how do we simplify upgrades?
Yes, you can do it as an agent in this case, so you can deploy it as a sidecar, but you can also actually employ it in certain cases as a library in code. So every option there is actually viable. As for the other one, how we support an upgrade, and I guess it's an upgrade of policies and data: the way we do it is that we build what we call bundles, which the OPAs download and then deploy, without any reason to redeploy the services or redeploy the OPAs. So it's hot patching in that case. You don't get any downtime, so you can update policies and data at runtime.
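The hot-patching behaviour described can be pictured with this toy agent; the bundle layout and method names are invented for the sketch and are not OPA's real bundle format:

```python
class Agent:
    """Toy policy agent that swaps bundles in at runtime, no restart needed."""
    def __init__(self, bundle: dict):
        self.bundle = bundle

    def load_bundle(self, bundle: dict) -> None:
        self.bundle = bundle            # hot patch: replaces policy data live

    def allowed(self, user: str) -> bool:
        return user in self.bundle["data"]["allowed_users"]

agent = Agent({"data": {"allowed_users": {"alice"}}})
print(agent.allowed("bob"))             # False
agent.load_bundle({"data": {"allowed_users": {"alice", "bob"}}})
print(agent.allowed("bob"))             # True
```

The running agent object is never recreated; only its bundle is replaced, which mirrors the no-downtime update described above.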
Very good. Thank you. Okay. We have time for one more question here. Just a clarification on the previous question in regard to the, the models being used. So how are access attributes attached to data for data resources and, and discovered by the policy engine?
So at this point, the way that the data acquisition happens is, well, most companies have to know where their data is, right? They have to understand, if I'm getting my resources from this location, how they want to transform that into the resource model, and how they want to bind these together. This is really important, especially when you have one organization, like the IAM team, delivering your users and your groups, and yet another organization delivering the application and binding this with their roles. What we've seen in a lot of cases is that organizations will actually place, let's say if they are using definitions of their endpoints as their resources, OpenAPI v3, people will place in the definition there what the security context is: what is the group bind, or the role bind, over there. And that normally allows the model to just click together at that point in time. This is one of the areas where we're really working a lot on the policy-as-data side, to get more knowledge on how those sorts of things interact.
Fantastic. Look, thanks, guys. I really appreciate it. It's been a fascinating webinar and I've thoroughly enjoyed it. For any of the questions that we haven't been able to answer, we will do that offline. But now we are going to pull the webinar to a close, with just a couple of slides there on the advisory services that KuppingerCole provides. I would like to thank all of those that have attended. We do appreciate you taking the time, and we'll follow up with you offline for any issues that you might have. Thank you.
