Some of the most common causes of cloud security breaches include system misconfiguration, dynamic system updating and patching, and unmanaged or leaked access credentials. The industry is applying different methods to overcome these challenges, including dynamic system monitoring and alerting, automated deployment pipelines, and access management, including credential and key management and rotation. But what if we could overcome all of these challenges with an immutable cloud infrastructure that could be accessed without any credentials that could be leaked or compromised?
You have probably all seen this figure where the exponential curve first grows slower than the linear one, but at a certain point the exponential growth exceeds the linear growth. Initially the disruption is maybe invisible, but later, as the exponential pattern moves forward, it disrupts the linear one. The usual example for this topic is artificial intelligence and how CPU and computation power increases in computers. But we have seen the same thing in the form of digitalization and the dematerialization of services. You probably can still, and will still, buy physical books, but most of you consume media that is streamed nowadays instead of purchased as CDs. And different markets will face the same disruption, like mobility: there's Uber and there's Airbnb. The same thing is also happening in the cloud, or let's say in data centers, traditionally, and maybe even today.
And of course also in the future, you would have dedicated servers that are provisioned for a certain purpose. So you have your database server and you have your application server. The first thing the cloud did was virtualize the hardware, so you could run virtualized images in a cloud. And as that was so convenient, instead of ordering new server hardware, you simply allocated new servers from the cloud provider, and you started getting more and more servers after that. The next development is moving into smaller computational units like containers, so now one server can run multiple containers. The other aspect of cloudification is that as more and more services were moved onto the cloud platform, the cloud platform providers started providing general-purpose services, such as hosted databases or message queues, and various services that can be integrated with your application.
So instead of having the one Oracle database connection, you can have multiple databases and data sources that are connected. And the third thing that is happening with cloudification is that the life cycle of the servers and services is getting shorter. With physical hardware, you need to install the system, you need to configure it, and you run it for years and years. In cloud environments, where it's almost an invocation, you can start a fresh virtual machine in a matter of minutes, or tens of seconds. And that brings us elasticity. So you don't need to prepare for the capacity by purchasing the hardware beforehand; you can have elastic rules that will scale your infrastructure based on the load you are facing. This means that the number of computational entities you need to handle in your infrastructure is growing, and it's growing in several dimensions. So we could say that this is an example of exponential growth in your data center, or in your virtualized data center.
But it's not just about the way the computational resources are organized. Along with the containers, and initially with the virtualized server instances, came this new concept of microservices. It's actually an old topic; you might argue that it's the same thing as service-oriented architecture, or that it's yet another form of object-oriented programming. But something was done right with microservices, because they are really gaining popularity in cloud environments. It means that where you earlier had your backend system, and it contained a front-end server and maybe an application server and a database server, now each of these entities is split into containers, and instead of running three servers, you could be running 30 or 300 containers. There was actually an example that some of you might remember: the web search engine called AltaVista. That was somewhere around 1995 or so.
And they said that AltaVista was a big service: it consisted of a total of five servers. Then, I guess it was around 2004, Google introduced the MapReduce paper, and when they did the test drives for the MapReduce paper, they said that the test runs were executed on a cluster of approximately 1,800 servers. So this really changed the way applications are built. Another change that is coming with cloudification is the DevOps model. I guess it was Netflix who said, "you build it, you run it," meaning that if you, as a developer, build an application, you will deploy it to production. You are responsible that the system runs, and if something's wrong, you need to go and debug the thing and prepare a new instance if there are issues. It means that where you earlier, in a more static data center environment, had maybe one or two product updates per year, in DevOps-like environments you might be updating your product a hundred times a day.
So that's a huge difference in the speed of operations. Another aspect of the DevOps development is that the traditional privileged access management roles change. Traditionally, privileged access management is sort of binary: you have the privileged users, the administrators, and then you have the normal users. But in the DevOps model, every DevOps developer could be in the role of a quality engineer or an operations person, or there could be someone doing database analytics and then operations. So instead of having just two roles, there are several roles. This is yet another example of how deployment, and the world, changes in the cloud environment.
So, as an example, let's apply the current, or traditional, ways of managing this environment. If you have, let's say, 20,000 entities in your network or in your data center, and it takes one second to update each of them, it means that it takes five and a half hours to update your infrastructure. And 20,000 entities in this multidimensional thing is not a big number; it could be hundreds of thousands or millions. So it is clear that linear models don't scale in an exponential environment. And now, in the context of identity and access management, we are seeing this problem with SSH key management. Traditional SSH key authentication works by managing the keys, and it's the same thing with passwords, or anything you need to do to configure your environment: it requires key changes or password changes in the target infrastructure.
It won't scale, or it scales very poorly; it can take hours or days. And now, if the environment is so elastic that things are spawned, are operational for milliseconds or a few minutes, and are then terminated, how can your model, where it takes hours to update, ever reach that computational entity? When we started thinking about cloud access and how to manage it, we came up with the idea that the only way to manage it is to do it without credentials. And here are the three things in how we formulate this. The first thing is that we of course need to authenticate the end user. We will use whatever methods there are: that could be a traditional username and password combined with multi-factor authentication, it could be biometric authentication, it could be federated authentication. But the pure and sole purpose of this step is to get the real identity of the user.
After that, once we know the identity, we have roles in the system, and the roles tell you what kind of access you can have in the infrastructure. The roles could be Unix system administrator, network administrator, database administrator, or operations. And then finally, based on the roles, we are giving access to the infrastructure without actual credentials. How we do that: the whole target infrastructure is configured to trust a certificate authority as a trusted endpoint, and the access is made using short-lived certificates that are issued on demand for the user, based on the role. This means, first of all, that the short-lived certificate helps us with the traditional PKI problem, which is certificate revocation. If the certificate is valid for only five minutes, and you think that you, for some reason, need to revoke it, how long does it take for you to go into some console, click the revoke button, and push the certificate revocation list out? It takes more than five minutes. So you don't have to worry about certificate revocation, because the target server knows that the certificate was valid for a maximum of five minutes.
The other aspect is that when we are issuing the certificates on demand, we are issuing them for an ephemeral key pair. That means we don't have permanent, for example RSA, key pairs in storage; we create an ephemeral key pair in memory, we issue the certificate for that key pair, use it for the authentication, and after that we can dispose of it. That way we don't have to worry about key rotation, or about how we securely store the private keys, because the private keys have no value without the corresponding certificate.
So with these topics, we can provide access without certificate management problems and without key management problems, and there are no passwords to vault and rotate. And with this model, where all the credentials are provided by the client and handed to the server when the authentication is made, the server gets all the information that is needed to make the access decision. There's no need to install any agents that do verification, and there's no need to call any central authority to verify it. All the information is available at the point when the connection request reaches the system, and the server can make the access decision based on that information. It means there are fewer moving parts in the server infrastructure, which is the most volatile and most elastic entity in the system. With this model, since all the target server configuration is static, there's nothing that needs to be changed, and we can also embrace this new term, that is, oh, it's an old term, called immutable infrastructure. That's one way you can harden your data center: you design your workloads, your instances or your containers, so that there's no persistent data and no need to modify anything. As an example, you could boot your servers from a CD-ROM, and you can't write anything on it. It means there can't be dynamic configuration changes, so no one can go and install rogue keys or backdoors in your system. This is hardening the deployment model. If you needed to make changes in order to grant access, that wouldn't work well with immutable infrastructure, but with this kind of static configuration you also get the benefits of immutable infrastructure. What we have done: we have built a product called PrivX, which uses these paradigms to provide access in the cloud environment. It's built the cloud-native way: it uses a microservices architecture, it separates identification from roles and from access, so we can provide role-based access for SSH and RDP connections, it uses ephemeral key pairs and short-lived certificates, and it works with immutable infrastructure.

So this topic is how we see you can handle the challenges where digitalization and dematerialization are hitting your infrastructure, also from the identity and access management perspective. If the dimensions grow big enough, there is no other way than to think outside the box and think about how you can manage this dynamic environment. And I can't see any other way than removing the credentials from this equation. I'm happy to tell you more about this later; we also have a booth where we can discuss this in detail, and I guess we have some time for questions. Thank you.
Thank you, Marco. I think your topic fits pretty well into one of the hot themes we are seeing at EIC, which I think has two facets. One is identity management for microservices and containers, and the other is containers and microservices for IAM. By the way, I just published this morning, or I think one is online and the other will go online soon, two blog posts, one on the one aspect of it, the other on the other one. Because what we really see is that there are a lot of things going on in that space, and I think we need to get better at protecting these dynamic environments, the entire DevOps world, the entire agile thing, but it also will help us to be more agile. So yes, I think that's one of the big themes for the upcoming months, and it's interesting to see a growing number of solutions appearing in a space which helps us solve some of the issues we are facing here. Let's have a look at whether we have questions. We have one: how do you produce certificates that are trusted? You talked about short-lived ones, which reduces the trust problem to some extent. Yep.
So in this setup, the trust relationship is anchored in the certificate authority that is part of the system. That must be secured and handled with care, because it is the core of the access, and it is trusted by the infrastructure. The certificates are then created by the CA, and they are trusted because of the known practices.
Straightforward, I would say. The other thing is probably more a comment. I would say we are at the end of the morning. Oh no, too fast. Okay, anyway, a quick hit on the gamification we had.