This session is about the journey of Kubernetes and Crossplane at Deutsche Bahn: providing platform consumers with a unified API for deployments, infrastructure provisioning, and applications in a cloud-independent manner, addressing compliance and cross-cutting concerns while providing a Kubernetes-native experience.
The journey has not been without challenges. The platform team has managed technical and functional requirements including an access model in an enterprise environment, user expectations of cloud native infrastructure usage, issues with excessive API load and shared resources, as well as controllers written by the team and open sourced along the way.
Okay, that's four folks, and I know them very well. And then, who has heard of Deutsche Bahn before? Okay, that's good. That's a relief. So let me set the scene a bit. My role is platform architect, so what I usually talk about is an IT platform, a development platform: provisioning of cloud resources or on-prem resources, applications, managing the life cycle, and so on and so forth. To introduce the topic, I have a couple of people here who have different problems. For example, the developer in a large company needs a database for the application to store a couple of things, and he needs to secure it according to company guidelines. That usually is a rather large document which he needs to go through, and that's a bit cumbersome, depending on how the IT organization or the service providers behind the IT organization are set up.
For example: if it's on cloud or on-prem, if you get direct access to the cloud environment, or if it's abstracted behind some other APIs, or if it's a platform-as-a-service system like Cloud Foundry. Sometimes the cloud resources are not compliant, and then the managers who own the cloud platform get bombarded by emails to please secure the cloud account: for example, encrypt the database, or tighten the security groups, or something like that. Another unfortunately very popular incident is that by accident a resource was made public when it should really be private. A very famous example are S3 buckets at AWS, or at any other cloud provider, which usually store a large amount of data and then by accident are made available, so everybody on the internet gets access to them.
And obviously that's not very good for the company. What all these things have in common is that they need a bit of a platform, an IT abstraction to go through. That's exactly what Crossplane does. Crossplane is open source software, and it's depicted here in the middle. An IT organization usually sits on top of those service providers and offers their services to its product teams, which are depicted at the top here. This is an example of an enterprise which has two major cloud providers, one on-prem provider which runs the data centers (in this case it's VMware), and a couple of SaaS services, either available as software-as-a-service solutions or run internally by different teams in the enterprise. What the IT organization wants to do is offer these services to the product teams, where usually the development happens, in an easy way and without much development effort. And this is exactly what Crossplane does: it's a bit of a proxy or facade where you connect service providers in the backend, and then you are allowed to do some modifications to the API and offer the services to those product teams.
It's a CNCF project; if you've heard of the CNCF, that's where Kubernetes is also hosted, and it's of course under an open source license. Crossplane itself is implemented on Kubernetes. It's an extension to Kubernetes, which you usually know as a control plane for containers. If you're familiar with Kubernetes, it's famous for the controller pattern, where a controller reconciles state: the to-be state versus the actual state. Crossplane extends this pattern to manage arbitrary APIs, and these are exactly the service provider APIs pictured below.
There are a couple of benefits to hopping onto Kubernetes, but in the end it's an implementation detail of how Crossplane chose to implement its services. So we are seeing a bit of a different approach here. It's not a full PaaS solution like Cloud Foundry or Heroku, where really just a tiny fraction of services is offered. And it's also not direct access to every kind of service provider API or service provider backend system, where usually those compliance and security concerns are pushed to the client. This is something in between: a platform API which those product teams are free to use, either via API or via a web interface, similar to the AWS or Google Cloud console,
and then basically provision those cloud resources. This is a bit of a deep dive; I don't want you to focus too much on the three layers in the middle. Again, we have those service providers at the bottom, depicted with their APIs. AWS, for example, has around 800 APIs which they offer to their users, Azure has around 700, and then there's usually, for every organization, a wide range of SaaS services they use: a secrets store, which may for example be Vault, an observability solution such as Grafana, a policy engine, which could be Styra, and obviously some messaging and database solutions which offer an API as well. A platform team usually sits on top of these services and offers them to their users. This is depicted here in three layers. Let's start at the bottom: Kubernetes controllers, which basically mirror the service providers' APIs into the Kubernetes ecosystem.
You can describe Kubernetes as many things, but in the end it's a database of things. It's a database where stuff is stored, and that's usually descriptions of your containers or your deployments or your services, or in this case whatever resources you chose to connect to this Kubernetes instance. Really, it's a one-to-one mirror. When you interact with this Kubernetes layer, you have the same language to speak to AWS, to Azure, to Grafana, to GitLab, or whatever you connected. You don't need to integrate all those things in different ways; you have the same Kubernetes YAML interfaces which you can use to talk to those services. Then there's an integration layer, and this is where the platform team and the other teams really come into play: compliance and security are made available by default.
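To make this one-to-one mirror concrete, a managed resource in that bottom layer is plain Kubernetes YAML reconciled by a provider controller. The following is a minimal sketch assuming the community Crossplane AWS provider; the exact API group, version, and field names depend on the provider release you install:

```yaml
# Hypothetical managed resource: the S3 bucket API mirrored as a
# Kubernetes object, reconciled by a Crossplane provider controller.
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: my-data
spec:
  forProvider:
    locationConstraint: eu-central-1  # maps directly to the S3 API field
  providerConfigRef:
    name: default                     # credentials for the target AWS account
```

Applying this with `kubectl apply` looks exactly like applying a Deployment or a Service; the only difference is which controller reconciles it.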
A simple example is the S3 bucket. Whenever a product team wants to store some data and provision an S3 bucket for that, they go to the API, create an object of kind S3 bucket, and apply it. A Kubernetes controller will then create this S3 bucket. However, what happens in this integration layer is that the S3 bucket is already encrypted by default, and the default visibility is private, not public. So those product teams don't need to wrap their heads around implementing those guidelines and reading those compliance documents; they simply provision an S3 bucket and it's compliant by default. And this is the case for every resource at those service providers. Another typical example is encryption: whenever there's data at rest, it needs to be encrypted. In AWS, for example, that's a KMS key, and this you would configure in this integration layer, and then every resource is automatically encrypted. On top is the platform API. This is the API which you offer to your product teams. It could be just a one-to-one of what those service providers offer, or it could be enriched with the security and compliance defaults, or it could also be simplified. For example, a Kubernetes cluster at the cloud providers is usually a very complex thing to set up. It's a managed solution; at AWS it's called EKS. However, many different API calls are necessary to provision a Kubernetes cluster, and usually you would also need to provision some agents on those Kubernetes clusters, running on the EC2 instances, to ensure compliance with the security department. This is exactly what the platform API can make simpler for you. Those product teams really just want a Kubernetes cluster; they don't care about all those compliance and security defaults and the different API calls.
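As a sketch of how such defaults can be expressed, the integration layer in Crossplane is typically a Composition that patches compliance settings onto every bucket a product team requests. All names below (`XBucket`, the `cmp.example.org` group, the field paths) are illustrative assumptions, not the actual Deutsche Bahn configuration:

```yaml
# Hypothetical Composition: the platform team bakes encryption and a
# private ACL into every bucket, so product teams get compliance by default.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: compliant-bucket
spec:
  compositeTypeRef:
    apiVersion: cmp.example.org/v1alpha1  # illustrative platform API group
    kind: XBucket
  resources:
    - name: bucket
      base:
        apiVersion: s3.aws.crossplane.io/v1beta1
        kind: Bucket
        spec:
          forProvider:
            acl: private                    # private by default, never public
            serverSideEncryptionConfiguration:
              rules:
                - applyServerSideEncryptionByDefault:
                    sseAlgorithm: "aws:kms" # encrypted at rest with a KMS key
```

The product team never sees or edits these fields; they only create the simple claim that the Composition backs.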
And this is what the platform team can make simpler for each product team. That's the huge benefit: if you centralize this in a platform team, you don't push it to each and every product team, which would otherwise need to go through every step themselves. Well, today it's the European Identity and Cloud Conference, and I want to focus a bit on the policy engine. That's this part here below: Styra. Basically, it's a control plane for OPA; OPA is the Open Policy Agent. And I want to talk a bit about how these services are provisioned with OPA.
Styra DAS basically is a declarative authorization service, and what it does is manage many, many OPA instances on the ground. It's this control plane pattern: if you are familiar with Kubernetes, Kubernetes is a control plane for, well, compute. It manages many compute instances, VMs or containers or whatever, and those are registered at the control plane. Similarly, Styra DAS is a control plane for OPA instances, and those OPA instances are the actual policy engines which do the job on the ground. Usually you would have many, many of them. It's a very small binary, and I'm sure the next talk will cover those things a bit more. Since they are very small engines, you can deploy them at basically every layer in the software system. One is for the Kubernetes API, as depicted here: every call to the Kubernetes API goes through the policy engine and the policies you define. You could install it at the VM layer, and similarly at the network layer, to manage who is allowed to talk to whom and to have a bit more fine-granular control. What I've not depicted here is applications, which usually also have a large requirement to implement policies; however, for this talk that's not super relevant.
So what you need to do when you create or manage OPA instances with Styra DAS: you go to the web interface of Styra DAS, and I'll show you that in a minute, you create a system at the DAS API, you add the system to the stack, you retrieve the API token, you deploy the OPA instances at Kubernetes, for example, and you configure these instances with the token so that they are connected to the control plane. And then, from the control plane, you can push policies to this system. You could do that via the UI, you could write your own workflow by implementing the Styra APIs, or you could use some of the Crossplane controllers which are available and which basically automate these steps. And this is exactly what the teams at Deutsche Bahn and Accenture have done.
They implemented provider-styra, which mirrors the Styra DAS API into the Kubernetes control plane. Again, it's a one-to-one mirror of the API at the service provider, in this case Styra DAS. It accepts plain Kubernetes YAML to manage the state at Styra DAS. If you are familiar with Kubernetes, you will notice the very famous pattern of apiVersion, kind, metadata, spec, and status (the status is not depicted here). If you apply this YAML, a new system of type Kubernetes v2 will be created at Styra DAS, and the registration token will be synced back into a Kubernetes secret, available for use in your automated deployment stack.
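The manifest on the slide is not reproduced in this transcript, but based on the description it would look roughly like the sketch below; the API group, version, and field names are assumptions about a particular provider-styra release:

```yaml
# Hypothetical provider-styra System: declares a Styra DAS system of type
# Kubernetes v2 and asks for the registration token back as a Secret.
apiVersion: styra.crossplane.io/v1alpha1
kind: System
metadata:
  name: cluster-1
spec:
  forProvider:
    type: kubernetes:v2           # system type in Styra DAS
    name: cluster-1
  writeConnectionSecretToRef:     # token synced back for the deployment stack
    name: cluster-1-styra-token
    namespace: default
```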
Now a little demo of Crossplane and provider-styra. Here I already simplified a Kubernetes provisioning process into its own API. We see the apiVersion is cmp.example.org; that would be your enterprise. CMP stands for cloud native platform; I think it's a very common name for internal platforms, but there are also many different names your internal platform may have, it's up to you. The kind is Cluster, because what I want to do here is provision a Kubernetes cluster. The name is cluster-1 and its namespace is default. When I apply this in the demo, it will create a virtual Kubernetes cluster in my local Kubernetes cluster, it will create a system at the Styra DAS API, and then it will deploy the OPA instances into the virtual Kubernetes cluster and register them with the Styra DAS API. This is what the platform team implements once, and this is what every customer of the platform, the internal product teams, will use and see whenever they want to provision a Kubernetes cluster.
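Reconstructed from the description just given (only the version suffix is an assumption), the claim a product team applies would look like this:

```yaml
# The platform API a product team sees: one small object; everything
# else (virtual cluster, Styra DAS system, OPA agents) happens behind it.
apiVersion: cmp.example.org/v1alpha1
kind: Cluster
metadata:
  name: cluster-1
  namespace: default
```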
I hope this is readable even in the last row; I can make it a bit bigger. At the top is a local Kubernetes cluster, a minikube cluster, just to get a Kubernetes API. What I did here is install Crossplane already with Helm, and I already installed this configuration API to abstract complex objects into simpler ones, and that is the Cluster object we saw. What I will do now is create two Kubernetes clusters; they are shown at the bottom. This is basically the screenshot which I just showed. And if I apply them,
we can see here, maybe we go for our cluster, we see two namespaces getting created. They have an auto-generated name at the end, and those are the Kubernetes clusters which are running as virtual Kubernetes clusters in the local minikube cluster. And now we already see in the background that the Styra DAS systems were created, and we already see the datasource agents and the OPA agents being installed in those Kubernetes clusters. They should be ready any second. So we can take a look at the Styra DAS web interface, and we see at the top left there are two new systems being created, cluster-1 and cluster-2. Those are the Kubernetes clusters which are now connected to the Styra DAS system, and policies can be pushed onto them. Yes. I'm not going into this too deep, because I'm sure the next presenter will, and
oh, well, that's more or less the demo. Glad it worked. What I can do now is just delete this stuff again, and it will similarly delete the systems at Styra DAS, delete the Kubernetes clusters, and basically clean up everything again. Looking a bit at the time, I don't think we need to wait. Well, they are gone already at Styra DAS, and here they are in state terminating, so everything should be cleaned up. Again, this is all open source, so it's free to use for everyone. The teams from Accenture and Deutsche Bahn are heavy contributors to the Crossplane ecosystem, to the core repositories, and they maintain several providers in the provider repositories. Just a small list: provider-styra, which we've just seen in action; we also developed a provider for GitLab and a provider for Argo CD; and we are joining forces with the Grafana team on provider-grafana, where the work is done in their namespace, in their repositories. You can connect with us on the Crossplane Slack; that's where we all hang out. We also have a couple of team members here in the last row, if you're interested.
So I believe that's it, that's what I brought today.
How can we help you?