Thanks so much. As he said, I'm Michael, and I represent a company called Bankdata, which is a banking consortium in Denmark. What I'm going to talk about today are the strategic steps we took, and that we think others need to take, if they want to go from an old-school authorization approach in a very compliance-heavy company to a more decentralized DevSecOps organization. This isn't so much a deep dive into OPA and Styra, which happen to be the tools we use; most of the steps are essentially the same whether you're using Styra and OPA or some other vendor. We just happened to choose those. Just a bit of background on me: as mentioned, I've been at Bankdata for a few years, and my background is in academia.
I guess the more interesting part of that is that I can go into all the technical details of this, and I did write the code a few years ago; nowadays my job is mostly talking and drinking coffee. And if you're wondering why OPA, I'm going to get to that, but at least one part of it is that, being a Scandinavian guy, the Viking helmets are sort of alluring. So Bankdata is a banking consortium, comprised of eight banks in total, soon nine, since one of them bought another bank. It's also really old: it was established in the sixties, and like a lot of other companies from that era, the tech stack is a mix of new technologies and very old technologies. We have Kubernetes clusters, we have some cloud setups with a cloud vendor, and we have a mainframe and a bunch of other stuff in between.
Trying to take a more modern approach to authorization in that sort of landscape becomes very complicated very quickly. Also, because we're in the financial industry, especially in Europe, we're highly regulated, and that regulation is only increasing. If some of you are not aware of what NIS2 is, and you're in basically any industry, you should probably look it up: you're going to get fined soon. So that's where we are now. What we came from was essentially an ad hoc approach to authorization. By that I don't mean that we didn't do authorization, because obviously we're a bank, but the authorization we did was always adapted to the specific system being implemented. A given development team would build a new solution, and as part of that they would build a number of APIs, and those APIs had fairly limited usage scopes.
So they built authorization around those scopes, and a lot of it was based on RBAC, because that is in a way a very simple approach to authorization: you essentially build the functions you're trying to make, assign a role to them, hand it to the people who are supposed to implement it in practice, which for us is the member banks, and tell them, "assign this role to the people who are supposed to have this access," and they'll magically figure it out. The problem with that is it's super expensive. Role maintenance becomes a very, very big deal, whether it's for customers or for employees; it can really blow up. And sometimes you also see really bad takes on it, because what people actually needed was an attribute-based approach, but it wasn't available to them.
I have one example where someone needed to authorize based on whether a person was over 18, but they didn't have access to an age attribute. So they created roles like "customer over 18" and "customer under 18", and then it becomes some poor bank employee's problem to reassign that role on a birthday. That's horrible. With this sort of thing you also have no central overview and no natural auditing mechanism, which is a problem in a company that has a lot of auditing, because, again, finance. Every audit becomes very expensive. Now, what we're journeying towards, and I specifically say "towards" because I don't believe you can get there easily and it takes a long time, is a managed, decentralized authorization approach, where each of our development teams manages its own authorization for its area.
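To make the contrast concrete: this is not code from the talk, but a minimal sketch of how the over-18 check could look as an attribute-based OPA policy instead of a pair of hand-maintained roles. The package name, action name, and the assumption that the subject's age arrives as an input attribute are all illustrative.

```rego
package examples.age_check

# Hypothetical ABAC policy: instead of maintaining "customer over 18"
# and "customer under 18" roles by hand, check the attribute at
# decision time. In practice the age would likely be derived from a
# birthdate attribute rather than passed in directly.

default allow := false

allow if {
	input.action == "open_investment_account"
	input.subject.age >= 18
}
```

With a policy like this, no role ever needs to be reassigned on a birthday; the decision is always computed from current data.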
So they build the policies for their area. It's not business consultants; it's developers. We also have security- and compliance-related policies that we want to enforce organization-wide, and we can do that too. It's mostly ABAC, and by "mostly" I mean that in a lot of cases ABAC is not the right solution. There are situations where RBAC does make sense, where it is the right solution for the given business need, and there are also situations where neither of the two is right; I'll get back to that. But with the setup we're journeying towards, we also have a full overview of all our policies, and the audit capabilities and compliance parts are sort of self-documenting. I alluded before to the reason we chose OPA being the Viking helmets. There are other reasons: we tested different vendors on our developers.
Because we are a developer-centric organization, we don't really believe that our business people can realistically come up with all the ways someone can expose and destroy our security. We'd rather have that be developers, together with security and compliance. So we tested Rego against XACML and the variants you can build on top of XACML, and Rego was hands down the one the developers wanted. It's very important for us to have configuration as code everywhere, because configuration as code makes compliance easy, which I'll get back to. We also needed it to be highly resilient: we're a company where being down for a few days can literally kill a bank, or in our case, the banks. So we don't want to be down for long, and if Styra crashes, everything else keeps running. We also need the flexibility of ABAC, but RBAC too, and more.
We actually have situations where, because we have a lot of legacy, someone built an authorization system that handles a very narrow case twenty or thirty years ago, and instead of trying to turn that into an ABAC model, which may not be cost-effective, you can just do a call-out from OPA; that also works. So there are a number of situations where this gives us flexibility: it's just whatever you can express in Rego, over data that can be modeled as JSON. The deployment also needs to be very flexible for us, which this gives us as well, because we always assume that an attacker can be anywhere. We don't assume they're somewhere out on the internet; they might just as well be deep inside our network somewhere, so we have to assume that. The primary challenges we had to work through when implementing this came down to four things we really needed.
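As a sketch of that flexibility, and again assuming hypothetical names, a single Rego package can combine a plain role check with a call-out to a legacy authorization service via OPA's built-in `http.send`. The URL, input shape, and response shape here are invented for illustration.

```rego
package examples.mixed_authz

default allow := false

# RBAC rule, for a case where a role genuinely is the right model.
allow if {
	"teller" in input.subject.roles
	input.action == "read_account"
}

# Call-out to a (hypothetical) legacy authorization system, instead of
# re-modelling decades-old logic as ABAC.
allow if {
	input.action == "legacy_transfer"
	resp := http.send({
		"method": "POST",
		"url": "https://legacy-authz.internal.example/check",
		"body": {"subject": input.subject.id, "action": input.action},
	})
	resp.status_code == 200
	resp.body.permitted == true
}
```

Whether a call-out like this is acceptable depends on the latency and availability budget of the calling service; the point is that the migration can be incremental rather than all-or-nothing.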
We needed strong identity, and we needed a way to enable DevSecOps: when we ask all these development teams, who used to just write "this role" or whatever, to do this, we need to actually give them the tools to do it. We needed a way to do compliance efficiently, and it was pretty obvious at the time that we needed a way to roll this out that wouldn't create bad solutions and wouldn't break everyone's neck. In a big company that's been around for many years, there is a lot of legacy, and you can't implement everything at once. Now, identity. We were a little bit lucky in our case, and I guess that's not necessarily true everywhere else, but this is the first thing you need to solve: if you want to do decentralized authorization, you need strong identity.
We had been using our identity platform for a number of years, with strong adherence to the OpenID and OAuth standards, and with all configuration as code. For our case it was also important that we could run it on-premises, because, again, compliance; it just makes things easier. I should also mention that Danes are spoiled for strong identity, because every single Danish citizen has had a digital identity since 2010. Any person in all of Denmark can identify themselves and digitally sign using it, both towards the banks and towards the state, so you don't really use paper at all. We're a bit luckier there than most. But that's the first thing you need to solve. The next thing you need to solve, if you actually want to do this, follows from what I said before: I don't believe you can get business consultants to do it for you.
I don't think that's going to work. I also don't think you can have a single security unit write all your policies, because no one can hire that many security people, and they don't know the specific domains well enough to do it. So what we did quite a while ago, and what I generally recommend, is to move away from thinking of security and compliance as a centralized unit and instead think of it as a core unit with a managed, decentralized extended arm. The way we did this was by introducing something we call security champions. It's built on the realization that you can't make all developers security experts; it's just not feasible, and it's not cost-effective either, because not all of them need to be. Instead, we first tried to identify the people among the developers who had an inherent interest in security and had the means to really be good at that sort of thing.
You can also specifically hire for DevSecOps profiles. By building up their skills and giving them the right tools and the right guidance from the security and compliance unit, you get representatives who are closer to the actual code and know their domains, but who still report up to security and compliance. And this works. You don't have to have a security champions unit that is full of security experts; you need some sort of mix. For us, we generally want each business unit to have at least a couple of people in the higher tiers of the little ramp we built, but you also need people who can simply implement secure-coding tooling, like vulnerability detection and static analysis tools. So you need a mix; you can't have one-man armies trying to handle the security of an entire business unit. That's not feasible.
That's the DevSecOps part. Then there's the compliance part. If you're not in a very highly regulated industry right now, you may not know exactly what that looks like, but in general it means that pretty much every security principle you can think of is probably mandatory, as in: if you don't do this, you will get fined and might not have a bank tomorrow. So you have to do a lot, and you have to document that you do it; it's not enough to actually do it. This is stuff like change management, risk-based management, asset management, redundancies everywhere, a lot of resilience testing, a lot of scenario testing, all sorts of things. And because we're a banking consortium rather than a single bank, we also have the problem of bank separation: we need strict separation between each of our member banks, which almost no vendor can handle, by the way. That's usually why we need configuration as code.
We get a lot of this from Styra and OPA. With OPA, what we really want teams to do is write policies; we don't want them to focus much on anything else. We don't want them to do what they historically had to do, which was write a bunch of authorization logic in their core code and then write a long Word document describing the authorization they wrote, because they had to do that for compliance. We don't want that. Instead, by writing policies in OPA, they end up at a level of simplicity where an auditor with some technical knowledge can read and understand them, and we actually have an internal auditing unit that does exactly that. So you can have authorization that is more or less self-documenting, with perhaps a few caveats if it's very complicated.
In Styra, and I imagine this is the same in a lot of other control planes, you also have full audit trails and full overviews of all the policies being applied in different places, so you can let your auditors see all the decisions being made and how everything works together, and of course you can pipe it all into a SIEM, which is a natural thing to do. One thing we also really want is change management on everything we do. Onboarding new systems to Styra can't just be someone going into a GUI and creating something there, because as soon as you allow that, you have effectively created a superuser, one person who can change something in production, and that violates change management. You can't allow that.
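For reference, an OPA decision log event is itself just JSON, which is part of what makes piping decisions into a SIEM straightforward. The sketch below is abbreviated and all the values are invented; the field names follow the general shape of OPA's decision log format.

```json
{
  "decision_id": "4ca636c1-55e4-417a-b1d8-4aceb67960d1",
  "timestamp": "2024-03-01T10:15:30Z",
  "path": "bank/payments/allow",
  "input": {"action": "legacy_transfer", "subject": {"id": "emp-1042"}},
  "result": false,
  "labels": {"app": "payments-service", "version": "0.62.0"}
}
```

Because every decision records its input, result, and policy path, an auditor (or a SIEM correlation rule) can reconstruct exactly why any given access was allowed or denied.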
You need at least two people to make a change in production. So all of that is done through pipelines and through code management. For that we built something we call the Styra Controller. It's essentially just a Kubernetes controller that automates the configuration of Styra. If you want to let teams onboard themselves easily, you can do this: create a custom resource definition and have it provision all the configuration that makes an OPA deployment highly resilient. That includes the OPA itself, but also something called the Styra Local Plane; for other solutions there are probably similar tools. This serves change management, and if you do use Styra or a similar system: we actually open-sourced the controller, so you can go find it.
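As an illustration of what such self-service onboarding can look like, here is a hypothetical custom resource. The API group, kind, and field names are made up for this sketch and are not necessarily the schema of the open-sourced controller.

```yaml
apiVersion: styra.example.org/v1alpha1
kind: System
metadata:
  name: payments-authz
  namespace: payments
spec:
  # Policies live in git, so every change goes through review pipelines
  # (two-person change management) rather than a GUI.
  sourceControl:
    repository: git@git.internal.example:payments/policies.git
    branch: main
  # Provision a Styra Local Plane so the OPAs keep serving decisions
  # even if the central control plane is unavailable.
  localPlane:
    enabled: true
```

A team commits a manifest like this, the pipeline reviews and applies it, and the controller reconciles the corresponding system in the control plane; no one ever needs superuser access to a GUI.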
Now, I mentioned supporting your auditors. Again, not everyone will have this problem, because not all of you have internal auditors, but we have both internal and external auditors, and the internal auditors produce an auditing report that goes to the banks and then to the financial regulatory authority in Denmark. We don't want that report to contain a lot of bad findings, and we also don't want the audits to be expensive, so we want to enforce things properly and make it easy for the auditors to see that we're enforcing things properly. This setup gives them exactly that; they can look into it directly. And because I needed some graphics for this, I went into Midjourney and wrote something along the lines of "evil auditors scolding a developer". I know others at this conference have looked at generative AI, and sometimes that's kind of scary, because this looks like our actual director of internal auditing, and that's the lead auditor.
I don't know how Midjourney figured that out; that's a little creepy, I didn't tell it that. So yes, I think Skynet is coming. Another part we have isn't technically necessary, but if you don't have an internal red team today, I highly recommend getting one, because all the compliance in the world isn't going to save you if a developer simply made a mistake, and they do; sadly, everyone does. So you need some way of finding that out. You can pay an enormous amount of money for external pen testers, but they're probably not going to find it anyway, because they usually don't get the same level of access in a highly compliance-driven organization; you can't just let them loose on your network with a laptop. They're probably going to be handicapped from day one if they try. With an internal team you get more control. It is, however, very important, if you do this, to start small.
Do not try to solve every security issue in your entire stack from day one. Go for quality over quantity, because there is no value in testing all of your systems with subpar testers who then say everything is fine when it's not. And then finally the phased rollout, which is probably the biggest indicator of success. I think Gustel talked about this earlier today as well: the biggest part of succeeding with a transition towards ABAC is data modeling. It's the most complicated thing, because usually the organization does not have the data available in a format you can just use. So what I suggest, and what we also did, is a controlled rollout. You identify a couple of systems, maybe three, the ones most likely to improve your overall data modeling: not necessarily the most critical systems, but the ones that touch upon most of the data sources
you would need for an ABAC transition. Then you start introducing those data sources systematically, doing it right the first time, and you migrate those few systems to OPA. You continue doing that a few times, until you have sufficient coverage for most of the use cases for generalized APIs in your organization. How much that is depends completely on the organization, but I would estimate somewhere between six and twelve months from when you start until you're finished with that. After that you can go to general availability. This is what we did: start mandating that all new APIs must use OPA, because it's never going to be cheaper than the first time you implement it. If you try to do it as an afterthought, it's always going to be more expensive.
Then, since you're making it generally available, encourage others to use the same, and slowly integrate new data sources and new system resources. Get everything up and running stably before you go to the last, but probably hardest and most expensive, part, which is legacy. Not every company has a ton of legacy, but I would argue everyone has some: the code that got developed in the first few years that no one quite dares touch. You might have things where people say "it works", but you know that every line of code you change costs ten times more than in any of the new systems. If a company is from the sixties, it has a lot of that. So you should take a completely risk-based approach to this: evaluate the overall risk of the current authorization policies in the existing systems.
Go through them; actually have someone with security knowledge run through each system and determine all the ways you can make it do things it shouldn't. You're probably isolating it right now, if you're concerned about the security of legacy systems. When you do that, you get a risk score out, and then you start picking systems and forcibly onboarding them. It's going to be expensive; all of them are going to be expensive. There are no quick fixes for this, and you can't do all of it at once, so you need to take a very controlled approach and isolate the ones you can't do immediately, because everyone has a budget, even if everyone acts as if they don't. So, that's it. If there are any questions?
We have a question
Actually, it's sort of a complex question. Okay: who reviews the accesses? Do dev teams do that, or dedicated teams? Maybe your security guidance people? Are you certifying or reviewing just policies, or accesses too? Do the accesses go into the SIEM? What about the policies, how do they get reviewed? And how do you prevent rubber-stamping during reviews? That was all one question.
That's one question.
Fair enough. There are several answers, really. The policies get evaluated by the security champions for the area; if they're unclear about any of it, it also goes to security and compliance. The policies also get evaluated by internal auditors, because they have to be, on a regular basis; there's a maximum amount of time that can pass between audits. On top of that, all access gets audited automatically. As part of our risk management approach, there are controls for all accesses. So if an employee suddenly gets access to something new, there is an evaluation, I don't recall the frequency, but let's say every couple of months, where someone has to go through the accesses granted recently and check whether they were legitimate. If there's any sort of discrepancy there, people get fired really fast. And obviously, for really high-risk things there's PAM and all the other tooling you would normally use to manage risky access. I think that was all of the questions.
Super. No, that's very good. Thanks very much for that. Okay, let's give Michael a round of applause, and please take your seat at the panel.