Event Recording

Pre-Conference Workshop | How OpenID Standards are Enabling Secure & Interoperable Digital Identity Ecosystems


OpenID Foundation Workshops provide technical insight and influence on current digital identity standards while also enabling a collaborative platform to openly address current trends and market opportunities. The OpenID Foundation Workshop at EIC includes a number of presentations focused on 2022 key initiatives for the Foundation.

Yes, hello everybody. If you could please kindly take your seats, we're going to be kicking off.
Let's get started. Good morning and welcome. My name's Mike Leszcz, I'm the OpenID Foundation program manager. Pleased to see a great turnout this morning, both in person and virtually. Just a couple of housekeeping topics. Can we get the presentation up, please?
I apologize, the slides are just slightly out of order here; we made some last-minute changes this morning. Just to note the Note Well statement — I'll allow you to read this in its entirety, but in summary, there's no contribution agreement to sign to participate in an OpenID Foundation workshop, but we do govern the workshop using the Note Well approach, which in summary is: anything that you share in the workshop may be used; there is no IPR protection with regard to that. So without further ado, I'd like to introduce OpenID Foundation chairman Nat Sakimura to make some opening comments this morning. Nat.
Thanks, Mike. Thank you everybody for coming to this session. I'm so pleased to see you in person here in Germany today. We've got quite a full agenda, so I'll just be very brief and talk a little bit about vision and mission. This year the board has come up with a new vision and mission statement for the Foundation. For those of you who don't know, the OpenID Foundation is a non-profit standardization organization that is doing OpenID Connect, MODRNA, FAPI, and these things, and we were also instrumental in creating JWT, JWS, things like that. Our new vision statement is: help people assert their identity wherever they choose. The new mission statement is: lead the global community in creating identity standards that are secure, interoperable, and privacy-preserving. You will hear more about the nitty-gritty of those later today, but before going there, I'd like to introduce our new executive director, Gail Hodges, to tell you more about us.
Great. Hello everyone. Thank you, Nat — obviously a luminary in our field, and we're delighted to have Nat on a round-the-world ticket right now, going to the US and now obviously here, before he makes his way back to Japan. So thank you, Nat. Our slides are, as I say, a little bit out of order, so we're going to get ourselves reconnected. There we go. As part of that board strategic review that Nat mentioned, we wanted to think about the holistic work of our working groups and what was needed of the Foundation to deliver on that vision and that mission. We are working in many different areas simultaneously, but we thought of it as taking the great work that was already well adopted, like OpenID Connect and progressively FAPI, and obviously continuing to scale those, while putting our attention to other areas simultaneously. Our certification program, again, is an area that's on the more mature side, but there are areas like OpenID Connect for Identity Assurance — only about two years old now — that are progressively applicable to many of the challenges of this moment, and we need to make sure that the wider community is aware of the power of what that standard can offer.
So we'll be talking more about that today. MODRNA seeks to address many of the needs of the mobile network community, which still has many broad applications. Shared Signals and Events, which we'll cover later today as well, is a very interesting way of looking at signals and events and connecting them — I like to think of it as taking the very best practices of what would happen within individual companies and starting to share those signals across different entities more widely. So really fascinating, groundbreaking work there. And FastFed and Federation we'll cover as well. As for investment areas: we announced GAIN here at EIC last September — Gottfried and Nat were on the stage introducing GAIN, the Global Assured Identity Network. We'll be sharing our progress in that initiative since we launched it, and we're very keen to drive wider engagement with it.
I'll be standing on stage on Thursday talking about the vision and where we see that going from here. OpenID Connect for SSI has just had a brand new name: OpenID Connect for Verifiable Presentations. So apologies, Torsten, for not updating my slide before today — he's disappointed, as you can tell. And government engagement: it's very important for us to be working closely in concert with governments, both on the work of FAPI and their goals on open data and open banking, and also on the need to develop digital identity wallets, like the EU digital wallet initiative, or California looking at launching capabilities for issuing digital versions of physical credentials. Many countries around the world are trying to make that transition and create the right infrastructure and architectures that they need to serve their communities.
Some of the things we will spare you from, like where we are in our marketing programming — we'll skip that for today, but I'm happy to talk about it if you're interested. Among the other areas of new opportunity, how we reframe our approach to health and the applications there is an area we're focused on for the next six months. I don't think we're going to spend much time on that today, but I'm happy to talk to anyone in the room who is a health expert. We're really looking at how OpenID Foundation standards can be applicable to the health community, scaling some of the use cases we see now, like the sharing of medical records in the US, the UK, and Norway, and looking to expand that to a much wider set of use cases. And then, emerging trends.
There are obviously a lot of buzzwords going around with web3 and the metaverse, and fraud. A lot of those really depend upon having core identity capabilities as part of them as well. So how do we make sure we're also supporting some of the most cutting-edge trends in the market? IoT is another one. A lot of great work is coming about in the IoT space, but that's often thinking about how you move the information on those IoT devices — and they need to interact with a human being at some point, right, as an administrator, or the person that's driving the car, or in the supply chain infrastructure, and you need to assert that a particular person is authorizing any given transaction. So how do we work together in closer concert with them? And across all those initiatives, we have seven white papers that we are focused on writing and delivering this year together with our membership, to frame our thinking in a more mature way on some of these critical topics — how we talk to government and position our work and our progress in a language that's very accessible to a wider range of government officials.
That way we can bring them along on this journey and they can get a sense of where the landscape is at this moment. And we take those themes: we're already feeding them back into the EU digital wallet initiative with their Architecture Reference Framework request for comments, trying to share with them how we think our standards can help them achieve their goals; as well as the OECD, which was asking for perspectives on privacy, providing feedback to them; and NIST as well, when they've been looking at things like mobile driving licenses — what are the opportunities there. We are indeed actively looking for an IoT champion. So if any of you consider yourselves expert in the IoT domain and are familiar with our standards, we are actively looking for someone who can help us drive some of that landscape assessment and put together about six months' worth of diligence to help frame the approach in that space — since we often in our world have silos, right? We're trying to break down some of those silos with this work. And last, but definitely not least, is bringing along that next generation of thought leaders and diversity into our community. We are so pleased to recognize Kim Cameron. I welcome Don Thibeau, my predecessor as executive director and a non-executive director on the board now, who thought up this wonderful idea — I can't wait for him to tell you more about it.
Thanks, Gail.
If we could have — there we go. My name's Don Thibeau. I think if you pause for just a moment and think about the people in your life and your careers that have really made a difference, Kim Cameron is one of those people for many of us. So the OpenID Foundation board wanted to recognize Kim beyond just good words — we wanted to put it into action. What we created is, we hope, the beginning of a community-wide effort to create awards and scholarships, so that we can have an on-ramp for those that are in academia and those that are in the research community. So I'd like to introduce two of our awardees, and I'd like you to check out who they are and introduce yourselves, because we really want to create not only an experience for this new generation — we want to work across organizations. I'm really pleased that the KuppingerCole folks have provided free passes and an entrance into their young talents program. The response from different conferences around the community and around the world has been most welcoming. So I'm really pleased that those of us as a community and the board can recognize our two awardees, who are shy and standing in the back, but I'll make them stand up.
So introduce yourselves to them. And if you have ideas about how this award can be continued in the future, or other organizations that might want to work with us, please let me know. I think that'll be it for this part of the program.
Great. And please do indeed find Rachelle and Alen if you see them — part of the goal is that they become really active members of the community, and we all know that that happens through relationships, getting to know people, and sharing our expertise. So please do reach out to them during the course of today. We'll have a couple of our other candidates attending as well, and we'll be introducing them — we're really delighted to bring some great new thinking into our work, continuing the work of the OpenID Foundation and bringing great minds into the identity sphere. If I go back to the agenda — I don't think anyone's going to be able to read a word on there; there's a lot of content we're planning to cover today, so it's a very rich agenda. The first series I'll just introduce briefly: we're going to start off with Kristina Yasuda and Torsten Lodderstedt talking about OpenID for Verifiable Credentials, and then Torsten will carry on and talk about progressing GAIN with the GAIN community group and the technical proof of concept.
And then on to Edema, who's going to talk about open banking going global and the white paper he released just a few weeks back on open banking and open data and the trends in that domain. So it's going to be a rich agenda. We'll be here with you until about 12:30, and thank you for such a great turnout in the room and for those of you online. So, to kick us off: Kristina and Torsten.
Let's do it
Right. Good morning, everybody. It's special for me to present to you all our work around OpenID Connect for Verifiable Credentials, together with Kristina. I need to click to proceed.
No, I think it's the other way
You just got confused.
All right, okay. So you successfully confused the first presenters, thank you very much. All right: OpenID Connect for Verifiable Credentials, previously known as OpenID Connect for SSI. We had an intensive discussion in the working group about whether we could come up with a better, more suitable name, and the conclusion was that we call it OpenID for Verifiable Credentials going forward, which also represents the fact, for example, that this is not only about identity — it's about all kinds of credentials, including diplomas, for example. So let's go forward. OpenID for Verifiable Credentials is an initiative that is conducted in a liaison between the OpenID Foundation and the Decentralized Identity Foundation; we cooperate to build that protocol. If you take a look at the kind of reference architecture that was established by, for example, W3C Verifiable Credentials, we've got the three canonical parties in this kind of application. In the center of it is a wallet — a wallet where the holder, the user, maintains her or his credentials.
On the left-hand side, you've got the issuer of those credentials, and on the right-hand side you see the verifier, which in the end gets to know the content, or parts of the content, of those credentials in the form of a presentation. The novelty is that in the end there is a decoupling: the issuer doesn't get to know where the credentials are being used, and the user can compose presentations of different credentials to respond to any kind of request. What we have done is define three specifications that comprise OpenID for Verifiable Credentials. The first of them, on the right-hand side, is Self-Issued OP v2. As the version indicator suggests, it is the second version of Self-Issued OP, which was already part of OpenID Connect back when it was published — in, I think, 2012. It is related to authentication with cryptographically provable identifiers and key material. Then we have defined a new spec, which is called OpenID Connect for Verifiable Presentations, which defines an extension to the protocol that allows verifiers to request and receive verifiable presentations. On the left-hand side, you see OpenID for Credential Issuance, which is another spec that allows issuers to issue credentials via an OAuth-protected or simple API.
One of the obvious questions that is typically asked in the community is: why do you use OpenID Connect for that? I mean, isn't OpenID Connect this tool of evil that big tech uses to build monopolies? Well, basically, OpenID Connect is just a protocol, right? It can be used to implement identity solutions and to move claims in different kinds of architectures. If you look around in the wild, there are centralized IDPs, and they are good, and they are great, and they have helped to evolve the standards to where we are right now. Billions of transactions every day are being performed using OpenID Connect, and without this practical experience, I think OpenID Connect wouldn't be as simple and as secure as it is today. But there are other architectures as well.
There are huge federations with thousands of IDPs — I would call them decentralized OpenID ecosystems. And there was, from the beginning, the idea that such an OpenID Connect OP could also reside on the user's device; that's called a self-issued OP. So you see, OpenID Connect supports a variety of architectures, and we thought that would be a good starting point for a couple of reasons. First of all, OpenID Connect is simple. If you do your own survey of the, I would say, competitive market of protocols that are available today for SSI applications, OpenID Connect stands out because of this simplicity. It's just HTTPS, some random numbers, and some best practices — that's basically it. Yeah, there are signatures involved as well, based on JWT, JWS, and so on. And it's secure.
There are libraries available out there that you can just use to build your applications, server-side or client-side. It's great for mobile applications, because it was invented for the mobile-first world. And the security of OpenID Connect has not only been shown in practice and been analyzed by security specialists — it has also been formally analyzed. As we speak, for example, newer components of OpenID Connect, such as FAPI 2, are being analyzed by researchers at universities. This is something that is really valuable and gives people confidence that what they are using is really trustworthy. And the goal of what we are doing is that not only can new applications be built using OpenID for Verifiable Credentials, but we would also allow existing deployments to migrate easily into this new world of wallets and credentials.
Existing OpenID Connect RPs can integrate these extensions to receive verifiable credentials. And — which is important for me, for example, as the CTO of a commercial open banking ecosystem — our banks, using this kind of technology, can integrate or become issuers of credentials, and can in the end also help adoption of the paradigm, because without digital identities, without credentials, the paradigm is nothing, right? You need to be able to implement it and to gain traction and adoption in the market. And with that, I hand over to Kristina.
Thank you. Let me dive into each of those specifications, followed by concrete examples — because there are ongoing implementations going to production — so you can see how it might look from the technical perspective as well. First, starting with the presentation side of things. If you remember the first conceptual image, the trust triangle, this shows the right-hand-side interaction between the application running on the user's device and the verifier. And please distinguish SIOP v2 when it's used alone versus when it's used combined with the OpenID for Verifiable Presentations specification. So with SIOP v2, let me just quickly help position it within your mental model compared to existing OpenID Connect flows. And again, here we are not trying to replace existing flows, but we believe there are use cases that can be met with this new model.
I'm about to explain. With the current, usual OpenID flows — and I apologize, excuse me, for the oversimplification — when the user is trying to access a resource at a relying party, there's no way for the user to prove herself directly. So what happens is, the RP redirects the user to a third-party OP, and the trust relationship is between the relying party and that third-party OP. The third-party OP — let's say it's Microsoft — is going to return an ID token signed by Microsoft, with a user identifier namespaced under Microsoft. If it's Google, it'll be Google's signature and the user's identifier namespaced under Google, right? This is very important, because what happens with the self-issued model is that when the user tries to get access to a resource, there's no need to redirect the user anymore — conceptually, we took the OP and put it within the user's control.
I don't want to say it's running locally on the user's device, because we enabled a model where the OP component can have a cloud component or be running entirely in the cloud. But conceptually, think of it as being within the user's control, on the user's device. So now, when the RP sends a request, the user can sign an ID token with a key controlled by the user. Remember I said it's important that in the left model, the identifier is namespaced under the third-party OP. Well, the biggest difference between self-issued, subject-signed ID tokens and usual ID tokens is that in subject-signed ID tokens, issuer equals subject. That's the biggest differentiator, because the issuer of the ID token now becomes the user herself. That's the key conceptual difference. If there is one thing I want you to take back from the SIOP v2 work, it is that in self-signed ID tokens, iss equals sub.
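The iss-equals-sub rule just described can be sketched as a simple check. This is an illustrative sketch, not spec code: the claim names `iss` and `sub` are standard OpenID Connect, but the identifier values below are made up.

```python
def is_self_issued(id_token_claims: dict) -> bool:
    """In a SIOP v2 subject-signed ID token, the issuer IS the
    subject: the token is signed with a key the user controls,
    so `iss` and `sub` carry the same identifier."""
    iss = id_token_claims.get("iss")
    return iss is not None and iss == id_token_claims.get("sub")

# Third-party OP model: identifier namespaced under the OP.
third_party = {"iss": "https://op.example.com", "sub": "user-12345"}

# Self-issued model: the user's own identifier doubles as issuer.
self_issued = {"iss": "did:example:holder", "sub": "did:example:holder"}

print(is_self_issued(third_party))  # False
print(is_self_issued(self_issued))  # True
```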
Now you might ask: if the ID token is signed by the subject, how can I trust the claims inside it? Aren't they self-attested? Hence, we need a mechanism for the user to present claims signed by a trusted third party — by a trusted issuer. Let's say the government of Germany issued a mobile driving license; I want to present it alongside the self-signed ID token. This is the conceptual model: the user tries to get access to a resource, and now, alongside the self-signed ID token, we define a new artifact called the VP Token — a verifiable presentation token — that is sent back alongside the ID token. And again, I'm showing this as if it were an implicit flow, but if you have a self-issued OP with a cloud component, you can do code flow and other flows as well, to make it more secure.
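As a hedged sketch of the pairing just described: the authorization response carries the self-issued ID token alongside the new VP Token artifact. The parameter names follow the draft specification; the token values here are placeholders, not real encoded tokens.

```python
# Sketch of an OpenID4VP authorization response, shown as a plain dict.
# The id_token authenticates the holder (iss == sub); the vp_token
# carries presentation(s) of issuer-signed credentials, e.g. a
# government-issued mobile driving license.
authorization_response = {
    "id_token": "<self-issued-id-token-jwt>",   # placeholder
    "vp_token": "<verifiable-presentation>",    # placeholder
}

# Both artifacts travel back to the verifier together: the ID token
# authenticates the holder, the VP token carries third-party claims.
assert {"id_token", "vp_token"} <= authorization_response.keys()
print(sorted(authorization_response))
```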
So yeah: self-issued ID token plus verifiable presentations. We included the picture of the issuer just to illustrate that the RP is not directly talking to the issuer. When the RP receives trusted third-party claims, it needs to verify that they were actually issued by a trusted third party, right? So there's a mechanism there — you have flexibility in how to do it — where you need to get the issuer's public key and verify the signature on the credential you received. That arrow doesn't mean the RP is talking to the issuer directly, but to some infrastructure hosted by the issuer, potentially, or some trust framework. Do you want to cover this one?
Can do, right. What we've learned while doing these kinds of presentations over the last couple of months is that people see OpenID and they think JWT. And the question is: is this tied to JWTs? You can use JWT-based verifiable credentials in conjunction with OpenID Connect for Verifiable Credentials, but the protocol itself is, I would say, credential-agnostic, or neutral. What we have designed is, in the end, the rails to request and present different kinds of credential formats. You can determine which DID method you use — whether you use JWK or something else — the credential format, the revocation mechanism, the crypto suite, and so on. That's all up to you to decide if you're implementing an application; it works with different kinds of crypto suites and credential formats.
Sorry, just to call out: among the credential formats we have ISO 18013-5 mDL, and if you look at the specification, there's actually an example. People often ask about W3C VCs versus AnonCreds, but in our opinion it's really important to converge on the transport layer, so we can use the same transport layer to transport different types of credentials. So really look at it from that perspective.
And there's a very important reason why we made that decision. For implementers it's difficult, because they need to make that choice themselves. However, in the current situation, I think the best thing we can do is come up with interoperability on the transport layer. There are so many credential formats and crypto suites in the market right now that it is really, really hard to decide which one to use going forward. At the last Internet Identity Workshop, two weeks ago, we had a couple of sessions around which credential format is the best, and in the end we came up with a list of 11 different credential formats — and we even made it through all those credentials and compared them. So my takeaway is: this is a very emerging space, and it's still unclear
who's going to be the winner, if there ever will be one winner. That's why we decided to at least provide the community with a single protocol for the transport, but allow credential formats to be plugged in as needed. And this is not limited to W3C Verifiable Credentials. So you will now see a couple of examples that were implemented — Mike, oh, five minutes? Oh, we need to speed up. All right. So just a quick illustration with an example. What we did is we extended the OpenID Connect request syntax. For those of you familiar with OpenID Connect Core, there's already a claims parameter; we added a new destination called vp_token, which tells the wallet where to put the presentation. And then we just include a presentation definition — we use DIF Presentation Exchange 2.0 to define what we are asking for.
For example, in this example we are requesting a credential of type ID card, and in the ID token we embed a presentation submission, which in the end contains metadata that tells the application where to look for the actual presentation. The presentation itself is in the VP token. However, the VP token can contain more than one presentation, so we are prepared more or less for anything, but optimized for the case where the VP token includes exactly one presentation, because we assume that's the majority of use cases. If you run into a use case that works differently, please let us know — we are really seeking implementers' feedback and would like to incorporate it. Just to show you that the protocol also works with ISO mDL: here's an indication of how that would look. We would, for example, determine the mdoc doctype, and then the ISO mDL would also be put in the VP token, and you see the corresponding presentation submission. It also works for AnonCreds — we ourselves at yes.com have built a prototype for that. In the AnonCreds case, we would use the schema definition identifier in the presentation definition, and then the actual AnonCreds presentation would be returned in the VP token as well. Sorry for running through this, but we are running out of time — if you have questions, just get in contact with us later.
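The request and response just walked through might be sketched like this. Field names follow the draft OpenID4VP spec and DIF Presentation Exchange 2.0, but the "IDCard" type, the URIs, and the ids are illustrative assumptions, and real deployments exchange these as JSON rather than Python dicts.

```python
import json

# Request side: the existing OpenID Connect `claims` parameter gains a
# new destination, `vp_token`, holding a Presentation Exchange 2.0
# presentation_definition that asks for one (hypothetical) ID card.
request_claims = {
    "vp_token": {
        "presentation_definition": {
            "id": "example-definition",
            "input_descriptors": [{
                "id": "id-card",
                "schema": [{"uri": "https://example.org/IDCard"}],  # made-up type
            }],
        }
    }
}

# Response side: the ID token embeds a presentation_submission, i.e.
# metadata telling the verifier where in the vp_token each requested
# presentation sits. `path: "$"` covers the optimized common case of
# exactly one presentation in the vp_token.
presentation_submission = {
    "id": "example-submission",
    "definition_id": "example-definition",
    "descriptor_map": [{
        "id": "id-card",
        "format": "jwt_vp",
        "path": "$",
    }],
}

# The submission must answer the definition it was derived from.
assert (presentation_submission["definition_id"]
        == request_claims["vp_token"]["presentation_definition"]["id"])
print(json.dumps(presentation_submission, indent=2))
```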
Yeah. There's no example here, but at Microsoft we used the same syntax with JWT-based VCs. All right, the status, real quick: both specifications are at Implementer's Draft status, so if you want to implement, you have IPR protection. We are planning to take both specifications to a second Implementer's Draft by the end of this year, because there are external bodies like ISO relying on them and we want a stable version. There are ongoing implementations, including the European Blockchain Services Infrastructure (EBSI), Microsoft, Workday, Ping Identity, Convergence.tech, IDunion, Sphereon, Gimly, and yes.com. So there's a growing community around it. I'll let you take a picture of this slide, and now on to credential issuance. This covers the left-hand side of the model we saw at the very beginning: the interaction between the issuer of the credential — what I called the trusted third party when I was explaining SIOP — and the wallet of the holder, the user herself. Conceptually, this complements the presentation model I used previously; now we're going to talk about issuance. The issuance could be initiated by the issuer or by the wallet. Let's imagine the user is requesting the credential: there is consent, and user authentication and identification happen at this stage. Alice receives an access token, and potentially a refresh token to refresh the credential in the future or to get multiple types of credentials. And once that happens, credential issuance happens via
the API — an OAuth-protected API. Okay, sorry. It might be a bit easier to understand if I go to the slide first. At the center, what we defined is this credential endpoint — a new API; you know how we have the userinfo endpoint? So there would be a credential endpoint, which is capable of issuing these credentials, be it W3C VC or ISO mDL or whatnot. Then the question becomes: how do I authenticate and identify the user, and get authorization from the user, to get access to that credential endpoint? We enabled two mechanisms, based on the existing use cases and implementations. One is what you're really familiar with, where the authorization and identification happen at the authorization endpoint: the usual flow, where the user goes to the authorization endpoint and gives consent, you get a code, exchange it for the access token, and use the access token to get the credential.
But the unique one is the pre-authorized code flow. Imagine you have an admin process where you can't issue the credential right away, so you need to gather the user's information beforehand: you ask the user to upload certain documentation, you do a liveness check, whatnot, and once you have the information, you give the user a pre-authorized code that the user can later take to the token endpoint, saying: I already gave authorization, you've already identified me, please give me a credential. So conceptually, imagine there's a credential endpoint, and user identification, authorization, etc. can happen right beforehand, or in a different session. And one of the really important things with this issuance — why we had to define a new endpoint — is that you need to bind the credential to the user. Because the user will later present that credential without the RP talking to the issuer, the user needs to prove that the private key it controlled during issuance is still under its control during presentation.
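The two ways of obtaining the access token might be sketched as token-endpoint requests like these. The grant-type URN and parameter names follow the OpenID for Verifiable Credential Issuance draft; the endpoint URL and code values are placeholders, not real values.

```python
from urllib.parse import urlencode

# 1) Plain authorization code flow: consent happened at the
#    authorization endpoint; the wallet exchanges the code as usual.
code_flow_request = {
    "grant_type": "authorization_code",
    "code": "<authorization-code>",              # placeholder
    "redirect_uri": "https://wallet.example/cb",  # made-up URL
}

# 2) Pre-authorized code flow: identification and consent happened out
#    of band (document upload, liveness check, ...), so the issuer
#    handed the user a pre-authorized code, e.g. inside a QR code.
pre_authorized_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:pre-authorized_code",
    "pre-authorized_code": "<code-from-issuer>",  # placeholder
}

# Either request is POSTed form-encoded to the token endpoint and, if
# accepted, yields the access token used at the credential endpoint.
body = urlencode(pre_authorized_request)
print("pre-authorized_code" in body)  # True
```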
So you can have this assurance that you're talking to the same person, or the same device, during presentation as during issuance. And just really quickly — we talk about protocols, but from a UX perspective, it could look like this: you can use different means — SMS, QR codes, emails — to communicate to the user: hey, you're eligible for this credential; do you want to receive it from issuer.com? On the phone, the user proceeds, and then you ask the user for identification and authentication. It doesn't have to be username and password — it could be other mechanisms; a password is probably the least recommended — but it's up to the issuer's discretion. You also get user consent, which is very important. There are complicated things happening in the backend, but from the user's perspective: you got a credential, it's on your phone, and you can keep using it at different places without the issuer knowing where and how you're using it — in a really privacy-preserving manner.
All right, let me quickly run you through the example. It's not that exciting, because it's just OAuth, right? It's really boring. First, let's assume we use the code flow. The way you request authorization for a credential issuance is either via a scope value which includes the type, or via a more complex structure that uses authorization details to also determine the format and other elements. Then you use the access token in the authorization header — it's OAuth again — to request the credential at the credential endpoint. So you determine the type again, the format, and potentially the identifier you want to bind the credential to, and you also include the proof of possession for the private key associated with it. And in response — wow — you get a credential. That's pretty straightforward and really boring, because it's just synchronous: no asynchronous handling, no state management required. You get a credential, and it might look like this.
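The credential-endpoint exchange just described might be sketched like this. The parameter names follow the issuance draft, but the type URI, format label, and token strings are illustrative placeholders.

```python
import json

# Request to the credential endpoint, authorized with the access token
# obtained at the token endpoint -- it's just OAuth.
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

credential_request = {
    "type": "https://example.org/IDCard",  # made-up credential type
    "format": "jwt_vc",                    # or ldp_vc, mso_mdoc, ...
    "proof": {
        # proof of possession of the holder's private key, so the
        # issuer can bind the credential to that key
        "proof_type": "jwt",
        "jwt": "<proof-of-possession-jwt>",  # placeholder
    },
}

# Synchronous response: no state management, just the credential,
# delivered as an encoded string.
credential_response = {
    "format": "jwt_vc",
    "credential": "<encoded-credential-string>",  # placeholder
}

assert credential_response["format"] == credential_request["format"]
print(json.dumps(credential_request, indent=2))
```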
So it's a decoded string. Yeah. This is a draft in the OpenID Foundation's Connect working group right now. We are really ambitious: we want to take it to the first Implementer's Draft by the end of this year, again because one of the ISO specifications relies on it. We have ongoing implementations — the usual suspects, as on the presentation side. And to call out selective disclosure: it seems to be really important, especially in some of the European use cases, and there's work starting to define how we can use selective disclosure with JWTs. So if you're intrigued, come talk to us. I also wanted to call out Alen Horvat, a recipient of the Kim Cameron award, who has been instrumental in this work. So thanks a lot, Alen, and thanks for being here. Yeah. Gail?
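The selective-disclosure-with-JWTs idea mentioned here can be illustrated with a toy sketch: the issuer signs only salted hashes of claims, and the holder later reveals just the claims it chooses. Real SD-JWT encoding details differ; this only shows the hash-and-disclose principle, with all names and values made up.

```python
import base64
import hashlib
import json
import secrets

def b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JOSE-style encodings use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(name: str, value) -> tuple:
    """Return (disclosure, digest). Only the digest goes into the signed JWT;
    the salt makes the digest unguessable for low-entropy claim values."""
    disclosure = b64(json.dumps([b64(secrets.token_bytes(16)), name, value]).encode())
    digest = b64(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

claims = {"given_name": "Erika", "birthdate": "1980-01-01"}
disclosures = {n: make_disclosure(n, v) for n, v in claims.items()}
signed_digests = sorted(d for _, d in disclosures.values())  # inside the issuer-signed JWT

# Holder presents only the birthdate; the verifier re-hashes and matches.
revealed = disclosures["birthdate"][0]
assert b64(hashlib.sha256(revealed.encode()).digest()) in signed_digests
```

The issuer never learns which claims were later revealed, and the verifier learns nothing about the undisclosed ones except their digests.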
There's a question
Online. Okay. There's a question online. That's what we have. Yes.
So one is: how would this work in the offline scenario?
Please define offline first.
All right, Amir's question: how is the trust negotiation between verifier and issuer managed in the offline scenario? And in the case of checking the credential status, is it possible to cache the status list on the RP?
So first of all, everything that is related to revocation, status lists and so on is tied to the credential format and the framework you're working in. What we're doing here is nothing more than a transport protocol for presenting the stuff. For example, in the SSI case we did a prototype where the verifier then used the Indy library to, in the end, verify that the presentation proof was integral and authentic, and also that the credential wasn't revoked. So this is not part of the protocol; it is, in the end, subject to the framework, the credential format, and the revocation scheme you are using. And with respect to offline: if the verifier and the wallet reside on the same device, no problem, because all the communication is going on within the same device. We are considering going across the air gap using NFC; right now we have a QR-code-based mechanism to go across the air gap, which relies on HTTPS communication. That's why I asked you to please specify what offline means, because offline in the end might mean that only one of the parties is offline. So right now, I think it would be possible to apply the protocol in situations where the wallet — meaning the device where the wallet is residing — is offline, as long as the verifier has online connectivity.
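On the caching part of the question, a toy sketch of what caching a revocation status list at the relying party could look like, in the spirit of status-list credentials: the issuer publishes one compressed bitstring, the RP caches it for a TTL, and each check reads a single bit per credential index. Class and parameter names, the TTL policy, and the encoding are all illustrative, not from any specification.

```python
import base64
import time
import zlib

class CachedStatusList:
    """RP-side cache of an issuer-published revocation bitstring."""

    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch          # callable returning the published list
        self._ttl = ttl_seconds
        self._bits = b""
        self._expires = 0.0

    def _refresh(self):
        # Re-fetch only when the cached copy has expired.
        if time.monotonic() >= self._expires:
            self._bits = zlib.decompress(base64.b64decode(self._fetch()))
            self._expires = time.monotonic() + self._ttl

    def is_revoked(self, index: int) -> bool:
        self._refresh()
        byte, bit = divmod(index, 8)
        return bool(self._bits[byte] >> bit & 1)

# Issuer side: credential #5 revoked in a 16-credential list.
raw = bytearray(2)
raw[0] |= 1 << 5
published = base64.b64encode(zlib.compress(bytes(raw))).decode()

rp = CachedStatusList(lambda: published)
```

The trade-off the TTL expresses is exactly the one raised in the question: a cached list lets the RP check status while the issuer is unreachable, at the cost of revocations being visible only after the next refresh.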
So thank you for that one. And I'll also ask a question of my own: what documentation do you have to help explain this to people and let them reflect? Because it's a lot to take in this morning. So what are you doing — you mentioned the Implementer's Draft — is there more you're doing to make this accessible to the community? Thanks.
Gail, yes. Apart from the three specifications, we have a white paper coming out — really informative. It covers how this OpenID for Verifiable Credentials work is positioned within the whole ecosystem, because I know most of you are really confused; there are so many choices of what to do. So we hope that brings more clarity into the business value, the use cases, the positioning. We gave you the summary, but if you need a more detailed, structured answer, please go to the white paper. How is it going to be published? Do we —
It will be published online on the OpenID Foundation website. Awesome. Okay. And through our blog.
So Torsten and I have a presentation on Thursday, and the white paper will go live after our presentation. So check it out. It'll be a first editor's draft — so not perfect — but we really wanted to get the first draft out, because we worked really hard on it, and we hope that it still serves as a clarification.
Oh yes.
Speaker 10 00:44:33 Yeah. Hello, Minhas from If Insurance. So I have one note from the real world here — a comment, maybe. We as an insurance company operate in seven countries in the Nordics and Baltics, and we're actually looking to move towards that kind of approach of having these credentials locally on the user's devices. But there is one practical obstacle we are facing. When we look at the market of those issuers of credentials, let's say, or IdPs, their business models at the moment do not support that — there's no incentive for them to start that type of cooperation. Not for every one of them, but at least there's no very good consistency in that. Since they normally charge per transaction, this goes against their business models, and that's a really huge obstacle, I would say, because we'd like to have a unified infrastructure across all the countries we operate in, and this makes it close to impossible — or we also need to look for some alternative credential issuers. Yeah, that's the comment. So I really appreciate the work, and it looks really cool from the technical and many other aspects, but this part is really weak at the moment. Okay.
So if we get it to work from a business perspective, then you're all in — yeah, okay. So thank you for the comment, for two reasons. First of all, it allows us to point out another characteristic of what we have done. I'm the CTO of a commercial open banking framework, so I'm fully aligned with you: our banks are only interested in that kind of stuff if they can somehow continue their current business model, or can adopt a new business model that allows them, in the end, to monetize what they do. That's key to us. And what I've learned in the last couple of years is that the rails OpenID Connect provides — user authentication, client authentication, all that stuff — help you to really determine who is going to do what, and afterwards to send them an invoice. And OpenID for Verifiable Credentials sits on top of those rails.
And that's also one of the reasons why we introduced the code flow on the presentation side as well: it allows us, first of all, to have wallets with backend components and to have secure authentication between all the parties. And if you have secure authentication between all the parties, then you can again somehow count how many claims and credentials have been presented to whom, and can charge for it. The problem here is a conceptual, or philosophical, one, because someone needs to keep track of what's going on if you want to have a verifier-pays model. And there are different approaches in the market: there are some people who believe that should be solved on the blockchain, and other people, including myself, believe we can do it much more simply, in a very pragmatic fashion, if the wallet is somehow involved in the invoicing process in the end. And that's the direction we are heading right now. So I'm happy to talk about that topic in even more detail during the conference. And the second reason why I thank you is that it's a cool segue to my next presentation.
Well, if I may insert in between yours: Microsoft is going to general availability with a product based on those protocols, and we believe it's the next multibillion-dollar business. So find Ankur Patel — he's our business person — and ask him the same question. I think you'll hear a really similar answer to the one Torsten just gave, but they are definitely exploring the mechanism. So let's create this ecosystem together. Thank you.
That's mine.
Great. So thank you. Obviously, please direct other questions to Torsten and Kristina offline. And if you are online, feel free to post your questions online and they'll try to answer them. So without further ado, back to Dr. Torsten Lodderstedt to talk about GAIN and the progress of the GAIN community proof of concept.
Right, thank you very much. I think I have to serve the coffee afterwards — it's pretty early. All right, so let's zoom out a bit. This next presentation is less about technology; it's about building a community, and building a huge global ecosystem to solve digital identity problems on a global basis. This is about the Global Assured Identity Network, and I've got the pleasure to co-chair a community group at the OpenID Foundation that does the technical POC for the GAIN initiative. Well, where do we come from? In the beginning, the internet was really a small and trusted place — because it was small, and it was closed. And when it became successful, it grew bigger, it opened up, and in the end trust was lost. And we all know what the consequences are, right?
Criminals utilize that lack of identification and authentication, and there is identity theft, fraud, and so on. And the idea that we came up with in the Global Assured Identity Network is not to build a new technology or a new kind of thing, but to utilize what's already there — because it's not the case that the whole internet is an unsecured place. There are solutions out there that can be used to securely identify users, to authenticate them, to authorize them. Just to list some of them: there is Verified.Me in Canada, there's itsme in Belgium, there's BankID in Sweden, there is Aadhaar in India, and so on. There are a couple of different initiatives and projects that can be utilized, but it's a rather fractured space. And the idea of GAIN is to bridge between those islands — to build a network of networks — to actually solve the digital identity problem.
Well, the first step was a white paper, and the white paper, as it happened, was published at this conference — well, it was actually in Munich, because the conference moved to Berlin — last year in September. More than 150 authors sat together and wrote a white paper about this idea in a no-logo, pro-bono, open-source approach. And after the tremendous feedback at that conference and from the larger community, five non-profits picked it up, sat together, signed an MOU, and are now driving the GAIN vision forward. So GAIN is not a new organization — let me clearly spell that out. It's not a new organization. It's, in the end, an effort where interested parties, and especially those five non-profits, cooperate to move it forward.
Let me start with OIX, the Open Identity Exchange — and I think we will hear at least one keynote, and I hope a couple of other presentations, at the EIC conference. OIX is working on the rules, on the governance, because such a global network needs governance, right? And there is also another challenge, because digital identity is very specific to jurisdiction. Typically, if you go for regulated use cases, there's a tight link to the law in a certain country or a certain jurisdiction, like the European Union. And that also means that if you want to go cross-boundary, you have to cope with several of those. Yeah, you're nodding, right? You are international. So you have to cope with the incompatibility between the different trust frameworks. We did the exercise ourselves by mapping the anti-money-laundering law, for example, to eIDAS, so it's not an easy task. But luckily, OIX will take on that work and will be working on it.
At least they will work on ways to compare trust frameworks, and I hope they will come up with clever ideas for how to transform between those trust frameworks going forward. Then we've got the Institute of International Finance, which ensures that the international financial services community is involved in the picture, because among other potential identity sources, financial institutions are seen as a very good source of identity — beside, for example, telecommunication providers, governments, and so on. And there is CSC, which stands for Cloud Signature Consortium, and the Cloud Signature Consortium, as the name might suggest, is working on APIs for remote signature creation, which is also a very important use case in that context. Then we've got GLEIF, which provides the global Legal Entity Identifier framework. And we've got the OpenID Foundation, and the OpenID Foundation was so kind as to offer to host the community group.
This is a new kind of group — it has never been done before. It's not a standardization group of the sort we know from the OpenID Foundation; it's a community group that is working on a technical POC for the GAIN initiative. I'm going to dig into the details of that community group in a couple of seconds. So the principles that GAIN is based on are pretty simple. We do not believe in a single solution for digital identity, and we do not want to build new stuff or new technologies. That's why we think the whole network should be based on global interoperability, so the shortest path between the provider of identity and the recipient of identity is just one connection, right? Every one of those existing networks should be able to keep its autonomy, but they shall be able to enlarge their reach — on the identity information provider side, but also on the relying party side — by leveraging interoperability. We have seen in open banking that this way of managing networks works: there are a couple of prerequisites — precise specifications, automatic conformance testing, all that stuff.
So we somehow know how to make it work, from a technical perspective at least. And "technology agnostic" — well, that means we are open, right? We are not biased towards OpenID, for example; other protocols are being discussed right now as well, because in the end the overall goal is to give relying parties as much reach as possible in terms of assured identities, and to give users the ability to use their digital identity wherever they want. That's the primary goal, right? And everything else is secondary. We use open standards because we believe that open standards are more mature, more secure, easier to use, and better supported — and, well, we want to do it at internet scale. And the principle we are applying is: build on what's been built. And that, I think, says everything. Right, let's dig into the GAIN POC community group.
The POC community group was established in March this year. First of all, we defined our goal — what do we want to achieve? It's on the left-hand side. In the end, what we are doing is evaluating the technical concept of the global assured identity network — and, in the end, also the proposition. So what does it really take? What assured claims do relying parties need today for a couple of use cases, and can we provide them on a global basis? Can we really build that network? And we say we are done if a couple of parties out there have all the information, have the blueprint, and say: well, let's do it, let's bring it into production, let's build a business based on that blueprint. And on the more tactical side, we have defined five hypotheses that we want to investigate.
First: GAIN can be built on top of existing networks and solutions. That's quite obvious, because that was our initial idea. Second: we want to show that identity information providers from different jurisdictions can interoperate in GAIN. This is also quite obvious. Third: we want to support identity information providers utilizing different approaches. As I said, we are pragmatic; we are not biased toward a certain principle. It might be federated — and I mean, the world today, at least from a practical standpoint, runs on federated identity systems — but there is this new promising thing called SSI, or wallet-based identity, and even eIDAS 2 is going in that direction. So it's something we are incorporating. We want to support these concepts, and also other concepts as they may arise.
The fourth point might not be that obvious. We are not a hundred percent focused on identity, for a simple reason: we have learned from practical experience that identity is a very important enabler; however, in most transactions you need more than that. You need, for example, a payment, right? You want to identify yourself and pay for a SIM card, or you want to sign electronically, because there's a contract you're going to close. So we want to look into whether, using that network, we can allow service providers to provide more services than just identity. How can we combine them? Well, from a technical perspective that's easy: if you use OAuth for authorization, you can leverage any API you want to leverage. But it might be more complicated from a network management perspective. And I think one of the most important topics, from a relying party's perspective: let's assume we are successful, and there are 10,000 identity providers connected to the network.
We do not want the relying party to need to sign a contract with each and every one of those identity information providers, get 10,000 invoices, and also have to manage 10,000 client credentials for those different identity information providers. So what we are seeing is the vision of a single contract, a single credential, a single technical integration. That's what we want to achieve. The whole network shall work as a single virtual service provider. Let's see how that will work, right? Timeline-wise, we already started to work on the POC in parallel to the publication of the white paper, and we made good progress up until the end of last year, when the first group of IDPs was successfully tested for conformance with the first compatibility profile we had defined. But that was more an informal group.
It was more an informal group, and we learned — since we are up to building a sandbox; we are more or less building a simulation of a global network in a sandbox — that we need more than just an informal group. We need an agreement, because basically all service providers typically require a developer that wants to connect to a sandbox to somehow accept terms of service, right? And this, on a smaller scale, is more or less what we will see on a global scale. That's why especially the OpenID Foundation put significant effort into defining a so-called participation agreement. We now have that, and anyone who wants to attend — and either consume data or services from the test network, or provide services into the test network — just signs that agreement. It's pretty small, and then they can work with us.
That was really a heavy lift, legally — and thanks for that. Then we had to agree on the goals and hypotheses for the test, and now we can come back and start working. Right now we are focusing on two areas: integrating relying parties with the IDPs that we have in the network, and designing — or selecting — the technologies that we will use going forward to discover IIPs, to establish trust between relying parties and IIPs, and so on. And our goal is, next quarter or towards the end of the year, to demonstrate end-to-end scenarios in the sandbox environment. How are we doing on the hypotheses? Hypotheses one and two were that we can build GAIN on top of existing networks and solutions and go across borders. Well, we did that. On the right-hand side, you see we have already successfully connected three IIPs to the test network, including BankID Sweden, which is, on a global basis, by far one of the most successful identity solutions.
We've also integrated Dizme, which is an SSI-based proposition by InfoCert, the European Union's largest certification authority. And we've integrated the German banks via the yes network. There are other participants planning to provide their IIPs into the network — just as representatives here: MojeID from the Czech Republic and SecureKey from Canada. The integration right now is based on a profile that we have defined, which is based on OpenID Connect for Identity Assurance — I think there will be a presentation about identity assurance later today, so just keep in mind, we use that. We have somehow trimmed it down to a very small feature set that we believe any of the IIPs can provide. And we have already successfully integrated relying parties with the different IIP approaches based on that.
Well, those three different IIPs have completely different architectures, and they also have different concepts. BankID is basically a central service provided for all customers of all the banks that participate in the service — so it's really centralized. The German banks are integrated via the yes ecosystem, which is a large-scale federation — a federation of 1,200 different IDPs, each of them run by a particular financial institution. So this is pretty different. And we've got Dizme, and Dizme, as I said, is an SSI-based solution; it's based on the Hyperledger Indy and Aries stack. They also implemented the OpenID Connect for Identity Assurance profile, which requires, in that case, a transformation: they take the verifiable presentation — it's an AnonCreds proof, basically — and they transform it into an OpenID Connect ID token and provide that to the relying party.
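The transformation described here — mapping a wallet-side proof into the `verified_claims` structure of OpenID Connect for Identity Assurance so the relying party sees a plain ID token payload — could be sketched like this. The issuer URL, trust framework label, and attribute names are all hypothetical; only the `verified_claims` / `verification` / `claims` nesting reflects the Identity Assurance specification.

```python
def to_verified_claims(proved_attributes: dict, trust_framework: str) -> dict:
    """Map attributes extracted from an already-verified SSI proof into an
    ID-token payload shaped per OpenID Connect for Identity Assurance."""
    return {
        "iss": "https://iip.example.com",            # the transforming IIP
        "sub": proved_attributes["subject_id"],
        "verified_claims": {
            # How the claims were verified (framework label illustrative).
            "verification": {"trust_framework": trust_framework},
            # The verified end-user claims themselves.
            "claims": {k: v for k, v in proved_attributes.items()
                       if k != "subject_id"},
        },
    }

token_payload = to_verified_claims(
    {"subject_id": "did:example:123",
     "given_name": "Erika",
     "family_name": "Mustermann"},
    trust_framework="eidas",
)
```

As the talk notes next, the simplicity for the RP comes at a price: once the proof is flattened into an ID token signed by the IIP, the RP trusts the IIP's verification rather than checking the cryptographic holder binding itself.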
And what we've learned is that this is pretty simple for the relying party — just keep that in mind — but it changes the trust model, because the relying party can no longer verify the cryptographic holder binding. That is what you lose. What you gain is that it's simple to use and compatible with the rest of the world. As you will see going forward, we will also investigate really leveraging SSI concepts and cryptographic holder binding by way of other protocols — the protocol you just learned about in the previous session might be one of them; DIDComm might be another. But let's take a look. What we are discussing right now is, as I said, being SSI-specific, which means there will be a subset of service providers in the ecosystem that provide other interfaces. And we believe that even though we are aiming for global interoperability, global interoperability typically comes with limitations, because you have just a common denominator that you can operate on.
So it might be beneficial for some use cases to trim down — to reduce — the number of parties you're going to communicate with, but use more specialized interfaces or a broader set of claims. And that has a very deep impact on the way we will manage discovery and metadata, because relying parties will need to determine what capabilities an IIP has in the ecosystem and select based on those. We are also looking into account information, and we already have a relying party that does electronic signing with data provided by our service providers. Well, we are also working on the way we manage trust in the network, because we are assuming there will be tens of thousands, or even hundreds of thousands, of relying parties and identity information providers. So to start with, they need to find each other. They need to establish trust in each other.
They want to authenticate each other, they want to communicate, and they want to be sure they're getting paid for the service, for example. All of that shall be made possible by what we call the trust management in the network. And right now we are doing a survey: we are looking into different technologies that are used in the identity space and in open banking, and we hope we can just pick one and use it. But early experience suggests there might be some work to do to really make it work in an easy-to-use but scalable fashion — stay tuned, we will report. And if you want to join, please talk to us. The community group has a regular meeting every Thursday. So if you want to contribute, or if you just want to listen in, please go to openid.net and look for the GAIN POC community group. You can see all the information there: when we have our meetings and what it takes to attend.
We've got a couple of use cases that we're looking into — I won't go into the detail in the interest of time. To sum up: we are on our way towards evaluating the technical concept. Right now we are focusing on management of the ecosystem participants; next steps might be to look into SSI protocols. We are integrating relying parties right now, so if you're interested in utilizing data from global IIPs, please talk to us, and let's discuss how we can incorporate you. And we are aligning nicely with the work that GLEIF and the Open Identity Exchange are doing, because in the end we're going to provide the technical rails to make it happen. So whatever Nick and his group come up with, we somehow need to find ways to support it, right? And yeah, I'm really looking forward to where this journey will end up. And with that, I'll end my presentation. Thank you.
Thank you. Thank you, Torsten. We are a little bit over on time, so unfortunately we won't take questions at the moment. But for those of you in the room, please do find Torsten, myself, and the many others presenting today who can talk fluently about GAIN. Just to hammer home the point: this is a huge idea, right? It's really bold and a really big idea — the American in me can say that; I don't know about the German in you, Torsten — but it's really incredible from my standpoint how you can allow for the sovereignty of individual governments to control their own identity ecosystems, or an individual network to control its ecosystem, or a bank to create its own service, and they can still have global interoperability of that solution. It's about getting the right balance between global interoperability and the sovereignty that a country or an entity might need. And it really works — it's been proven now over the last eight months.
We'll also show demos and prototypes here at EIC, and there will be some presentations and panels on Thursday and Friday as well, if you want to know more.
Yeah, definitely. On Thursday there will be a whole track, actually, to go into more detail. Nick Mothershaw from the Open Identity Exchange and myself will be standing on a stage asking you, literally, to vote on where you see yourself. Maybe you're already an early adopter — you see the vision, you see the potential in this. Or you're sitting on the sidelines — you're questioning whether the commercial model can work, where the production implementation is that you can deploy, but you're interested to see how it goes. Or you're a naysayer — you don't think it's for you, you think it's impossible, you know, somebody's taking over the world. And in all of those camps, we hope to bring everybody along on the journey, because we fundamentally believe this is the right thing for people around the world, to structurally solve some of the problems with identity that we're facing.
And if it's not this, please tell us what the right answer is. Because if we as a community don't come together and solve the problems we're seeing on the internet — how people assert their identity around the world, wherever they want to, which is the vision of the OpenID Foundation that Nat talked about — then what? This is one of the ways to actually answer that question, and we've yet to see a better way to do it. So it really takes us all coming together as a community to solve these problems. Thank you, Torsten. Thank you.
All right. Open banking goes global. We now have Dima coming up to the front. Thank you, Dima — talking about open banking and the white paper he just completed a couple of weeks ago: the movement from open banking to open data. Take it away.
Speaker 11 01:11:50 Thank you. Can you hear me? Is it working?
Speaker 11 01:11:58 Working? Yes. Okay, awesome. The purpose of this session is to explore some of the ideas, some of the thinking, that we have around the next phases of open banking. We have heard about open banking for many years now — I remember, I think it was 2019, I was in the audience at EIC listening to a talk about open banking in the UK. And there has been a lot of development over the last few years, and the question that we're trying to ask here is: what's next? The OpenID Foundation and the FAPI working group have been involved in all of those initiatives very closely, and we have a lot of experience to share. This is a non-technical session, unlike all the other sessions, and its purpose is to request feedback. The white paper is being written as we speak, and any feedback is appreciated from everyone. And by the way, it feels unusual to speak to people in person and present to them too.
Speaker 11 01:13:12 So global open data evolution is closely linked to API evolution. Over the years, a lot of institutions across the globe adopted APIs. It started off with private API ecosystems, where many digitally savvy institutions decided to embark on API journeys themselves, exploring external partner integrations and platform opportunities. They were usually large banks, telecommunication providers, or neobanks. Then came along what I call classic open banking ecosystems: regulators around the world, and some industry bodies, recognized that there is a benefit in providing API access using the same access framework across multiple participants, across multiple data providers. This drove the adoption of open banking; it started off with the UK, PSD2, Australia, Brazil, and many others. There was an OpenID Foundation white paper published recently on open data and the FAPI security profile, which explores this a bit further — so if you're interested, look it up.
Speaker 11 01:14:43 It started off with the countries that I mentioned, but now there are active discussions and active projects implementing open banking across many jurisdictions, across the world. Then come cross-industry ecosystems: many of the open banking ecosystems that have gone live in the last few years have started exploring where else this can be taken. And, to be fair, it's natural for consumers or fintech developers to ask the question: if I can access open banking data with the user's consent, why can't I access pensions data, telecommunications data, energy data using the same access framework? Australia is going live with the energy market sometime this year — I think September or October — then moving on to telecommunications next year. Brazil is also extending its open banking ecosystem to open insurance. So the question that we're trying to ask here is: what's next? Where else can open banking develop?
Speaker 11 01:16:04 A couple of observations that we'd like to share here. Geographically, if you look at this map, it's clear that open banking is a global phenomenon — and this is not even an up-to-date map. It's pretty easy for anyone to predict that over the next few years there won't be many gray areas left on this map: open banking in some shape or form will exist in most parts of the world within five to ten years. And there are a lot of conversations happening at the same time. But with all this global activity, one observation we are making is that all of these ecosystems develop independently, completely locally. They expand their features, they expand into new sectors, but they are not integrating with each other. They're all very local.
Speaker 11 01:17:11 Before I go further, I'll just reset: what are the typical ecosystem building blocks? Usually you need three layers to implement any open banking ecosystem: one is the business layer, the second is the legal and regulatory layer, and the third is the technical framework. This slide focuses on the technical framework. Going from the bottom: technical trust management establishes who can trust whom in the ecosystem. The security profile guarantees that we can share customer data securely and with appropriate consent. The identity building block, in the bottom right corner, signifies the need to authenticate the user, capture consent properly, and transfer identity information where required. The data model describes the business meaning of the data, which should be well understood by all participants in order to make it useful. And the API specifications are the delivery envelope. This is just one way to slice it, and there have been different levels of standardization across the years. If you look at the three phases of open banking ecosystems that I described before, you can draw some observations about how they were delivered. OpenID Connect tends to carry the identity information, and for the API security profile we see the domination of FAPI, which is strong in general. Those two layers are fairly standardized and have strong vendor support, which is not surprising: these are the layers related to the security parts of the framework. Trust management and API specifications, on the other hand, are done differently everywhere.
Speaker 11 01:19:22 API specifications, including the data model. Clearly there are a lot of opportunities to standardize these parts of the framework, not necessarily just for global interoperability, but also for the sake of local participants and local ecosystems not reinventing the wheel from scratch.
Speaker 11 01:19:50 And recent, and not so recent, market activity has shown us that there is definitely interest from global relying parties. The credit card schemes, both Visa and Mastercard, have over the years shown keen interest in, and made significant investment in, open banking solutions, and have their own teams looking after open banking. Mastercard partnered with Token and acquired Finicity, among others; Visa acquired Tink after exploring the Plaid acquisition. With the lack of local interoperability, and with the lack of a standard that covers both technical and legal aspects, the only option these credit card schemes have is to buy intermediaries, or work with intermediaries, that help insulate them from the market differences.
Speaker 11 01:20:46 And most of their acquired solutions tend to operate in one region, either the EU, or the US and Canada. Technical platforms are fairly active in this space too. Apple recently acquired Credit Kudos and its credit decisioning capabilities; slowly but consistently, Apple is expanding its reach in financial services, and its use case is definitely global. The sharing economy is another sector of the market that has grown significantly over the last seven to eight years. It's dominated by global companies that operate in different parts of the world: Airbnb, Uber, TaskRabbit, and so on. Currently those companies have to implement their identity and financial services solutions differently in every market. And there are many more examples. Imagine if there was a global relying party that could easily access your data across multiple jurisdictions. Maybe it's a dream, maybe it's not achievable, but this is what we're trying to explore. So maybe the next phase of open banking evolution is a global interoperable framework for cross-jurisdiction ecosystems.
Speaker 11 01:22:17 What are the potential use cases that could be explored globally? They're not necessarily different from the classic open banking use cases, but they're still important, and they have a slightly different flavor, a slightly different angle, if you look from a global perspective. Account information is a classic open banking use case, implemented probably everywhere. Payment initiation has been implemented in some jurisdictions, not everywhere; in some areas it's prioritized, in some areas there is no plan to do so. Payee confirmation is an interesting one, especially in a global context. It's hard enough to prove where the money goes in a local market; it's even harder to do it in a global context. I think all of us have been in the position where we need to transfer money across the globe and struggle to verify the details of where the money is going; it's a complicated process. So that's one of the use cases. Identity is another use case, which builds on Torsten's presentation and the GAIN POC; if you're interested to understand more, definitely engage with GAIN. And ultimately, the most exciting use case of all, and the most unpredictable, is the combination of those use cases, especially a global combination.
Speaker 11 01:23:49 We might discover things that we haven't even predicted were possible. For example, a car rental company in Norway could prove my identity to rent me a car; it could receive my credentials, including my driver's license from Australia, so it knows I can legally drive; it could give me a range of payment options, including paying with my bank upfront, during the rental, or after the contract; and if a refund is issued, the system could automatically confirm that the money has arrived back in my account. These are the types of combination use cases.
Speaker 11 01:24:34 So how can this be achieved? There are some principles that we could explore. One is reuse: we shouldn't be reinventing the wheel, setting up new frameworks and new foundations; we should be reusing what's possible, and there's definitely a lot. There are some areas that don't have any standardization, and this is where we need to do some work, but others are fairly standardized. The problem of global trust management is an interesting one that has been explored in the GAIN POC as well: how can a relying party from Brazil trust a UK bank, or how would an Australian bank know that the Norwegian car rental company is indeed part of the network and can receive my data with my consent? With regard to the API specifications, potentially a new lightweight interface could deliver simplified and limited functionality that is common across different jurisdictions. Maybe we could apply the 80/20 rule, where we focus on the 20% of the functionality that is required by global relying parties. And there is a significant benefit in reusing existing payment rails. I don't think anyone wants to invent new payment rails, and I don't think anyone wants to redo the AML/CTF compliance and fraud detection that a lot of institutions have already implemented.
Speaker 11 01:26:04 And there are some other areas that we can explore. So what we are asking for here is feedback. Do you think there is a use case for global interoperability? What use cases have we missed? And if you have any ideas on how, please reach out to me or the OpenID Foundation to talk.
Any questions?
Any questions from the room? While you think about your questions, I'll check online as well. Just a little bit of a transition statement in the meantime: there are two different white papers that the Foundation is working on related to this topic. The first one was released a couple of weeks ago; it was Dave Tonge who published that paper, talking about the basics of open banking and the movement to open data. It's a really great primer if your market is thinking about going down this journey, or if you're a regulator thinking about going through this journey: the basics of what you need to know about the underpinnings of the FAPI standard, whether to pick FAPI or create your own, and leaning into the concept of interoperability over time. It really goes through the basic information, and it's live on the OpenID Foundation website now.
And what Dima is leading for the organization is this next question: how might one think about interoperability across these open banking networks once they've been developed? It's really in the formative stages. Dima and a handful of people, including Daniel Goldscheider in the room, have been trying to prompt the community to think this through, to think about what good could look like in bringing connectivity across those different networks, regardless of where they started, whether that was the Berlin Group, or Singapore, or Aadhaar, or FDX. You might have some more proprietary APIs, and that's okay; it's okay to have those proprietary or domestic implementations. They can also interoperate globally with FAPI-based protocols, and all of those twenty-plus markets that are working on open data and open banking implementations can start to move together, and those are the needs that we're starting to see emerge. So how do we do it as a community? How do we pull that together into something that's viable? With that, any additional questions from the room? All right, thank you very much, Dima. Please find Dima if you're in the room; we'd really love your feedback on how we take that conversation forward.
All right. Are you convinced we're trying to change the world yet? Only two or three presentations in? Okay. So for the next one, we continue to try to change the world, now with Shared Signals and Events, and I'm happy to welcome to the stage Tim Cappalli. Are you going to speak to this one, or... yeah, just me on this one. Just you on this one. Thank you, Tim.
Speaker 13 01:29:02 All right. Hi everyone. Thanks, Gail. My name is Tim Cappalli. I'm a standards architect at Microsoft, and I'm also one of the co-chairs, with Atul in the room, of the Shared Signals and Events working group. This is probably the only session where we're actually going to ask a question and hopefully have you come back and interact with us after the session to think about this problem. We have this thing called SSE, and we'll talk a little bit about what it actually is, because I see a lot of new faces in the room. But there's really no mechanism that we're aware of in the industry to do an event-driven token revocation type framework. Tokens generally don't get revoked, do they? They're either short-lived, or you try them and see if they work. There is one token revocation spec in the IETF, but it's actually the opposite direction, right?
Speaker 13 01:29:45 The consumer of the token actually tells the issuer that they no longer want to use it, and that's a very, very specific use case. So we want to ask the question: should we pursue this? Should we add an event to this framework that allows, let's say, a token issuer to tell everyone that a token is no longer valid? We should start with what SSE is, right? It's somewhat new; you may have heard it referred to as the RISC framework if you've been following OpenID over the years. And I had to take the opportunity to use the new animated emojis. They're really cool. The marketing team told me not to, but I don't care. So what does a world without SSE look like? With SSE, you can think of it, in its most simple form, as a secure webhook framework for events, right?
Speaker 13 01:30:29 Webhooks have been synonymous with this in the industry for years, but they're not really standardized; it's just "post this JSON object to this endpoint," and that's pretty much the end of the webhook discussion. And if you look at this — this is not meant as any offense to any of these providers; we're up there too — creating subscriptions and managing webhooks is more or less proprietary for each platform. You have to understand their syntax, you have to understand their API surface. So in the context of setting up security signal sharing with AWS, Microsoft, Okta, and SailPoint, you would have to have four different sets of functions in your code to actually handle that. And that's what we want to try to fix. And so if we look at... oh, I don't know where this came from.
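For context, the "secure webhook" payloads that SSE standardizes are Security Event Tokens (SETs, RFC 8417), which are signed JWTs. Below is a minimal Python sketch of reading one, with the important caveat that a real receiver must verify the JWS signature with a proper JWT library first; this illustration skips that step entirely.

```python
import base64
import json

def decode_set_payload(set_token: str) -> dict:
    """Return the claims set of a SET (RFC 8417).

    Sketch only: a real receiver MUST verify the JWS signature before
    trusting any claim. This just base64url-decodes the payload segment
    of the compact JWT serialization for illustration.
    """
    payload_b64 = set_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Every platform-specific webhook parser in the four-providers example above would collapse into this one shared decoding (plus signature verification) path.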
Speaker 13 01:31:12 I didn't add that. I like the fireworks though; it's kind of cool. All right. So if we look at what this means, probably the quintessential use case we like to use is zero trust: you have an identity provider and a file sharing service. Oh my, there's supposed to be someone at a desk here trying to access a file, which turned into fireworks; I don't know how to interpret that. The user signs into the file sharing app, and ultimately the idea here, the new piece, is that the IdP and the SP — the SP being the file sharing service, Dropbox, OneDrive, whoever; the IdP could be anyone — actually end up subscribing to each other's events about this user and the session. And so what can happen is, again, there's a person at a desk here.
Speaker 13 01:31:51 The user tries to access a document containing PII. In this case, they're actually at their desk in the office. All good: they're in a trusted location, everything about their session looks great, and the user can continue accessing the doc. Now, the coffee shop — the typical zero trust coffee shop use case — and the firework has now moved up there. The user leaves the office and starts working from the cafe. They go to try to access the file again, or a new file in this case, and the file sharing service — because, let's say, they have an agent on the client; most file sharing services have a little sync client — actually notifies the IdP that a property of the session has changed. That may be an event that gets sent via SSE to the IdP. Now the IdP can more or less reevaluate its policy.
Speaker 13 01:32:36 Most IdPs these days are policy engines as well, and they can take that new context they've learned, reevaluate it, and ultimately return a new policy or a policy change. So in this case, maybe the user is not allowed to access the document, so when they try to access it, they're denied. The nice thing is that these two flows are completely stateless. In this case, the file sharing service is a transmitter and the IdP is a receiver, and that's completely independent of the IdP being a transmitter and the file sharing service being a receiver. So it's incredibly flexible: it doesn't have to be bidirectional; it can be one way or two ways. And we think it provides a lot of opportunity to do cool things, like maybe token revocation. So where does this fit in the OpenID stack?
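To make the transmitter/receiver exchange concrete, here is a minimal sketch of the claims set such a transmitter might emit as a Security Event Token. The event-type URI and member names are modeled loosely on the CAEP drafts and should be treated as illustrative assumptions, not normative spec text.

```python
import time

# Illustrative event-type URI modeled on the CAEP draft -- check the
# current spec text before relying on it.
SESSION_REVOKED = "https://schemas.openid.net/secevent/caep/event-type/session-revoked"

def build_session_revoked_set(issuer: str, audience: str, session_id: str) -> dict:
    """Assemble the claims set of a SET announcing that a session is no
    longer valid, as a transmitter (the file sharing service or the IdP)
    might emit it. Minimal disclosure: identifiers only, no user data."""
    return {
        "iss": issuer,
        "aud": audience,
        "iat": int(time.time()),
        "jti": "set-" + session_id,  # id of this SET, not of the session
        "events": {
            SESSION_REVOKED: {
                "subject": {"format": "opaque", "id": session_id},
            }
        },
    }
```

The resulting dict would then be signed as a JWT and POSTed to the receiver's event endpoint; either party can play the transmitter role, which is the symmetry described above.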
Speaker 13 01:33:25 You know, we need to come up with a name for this, and we've been brainstorming. We have FastFed, which I'll talk about in a little bit, which is how you set up federation. We've had SCIM for quite some time; SCIM is not an OpenID spec, it's an IETF spec family, but it's heavily used as part of this stack in OpenID. We obviously have OpenID Connect, and SAML — I didn't put SAML on here, and Heather is here, so yes, there is SAML in here as well. And then we think SSE completes this federation circle, giving a full life cycle of an identity session. And we think that's really important, because FastFed and SSE are really the new kids on the block.
Speaker 13 01:34:08 And we do think that if we can find a way, especially with developers, to build this into a stack that's easy to deploy, we have a really solid solution that meets the modern needs of session management, because it used to be just fire and forget with tokens, and you're good until the token expires. So the question we want to ask — I'll skip this slide for the sake of time — is: should we have an event for token revocation? Ultimately, if you're using a structured token format, whether it's a SAML assertion, an ID token, or just a JWT access token, there is an identifier for that token. Can we simply tell another party that this identifier — let's use a JWT, for example, so the jti — that this jti is revoked? Period. That's as simple as the event could be.
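A hedged sketch of what such an event body might look like. Nothing here is specified anywhere yet; the event-type URI and claim names are invented for illustration, which is exactly the open question being put to the audience.

```python
# Hypothetical event-type URI: no token-revocation event exists in SSE
# today -- this whole payload is the proposal under discussion.
TOKEN_REVOKED = "https://example.org/secevent/token-revoked"

def build_token_revoked_event(revoked_jti: str) -> dict:
    """The entire proposed event payload: 'this jti is revoked, period'.

    The claim names below (token_identifier_type, token_identifier) are
    invented placeholders, not from any published spec."""
    return {
        "events": {
            TOKEN_REVOKED: {
                "token_identifier_type": "jti",  # could equally be a SAML assertion ID
                "token_identifier": revoked_jti,
            }
        }
    }
```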
Speaker 13 01:34:52 And my hypothesis is that there is value in that in the industry. So really, what would that look like? We have a resource server and a security token service, a token issuer. There's obviously a token endpoint on your STS; that's an existing one. Then we have this new SSE management endpoint, which you can think of as being for subscription management, and then an SSE transmitter — in this case, the STS is the transmitter and the resource server is the receiver; those could flip depending on the use case. And then, on the resource server side, an SSE engine — that's just a generic term; none of these are official terms — that's going to handle the processing, and ultimately an endpoint that can receive the events. And so the SSE engine more or less says: hey, subscribe me to all token revocation events for which I'm the subject.
Speaker 13 01:35:39 But that could be more specific. The nice thing with SSE is that you can be as specific — we've literally made it so you could say "this user on this device with this session" — or as generic and high level as saying "any token that has me as the audience." That's the flexibility in the subject mapping that we're now actually going to expand to some other OpenID specs, with this "complex subject" thing; that's kind of what we call it. And in this case, once the resource server subscribes to the events, they just wait. They're just waiting to receive events about revocation. They may never get one, but when they do — let's say it's a JWT — they see the jti and they can decide to do whatever they choose, avoiding the round trips, avoiding hitting userinfo just to check if the token is valid.
Speaker 13 01:36:24 There are a lot of opportunities here to avoid cost — COGS, the cost of goods sold. In large-scale environments, COGS are a huge deal: when you're sending a token just to see if it's valid, at 12 billion tokens a day, it gets expensive — speaking as Microsoft. So yeah, here's an example of a SAML assertion ID; the assertion might be revoked because something changed. So we're really just asking the question. And there's another one — aren't they cool? Funny story: I actually had to dig through GitHub repos at Microsoft to find these; they won't publish them. So yeah, we don't need to do Q&A here, but it is something we would like to get some feedback on. The nice thing about the way SSE is architected is that an event is a JSON object with a structured set of claims, and that's really it; you don't need to touch the upstream management spec, so we can support new use cases very, very quickly with SSE. So it's something to think about. We don't know if it's the perfect solution — it may be too much for a smaller relying party to implement — but we do think that if we can build this into a stack where an RP can just deploy an SDK, this could be a really interesting solution for them.
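The receiver side of that hypothetical revocation event can be equally small. Here is a sketch of the local check that replaces the per-token round trip to the issuer; the `token_identifier` claim name is an invented placeholder, not from any spec.

```python
class RevocationCache:
    """Receiver-side set of revoked token ids, fed by SSE events, so a
    resource server can reject a token locally instead of calling back
    to the issuer for every request (the round-trip / COGS cost above).

    Sketch only: a real cache would bound memory and drop entries once
    the corresponding tokens would have expired anyway."""

    def __init__(self) -> None:
        self._revoked: set = set()

    def on_revocation_event(self, event_payload: dict) -> None:
        # Assumes the (hypothetical) event body carries the revoked jti
        # under an invented "token_identifier" claim.
        self._revoked.add(event_payload["token_identifier"])

    def is_revoked(self, jti: str) -> bool:
        return jti in self._revoked
```

The token's signature and expiry still get validated as usual; the cache only adds the "has the issuer told me this one is dead?" check.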
Thank you, Tim. So let's just check and see if there are any questions in the room. Sorry — introducing Atul, one of the co-chairs of the SSE working group.
Speaker 10 01:37:53 Sure. I just wanted to put an open question out there for the verifiable credentials or verifiable presentations group: whether you think something like this could be used to revoke verifiable presentations.
Anyone in the room think it could work for revoking verifiable presentations? All right. Well, mull that one over and find Atul or Tim. So a lot of these questions, I think, are of real interest to us. And in case it didn't come through from the page here: this is obviously not only something you would implement within one organization; it can clearly also be cross-organization. I can't remember if we're going to touch on this later, but do you want to talk about not just how that can work across different entities, but also how one thinks about the commercial context of that, or the privacy issues associated with moving these signals back and forth between entities?
Speaker 13 01:38:48 Absolutely. I'll speak to the privacy thing first. Most of the events in these frameworks are designed for minimal disclosure: the token ID, not the claims in the token. The original set of RISC specs, which really talk about accounts — account disabled, account compromised, et cetera — were all designed to carry the least amount of data possible. Now, they're also flexible in the sense that you could include more data if your use case needs it, but we require very little in terms of claims, really just enough to be effective at conveying the message. That has been an implied design requirement since we started working on this; from the time we started to enhance the existing specs that became SSE, privacy has obviously been top of mind. But yeah, crossing the silos — the whole goal of this is to cross silos. Even for us at Microsoft, we actually started this discussion internally: SharePoint, Teams, all these massive products inside Microsoft more or less operate as different companies, and we started implementing it there. And then we said, hey, getting this information outside of the Microsoft wall, the Azure border, is super important, because telling folks who use the Azure AD IdP that their session is revoked is a critical part of conditional access type policy. Right?
Speaker 10 01:40:10 Yeah, I just wanted to make a comment about the privacy. There's a blog post on the SSE website that talks about how privacy can be managed when you add the subscription. If a receiver wants to subscribe to a particular subject, the token that they use to authenticate that call may actually involve user consent. So privacy can be protected in these subscriptions in that way.
And do you wanna comment on the commercial piece as well?
Speaker 13 01:40:43 Can we do that in the...
The panel later?
Speaker 13 01:40:45 Okay, great. Yeah. Thank you.
Perfect. Thank you very much, Tim. All right. So I'm happy to now welcome up Joseph Heenan, who leads the OpenID Foundation certification program, to talk about the dramatic growth in certifications and the opportunities to test your implementations using the latest and greatest of our certification test tech. Joseph.
Speaker 15 01:41:11 Great. Thank you very much, Gail. So yeah, I'm Joseph Heenan. I'm the technical lead of the OpenID Foundation certification program. I also wear other hats: I'm a senior architect at Authlete, who is one of the vendors in this space as well.
Speaker 15 01:41:28 The OpenID Foundation certification program has been going for seven years now. It started in early 2015 for identity providers, and over the years it has expanded: we started covering relying parties a year later, and then three years ago we started covering the financial-grade profiles, the FAPI profiles, as well. The program has evolved over time and become better; every time people go through the process, the tools become better, we get feedback, and we discover how to do this kind of testing well and apply that knowledge. It's a self-certification program; it's not one that involves expensive consultants coming into your business. We provide the tooling, you run the tests, you send us the results, and then you get listed on the public website. Anybody can see the certification results, and you get to see the full log files and exactly what happened during the tests as well. For the FAPI certification in particular, the core goals are really interoperability — very important — security — also very important — and just making sure that people are actually correctly deploying software. So we end up certifying both vendors and the actual end deployments.
Speaker 15 01:42:56 It's important that both are certified. Obviously, when you're a bank or an insurance company or whatever, wanting to deploy a FAPI solution, you really want to get software that's already certified, because then you know it's possible to make it pass certification. But then you also need to certify your actual deployment, because these kinds of software packages are incredibly complex, they have a lot of configuration options, and it's only by testing the end system that you can figure out whether your actual production deployment is working as it should be.
Speaker 15 01:43:37 So why the OpenID Foundation certification program? We work very closely with the working groups that are actually developing the specifications; it's very much a two-way relationship, and testers also get direct support from the OpenID Foundation certification team. So if you have questions about the results you're getting and you're not sure, we are there to help. We're all domain experts who are very familiar with all the specs involved — obviously, we've built the tests for them, literally — and we've also got access to all the spec authors when we need it, if we're not sure how to interpret particular parts of a spec. The program is internationally recognized; it's been picked up and mandated by various regulators at this stage. And it's award-winning: the program received some awards at EIC previously.
Speaker 15 01:44:35 And obviously, the tests are continually maintained by the OpenID Foundation. New versions of specs do happen occasionally, especially with some of the newer open banking ecosystems, where we're still getting to the point of standardizing everything. Occasionally we become aware of new security vulnerabilities. It probably didn't pass people by that recently there was some noise in the Java world, because some of the Java implementations turned out to be not verifying signatures as they should. That's the kind of thing that we pick up; we'll be adding tests for that to the test suite, hopefully, because it's kind of important that systems don't get deployed with things like that in them. We also quite often find new interoperability problems. We work quite closely with some of the ecosystems, and we get feedback from people on both sides who are trying to implement the specs and then trying to work together, about what does and doesn't work. We also try to make the tests as easy as possible to understand, so if people contact us because they don't understand a failure they've got, we will try to make it clearer. And anything that we do find, we always feed back to the relevant working groups, because that also means they can potentially improve the spec language and make the specs clearer, which really helps everyone.
Speaker 15 01:46:02 So when we're talking about all these open initiatives, the things already talked about are important: they have to be interoperable, they have to be secure, and they have to scale. And to really achieve that, you have to test both sides of the connection: the party that's actually sharing the data, so the bank or the authorization server, and the receiving party, so the fintech or the OAuth 2 client. It's only if you test both sides that you're really going to achieve the scale you need; the network effects are dramatic. Just think about an ecosystem with 40 participants, so that's 40 parties that are both sending and receiving data: you've got 1,560 distinct connections there. Imagine if you end up raising a support ticket, or even multiple support tickets, for each of those connections — it essentially adds years to your timelines for getting everything working together. So this is why it's important to actually test this stuff, and to test it before you go live, because trying to take systems that are already running in production and then trying to retrofit interoperability and security to them usually doesn't go very well, and it's pretty disruptive for everyone.
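The connection count quoted here is just the full-mesh formula: with n participants each both sending and receiving, there are n(n-1) directed connections, so 40 participants give 1,560.

```python
def directed_connections(n: int) -> int:
    """Distinct directed connections in a full mesh where each of the
    n participants both sends data to and receives data from every
    other participant: n * (n - 1)."""
    return n * (n - 1)

# The ecosystem sized in the talk: 40 participants -> 1,560 connections.
```

This quadratic growth is why per-connection debugging doesn't scale, and why certifying each side once against a common test suite is so much cheaper.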
Speaker 15 01:47:27 And the OpenID Foundation is very happy to engage at this kind of top-level ecosystem and regulatory level. It works very well for the ecosystems, and it works very well for the Foundation in getting feedback on the tests and the specs. So, what's happened in the last year? The big thing that's really been going on is open banking in Brazil. We've now got 150 certified banks in Brazil and 32 certified fintechs, the parties that are receiving the data. It's been a massive effort; Brazil has really moved very fast, and they mandated certification on both sides, which has helped them move that fast. I don't think we've yet seen an open banking rollout where you can absolutely say everything just worked on day one, but they certainly got things working faster than any of their predecessors, which is always good; it's nice when people learn from experiences elsewhere. We've also launched beta tests for OpenID Connect for Identity Assurance, the eKYC protocol that Daniel's going to talk about next. We've already tested those with five-plus implementations; we've been working with some of the members of the GAIN proof of concept group to do that, and we've been getting some good feedback on the tests and providing them some good feedback on their implementations.
Speaker 15 01:49:03 We've launched alpha versions of the FAPI 2 OP and RP tests. I'm going to be coming back in an hour or so to talk about FAPI 2, but the Foundation has been taking all its learnings from the first version of FAPI and trying to make things simpler. The certification team has been developing tests for those, and we're hoping to put those live later in the year; if anybody wants to test against them in the meantime, please do contact us or join the FAPI working group calls. We've also been doing a lot of work on the certification listings. They're now actually database driven, which has been a massive chunk of work, but it sets us up for the future, to be able to do new and better things with how we display certifications. So, the roadmap for the next six months: we're looking to get those FAPI 2 tests launched — we're partly waiting for the working groups to release the next versions of the standards so we can align with those. We're also developing tests for the FAPI version of CIBA, the relying party tests; we're hoping to get those launched before too long.
Speaker 15 01:50:23 And we're also creating initial versions of tests for Self-Issued OP, which Torsten and Kristina were talking about earlier, now called OpenID for Verifiable Credentials.
Speaker 16 01:50:40 And
Speaker 15 01:50:41 I'm very confused by what my last bullet point was meant to say, because it definitely wasn't meant to say FOP tests.
Except it's a patent faced here.
Speaker 15 01:50:55 I think that was probably not SSE, because I think I've got one thing on my list before SSE; it was meant to say SIOP tests. And we've already developed a FAPI-CIBA test, so one of the next things to do with the MODRNA working group is to turn those into generic CIBA tests that will support some of the other use cases they have. So we'll be looking at that.
Speaker 16 01:51:23 And
Speaker 15 01:51:24 That seems to be an old slide
Speaker 15 01:51:27 And yeah, so just to wrap up, so the conformance suite itself, it's, it's a completely open source system. You can look into the code and see exactly what it's doing. It's on GitLab. All the development happens in the open as much as we can. The certifications, sorry. There is instructions for certifying on the open ID website. There's hopefully enough that people can test without needing any help from us. But if you do need help, we're always here to help. There's a production deployment of the system in the cloud. People always get a little bit confused, it's protected by a login, but the entire purpose of the login is just to protect the data that you put into the system. It's you can literally log in with any account there's no preregistration necessary or anything, and yeah. Feel free to email the certification team or myself, if you want help or want to better understand what's going or indeed find me afterwards.
A question, yes: how much does certification cost?
Speaker 15 01:52:35 So that varies depending on what you're certifying, and you've put me on the spot a little bit, so I hope I'll get the figures right. For OpenID Connect, for Foundation members, I believe it's $750; for FAPI, for members, I think it's a thousand dollars; and if I remember correctly, for non-members FAPI certification is $5,000.
Yeah, so pretty low cost, right, for an implementation you're trying to make sure is tested correctly. It's peanuts; it's very low cost. So it's kind of a "why wouldn't you certify?" And it has the benefit of being published, so people can also affirm that you have been certified by us.
Speaker 15 01:53:23 Yeah, indeed. And I think the number of people that have gone through the certification and testing without finding at least one interoperability or security issue, I can count on one hand.
Very good. And clearly it's also helpful that governments like the UK government and the Brazilian government are trusting us to perform the certifications for them and to help facilitate adoption in their marketplaces. So a lot of great adoption we're seeing there, and it's continuing with conversations with Canada and Nigeria and others. So great work by you, Joseph, and the certification team. Any questions for Joseph on the certification program?
Great. Well, thank you very much. So hopefully we'll see, what is it, 1,400 or 1,500 certifications that have been done to date? Rapid hockey-stick growth that we're starting to observe, and hopefully much more to come with the work of GAIN and the rollout of open banking around the world. We are an unusual organization in that we have these amazing gifts for the community: you can feel free to use them, but nobody has to. We're just saying these are incredibly low-cost, incredibly accessible capabilities. We really want to see the community succeed, and not do things like publish their private keys, right, Joseph? The classic example is a developer working in a garage who is not fluent with all of this security protocol material. You might all be in this room, but is every single developer who's implementing this fluent at the level you need to be? So it's really great work. Thank you, Joseph.
Speaker 15 01:54:56 No worries. And Gail's referencing one of my previous presentations there.
Joseph. All right. Now bringing up OpenID Connect for Identity Assurance, the much-promised, much-awaited presentation on OpenID Connect for Identity Assurance. Please take us away, Dr. Daniel Fett.
Speaker 17 01:55:21 Thank you, Gail. So welcome, everyone. Many familiar faces, many unfamiliar faces; happy to see you all here. I want to talk about the work of the eKYC and IDA working group. I'm one of the authors of our main spec and, apparently, of the title. There's a subtitle, "enabling networks of networks," which I didn't know until now, but that's exactly what we do. So let's get started. What are we doing? Let's start with a question: what is OpenID Connect? Well, you should know what OpenID Connect is; I assume many people know already. It's a signed claims-passing protocol. So if somebody has information about a user, say a bank, for example, that has verified the user's identity, and they want to pass that information to somebody else, they can do that using OpenID Connect. They say: we have a couple of claims about this user, for example a name, a date of birth, and so on. That's all plain, standard OpenID Connect. And OpenID Connect for Identity Assurance essentially adds more information to that, namely the information about the verification. So who did the verification? How was the verification performed? When was it performed, and what evidence was used in that process? This might seem like a small addition to the protocol, but it's really enabling a network of networks. It's enabling many new use cases that were not possible before.
Speaker 17 01:57:14 So let's step back a moment. Why would you use OpenID Connect for Identity Assurance? Well, the lack of interoperability for assured identity is a real problem. It's a problem for relying parties, because without this they get information via OpenID Connect with a lot of implicit, underlying assumptions. If they get it from a bank, they can assume that maybe the verification was done according to some anti-money-laundering law, but can they be sure? It's also a problem for end users, because when this information is not transported, users might have to verify again, and that's not really great for usability. OpenID Connect for Identity Assurance addresses these issues, hopefully in an interoperable way, and it's ready for implementers today. So if this specification matches your use case, you can start implementing right away. What's the business case?
Speaker 17 01:58:25 Well, as already mentioned: reduced friction and cost. The number of validations can be lowered; for example, you can start reusing data instead of having to acquire it again through verification processes like video identification and similar things. Users need fewer credentials: they can reuse their bank account, for example, to open a new bank account or something completely different. And it's great for data minimization, because in the spec, relying parties can say exactly what information they need. For example, if they need to know according to which trust framework some data was verified, they can get that information without getting all the details about the evidence that they might not need. Neat.
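To picture that data-minimization point, here is a small hand-built sketch of a `claims` request parameter in Python. The field names follow my reading of the Identity Assurance implementer's drafts (`verified_claims`, `verification`, `claims`); the particular claims requested are purely illustrative:

```python
import json

# Sketch of an OpenID Connect for Identity Assurance request:
# the relying party asks only for the claims and verification
# metadata it actually needs (data minimization).
claims_request = {
    "userinfo": {
        "verified_claims": {
            "verification": {
                # None (JSON null) means: return whatever value the OP has
                "trust_framework": None
            },
            "claims": {
                # Request only the claims that are needed, nothing more
                "given_name": None,
                "family_name": None,
                "birthdate": None,
            },
        }
    }
}

# The claims parameter travels as a JSON-encoded request member
encoded = json.dumps(claims_request)
```

The RP here learns the trust framework but asks for no evidence details at all, which is exactly the trade-off described above.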
Speaker 17 01:59:24 So, to get a little bit more concrete on the use cases: it's essentially everywhere you want interoperability when transporting this kind of information. For example, as I just said, when you want to open an account, potentially based on existing account data at a different entity; for account recovery; for staff onboarding; and, in general, for accessing restricted services. Think about government services, which might require specific data about the user, maybe also information on the ID card or passport that was used; this can all be done using this specification. Healthcare, of course. Age verification, when you want to buy alcohol, tobacco, or similar. This specification is not tied to specific geographic areas, so it's particularly interesting if you have use cases that span geographic areas: you can transport from one area to another what kind of trust framework was underlying when the data was acquired. And of course it's not tied to a specific sector and can also be used for IT assurance processes.
Speaker 17 02:00:46 Our hypothesis is that this specification is the best one that you can get for this use case. It scales: we've shown at yes.com that it's already being used at internet scale; we have 1,200 identity providers supporting this spec. And of course OpenID Connect, which underlies all of this, is designed to scale. It works: two operating services, one being yes.com and the other the UK ecosystem, are heavily using this spec. But we also have conformance testing to ensure that it actually works; Joseph talked about that. It's an extremely useful tool: it ensures that two parties that have never seen each other before can just start using the spec together, and it will work. That, of course, again reduces implementation costs. And this spec really helps to link the technical implementation to a policy domain, because with it you can express the trust framework under which the data was acquired. And it's an explicit reference, not an implicit one. It's not "because you're getting that data from a certain party, it must be under that trust framework"; no, it's made explicit in the metadata. And that, of course, enables you to ensure that legal and regulatory requirements are met.
Speaker 17 02:02:34 This is what such a data block, according to the specification, can look like. We won't go into the details; there's no need to read through this right now, and you probably also won't have enough time for that. But just to give you an idea of the kind of information that's in a response according to the Identity Assurance spec: this is a made-up example, meant to show that you can use this also, for example, in mobile-operator use cases. But again, this is really independent of the concrete sector you want to use it in. So, for example, at the top here we have information about the trust framework for the data, for the verification. And this can also be more fine-grained: for example, you can say that within this trust framework there are different assurance levels, and this data was acquired
Speaker 17 02:03:32 according to, for example, the assurance level "substantial." There's information about the transaction, like when the transaction happened, and there can also be a unique identifier that you can use for auditing purposes, so that when something goes wrong, you at least know how to find information about the process that might have gone wrong. Then here, in this example, we have as evidence a driving licence document, and you can put all the relevant data about that document into this response. In this case we show, for example, that the verification of that document was done remotely; the value here indicates a verification that was done remotely. There's also information about the verification, the time, and document details, like what kind of document it is, when it expires, maybe also when and by whom it was issued, and so on. And another example: this is phone subscription information that was also used here as evidence.
Speaker 17 02:04:49 It's much more detailed. For example, we have information about the verifier that verified this electronic record, and in this case there's also scoring information, contained somewhere deep within that evidence, that was part of the verification procedure. Again, the details don't matter so much here; what's important is that we can express all of this in a standardized data structure. And here we have the actual claims about the user. Of course, this looks similar to what you know from OpenID Connect: we have claims about the user, like given name and family name, and maybe, in this mobile-operator case, also an MSISDN. But this data is contained not in the root of the response; it's in an object called verified_claims, meaning that we have an explicit data structure for this verified information. So it's made very clear that this is the verified information and nothing else.
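To make the shape of such a response concrete, here is a minimal hand-written sketch of a `verified_claims` object in Python, loosely modeled on the spec's examples. Field names follow implementer's draft 3 as best I recall (`trust_framework`, `assurance_level`, `verification_process`, `evidence`), and every value is made up:

```python
import json

# Minimal sketch of a verified_claims block as it might appear in an
# ID token or userinfo response (all values are illustrative).
verified_claims = {
    "verification": {
        "trust_framework": "de_aml",        # explicit trust framework reference
        "assurance_level": "substantial",   # optional finer-grained level
        "time": "2022-04-20T10:30:00Z",     # when the verification happened
        "verification_process": "f24c6f",   # handle for auditing purposes
        "evidence": [
            {
                "type": "document",
                "document_details": {
                    "type": "driving_permit",
                    "date_of_expiry": "2025-01-31",
                },
            }
        ],
    },
    "claims": {
        # The actual verified claims live under "claims",
        # clearly separated from any unverified claims.
        "given_name": "Erika",
        "family_name": "Mustermann",
        "birthdate": "1990-01-01",
    },
}

payload = json.dumps(verified_claims)
```

The point of the structure is the separation Daniel describes: verification metadata in one sub-object, the verified claims themselves in another.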
Speaker 17 02:06:04 So what's the current status of the spec? We went through implementer's draft 3 in November 2021. As you can see, that means we also had two other implementer's drafts, and before that, working group drafts. So there's a level of maturity in the spec, and I think it's a good point in time to start implementing, if you're interested. There are production services deployed today; others are under development, and there's huge interest in this, so we hope to see more production services soon. This is also what we use in the GAIN proof-of-concept technical work: it was already part of the conformance tests and internal tests that we did at the end of last year in the GAIN proof of concept. We're not so much working on the core spec right now; much of the work is on additional features that we also want to deliver within the working group, but potentially in other specs in the same working group.
Speaker 17 02:07:24 So we're working, for example, on Advanced Syntax for Claims, which enables you to transform claims. If you're only interested in the age of a user, or whether the user is above 18, then you don't need the whole birthdate of that user, and that's what we're specifying in Advanced Syntax for Claims. You can also say things like: if you cannot deliver this whole block of data, please don't deliver any data at all. That's great for data minimization, and it's great in use cases where you pay for each claim. This is really a broader extension, so it's not specifically tied to the IDA work. Then there's the OpenID Authority Claims extension: whenever you want to make claims about the relationship between a user and some entity, that's the spec to go to. We're also working with the SSE working group, which you heard about earlier today, on topics like pending verification, that is, how to express that a verification is still pending and how to get the data and the notification once the verification has been performed, and also on complex subject identifiers. And we're working on profiles to meet specific assurance standards, because these are so common, to say how you can put them into, essentially, IDA language: obviously NIST 800-63-3 and GPG 45, and we're also working together with others to see how that fits together.
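The "is the user over 18" transformation mentioned above can be pictured with a small hand-rolled helper. This is only a sketch of the idea behind Advanced Syntax for Claims, not the spec's actual request syntax; the function name and signature are my own:

```python
from datetime import date

def age_equal_or_over(birthdate: str, threshold: int, today: date) -> bool:
    """Return True if the person born on `birthdate` (YYYY-MM-DD)
    is at least `threshold` years old on `today`.

    An OP applying a claim transformation like this can answer an
    age question without releasing the full birthdate to the RP.
    """
    born = date.fromisoformat(birthdate)
    # Subtract one year if this year's birthday hasn't happened yet
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= threshold
```

The relying party then only learns a boolean, never the birthdate itself, which is the data-minimization win Daniel describes.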
Speaker 17 02:09:08 A quick update on the conformance tests: we have beta-state eKYC tests available that test against the current implementer's draft. They are to be updated soon to use a newer version of the spec; that is work together with the UK ecosystem, because they can provide us access to a real-world system. There will be a certification program as well, so that you can officially certify your implementation; that's expected to follow the next implementer's draft. We don't have a date for that yet, but we're working on it. And if you're interested in testing the tests, or getting early access to them, please drop us a mail at certification@oidf.org, because we would be very interested in working with you to, essentially, test the tests. And that's free; of course, it's early access to the tests, so it's a great opportunity. And that's all I have for you today. Thank you very much.
Let's see if we have any questions from the room. If not, I'm going to inflict on you my own quick case study and put you on the spot.
Speaker 17 02:10:45 Here we
Go. Hopefully not too painful. So let's imagine you are a Google, or it could be an Apple, could be a Microsoft, any one of these large organizations who has content of their own, and you need to assert age verification. Let's say you need to affirm that an individual is an adult so they can get access to adult content, or that an individual is a child so they can have access to child-related content. How could you use this kind of capability to do age verification online?
Speaker 17 02:11:19 So I think there are two aspects to this. First, you need a birthdate for that, essentially, and probably a name, something tying it to the person, plus the birthdate. That's also the core data set in the GAIN proof-of-concept work, where we said, okay, let's start with that minimum use case. The good thing about that is that in many ID schemes this is already there; it's a very simple data set. And you can use this specification to express that you have this data and that you have verified it according to a certain trust framework. Then Google, or whoever wants to verify your age, can use that data, check whether they are okay with accepting values from that trust framework, and then accept the user, or, if they're not fine with it, deny the user access. So it's really a very fitting use case. And of course, if you want to go a step further in the future, you can use Advanced Syntax for Claims to reduce the information that is transported, so that Google only learns whether the user is above 18 or 21 or whatever is relevant to the jurisdiction, and only gets the data they need.
Thanks, Daniel, for letting me put you on the spot like that, because this is just one of those really hard use cases, right? How do you do age verification and meet compliance across the EU, or across North America, or other jurisdictions? It's a real challenge. And so being able to combine a network-of-networks-based solution to get that birthdate data from a variety of different sources could give them the coverage they need. A question in the back; you might need to help me, you might need to
Speaker 17 02:13:15 Repeat the question, yes, I'll repeat the question. So, following up on that: you talked about how the response could include the level of assurance and the trust framework. How does that work on the request, though, from Google, in terms of what level of assurance and what trust frameworks they will accept? So Google, in this example, can express in the request what kind of trust frameworks they would be willing to accept, and what level of assurance they would be willing to accept. They could essentially run two different approaches: either they are very specific about this, so they list all the trust frameworks with which they would be fine, or, and I imagine that in the GAIN network at some point we will get there, you have identifiers for groups of potential trust frameworks or similar. That's not really specced out yet, so this is just me having an idea about it. But from a technical perspective, in the request you can say very precisely what you would accept and what you would not accept.
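A rough sketch of what such a constrained request fragment might look like, assuming the `values` operator from the Identity Assurance drafts (the framework and level identifiers are illustrative):

```python
import json

# Sketch of a verified_claims request in which the relying party
# lists the trust frameworks and assurance levels it will accept.
request_fragment = {
    "verified_claims": {
        "verification": {
            "trust_framework": {
                # accept data verified under either of these frameworks
                "values": ["de_aml", "eidas"]
            },
            "assurance_level": {
                "values": ["substantial", "high"]
            },
        },
        "claims": {"birthdate": None},
    }
}

encoded = json.dumps(request_fragment)
```

An OP that cannot satisfy these constraints would simply omit the verified claims, so the RP never has to guess what assurance backs the data.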
Another question from the room.
Hi, thank you for your presentation, Daniel. I was wondering: this response structure, would that be only in the ID token, or could you expose it through other endpoints or tokens as well?
Speaker 17 02:15:03 So right now, as an OpenID Connect extension, it's defined for the ID token and the userinfo endpoint, so wherever you would get claims in a regular OpenID Connect process. But it surely doesn't depend on that. If you have other data structures where you say, in my application I need that data there, I imagine it can be used there as well. In fact, we are already also using it in a different data structure, in the introspection response.
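As a sketch of that last point, a `verified_claims` block could ride along in an OAuth token introspection response (RFC 7662). The embedding shown here is illustrative rather than normative; the token metadata fields are the standard RFC 7662 ones:

```python
import json

# Hypothetical token introspection response (RFC 7662) carrying
# verified claims alongside the usual token metadata.
introspection_response = {
    "active": True,               # standard RFC 7662 field
    "scope": "openid profile",
    "client_id": "example-rp",
    "exp": 1653000000,
    "verified_claims": {          # same structure as in ID token / userinfo
        "verification": {"trust_framework": "de_aml"},
        "claims": {"given_name": "Erika"},
    },
}

body = json.dumps(introspection_response)
```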
All right, not hearing any more questions, I will add my personal plea as the mother of two small children: please solve this problem. Many of you have small children as well, right? And it doesn't matter which individual entity it is out there; no one entity can solve this problem on its own. So please, let's all lean in together and help work on solving problems like this, to protect our children and to make it a safer internet. Thank you, Daniel. Thank
Speaker 17 02:16:04 You.
All right. Welcoming back up Tim Cappalli and also his partner in crime, Atul, who are two of the co-chairs of the Shared Signals and Events working group. We're a little bit back to front: we did a deep dive earlier, and now we're coming back to the main thesis, putting it into context a little bit more. Thank you, Tim and Atul.
Speaker 13 02:16:29 Thank you. Hi everyone, again. Yeah, so this is going to be a little bit more on the working group: where we're at and what we're thinking about. I think, hopefully, we answered at a high level how it works; if we didn't, raise your hand and we can answer questions at the end. Did you fully introduce yourself? Yeah. Yeah.
Speaker 10 02:16:44 Just to introduce myself, I'm Atul, CTO of SGNL, a brand-new company. You can see us at our booth upstairs.
Speaker 13 02:16:53 All right, we're going to try something that could always go sideways: we have a video in a PowerPoint, with sound.
Speaker 19 02:17:01 One of the first things we learn in life is that sharing is good. It's the right thing to do, for everyone's benefit. It's a lesson we're learning in cybersecurity too, because when your security tools share data with each other, your whole ecosystem becomes much more efficient and effective. But the challenge is, with thousands of security solutions now available, that information usually sits siloed in the various tools and dashboards.
Speaker 19 02:17:28 What the industry really needs is a universal language that lets all security tools talk to each other in an easier, simpler way. Introducing the Shared Signals and Events framework: an open API that allows communication between security products from any vendor. The framework is based on five primary communication concepts: whenever a subject experiences an event, the transmitter sends information about the event to a receiver via a stream. Each product still performs its core function, but now its data can be efficiently shared with other systems to give context, allow automation, and better enforce a zero-trust environment. Shared Signals is being adopted by some of the biggest names in technology, and you can download sample code to try yourself. Because in security, as in life, sharing is good. Explore more at sharedsignals.guide.
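Concretely, the events in this framework travel as Security Event Tokens (SETs, RFC 8417). Below is a minimal, unsigned sketch of a CAEP session-revoked SET payload; the event-type URI and field names are my best reading of the drafts and should be checked against the current specs:

```python
import json

# Sketch of the JSON payload of a Security Event Token (RFC 8417)
# carrying a CAEP "session revoked" event. In practice this payload
# is signed as a JWT before the transmitter pushes it to a receiver.
SESSION_REVOKED = "https://schemas.openid.net/secevent/caep/event-type/session-revoked"

set_payload = {
    "iss": "https://transmitter.example.com",  # the transmitter
    "aud": "https://receiver.example.com",     # the receiver
    "iat": 1653000000,                         # issued-at time
    "jti": "756E69717565206964",               # unique token id
    "events": {
        SESSION_REVOKED: {
            "subject": {                       # who the event is about
                "format": "email",
                "email": "user@example.com",
            },
            "event_timestamp": 1652999990,
        }
    },
}

encoded = json.dumps(set_payload)
```

This maps directly onto the five concepts in the video: the subject appears inside the event, the transmitter and receiver in `iss` and `aud`, and the stream is the configured channel over which the signed token is delivered.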
Speaker 13 02:18:29 A special thank you to the Duo team at Cisco for putting that together. They surprised us with "we have this awesome video," and here we are; it's amazing. It's so good to have visual resources. That site kind of follows the same visual pattern, a balance of easy-to-read material, sample code, and how it works. So special thanks to the Duo team at Cisco. I think you're going to take this one.
Speaker 10 02:18:51 Yeah. So Tim explained what SSE is in his talk, so I thought what we could do in this talk is just cover what we are working on right now. The SSE group has three standards: one is the SSE framework itself; the second is the Continuous Access Evaluation Profile of the SSE framework, which is more for session security; and the third, the original risk profile, is RISC, Risk Incident Sharing and Coordination, which you can think of as more about account security. What's happening right now is that the RISC profile, which is based on the SSE framework, is being proposed for adoption as an implementer's draft. We're seeing a lot of new activity in the working group: interest from newer companies, as well as from existing companies that are very active in the group. The website has been revamped, so take a look, not to mention the enormous contribution Cisco has made by setting up their own website, sharedsignals.guide, which is focused on this. There have also been some changes to how we manage our processes, and maybe we'll talk a little bit about what we are working on. Is there a separate slide for that? Yeah. Okay.
Speaker 13 02:20:17 Yeah, and I just want to harp on one thing. I know it seems like, for most organizations, six new people is not a big deal, but for a group like this, six active new people is game-changing. Cisco came to the table and immediately submitted a PR, which was something that Atul and I didn't have time to do, and they created that website. So as much as four and six seem like low numbers, these are huge for these smaller working groups.
Speaker 13 02:20:46 So, in terms of current and future work items, what we're actively working on in our weekly or biweekly meetings: we have a new collaboration with the eKYC working group. One part is taking some of the concepts in SSE, like the complex subject mapping, and extracting them out into their own spec, so other groups in OpenID can leverage it without having to reference a more specific spec. That's super easy to do; we just have to pull it out, add a preamble and a postamble, and, in theory, push it through the process. So we're going to do that; it was literally fresh off the press two weeks ago. We also want to make stream management more flexible. That's something Cisco contributed, the idea being that the relationship between a transmitter and a receiver used to be heavily bound to the token, right?
Speaker 13 02:21:30 That's how you determine that relationship. We want to make it so you can have multiple streams as part of a relationship, independent of the way you authorize it. There could be reasons of data sovereignty, or just different privacy levels between event types: you may want to have a CAEP stream and a RISC stream. So we think there's a lot of possibility to make it more flexible while still maintaining the simplicity of "if you only want one stream, the big one." Then there's the potential expansion beyond identity-centric use cases. Everyone assumes that because this is in OpenID, it has to be used for identity. We've finally got over the hump of making everyone realize you don't have to be using OpenID Connect or SAML for this; it can be completely independent. My background actually is in network identity, which doesn't use either of those protocols.
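The stream relationship described here is configured through the transmitter's stream-management API. Here is a rough sketch of one stream configuration object; the field names follow my reading of the SSE framework drafts, and the delivery-method URI in particular has changed across draft versions, so treat it as an assumption:

```python
import json

# Sketch of an SSE stream configuration: the receiver tells the
# transmitter how events should be delivered and which event types
# it wants on this particular stream. A second stream for the same
# transmitter/receiver pair could request RISC events under
# different privacy constraints.
stream_config = {
    "delivery": {
        # push delivery: the transmitter POSTs SETs to this endpoint
        "method": "https://schemas.openid.net/secevent/risc/delivery-method/push",
        "endpoint_url": "https://receiver.example.com/events",
    },
    "events_requested": [
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked",
    ],
}

encoded = json.dumps(stream_config)
```

Making streams first-class, rather than implied by the authorization token, is what allows several such configurations to coexist within one transmitter/receiver relationship.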
Speaker 13 02:22:10 And there's huge value in network infrastructure providing insight to an IDP, so we want to make sure we're telling that story. We just talked about the token-revoked event, and we're doing a lot of industry engagement; Tom Sato from VeriClouds is doing an amazing job helping with industry outreach, and again, in a small working group, the more people doing that, the better. For the future, working with Gail's team, we want to get to a point where we can do interoperability testing and certification. Developers need something to point at to be able to test, right? They need to be able to point at a receiver or a transmitter and get events back, just to test their implementation. And then, ideally, that should lead to a process where you can certify your implementation.
Speaker 13 02:22:53 The last one is that we want to do an SSE profile for FastFed. I'm going to talk about FastFed next, but the idea is: could this new federation setup protocol, or negotiation protocol, actually set up authorization and stream management for SSE, to complete that full circle? So those are the big ones we're working on. Obviously we're open to new use cases. As I mentioned earlier, creating a new event is really simple, and, for better or worse, we don't have a lot of process in the working group. So if you have an event, let's look at it; if everyone thinks it's great, we'll add it and go through the process. And we're actually talking about how we can speed that process up right now.
Speaker 13 02:23:29 The events are all in a spec, which means they have to go through the normal spec process. But in reality, they could live in something more like a registry where they could just be approved. So we want to explore the best balance between the process of a spec and an event registry, because at the end of the day they're just a list of JSON objects with expectations around what the claims mean, right? It doesn't necessarily have to be a full-blown spec. In terms of evangelization, I forgot to update the first bullet point: we're obviously at EIC in May, and we have a session tomorrow. Honestly, a lot of it is going to be similar to this, but we'll expand on many of the topics, so please come if you're interested. Continuing with the token-revocation use case: I'll be very honest, we're probably just going to add it, but we want to hear more feedback about who will actually use it.
Speaker 13 02:24:10 That always helps tell the story, but since it's so simple to add the event, we're probably just going to do it. And as you saw in the video, and as we've mentioned, Cisco did an amazing job: they have sample code that helps you get up and running. I believe it's all compiled into a container that you can just run, and eventually that's probably what we'll turn into a more hosted, interop-type site, hopefully in the second half of this year, but likely into next year. So there's a lot going on, and we're excited. This group has had pretty rapid acceleration from day one, which has been great, and I think we're starting to really get some traction in the industry.
Speaker 10 02:24:50 Yeah, I would just like to add that this is in production in really heavily used services today, so it's something that is ready for use. The SSE framework and the CAEP profile are both implementer's drafts, so you have the IPR protection. Microsoft is using it in production in the busiest services they have, which is Azure AD, Teams, and Exchange Online. Google is using it in their identity service, with more than a few hundred thousand applications and millions of events being sent out every day. So this is real stuff; you can start using it today.
Speaker 13 02:25:31 Yeah. So, for example, if you are a Microsoft shop and you get a prompt saying you need to do a new interactive authentication, there was likely a CAEP session-revoked event sent in the back end within Microsoft, between parties. It was a great way to bridge what are typically, I don't want to say siloed, they're not siloed anymore, but very big product teams that don't always have a common roadmap. And the next step is to start exposing those signals outbound, to relying parties outside Microsoft. That's the natural progression, at least the way Microsoft does that kind of thing.
Question for me? Well, actually, you know, you're always stuck hearing my questions if you don't bring them. So does anybody in the room have a question? Oh, all right, now I get to have one. What about governments? Could governments use this?
Speaker 10 02:26:18 Yeah. I have spoken to NIST at one point about this, and I think they're going through some kind of evaluation. So I think that's possible, but there's nothing publicized out there saying that this is coming.
In theory, yes. In practice, maybe not yet.
Speaker 13 02:26:42 Nothing technical is preventing it. I think government is one example where it may make sense to write a profile on top, or just implementation guidance for a government-specific use case. Things like how you authorize ultimately depend on scopes and which scopes you have access to, all the typical OAuth stuff, which isn't formally defined because we try not to distract from the core protocol. But that's an example where, for government, maybe we can work to create a deployment guide for high-security use cases, with a set of more restrictive privacy principles or disclosure of claims.
Speaker 10 02:27:17 I'd like to add that the US government recently published a set of guidelines saying that zero trust is the way to go forward, and CAEP and SSE are definitely things that will be required if you want to implement a real-time zero trust environment.
Any other questions from the room? I still have one or two more questions. I mentioned the commercial piece earlier, in the prior session. Clearly some private networks are starting to lean on this and use the standards within their own proprietary implementations, and you might want to talk about that a little further. But when it comes to exchanging information across different types of networks, is there already a sense of how that would be commercialized, or how could that be done?
Speaker 13 02:28:09 Yeah, there's been some talk about whether there's an opportunity for companies to create a data-sharing service, either aggregating or distributing these events, and there are ways that could be done in a manner fully blind to the aggregator. I know one company involved is actively thinking about how they can be more or less a relay, and they would handle all the authorization. So as you start to think about the GAIN initiative, and federation in the organizational sense, I think there's a huge opportunity for new players to get into the space and start distributing or auditing; there are so many layers to this that they could play a part in. So there's a huge commercial space, in my opinion.
Speaker 10 02:28:56 Right. And I think what we hope to see in the near future is more interest from the relying parties, like the SaaS providers that consume identity in SSE. Right now it's a little bit driven by Google and Microsoft, which are more the transmitters, if you will. So as we see more adoption from the relying parties, we'll see more of the cross-network effect you're describing.
And one last one: you know my interest in the cybercrime piece of this, and the truly nasty global actors that are out there, and what this could do to help mitigate some of that cyber-terrorism and cyber-criminal work.
Speaker 10 02:29:38 Yeah, I think that is very interesting, because the Shared Signals Framework itself is independent of the specific applications like CAEP or RISC, and I don't see why it cannot be used for something like communicating cybersecurity events, either ongoing attacks or new attack vectors, things like that. Obviously we need to figure out how that fits with TAXII and other standards that may already be present, but I would love to talk to people who are more in that area to understand how this can maybe fill the gaps and bridge systems that may not be talking very effectively right now.
Bridging between the identity and the security silos. Right. Exactly. And again, convergence of standards is a common theme we've heard today; this is another area where we can work closer with the security community.
Speaker 13 02:30:32 And I think when you implement SSE for your identity stack, dropping in a new set of events is fairly trivial. That's the goal: this is designed to be more or less modular from an eventing standpoint, and we think that's a huge plus. The thinking was, if we get Microsoft and Google to implement this fully and start exposing it to third parties, then hopefully there's just the natural network effect of being able to drop in new use cases over time.
Wonderful. Don and I were just saying this is one of the most exciting areas of work in the foundation. So thank you for your thought leadership, gentlemen, and for the work of the Shared Signals and Events working group. Do come to the other sessions this week if you want to hear more.
Speaker 13 02:31:17 Thanks. And you have to listen to me again. Sorry.
You want to jump right into it, Tim?
Speaker 13 02:31:26 Go ahead? Yeah, okay. So the other group I'm involved in is the FastFed working group, and FastFed is short for Fast Federation. The goal here is to enable an easy button for administrators to set up federation. Even with the number of guides and docs, and I think all the big SaaS providers have done a great job of simplifying this, there are still too many things that a user can mess up: typing or copying a metadata file wrong, certificate rollover. There's just still too much for an administrator, and we think we can fix this by allowing these providers to talk to each other seamlessly. So the high-level goal is configuration lifecycle management of federation relationships between IDPs and application providers; an application provider would be your relying party, just a mix of terms. The top three pain points
Speaker 13 02:32:18 we think it solves today are metadata exchange, SAML certificate rollover, which we've heard a lot about in the news over the years, and SCIM configuration. SCIM is the user and group provisioning mechanism: if you don't want to do just-in-time provisioning, you can actually push the groups, and push or pull the users, as part of a flow instead of relying on just-in-time provisioning. That can obviously be a big pain to set up as well. And for identity folks, even identity developers, it's just a lot of different protocols to have to figure out and manage. So, a quick visual: we have the IDP and the application provider, and we have an IDP admin and an app admin. These may be the same person in the organization, but in many cases they're not. Maybe this is just a random business unit owner who traditionally would not be able to initiate this process themselves.
Speaker 13 02:33:13 In many cases, the app admin would have to file a ServiceNow ticket and wait for someone else to go in, and then the IDP admin needs access to the other side. So the idea is that in a very large company where there are smaller IT groups, and universities are a great example, where computer science has its own IT group, and so on, you can actually initiate this federation relationship without the central IDP admin having to be fully involved. If you think about it, the app admin essentially says, and this is deliberately in human speak, it's a little bit abstract, "please federate with my IDP." This could be, say, a research paper solution that needs to federate with the central university IDP. The application provider reaches out and says, "Hey, IDP, please federate with me."
Speaker 13 02:33:59 The IDP will say to the IDP admin, "do you approve this relationship?" That could be an instant approval based on who the app admin is, or it could go through a normal approval process, and we think that's one of the most powerful pieces of this: you can plumb as much business logic into any of these steps as you need, based on your organization or your audit requirements. The IDP admin approves, clicks yes, maybe they have to put in a business justification, but ultimately the IDP says okay, and the last step is to actually set up the protocols underneath. In this case it would be SAML and SCIM, but it could be SAML, SCIM, and FastFed someday, or OpenID Connect and SCIM. So, super easy, and we think it's going to be very important for setting up these relationships. The participants in the group are Okta, Microsoft, AWS, SGNL, and Google.
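The approval flow just described can be sketched as a toy state machine. This is not the FastFed wire protocol: the function and field names below are invented for illustration, and the real spec defines metadata documents and an HTTP handshake rather than in-process calls.

```python
from dataclasses import dataclass, field

@dataclass
class FederationRequest:
    """Illustrative record of one federation relationship being set up."""
    app_entity_id: str
    idp_entity_id: str
    status: str = "initiated"
    protocols: list = field(default_factory=list)

def initiate(app_entity_id: str, idp_entity_id: str) -> FederationRequest:
    """App admin says: 'please federate with my IDP'."""
    return FederationRequest(app_entity_id, idp_entity_id)

def idp_admin_approve(req: FederationRequest, approved: bool) -> FederationRequest:
    """The IDP asks its admin; arbitrary business logic can be plugged in here."""
    req.status = "approved" if approved else "rejected"
    return req

def configure_protocols(req: FederationRequest, protocols: list) -> FederationRequest:
    """Final step: set up the underlying protocols, e.g. SAML plus SCIM."""
    if req.status != "approved":
        raise RuntimeError("cannot configure an unapproved relationship")
    req.protocols = protocols
    req.status = "active"
    return req

req = initiate("https://research-app.example.edu", "https://idp.example.edu")
req = idp_admin_approve(req, approved=True)
req = configure_protocols(req, ["saml", "scim"])
print(req.status)  # active
```

The point of the sketch is the separation of steps: initiation, human approval, and protocol configuration are distinct, so an organization can insert audit or justification requirements between any two of them.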
Speaker 13 02:34:54 As big as these companies are, we think this is super important. If we can get the big SaaS IDPs out there to set this up with big applications, there's definitely a trickle-down effect. A significant number of the issues reported for Azure AD are because of setting up this federation relationship, and that affects not only our support but your support as the application owner too. So we do think getting the big SaaS providers to implement this first will be fairly game-changing for the industry. Time, money: it really touches on all the big OKRs. So, on to the active and future work items. Just full disclosure: FastFed kind of took a break for two years.
Speaker 13 02:35:43 There wasn't a lot of work, so we're ramping back up and trying to get this up to date. We've gotten a lot more perspectives from other companies; it was really two companies driving it before, and now we have five, providing very different use cases. One example: if you've ever set up SAML federation, maybe the best example is ADP, the payroll and tax portal. There are actually three different applications, one for EMEA, one for the Americas, one for APAC, and those are the exact same application and the same federation relationship, but you have to add a query parameter that says "America" so that it takes you to the right redirect portal. That's a very simple use case, but actually very hard for something like FastFed to set up, because it's not a new federation relationship.
Speaker 13 02:36:26 It's more or less an alias for the exact same relationship, with additional data. So we want to add the ability to define that inside of FastFed, so that you don't have to go through this process three times for all three use cases; it will just create the shortcut and away you go. The other thing we want to fix is that FastFed was largely set-and-forget when it was first written: you set up the relationship and FastFed was done, and if you wanted to do something again, you'd have to do the whole handshake and go back through the process. We actually want that relationship to persist, and the reason we think that's important is that over the next five years we want to make a heavy push towards moving people from SAML to OpenID Connect, if they're so willing. We see this as an amazing opportunity to provide a one-click move from SAML to OpenID Connect.
Speaker 13 02:37:10 So we want the relationship to be persistent, so that you can have guidance for administrators on upgrading a relationship from SAML to OpenID Connect. Heather's not in the room, so I can say that now. That's really one big one, and having persistent relationship identifiers is super important for it: if both sides have an ephemeral identifier, we more or less have to go back through the whole process. That's a big structural change we're making right now, and probably the heaviest of the work items. A lot of implementer feedback is driving this, plus the normal industry ecosystem engagement that I copy and paste onto every deck, because it's the most important part of what we do. As for future work that's not committed, no one's actively working on it yet, but we want to start looking at it maybe in the second half of this year: the SCIM working group in the IETF has been rechartered, and they're looking at a lot of new use cases, which includes formalizing SCIM authentication.
Speaker 13 02:38:05 I believe the original spec says something like "just do Basic auth, and whatever else you want to do is out of scope," which is not ideal. So once that gets finalized in the IETF, we would like to make sure that the SCIM profile for FastFed can support those new authentication methods. The OpenID Connect profile is actually written, but it was a very early draft; we want to reinvigorate that work and make sure we can support this transition use case when the time comes. And, as I mentioned, SSE configuration, to build the full-circle lifecycle.
Speaker 13 02:38:36 I think that's it, apart from some resources. We bought a domain, fastfed.dev; it just redirects, but it's easy to remember. We don't have a pretty landing page like we did for Shared Signals and Events, but we're working on that as well. Lots of good resources, though. We are hoping to do a video and all that kind of fun stuff; I just have to get Cisco to give us their designer. The goal is to have more of those types of resources. So we would definitely love feedback: if you're an application provider, we have a ton of feedback from the big IDPs, but not a lot from application providers, so please share any feedback you can.
So, any questions for Tim on the work of the FastFed working group? I have a question in the back. Do you want to come up so I can get you the mic, Justin?
Speaker 21 02:39:36 I wouldn't bother with the mic except that they're recording. Anyway, my question is about the provisioning side of things. What types of use cases are really driving the pre-provisioning use cases that you've seen, especially in the case of FastFed? Because it seems that in practice a lot of people are doing just-in-time provisioning: the first time I see an account, I set it up, I go on with my day, and then my attributes are probably stale by next week. So are you seeing it being driven in terms of synchronization, or in terms of pre-placing all of the accounts and attributes, and what are the implications of that?
Speaker 13 02:40:16 So, full disclosure, I'm not very involved in SCIM, but I know that provisioning is the number one ask from customers for Azure AD applications. I can't tell you exactly why, because I do think it has traditionally been just-in-time, but I do think it's more or less sync: they want accurate data all the time in those applications. I know one heavy use case is anything that requires searching for a user, like sharing a document, where you need the user populated up front. So that's one use case I do know, especially with file sharing in collaboration apps. Ask Pam, actually. When did you last see Pam?
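As a concrete illustration of the push-provisioning case, here is a minimal SCIM 2.0 User resource (per RFC 7643) of the kind a provisioning client would POST to an application's /Users endpoint, so the user exists and is searchable before their first sign-in. The attribute values are placeholders.

```python
import json

# Minimal SCIM 2.0 User resource (RFC 7643 core schema).
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "ada@example.edu",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "emails": [{"value": "ada@example.edu", "primary": True}],
    "active": True,
}

# Serialized body as it would appear in a POST /Users request.
body = json.dumps(scim_user)
print("urn:ietf:params:scim:schemas:core:2.0:User" in body)  # True
```

With the user pushed up front like this, document-sharing and search features work immediately, instead of waiting for a just-in-time login to create the account.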
So I think you just told me something in that presentation that I definitely hadn't heard even two weeks ago, which is that some of the entities out there with SAML implementations, and I'm sure some people in the room have SAML implementations, might be worried about the current cookie and redirect changes. So is the work within FastFed one way to think about a migration from SAML to OpenID Connect? Not that even that will solve all the problems of redirects, but do you want to speak to that?
Speaker 13 02:41:24 Sure. That is the hope. There's a little bit of a religious war about SAML versus OpenID Connect. I don't know the exact number, but I think above 80% of applications in the Azure AD app gallery are still SAML, and many of those application providers do support OpenID Connect; people just like to implement SAML because it's perceived to be easier, though it has higher overhead. So we certainly believe that if we can give the admin this easy button, where they don't really have to worry about the differences between Connect and SAML, that would be magical for everyone. And in the easy-button fashion that FastFed would allow, to me that's the biggest use case for FastFed.
At least on my side, I've only been around this community for a year, and I'm not really aware of the war. So maybe there's a war.
Speaker 13 02:42:16 Maybe it's my own war, because I would personally like to see SAML go away. But it could be a war in my head, which happens a lot.
I personally view it a little bit more like this: I see a sister community at risk from the current changes in browser configurations. If they're at risk, and their implementations can't persist, and they're about to smash into a wall, can we help our sister community with tools and capabilities? I think that's what this is about.
Speaker 13 02:42:42 That's a much better way of saying what I'm trying to say, thank you. At the end of the day, there won't be a SAML 3; it would be very shocking if there were a SAML 3. So anything that breaks is going to stay broken. I think that's the reality we've come to. Connect, by contrast, has active development, active work, active contributors, in an active organization. So I think you're right.
And we're concerned. It's a lot of organizations, like universities and governments, that have implementations, and the people with the expertise who did those original implementations may literally have passed away, with no one there guarding the ship to support that work or figure out solutions. So this kind of click-a-button approach sounds magical. I'd love it if we can really help them. Absolutely.
Speaker 13 02:43:28 Yeah, I think so. Yes.
Another question from the group? Yeah. This is me running; I've got to get some exercise today, right?
Speaker 10 02:43:39 Thank you very much. My question is actually quite related to the gentleman's just in front of mine. I see you have SCIM between the application provider and the identity provider, although you just implement the authentication part between them. One of the biggest challenges when we implement the just-in-time user provisioning scenario is when the user leaves, or, let's say, is disabled at the identity provider.
Speaker 13 02:44:13 Deprovisioning, yeah.
Speaker 10 02:44:14 Yeah, yeah. And then the user in the application provider cannot be captured anymore, because he or she doesn't log in anymore. So if you have a SCIM connector between the two, does it imply that at a certain point, when a user is disabled at the identity provider, they also get disabled on the application
Speaker 13 02:44:36 side? Absolutely. That's a great example, and deprovisioning is one of the primary use cases for SCIM. That's something I know they're looking at in the SCIM recharter: on top of improving deprovisioning and improving the authentication framework, they're also looking at bringing that entire model of provisioning and deprovisioning to devices. A lot of IDPs now have a device entry, whether it's an MDM device or just some relationship for a certificate or something, and it's super important to be able to share that data between parties. So, I'll repeat the question:
Speaker 13 02:45:14 the question was, doesn't SSE provide that deprovisioning use case? It can. I think the difference is that SSE is designed to be a real-time eventing framework, not necessarily a protocol utility, if that makes sense. My fear would be that if you did not subscribe to those events, how else would you get them? Actually, let me pause on that; it reminded me that the new SCIM working group wants to use SSE as part of one of their sync fabrics. So yes, SSE could work, and they may actually very well use it as the transport.
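The deprovisioning case just discussed is typically a SCIM PATCH (RFC 7644) that flips the user's active flag at the application when the IDP disables the account. The sketch below shows the request body plus a tiny illustrative interpreter for it; a real SCIM server implements the full PATCH semantics, and the interpreter here only handles the simple case.

```python
import json

# RFC 7644 PatchOp body a SCIM client would send to /Users/{id}
# to soft-deactivate an account disabled at the identity provider.
deactivate_patch = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "replace", "path": "active", "value": False}
    ],
}

def apply_patch(user: dict, patch: dict) -> dict:
    """Toy interpreter: handles 'replace' ops on top-level attributes only."""
    for op in patch["Operations"]:
        if op["op"] == "replace":
            user[op["path"]] = op["value"]
    return user

user = {"userName": "ada@example.edu", "active": True}
user = apply_patch(user, deactivate_patch)
print(user["active"])  # False
```

Soft-deactivation (rather than a DELETE) is common because it preserves the user's data while immediately blocking access, which is usually what the leaver scenario requires.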
Keeping you on your toes there, Tim: multiple presentations.
Speaker 13 02:46:03 I'm jetlagged and talking.
You're doing a great job. All right, any final questions? We'll move on. Thank you very much. I'm now pleased to introduce Mike Jones, who will be talking about, oh no, I almost went too far, the OpenID Connect working group: all of the great adoption of that, and it's got plenty left to go, right? Lots of good work happening. Mike.
Speaker 22 02:46:26 There we are. I'm Mike Jones. I work on identity at Microsoft, and on identity standards, as well as being on the OpenID board and working on some specifications there. So let's talk about it. You're almost certainly using OpenID Connect, even if you don't know it. The list of places using it on this slide isn't close to comprehensive, and we don't have intelligence on every place it might be. I'll give one personal anecdote: I went to pay my DIRECTV cable bill, and I happened to notice, because I'm a geek, that the redirect URLs passing by in front of me to do the login with AT&T were OpenID Connect. So AT&T is using OpenID Connect to log in to DIRECTV; who knew? I saw that, and that's just one example. OpenID Connect is plumbing; it's part of the infrastructure for identity that's become increasingly common. Unlike OpenID 2.0, which came before it, we did not try to make this a consumer brand. We are happy to be plumbers to the identity industry.
Speaker 22 02:47:55 Here's a very old diagram, from February 2014, by Pam Dingle, showing the relationships of the original OpenID Connect specifications. Everybody builds on Core. A lot of people deliver metadata using Discovery. There's Dynamic Client Registration if you want to, as for instance the certification presentation was saying, use any provider dynamically. We worked on logout. And interestingly, a lot of this is built on a set of IETF specifications that we were concurrently developing, intended to be pieces that can be reused, yes, for Connect, but also in lots of other circumstances, among them OAuth and the JSON Web Token.
Speaker 22 02:49:00 So this is a really exciting time for the Connect working group. There's more going on now than at any point except when we were developing the initial set of specifications, and I'll give you a brief taste of some of it, some of which you've actually already heard. People don't just want to sign in; they want to sign out, particularly in a kiosk setting or similar, where you don't want your sessions to persist, and there's a family of specifications that enable that. There's an interesting piece of EIC history in this set. We were at EIC in Munich some years ago, and a gentleman in the room stood up and said: I work at a stock trading firm, but we do bond trading as well, actually outsourced to another firm; we do silent login to them and put it in an iframe.
Speaker 22 02:50:06 But when you log out of my firm, we want to be certain that you've also logged out of the bond trading application. It was that use case, where they needed the logout to be reliable, that convinced me we really did need a back-channel logout method, rather than doing everything in the front channel through the browser. And as history has moved along, some of the facilities used by front-channel logout and by session management are at risk due to browser changes. I was really happy last week at the OAuth Security Workshop to have one of the lead developers in the space, Dominick Baier, whose relying party code many of you have probably used, say, "Oh, we've already moved to back-channel." I said, "Oh, I'm sorry, that was probably more complicated, wasn't it?" He said, "No, actually it made our app simpler, because the JavaScript front end now only does UI, and all the state management happens in a back end. It made the code cleaner." So that was a data point I didn't have until last week, and I'm happy to share it. We are in the process of trying to actually finish these things, make final specifications, and move on.
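For readers who haven't seen back-channel logout, the sketch below shows the checks OpenID Connect Back-Channel Logout 1.0 requires on a decoded logout token: the back-channel logout event URI must appear in the events claim, a sub or sid must identify whom to log out, and a nonce is prohibited. Signature verification is omitted here; in practice the token is a JWS validated much like an ID Token, and the claim values shown are placeholders.

```python
# Event URI defined by OpenID Connect Back-Channel Logout 1.0.
BACKCHANNEL_LOGOUT_EVENT = "http://schemas.openid.net/event/backchannel-logout"

def validate_logout_token(claims: dict) -> bool:
    """Structural checks on an already-decoded, signature-verified logout token."""
    if not all(k in claims for k in ("iss", "aud", "iat")):
        return False
    if BACKCHANNEL_LOGOUT_EVENT not in claims.get("events", {}):
        return False
    if "sub" not in claims and "sid" not in claims:
        return False          # must identify a user or a session
    if "nonce" in claims:
        return False          # the spec prohibits nonce in logout tokens
    return True

good = {"iss": "https://op.example.com", "aud": "client1", "iat": 1700000000,
        "sid": "session-42", "events": {BACKCHANNEL_LOGOUT_EVENT: {}}}
print(validate_logout_token(good))  # True
```

This is also why Dominick's observation holds: since the OP POSTs this token directly to the RP's back end, the browser front end no longer has to carry any logout state.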
Speaker 22 02:51:40 And so, more on this: we're not quite halfway through the working group last call, and we've been resolving the final comments. Following that, we'll do the OpenID Foundation-wide review of the logout specifications. Another major piece of work is the OpenID Connect Federation specification. This enables the establishment of large-scale identity federations, much like those that exist today in the higher education and research worlds. Those typically use SAML, which is great, but there's pull from that community saying, can we do our federations using Connect instead? Roland Hedberg, Andreas Åkre Solberg, John Bradley, and I have been working on that with the working group, and that's another one where we're close to saying it's done enough to make it a final specification.
Speaker 22 02:52:54 There are a few very small specs that each do one thing well. One of them is the prompt=create specification, which enables account creation at authentication time. If the user doesn't already have an account, it's a signal from the relying party saying, "let the user create an account and then come back to me." We've already answered the open questions on this one, and we're going to take it final shortly. There's another one, by George Fletcher, formerly of AOL, about enabling single sign-on across applications from the same vendor. For example, AOL actually owns MapQuest and merged with Yahoo, so there are lots of different apps on your phone in that ecosystem, and you would want it to be the case that if you sign in to one, you're automatically signed in to the others. Another one, from Torsten Lodderstedt, who you heard from earlier today, is just an error code that signals, "I wasn't able to do what you needed me to do in order to authenticate the end user at a sufficient level."
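The prompt=create signal is just one extra parameter on an otherwise ordinary OpenID Connect authorization request. A minimal sketch, with placeholder endpoint and client values:

```python
from urllib.parse import parse_qs, urlencode, urlsplit

# Ordinary authorization-code request parameters, plus prompt=create,
# which asks the OP to offer account creation before authenticating.
params = {
    "response_type": "code",
    "client_id": "my-client",                    # placeholder
    "redirect_uri": "https://rp.example.com/cb", # placeholder
    "scope": "openid profile",
    "state": "af0ifjsldkj",
    "prompt": "create",
}
url = "https://op.example.com/authorize?" + urlencode(params)
print("prompt=create" in url)  # True
```

An OP that doesn't understand the value simply falls back to its normal behavior, which is what makes such small one-parameter specs cheap to deploy.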
Speaker 22 02:54:25 There's a claims aggregation specification, by Nat and others, that enables relying parties to request that claims providers, which are not necessarily identity providers, return aggregated claims, claims bundled up together, through identity providers. In some ways that's similar to the use case Kristina was talking about earlier this morning, where you could have a self-issued identity provider but might want to grab some verifiable claims from other places and put them together. You heard about this this morning: the second round of the Self-Issued OpenID Provider work. In fact, there's a trio of specs that Torsten and Kristina talked about: the second one is how you present verifiable credentials, and finally there's one for how you issue them.
Speaker 22 02:55:27 This is boring, but it does matter: not being perfect human beings, we did make a couple of mistakes when writing the original Core and related specifications. Most of them are trivial, like one place in an example that needed to say HTTPS but says HTTP, and so it goes, but it's important to fix these things, and we're in the process of working through the remaining errata issues. Joseph told you about certification. Certification is a big deal. I love that he talked about how, if you have a federation, or just relationships with many, many providers or many relying parties, the combinatorial explosion of relationships will kill you. The alternative is to have the participants in a set of identity relationships run the same tests and verify that they're all operating, as best as we can tell, in a compatible way. And indeed, it's been the case for years that some of the major relying parties, names you know, will not allow an incoming federation relationship with an OpenID provider unless it has been certified; in fact, there are multiple such parties. Why? They don't want the support costs, and it's better for their customers, and for you, because they want it to just work.
Speaker 22 02:57:11 As Joseph said, we migrated to a database. Yay! It used to be hand-edited WordPress pages, which stopped scaling when Brazil open banking took off. And as a board member, I have to congratulate the foundation: this is finally self-supporting. It's not a profit center per se, and we're not trying to make it one, per Gail's question earlier about what certification costs, but it hadn't been covering its expenses, whereas now there are enough certifications coming in that we can pay the development fees of people like Joseph and his team, and the board doesn't have to wonder how long we keep pouring revenue into it. It's been very good for the foundation.
Speaker 22 02:58:04 There's a bunch of working groups related to Connect, many of which you're hearing about today, so I won't belabor this; these slides will all be up on the foundation website, and you can have a look. Finally, I'll leave you with a set of resources. You can get to almost all of these from openid.net/connect, which is a high-level description. There's also a link to the working group site, where, if you want to participate in the working group, it tells you what you need to do to join up and help us with this work. I will also put the slides on my blog, which is self-issued.info.
Thank you very much, Mike Jones. While the group thinks about any questions they might have for you, I have one. Can you tell us a little more about the publicly available specification processes with ISO, the process with ITU and ETSI, and the various liaisons that relate to OpenID Connect's work?
Speaker 22 02:59:15 Sure, that's a great background question, and if I answer incorrectly, Mr. Sakimura, please correct me. Both ITU and ISO have a process whereby other standards organizations can take an approved standard and submit it, in this case to ISO, and they will give it an ISO spec number, effectively endorsing work done elsewhere. This is important in some jurisdictions, because ISO specs have special legal status; correct me if I'm wrong, but I believe there are countries where, in the procurement process, if there's an ISO spec that solves the problem you're solving, you're expected to use it, and you have to provide a justification for not using it. So while Connect adoption has organically done really well, there are places where having an ISO number for the same specification would help us. That's part of why I'm trying to finally push through the errata process: we want the specs to be as perfect as we can before we submit them to ISO. There's a small team in the Connect working group and in the foundation actively looking at that, and by the way, we'll probably do the same with the finished FAPI specs as well.
So just a little bit on the ITU side: all the OpenID Foundation final specifications are actually A.5 qualified, meaning that ITU specifications can normatively reference OpenID Foundation specifications as international standards. Just to let you know. But, you know, ISO numbers could make it easier, right, for the people who are writing the procurement documents. They could, for example, go through the ITU-T route, find the A.5 documents, and say to people that it is actually a normatively referenceable document, or something like that, but that's a bit of process.
Speaker 22 03:01:48 Right. We want, as was just said, the easy button for those living in the ISO world. And that's what this is about.
So, I think a helpful clarification, because a lot of the examples you put up of who's adopted OpenID Connect don't list all the governments. So even though some governments have implemented the OpenID Connect standard, this unlocks even more, being able to use these standards with confidence. But it's also helpful for the SIOP, Self-Issued OpenID Provider, work that Kristina and others were talking about earlier, right? Because although those are in implementer's drafts, those could really solve some meaningful questions for governments right now. Right?
Speaker 22 03:02:26 That's right.
Yeah. So it's really important that we continue that pressure on the formal recognition, and it's another exciting area of development in the OpenID Connect working group. So thank you very much. Sorry, I did promise to see if anyone else had questions. Yes. All right, hearing none, but violent enthusiasm from the group for your work. Thank you, Mike. Clearly crucial, right? You saw at the beginning: 3 billion plus people around the world are using these standards today, millions of applications, not just for large tech platforms, although we are grateful for Microsoft's contributions here. It's really standards that can be adopted by anyone. So thank you for helping build those pipes. All right.
Speaker 22 03:03:15 And with that, I'm gonna change hats for a minute. There's another smaller but very focused working group that I'm also a participant in, which is the Enhanced Authentication Profile work, EAP, and I will talk about the status of that at this point. So what is the EAP working group? The charter text is there: we're developing a security and privacy profile of Connect that enables users to authenticate to OpenID Providers using strong authentication protocols and methods. In particular, it enables sort of seamless integration of your RP, your OP, and, for instance, a WebAuthn / FIDO2 authenticator.
Speaker 22 03:04:19 So there are two EAP specifications, both of them implementer's drafts. One defines how to use IETF Token Binding. Now, that seemed like it was going to be the future; then deployment problems arose, and instead, in OAuth, we're doing the DPoP work, application-level proof of possession. Token Binding would've been TLS-level proof of possession. The second one defines some authentication context class references, which, I'll parenthetically say, was a concept that Connect borrowed directly from SAML. SAML had authentication context class references for expressing business requirements on the authentication and a way of saying I did or did not meet those requirements. We thought, that already works, let's just import that into Connect directly. So we defined two values.
Speaker 22 03:05:29 One says that phishing-resistant authentication was used, meaning one that you can't get into the middle of. And the other is a tweak on that, which is that it's phishing-resistant authentication that's also backed by a hardware key. I mentioned the Token Binding deployment stall; that doesn't look like it's gonna resolve, so this work is actually on hold. We haven't dropped it, and there's related work on hold for using it for OAuth access and refresh tokens and Connect ID tokens. The work that is live is the one, as I said, that defines these ACR values for phishing-resistant authentication and hardware-backed phishing-resistant authentication.
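The two ACR values described here are registered in the EAP ACR Values specification as `phr` (phishing-resistant) and `phrh` (phishing-resistant, hardware-protected). As a rough sketch of how a relying party might request them, the endpoint and client identifiers below are placeholders, not values from the talk:

```python
from urllib.parse import urlencode
import secrets

# ACR values from the OpenID Connect EAP ACR Values specification:
#   "phr"  - phishing-resistant authentication
#   "phrh" - phishing-resistant authentication with a hardware-protected key
def build_auth_request(authz_endpoint, client_id, redirect_uri, acr="phr"):
    """Build an OpenID Connect authorization request URL asking the OP
    for phishing-resistant authentication. acr_values is only a hint:
    the RP must still verify the 'acr' claim in the returned ID token."""
    params = {
        "response_type": "code",
        "scope": "openid",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "acr_values": acr,
        "state": secrets.token_urlsafe(16),  # fresh random value per request
    }
    return authz_endpoint + "?" + urlencode(params)

# Placeholder endpoint and client, for illustration only.
url = build_auth_request("https://op.example.com/authorize",
                         "s6BhdRkqt3", "https://rp.example.com/cb")
```

The RP's job doesn't end with sending the hint: an OP is free to ignore `acr_values`, so the returned ID token's `acr` claim is what actually counts.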
Speaker 22 03:06:35 And we recently added, and I know I'm way down in the technology stack, but there's something called an authentication method reference, which is different from an authentication context class reference. This is a sharp knife in your drawer that you need to be careful with, or you will cut yourself, but it allows the identity provider to tell the relying party: these are the things I did to authenticate the person. It could say that I used a password, I used an OTP, I used a phishing-resistant authentication method, and a lot of other things you could say. But there was a request from Google: they wanted a "pop" AMR value that wasn't specific to whether it was a software or hardware key. They just wanted to be able to say, we used a key, and this would apply, for instance, to every FIDO authenticator, if you wanted to indicate that. So Brian Campbell and I are chairs of this; you can go to the working group page to see status. We do plan to take the active spec to second implementer's draft status shortly. And with that, I will pause for questions, or if there are none, we will harvest the rest of my time and give it to FAPI.
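On the relying-party side, a minimal check of the `amr` claim might look like the sketch below. The values `pwd`, `otp`, `hwk`, `swk`, and `user` come from RFC 8176; `pop` is the key-type-agnostic value discussed above. The acceptance policy shown is purely illustrative:

```python
# Inspect the 'amr' (Authentication Methods References) claim of a
# decoded, already-validated ID token. "pwd", "otp", "hwk", "swk" and
# "user" are RFC 8176 values; "pop" is the key-agnostic
# proof-of-possession value mentioned in the talk.
PROOF_OF_POSSESSION = {"pop", "hwk", "swk"}

def used_key_based_method(claims: dict) -> bool:
    """True if any key-based (proof-of-possession) method appears in amr."""
    return bool(set(claims.get("amr", [])) & PROOF_OF_POSSESSION)

# Example claim sets as they might appear after ID token validation.
fido_login = {"sub": "alice", "amr": ["hwk", "user"]}
password_login = {"sub": "bob", "amr": ["pwd"]}
```

This is exactly the "sharp knife" aspect: the RP is making a policy decision from low-level method details, so an allow-set like this needs to be maintained deliberately.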
Well, we'll probably need some of that time back. That's very kind of you. But it sounds like if there's more interest in those specs, it's useful to get in touch with you, Mike. Yes, absolutely, contribute to that movement to implementer's draft two.
Speaker 22 03:08:31 Right. And all my contact info has been on the public web since, like, '92, so I'm very easy to find. Yeah.
Or, Matthias, like this, and you can find him at the conference.
Speaker 22 03:08:41 And you can find me here. Thank you.
All right. And so, Joseph, you're coming back up to talk about FAPI, the Financial-grade API. So welcome back, Joseph Heenan, head of our certification team but also an expert on the FAPI standards, to talk a little bit more about the progress in that working group. Thank you, Joseph.
Speaker 15 03:08:58 Thank you very much, Gail. So yeah, I'm very glad Mike gave us a bit of time back, as we do seem to have quite a large slide deck for this. So going straight in: FAPI is an acronym, obviously, for the Financial-grade API. It's actually a general-purpose, high-security way of doing OAuth 2.0, and it's interoperable as well. The FAPI working group originally started back in June 2018 and was very much originally targeting the financial use cases. It was a very small working group back at the beginning (I wasn't there back then), but since then there have been lots of requests to generalize it; it's really for use in any high-risk use case. So it absolutely applies very well to healthcare and a whole bunch of other things that just need OAuth 2.0, but better. Oh, and I missed a point. So yes, it was originally called the Financial API, and then we added this little "grade" bit in there, to call it financial-grade instead, to try and really show that it is generalized. So yeah, we're showing our roots, but it is really not restricted to finance. Everybody should be considering this.
Speaker 15 03:10:27 Yeah, it's one of the most successful OpenID protocols after Connect. You'll see this map; it doesn't look that dissimilar to the map Deema showed earlier about open banking, for probably fairly obvious reasons: quite a lot of the open banking ecosystems are using or recommending FAPI. The UK was the first, then Australia and New Zealand; in Japan the government's recommending FAPI now; Brazil went live last year very quickly; and in the USA we've got FDX recommending FAPI as well. And all sorts of people in Europe too, and discussions in the Middle East, and all sorts. So it really is growing adoption quite fast, both in finance, but we're also seeing the non-bank ecosystems come online fairly rapidly now. As I say, any API that's exposed to high risk, essentially, and where you don't have control of the environment, is where FAPI is targeted. There are two different security levels, one of which is higher security than the other.
Speaker 16 03:11:43 And
Speaker 15 03:11:43 Yeah, it's say very much a general purpose. So there's really three different ways you can authenticate as well. And FPI has ambitions in all three areas, but obviously we started with the usual redirect approach. That's exactly what you see an open ID connect. Those standards have been around a while now. We took them to final almost just over a year ago now, so that that's FPI one baseline and advanced. So it's mainly FPI one advance that we're seeing all the, the open banking ecosystems use. And that's, that's really the one everyone's looking at. So one of the issues with oath two or RFC 7 67 49 is really that some of the messages just aren't authenticated and that that's really the part FPI addresses we're using various different, slightly more modern or slightly less used parts of the standard to just have message signing. So source, destination, or authentication following the, the BCM principles from ISO. So yeah, it's secure and it's good practice all tokens in fender constrained instead of being Bera tokens. So yes, you might have seen Bera tokens. Most recently in the news as a whole bunch of GitHub tokens were stolen from Heroku, they were Bera tokens. So the attackers could just use them to access the GitHub, private repos of the organizations they were stolen from. Unfortunately, if they were sender constrained tokens, that would've been an awful lot more tricky as they should should have been a, a private key somewhere that would've stopped. The, the stolen tokens actually been presented and used
Speaker 15 03:13:54 FAPI has also been through a formal security analysis that was led by Daniel, who was talking earlier, or Dr. Fett, I should say. That was done as a joint effort with the University of Stuttgart. So they've done full formal modeling of FAPI and essentially put it through what they call their web model, to prove that the security properties are what we say they are.
Speaker 15 03:14:22 And specs are nice, but as you heard me say earlier, for real interoperability, implementations do need to be tested. So FAPI 1 has a full certification program. It's been going for years now; it tests not only the functional side, the interoperability side, but it's also got plenty of negative tests for all the security properties of the profiles. The suites are really quite well used; we have seen a lot of people use them very early in their development cycle to make sure that they are developing the right thing. They've been adopted by the regulators in the UK, Australia and Brazil, and yeah, Brazil has grown the number of certifications massively in the last year, with a very fast, very aggressive rollout. All the certification results are public on the OpenID Foundation website. We have lots of tables that look like this, with lots of different columns for the few bits of the specs where there are a few implementation choices to make,
Speaker 16 03:15:34 But
Speaker 15 03:15:34 Yeah, these tables are getting really quite large now, which is great.
Speaker 15 03:15:40 So FAPI also supports a decoupled approach to authentication, which is the CIBA profile that I mentioned briefly earlier. That comes from the MODRNA working group that Bjorn is presenting about later, but it basically allows the user to authenticate even if the device they're consuming content on isn't the one they're going to authenticate on. That's great for all kinds of use cases: if you're using a laptop, like shown there, but you're authenticating on your mobile device, it works well for that. But it can also support other, potentially more interesting, use cases, like authenticating the user whilst they're on the phone to a call center or something, to give agents access to part of their account with real, explicit authorization.
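As a rough sketch of the decoupled flow, these are the form parameters a CIBA client (in poll mode) would send, first to the backchannel authentication endpoint and then, repeatedly, to the token endpoint. Endpoint URLs, client authentication, and the HTTP transport itself are deployment-specific and omitted, and the hint values are invented for illustration:

```python
# Parameter construction only; no HTTP is performed here.

def backchannel_auth_request(scope: str, login_hint: str, binding_message: str) -> dict:
    """Form body for a CIBA backchannel authentication request.
    The response contains an auth_req_id for the client to poll with."""
    return {
        "scope": scope,                      # must include "openid"
        "login_hint": login_hint,            # identifies the end user
        "binding_message": binding_message,  # short code shown on both devices
    }

def poll_token_request(auth_req_id: str) -> dict:
    """Token request the client repeats until the user approves
    (or the request expires) on their authentication device."""
    return {
        "grant_type": "urn:openid:params:grant-type:ciba",
        "auth_req_id": auth_req_id,
    }

auth_req = backchannel_auth_request("openid accounts", "+44-300-123-4567", "WX-901")
```

The `binding_message` is what makes the call-center case workable: the agent reads the short code to the user, who sees the same code on their phone before approving.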
Speaker 15 03:16:33 And we could go further. Europe is very keen on what they call the embedded approach, where the user doesn't actually get involved with the bank during the authentication process. So the FAPI working group and the Berlin Group, who are the people that have defined much of the standards used in European open banking, are starting a joint group to try and figure out if we can find a good solution to the embedded approach. We'll hopefully have some news on that later this year. The FAPI working group has also started working on FAPI 2.0. It's very much taking what we learned doing FAPI 1 and trying to improve it, make it more secure, more interoperable and easier for developers, and also to cover more use cases that we didn't cover in FAPI 1.
Speaker 15 03:17:37 So yeah, let's expand the scope as well as refactoring to simplify. Most developers will tell you how well that usually goes when you say it with a great deal of excitement, but the people in the working group have been around this world for a long time, and we think we are doing the right thing, making progress and making the world better. The FAPI 2 framework adds a few more extra specs into the mix. So we've got an advanced authorization profile, and also a spec called grant management that solves some use cases that were being done in both the UK and Australia and a few other places in non-standard ways; we're actually trying to standardize that, so it becomes easier for everyone.
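As a sketch of what that standardization looks like on the wire, grant management adds a couple of authorization request parameters plus a small REST-style API for querying and revoking grants. The spec is still a draft, so the parameter and action names below (`grant_management_action`, `grant_id`) should be treated as illustrative rather than final:

```python
# Illustrative sketch based on the FAPI grant management draft; the
# parameter and action names are draft-stage and may change.

def authz_params_with_grant(base_params: dict, action: str, grant_id: str = None) -> dict:
    """Extend an authorization request so the AS creates, updates or
    replaces an identified grant, instead of silently minting a new,
    untracked consent on every authorization."""
    assert action in {"create", "update", "replace"}
    params = dict(base_params, grant_management_action=action)
    if grant_id is not None:
        params["grant_id"] = grant_id  # identifies the existing grant to touch
    return params

def grant_api_url(grant_endpoint: str, grant_id: str) -> str:
    """Resource URL for querying (GET) or revoking (DELETE) a grant."""
    return f"{grant_endpoint}/{grant_id}"

p = authz_params_with_grant({"client_id": "abc", "scope": "openid accounts"},
                            "replace", "grant-123")
```

The point is the one made in the talk: the UK and Australia each invented their own consent-lifecycle mechanics, and a common `grant_id` model lets every ecosystem manage consent the same way.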
Speaker 15 03:18:31 And we make use of some new standards that weren't available at the time when we started FAPI 1. Most of these have come out of the IETF OAuth working group, and they have great security properties, so we're using them in FAPI: pushed authorization requests, rich authorization requests, and the OAuth security best current practices have all come out of that working group and really represent the latest and greatest, so they're all used in FAPI 2. The grant management API has been developed within the FAPI working group, and that really enables better consent management. So FAPI security is very well understood: we've got the in-depth formal analysis for FAPI 1; FAPI 2 started off with a defined attacker model; it's got reduced optionality compared to the OAuth 2.0 specs; and the University of Stuttgart have now started a formal analysis of FAPI 2 and are working through that process. We're hoping to have that formal analysis of FAPI 2 completed later this year, which is another great milestone for FAPI 2 becoming a reality.
Speaker 15 03:19:46 So, as I mentioned earlier, we're also building the conformance suites for FAPI 2 at the moment, and there are a few issues coming out of that, which is a good thing: the spec language, and the spec itself, are being improved. We also have to consider some new challenges at the same time, not because of FAPI 2, but one of the challenges that we're running into recently is testing production systems. It's proving to be very important to actually test the production systems. It was one of the things that didn't happen initially in Brazil, and they're having to kind of roll that back and make sure they are testing production now, because no matter how much people say, hey, I've got a sandbox that's configured exactly the same as my production system, there appear to be very few companies in the world that can actually make that a reality.
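The pushed authorization request (PAR, RFC 9126) pattern mentioned above can be sketched like this: the client first POSTs the full authorization parameters to the PAR endpoint over an authenticated back channel, gets back an opaque `request_uri`, and then sends only that reference through the browser. Endpoints and identifiers below are placeholders:

```python
from urllib.parse import urlencode

def par_form_body(client_id: str, redirect_uri: str, scope: str) -> dict:
    """Authorization parameters POSTed directly to the PAR endpoint,
    so they travel over an authenticated back channel instead of the
    user's browser, where they could be read or tampered with."""
    return {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }

def front_channel_url(authz_endpoint: str, client_id: str, request_uri: str) -> str:
    """The browser redirect then carries only the opaque reference
    returned by the PAR endpoint (a urn:ietf:params:oauth:request_uri:
    value), not the authorization parameters themselves."""
    return authz_endpoint + "?" + urlencode({
        "client_id": client_id,
        "request_uri": request_uri,
    })

url = front_channel_url("https://as.example.com/authorize", "s6BhdRkqt3",
                        "urn:ietf:params:oauth:request_uri:6esc_11ACC5bwc014ltc14eY22c")
```

This is one concrete way FAPI 2 removes unauthenticated messages from the flow: the security-relevant parameters never transit the front channel at all.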
Speaker 15 03:20:44 So in terms of where we are with the FAPI 2 standards: the Baseline profile went to implementer's draft a few months ago now, at the same time as the attacker model. The grant management spec has been actively worked on; that's a working group draft at the moment, hopefully going to a first implementer's draft later in the year. The FAPI Advanced profile, which adds some non-repudiation or signature methods, has been actively worked on by the working group as well, and we're hoping to have the first implementer's draft of that out within the next few months. And we are already seeing adoption of FAPI 2: the Norwegian health systems have an integration ongoing, and Australia's Consumer Data Right have already announced their intention to migrate to FAPI 2, and they're actually sponsoring the formal analysis, so that's a really quite strong intention. We have a microsite about FAPI, so please do visit it if you'd like to learn more about FAPI. And that's the end of my slides, so if anybody has any questions, please do shout out.
All right. Thank you very much, Joseph. But we're not done yet.
Speaker 15 03:22:10 Not done yet.
This is in the spirit of a workshop: I'm gonna call on the audience again, because we have some implementers in the room. So, Steinar, would you be open to sharing a few of your observations from implementing FAPI 2 in Norway? Yeah? Sure. All right. Thank you.
Yeah. Hi. So we have been looking at FAPI for quite a while, and when this work on FAPI 2 started, we quickly grabbed the early specifications and started requiring them of our relying parties and resource servers and ourselves. We are like a national federation gateway. So we started using those requirements, and it makes our lives a lot easier, because the difficult choices have already been made. It's much easier for everyone within our ecosystem to do things securely and in an interoperable manner. Also, I can say that, well, the slide said integration ongoing; I must admit we don't have support for the entire set of requirements. For instance, we haven't yet been able to implement sender-constrained tokens, which is a very important piece of the FAPI requirements, so we are hoping to see more support for that. And also PAR, the pushed authorization requests, is not in place yet, but we're working together there. And I think part of what we see is that we haven't seen use for, for instance, CIBA, which is part of the FAPI framework, or whatever you wanna call it; it consists of different specifications. But anyway, our experience is that this is a really, really good initiative from the OpenID Foundation, and we're thankful that they've made it. Yeah. Thanks.
Thank you, Steinar. Clearly, having implementations of FAPI 2 is helping to prove out the integrity of the standard, even at this early stage, and, to me, importantly, showing that use case for health: sharing of health records and empowering people to give permissions to move their medical information across a wide spectrum of providers. There's a topic, which we won't have time to go into today, on health and the general application of these standards for the health community. So it's great to see Norway taking a leadership role in how to use these kinds of standards to solve federation problems, because, from my observation, the problem is kind of endemic around the world. In the US, I know it's incredibly painful, and they're trying to figure out how to solve some of those data-sharing questions, but I think it's problematic more widely. And then, of course, even cross-border it's somewhat hopeless, right? So global interoperability would be something we'd like to see. Another question, then, for Nat Sakimura, our chairman, who is also one of the co-chairs of the working group. Nat, could you share a little bit more about what you're seeing in the global adoption of FAPI 1, the movement to FAPI 2, and the security analysis that we're trying to get done?
Yeah. So from the beginning, formal security analysis was really important for us, because we wanted to make sure that it is actually secure. Of course, we've been following the guidance of David Basin and Cas Cremers; Cremers is the person who did the formal verification of TLS 1.3. That's what I laid out as the BCM principles. But, you know, you can just try to be secure like that, but you really need to verify that it actually meets the security target, and that's what we've done for FAPI 1.0, and we are doing it for FAPI 2.0. And that had a very good impact on the global adoption: those ecosystems, like the UK or Australia or Brazil, could adopt it with peace of mind that it's gonna be secure, right? Well, I could give you one story. The last trip I made before the COVID thing, back in February 2019, was to London.
And one of the purposes was to meet the vice governor of the Brazilian central bank, and we had a very good exchange there. It actually worked out, you know, to have the adoption of this secure protocol there. And also we heard, like Joseph mentioned, a lot of voices from industries other than banking, starting from aerospace or international travel or something like that, asking us to not be financial-specific. That's why we put "financial-grade" there. We are having a lot of demand from healthcare, from the energy industry, and so on and so forth. And also FDX in the United States, that's in the banking area, has decided to adopt FAPI, and we are seeing that in other jurisdictions as well. So it's coming. If your jurisdiction, whether it's a country or a sector, is thinking of having API protection, please, please look at FAPI, because that's going to save a lot of time and a lot of grief for you.
Thank you very much, Nat, and thank you very much, Joseph. I appreciate your comments on FAPI and its rapid rollouts around the world and what it's going to mean in terms of migration from FAPI 1 to FAPI 2. I have the privilege of talking to many of these government officials, and it's, again, kind of a pleasure to be able to say to them, from the OpenID Foundation's perspective, that we are really just trying to help them, whether they choose to use our standards or not. We wanna help them in their decision-making process on whether to select our FAPI standards. If they choose to select our FAPI standards, can we be of service to them with our certification program, and can we be of service to them with any local profiles they might need to adopt? That's the upside of being a nonprofit, right?
We're not rent-seeking on the ecosystem as a whole; we're really trying to help empower these government officials, or the private-sector entities if it's a market-led implementation. We are equal opportunity, and even if they don't choose our standards, there's still the realm of global interoperability and using our standards in that context. So I love having these conversations; it's a real pleasure, and a real acute problem, you know, as we see this work cascade around the world. So thank you for the boots on the ground, with Joseph helping to get it done, Nat in his leadership of the working group, and many other folks in the room who've been on the implementation side. Thank you for your hard work.
All right.
All right. Okay. So our next speaker is going to be remote, so this will be fun. Bjorn Hjelm, the vice chairman of the OpenID Foundation, leads the MODRNA working group. He is also with Verizon. And, Bjorn, hopefully you can check in. Can you hear us?
Speaker 24 03:30:35 Oh yes. Can you guys hear me?
Yes, we can. So, Bjorn, please take us away. I think Mike is gonna drive the slides for you.
Speaker 24 03:30:43 That's correct.
Go ahead, Bjorn, if you're speaking.
Speaker 24 03:30:59 We're on the first slide. Well, I was waiting for the first slide. Okay. So this is the presentation on the MODRNA working group. It was set up initially to support GSMA's technical development of Mobile Connect and also similar industry and standards development. The intent was to enable mobile network operators, MNOs for those familiar with the term, to become identity providers, and to develop a profile as well as extensions to Connect for use by MNOs to provide identity services. Go to the next. So the group of participants and contributors includes a healthy variety of both operators as well as vendors and interested parties, and the slide shows the logos of those companies that have been contributing and participating in the work of MODRNA. Go to the next.
Speaker 24 03:32:18 So the working group status. Right now we have one publication in final status: that's CIBA Core. The initial draft of CIBA evolved into CIBA Core, with profiles of CIBA Core for various use cases, and as you all heard previously in the FAPI update, there is a FAPI profile of CIBA as part of that working group's work. So CIBA Core is the final publication published. We have three publications in implementer's draft: the authentication profile, account porting, and the user questioning API. And we're actively working on bringing the MODRNA CIBA profile and the discovery and registration profile to implementer's draft status, which is a formal status for any publication within the OpenID Foundation. In terms of working group status, again, we are planning to have certification for some of the specifications, and we're also in active engagement with other entities beyond the Foundation, to engage with MNOs and others to provide assistance. Go to the next.
Speaker 24 03:34:07 So speaking of outreach, go to the next slide. I already mentioned GSMA. Besides Mobile Connect, there's also work by GSMA on what's referred to as RCS, or Rich Communication Services; that includes a specification for chatbots, where there is a use for OpenID Connect. There's also some work within GSMA on how to configure a device for a connectivity service using an embedded SIM that leverages Connect. Move to the next. Another party is what's referred to as the Third Generation Partnership Project, 3GPP. 3GPP is a collaboration among multiple global SDOs, and they have defined what are referred to as mission-critical services specifications. Those are for first responders and emergency services: specific types of communication, and getting priority for those services if you are one of those first responders. In those specifications there is a use of identity management, and there is an OpenID Connect profile that has been developed.
Speaker 24 03:35:49 So in addition to that, building on what was done for mission-critical services, there is work to support different types of vertical applications that make use of multiple network functionalities, including identity management, and there's also an OpenID Connect profile for those types of services. And so we've reached out to 3GPP to start a liaison relationship to take on some of this work with the Foundation. Go on to the next. And finally, we're in the final stages of establishing a relationship with ETSI. ETSI is also a founding partner of 3GPP, so there are multiple areas where there is interest between ETSI and the Foundation. The initial focus will be on ETSI's ESI, the Electronic Signatures and Infrastructures technical committee, based on the work that has been done with Connect and SIOP, but there are multiple areas of interest between ETSI and the Foundation for collaborating. And I think that is my last slide.
So I have two questions for you, if I may. One on ETSI, which we just showed there: I think part of the goal of the liaison is to help ETSI provide some support to the EU digital wallet technical expert group and their architecture reference framework activities. So maybe you can say a little bit more about the collaboration with them and what we hope we might be able to do.
Speaker 24 03:37:52 I'm not sure; I think Torsten is better placed to speak to this, but the intent is to leverage some of the work that has been done within the Connect working group and SIOP, and for ETSI to adopt and reference that work, similar to how we work with GSMA and have them adopt our specifications and reference them rather than develop their own.
I'm seeing some nodding from Torsten in the room, so he's agreeing with you. But obviously, with a mature organization like ETSI, it's great to formalize that liaison relationship and see if we can help provide them with some solid standards that they can point to. Right. My second question: I was personally intrigued when we had that conversation with the Open Banking Nigeria group, talking about their challenges with feature phones, right? There's a large number of feature phones in Africa; not everybody has a smartphone. Currently, the FAPI standards, for example, kind of assume you have a smartphone to implement those protocols, and so they were saying, we need to have a capability for feature phones, we have USSD requirements, and they think about things like Mobile Connect. Maybe you can talk a little bit about that use case, Bjorn, and what we as the community are trying to do to help solve that problem.
Speaker 24 03:39:20 So in Africa, and in some parts of South America, there's a dependency on having a phone, and not necessarily a bank account or some other way of having a reference point for the user. In many of those cases, the mobile network operator is the identity provider of the user, based on their knowledge of not only the connection but also the usage of that phone and the user's behavior. So USSD is an authenticator for Mobile Connect. The way Mobile Connect was created, or designed, was that you would have a way of federating identity but also a way of authenticating the user, and for authenticating the user there were multiple options; USSD was one of those options. So many service providers, whether those are banks or merchants, leveraged the operator's network to authenticate the user before approving a transaction. In this case of Nigeria, one of the authenticators that they're leveraging is USSD, and one of the operators that has deployed and provides service, specifically in Nigeria, is Orange. So that's another way where the MODRNA group is providing feedback or support to some of the Foundation's work, whether that's specific to the work that we're doing within the MODRNA group or other work, in this case FAPI.
Thanks. Thanks for that, Bjorn. I think it's just important to underscore that, you know, we appreciate that the standards are often commonly adopted in Western markets or more mature markets, but we truly wanna make sure we're in the service of all people globally, and that gaps like this, enabling feature phones and trying to bridge between the standards, are a really interesting and important opportunity for us to continue to close. So thank you. Any other questions from the room for Bjorn? All right. Wonderful. Thank you, Bjorn, for dialing in remotely in the early morning on the west coast of the US. So thanks very much, Bjorn.
All right. So you've survived; you are the last survivors in the room. Thank you so much for joining us today for the OpenID Foundation's workshop. There will be a string of presentations and keynotes during the course of the week going deeper into these topics; we encourage you to take part in those forums and obviously to approach us individually. And of course, our winners of our Kim Cameron Award, please do reach out to them as well. I'm very grateful for your participation in today's session. Let me see, Nat or Don, did you wanna add any other final parting comments before we let everybody out of class? Chairman's duty.
So thank you. Thank you very much for joining us. And do we still have the Kim Cameron Award winners here? Rochelle, maybe in the back? Yeah, yeah. If you can just come to the front so that you can show your faces, right? Yeah, I found you all. No pressure, no pressure. Yeah. So, you know, we'd love to introduce these, you know, young scholars to you, and, you know, please have conversations with them, and you'll probably learn a lot from the participants in this conference as well. So let's, you know, help the next generation, and the generation after, to come on board. Right. So that's it from me. Thank you very much for joining this workshop, and see you all through the week.
Thank you. And class is over. You're free to leave.
