We are about to enter a decade where critical business-grade information is protected by the OAuth 2.0 framework. Congratulations are not yet in order. Our mission for the next decade is to not repeat our own identity history, and instead to build a system that is provably secure through test-driven automation and that takes a vicious approach to detecting anomalies.
Can you feel the excitement right now? Can you feel it, right? We have been working for a long time. How many innovation awards has KuppingerCole given in the last six years to the types of technologies that have built us to the point where we can actually think about banking APIs available to third parties over the internet, connected by nothing but the consent of an end user? That's a holy grail. We've had massive large-scale architectures for a long, long time, right? They tended to be lower assurance, or they tended to be high assurance but closed in nature. This kind of ad hoc connection is amazing. And if we pull it off, it'll be wonderful, but we should all be prepared to run really fast for the next year.
So as most of you know, standards are not perfect out of the box, as much as I know people try to make them. So it is an iterative process, and it has to be that way. And we are the better for it being an iterative process. So you can see here: we find out about something, we learn about a problem, we fix the problem in the protocol or in the implementation, and then we roll that implementation out. Now that sounds really easy, except this has to happen over multiple specifications in multiple standards bodies, implemented by multiple vendors and multiple individual developers, and then deployed in multiple libraries to multiple actual physical production instances. It's a ripple effect. It goes all the way through our industry, and we have to control those ripples. The more controlled the ripples are, the more reliable the entire system becomes and the less risk we incur as a result.
So this means that it's actually good to fail. It's just better if it's somebody else who does the failing, not us. That's the trick, sort of like, you know, you wanna be faster than the last guy running from the bear. It's kinda like that. So let's talk about some of the experiences we've already had and what we've learned from those experiences. We've been through OpenID, OpenID 2.0, and we learned a lot from that. We learned about how important it is to sign a request properly so that you can't insert additional query parameters. We learned about, you know, what works for low-assurance identity and what works, or does not work, for high-assurance identity. We also learned about ad hoc connections, right? OpenID had a really, really attractive grassroots trust model.
Now SAML is a different kettle of fish, and SAML's a lot more active today, but SAML has a couple of really interesting characteristics that need to be examined. You know, SAML has been around for a decade, and there have been in the past very heavy, very expensive certification efforts. Where are they today? Not every implementation you interact with will be certified today. Is that bad? Maybe, maybe it's not. Maybe you can trust them anyway, but it sure would be nice to know, wouldn't it? One last thing about OpenID 2.0: there are still libraries in production in this world from 2006 that haven't been updated since 2006. So that ripple effect of libraries matters. When people don't update that software, you end up with vulnerable software left in some instance somewhere. And in that case, setting and forgetting is not good.
And that is a major lesson that we need to learn for our future. So let's talk about the future. It's very exciting, right? OAuth 2.0 has a lot of really cool characteristics. It has an asymmetric trust model, meaning that an authorization server can do the bulk of the work so that connecting clients can connect quite lightly and easily. This has a huge advantage when it comes to scale and ease of adoption, right? It means you do all the work, and then you just let people connect to you, and you watch them and try to make sure that they're doing the right thing at the right time. This is what's going to enable scale. This is what's going to allow us to go from what we have now, which are, you know, contained instances, to this idea of an M-by-N ecosystem of trusted third parties and financial institutions, right?
So, you know how I said we get to learn from our mistakes? Well, we do make mistakes, and we will learn from them. When I talk about federated accountability, that's what I'm talking about. We are all in this together. We have to learn from every single issue that occurs. There are lessons in every single one. So I'm gonna go through four recent issues that have been seen and talk about what this means for the future, how we're going to use these lessons to make sure that open banking is secure. All right, let's start with one that's just recent. Just happened, right? There was a phishing attack on Google maybe a day ago, two days, depending on time zones and flying from the United States and things like that. This attack was a phishing attack where an email link that looked like a Google Docs share was sent to people.
And what happened inside of that phishing link was a perfectly legitimate OpenID Connect transaction. So the user clicked on the link, which they shouldn't have done in the first place, but they click on the link. They are taken to a consent page from Google, but here's the problem: Google shows self-asserted data to the user. So whatever the client specifies as its application name, that's what gets shown to the user. So what's the result? Some people go in and create a bunch of clients called "Google Docs." And now it's up to the user to notice that Google Docs usually doesn't have the permission to manage and delete their email. So there are a couple of learnings from this, right? One is client vetting. Clearly in open banking and PSD2, you're not gonna just be able to willy-nilly rename your clients whatever you want. You would hope, right?
There is a set of reputational expectations that occur for a trusted third party in that kind of ecosystem. However, you know, this is a garbage-in, garbage-out type of deal, so you have to be paying attention. The other thing is we don't actually know how many people did not click that consent link. I mean, that is the whole point of that consent page: to give the user enough cues that they can stop, that they know something's wrong. So the real learning from this is that there's a lot of innovation to be had in the area of consent, right? We need to give people more nuanced sets of information so that they can make more nuanced decisions.
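One direction that consent innovation could take, sketched here with entirely hypothetical names and policy (none of this is Google's actual implementation): a consent prompt that tells the user when a display name is self-asserted rather than vetted, and flags unusually broad scopes.

```python
# Hypothetical policy: scopes considered broad enough to warn about explicitly
DANGEROUS_SCOPES = {"mail.readwrite", "mail.send", "contacts.readwrite"}

def consent_prompt(client_name: str, verified: bool, scopes: set) -> str:
    """Render a consent page that distinguishes vetted from self-asserted names."""
    origin = "verified publisher" if verified else "UNVERIFIED app, name is self-asserted"
    lines = [f"'{client_name}' ({origin}) is requesting:"]
    for scope in sorted(scopes):
        warning = "  <-- broad access" if scope in DANGEROUS_SCOPES else ""
        lines.append(f"  - {scope}{warning}")
    return "\n".join(lines)

# A client that self-asserted the name "Google Docs" but requests mail access
print(consent_prompt("Google Docs", False, {"mail.readwrite", "profile"}))
```

The point of the sketch is the extra cues: a user who sees "UNVERIFIED" next to a famous name, plus a warning on the mail scope, has more to react to than a bare application name.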
All right, now we talked about reputation. Surely PSD2 and open banking ecosystems, you know, won't be at risk, because clients are going to be good actors because of reputation; no client will want to get kicked out of the club. Well, that's been proven wrong just recently with a Signaling System 7 (SS7) flaw, where, you know, in the old days people thought it was no big deal if there were flaws in that signaling protocol, because no telco would risk its reputation by attacking or otherwise abusing those protocols. But now we're seeing that this isn't the case anymore, that you can kind of become a telco now pretty easily. The barrier has been lowered, and attackers are trying to take advantage of that right now. So again, another lesson, right? At what point does the barrier become so low that it's worth it for an attacker to just come in and take a swing to see if they can make an attack work?
The next one is an older one. So I don't know if all of you have seen this, but this is a really interesting one. I don't know, maybe eight months ago, PayPal had a flaw where they had allowed testing against localhost, right? Which is 127.0.0.1, the loopback address. This happens quite frequently, right? It's pretty safe; you don't have to worry about things getting leaked. Except there was a little mistake, just a little typo a developer didn't notice, where, you know, it would work for you to make a request against localhost, or to redirect back to localhost as your client, but it also worked if you made it into localhost.mydomain.com, right? So now you can basically take any domain, put "localhost" in front of it, make a request, and ask for the token to be redirected back to your server. Pretty good, right? This is an amazing one, because this is a really tough problem to actually find. But somebody found it, right? Somebody was looking, someone was testing, and they didn't go and sell that flaw to the government. They disclosed it in a responsible way and saved I don't know how many consumers from possible fraud.
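The class of bug described here can be sketched in a few lines; the function names and the registered URI are hypothetical, not PayPal's actual code. The broken check matches on a host prefix, so "localhost.mydomain.com" slips through; the safe check does an exact match of the whole redirect URI against the registered value, as RFC 6749 recommends.

```python
from urllib.parse import urlparse

# Hypothetical registration record: the one redirect URI this client registered
REGISTERED_REDIRECT_URIS = {"http://localhost/callback"}

def broken_validate(redirect_uri: str) -> bool:
    # BUG: a prefix check on the host also accepts "localhost.mydomain.com"
    host = urlparse(redirect_uri).netloc
    return host.startswith("localhost")

def strict_validate(redirect_uri: str) -> bool:
    # Exact string match of the full URI against what was registered
    return redirect_uri in REGISTERED_REDIRECT_URIS

attack = "http://localhost.mydomain.com/callback"
print(broken_validate(attack))   # True: the token would leak to mydomain.com
print(strict_validate(attack))   # False: rejected
```

This is exactly why it was such a tough bug to spot: the broken validator behaves correctly for every legitimate localhost test the developers themselves would run.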
All right, the last one is Tinder. And this is a really interesting one. So the Tinder APIs have been fully reverse engineered. Anyone who puts up APIs should expect that this will happen. But here's the interesting thing with Tinder: they don't use client secrets at all. So every client in Tinder looks like every other client. And by the way, every client has the ability to suck down every user's information, right? I mean, that's what it is. Swipe, swipe, swipe, swipe, swipe. So they don't have the ability, other than setting a header, to tell the difference between a phone client acting against Tinder APIs to swipe, swipe, swipe, and a script that's simply hoovering data up one after the other, profile after profile, to be, you know, used in whatever manner the attacker wishes, right? So there's a lot to learn from that. I would suggest that even if you know your client can't keep a secret perfectly, at least make your attackers reverse engineer it, right? Why not just make 'em take that extra step? It doesn't hurt. It doesn't hurt to do it. But the other thing is this should highlight a future direction for the industry, and that is: it's no good when all the clients look the same. You need client identity just as badly as you need user identity in order to make smart security decisions over time.
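To make the "at least have a secret" point concrete, here is a minimal sketch of what even basic client authentication buys you at a token endpoint. All names here (the in-memory registry, `register_client`) are hypothetical illustration, not Tinder's or any real server's API.

```python
import hashlib
import hmac
import secrets

# Hypothetical in-memory registry: the server stores only a hash of each secret
_client_registry = {}

def register_client(client_id: str) -> str:
    """Issue a fresh secret; hand it to the client developer exactly once."""
    secret = secrets.token_urlsafe(32)
    _client_registry[client_id] = hashlib.sha256(secret.encode()).hexdigest()
    return secret

def authenticate_client(client_id: str, presented_secret: str) -> bool:
    """Token-endpoint check: does this caller hold the registered secret?"""
    stored = _client_registry.get(client_id)
    if stored is None:
        return False
    presented = hashlib.sha256(presented_secret.encode()).hexdigest()
    return hmac.compare_digest(stored, presented)  # constant-time compare

official_secret = register_client("mobile-app")
print(authenticate_client("mobile-app", official_secret))  # True
print(authenticate_client("mobile-app", "guessed"))        # False
```

A secret baked into a mobile binary can of course be extracted, which is the speaker's point: it does not stop a determined attacker, but it forces the reverse-engineering step, and per-instance credentials (for example via dynamic client registration) take the idea further.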
All right. So, oh my gosh, the flaws! They're all flaws, they're in OAuth, and it must be bad. Well, that's not true. I want you to think about what we just talked through. We talked through four issues where the root cause could be determined, where actions could be taken, where companies could be sure that if they acted on these examples, they would not be subject to the same issue over again. Compare that to what it means when you hear a report of account harvesting or password phishing, okay? Those are cases where the attackers try the same thing over and over again, and every time it works, and every time the best we can say is, oh, you should use a more secure password, or you shouldn't reuse a password, right? These are much, much more constructive actions that our entire industry can take. And the high water raises all boats. So once this attack happens once, there is no excuse for anyone here to not notice it and act.
So my definition of federated accountability is fairly straightforward. I hope that you all doubt things all the time. Do not assume that your clients are validating signatures. They probably aren't. Or maybe 99% of them are, and there's one guy or one girl who just types one wrong thing, right, and rolls out a version of the application for one hour that doesn't do the right thing. How do we fix that? How do we mitigate that risk? I believe that testing is the absolute future of this industry, for IoT and for mobile specifically, right? These are large-scale repetitive interactions, and they need to act properly. We all have to work together: collaborative forensics. That means releasing root cause analyses, reading other people's root cause analyses, acting on them, and making sure you're updating your libraries, right? Not just your federation libraries: your OpenSSL libraries, your SSL certificate stores. All of these things contribute.
And here is my number one recommendation to you. When I say testing is important, this is the entity that is doing the type of testing that will allow you to simply rely on somebody else's definition of errors, to allow collaborative construction of a body of tests that you can simply point folks to and have them run. OpenID certification is inexpensive. It is lightweight. It is self-asserted, and it is open source. Okay? If you don't want to assert or certify anything in the public domain, you can take this test harness, you can pull it into your own environment, you can write your own tests and run them in private. So there is no excuse for you to be releasing clients that don't act properly. And my suggestion would be to take this and turn it into a Let's Encrypt type of model, right? Don't allow people to certify once and walk away, because we know the code bases diverge. Make sure that they're coming back again and again and again to certify and get new credentials, or to certify and see their statistics. Whatever it is, be creative. This is your future at stake. This is you getting to learn from the examples instead of being an example.
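That "Let's Encrypt type of model" can be sketched very simply; the 90-day validity window and all function names below are assumptions for illustration, not part of the actual OpenID certification program. The idea: a passing test run earns a credential that expires, so clients must keep coming back as their code bases drift.

```python
import time

CERT_TTL_SECONDS = 90 * 24 * 3600  # assumed 90-day validity, like a TLS certificate

_last_passing_run = {}  # client_id -> timestamp of last passing test run

def record_passing_run(client_id: str, now: float = None) -> None:
    """Called when a client passes the conformance test suite."""
    _last_passing_run[client_id] = time.time() if now is None else now

def is_certified(client_id: str, now: float = None) -> bool:
    """Certification lapses unless the client re-runs the tests periodically."""
    ts = _last_passing_run.get(client_id)
    if ts is None:
        return False
    current = time.time() if now is None else now
    return (current - ts) < CERT_TTL_SECONDS
```

The expiry is the whole design choice: a one-time badge says nothing about the code shipping today, while a lapsing credential forces exactly the "come back again and again" behavior the speaker is asking for.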
So these are the things I think we have to prepare for. We have to prepare for the day when every single mobile phone will have an instance of an application that has its own identity. That's gonna have a big effect on our systems, right? Individuating all of those pieces of software. But it has to happen, if not for mobile then certainly for IoT: you have to have a way to tell them all apart. You have to be able to withstand waves of attacks. When these attacks come, they're not gonna come one and two and three, right? When someone figures out how to get through this system, they will get through the system en masse. And there is preparation, there is disaster recovery, that has to occur at that time. Can you revoke a given grant at will, right? Can you notice something's wrong, and can you act on it? And that's not an easy thing. I say that as a vendor who makes a product where I don't know if I can answer that question, right? So this is hard, hard work, but it really pushes us towards intelligence: making intelligent recognition of what's happening and intelligent decisions once we see that something's happened. And with that, I will leave you with this to read, and thank you very much.
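"Can you revoke a given grant at will?" becomes tractable if every token carries a grant identifier, as in this sketch (all names hypothetical; real servers would back this with a database and follow RFC 7009 for the revocation endpoint): revoking the grant invalidates everything minted under it in one action.

```python
# Hypothetical in-memory stores: each token points back to the grant it came from
_grants = {}   # grant_id -> revoked flag
_tokens = {}   # token -> grant_id

def issue_token(grant_id: str, token: str) -> None:
    """Mint a token under a user-to-client grant."""
    _grants.setdefault(grant_id, False)
    _tokens[token] = grant_id

def revoke_grant(grant_id: str) -> None:
    """One action kills every token minted under this grant."""
    if grant_id in _grants:
        _grants[grant_id] = True

def token_valid(token: str) -> bool:
    grant_id = _tokens.get(token)
    return grant_id is not None and not _grants[grant_id]

issue_token("alice-to-acme", "tok1")
issue_token("alice-to-acme", "tok2")
revoke_grant("alice-to-acme")
print(token_valid("tok1"), token_valid("tok2"))  # False False
```

The indirection through the grant is what makes "at will" possible: you act once on the relationship, not once per outstanding token.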
Thank you very much, Pamela. I enjoyed the very technical details of the talk and of the attacks. That was really interesting. Two questions have been posted. Anyone who's writing APIs should expect full reverse engineering, is what you basically said, yes? Isn't that dangerous for the API economy?
Absolutely not. It's only dangerous if you don't expect it to happen. Those APIs you're publishing, half of them you publish, maybe half of them you think are private. They're not private. You should expect that someone is running mitmproxy. If you don't know what mitmproxy is, then hire someone who does, because that's what people do: they set up their phone, they run mitmproxy on it, and they watch everything that happens to the mobile app.
And next question: what exactly is looked at in the OpenID certification process? Is it compliance with the standard, or is it even more? Is it some quality check?
Good question. So there are various profiles that you can declare conformance to. So you may only implement part of the specification, but these profiles are sort of set up so that you can claim to do one set of functionality, and you may not have to claim that you do other sets of functionality. What we're expecting, for example, is that in the open banking world, the Financial API (FAPI) working group will have a profile, and you should be able to certify specifically for that financial set of functionality. Okay.
Thank you very much again.
Thank you.