Webinar Recording

Is Multilayered Cyber-Defense Out Already? Against Rising Breaches and Vulnerabilities, Data-Centric Security to the Rescue!


As the growing number of high-profile data breaches indicates, even the largest companies are still struggling with implementing consistent enterprise-wide information security measures. Setting up various security tools for numerous different applications, services, and heterogeneous systems and then making them work together efficiently is a massive challenge.

All right. Welcome to another KuppingerCole webinar. Ladies and gentlemen, our topic for today is: Is multilayered cyber defense out already? Against rising breaches and vulnerabilities, data-centric security to the rescue. My name is Alexei Balaganski. I am the Lead Analyst at KuppingerCole, and today I am joined by Felix Rosbach, who is a product manager at comforte AG. Before we begin, just a few words about our analyst company. We are an independent analyst house founded 15 years ago, here in Germany. We are headquartered in Wiesbaden, but we have a pretty solid global presence, ranging from the US to the UK and Germany, of course, all the way down to Singapore and Australia. We offer vendor-neutral guidance, technical expertise, and thought leadership on just about any topic concerning information and cyber security, identity and access management, risk management and compliance, and anything around digital transformation. What we do basically falls into three different fields.
We do our own research on all the markets in this area and publish vendor-neutral, objective advice on our website. We run a range of different events, from free online webinars like this one all the way to pretty well-known, industry-focused conferences, like the one we just wrapped up last week: the European Identity and Cloud Conference here in Munich, Germany. And of course, we offer project-based direct support for our customers in the form of various advisory services. Speaking of events, as I mentioned, you unfortunately missed EIC 2019, which just ended last week, but we are definitely looking forward to seeing you at one of our events this year, and of course at EIC 2020 next May. Before we start with the actual content, just a few housekeeping words: you are all muted centrally, so you don't have to worry about that. We are recording the webinar and will publish it tomorrow on our website.
We will send each of you an email with a link. We will have a questions-and-answers session at the end of the webinar, but you are welcome to submit your questions anytime using the special tool in the GoToWebinar control panel. As usual, we have three parts in today's webinar. First, I will give you a general overview of the security and compliance challenges of dealing with sensitive data and a basic introduction to the alternative, data-centric approach toward them. Then I will hand over to Felix Rosbach, who will be talking about more real-world examples of implementing a data-centric approach to data security and data protection to better address compliance requirements. And as I mentioned, we will have a Q&A session at the end. Without further ado, let's just start with a brief review of the latest data breaches.
You have probably heard many different numbers; I've collected a few more or less recent ones on this slide. On average, over 6 million data records are stolen every day, and it usually takes up to 200 days for a typical company to even detect that they have been breached. More often than not, it happens when they are informed by a third party. And of course, it happens in just about every critical vertical, but if you are not in a critical vertical, that by no means implies that you are safe from such a breach. Only 3% of those breaches were "safe", meaning that the data that was stolen wasn't actually compromised because it was protected in one way or another. When we focus only on really large mega-breaches, meaning those affecting more than 50 million records, the statistics are even more gloomy.
It usually takes up to a year to even acknowledge such a breach, and much longer to detect and fix all the fallout, all the consequences of such a breach. I've listed just a few notable ones. You've probably heard about Equifax and Marriott; those were last year, affecting hundreds of millions of customers around the world, who lost not just their address data but very tangible financial records and payment data, basically putting them in danger of various cyber attacks. A more recent case is really notable because nobody knows who actually lost the data: an unprotected database hosted on AWS was discovered listing over 200 million very detailed records of Indian citizens. And the last one happened just a week ago. Another Indian company left their whole customer database, of 50 million Instagram users, unprotected for several days, again on Amazon. The reason I wanted to mention this one is that apparently even the management of this company did not feel that they did anything wrong; apparently, all this data is publicly available anyway, so why bother? Well, they're lucky they're not based in the EU, because there they would immediately face all the consequences of a really tight data protection regulation.
So why does all this happen? Obviously, because of this small thing called digital transformation. Nowadays, data is everywhere. Data is basically the daily bread and butter, the product, for many modern businesses. And it can be found anywhere: on premises in various databases and applications, in legacy systems, in the cloud in your own and third-party services, in various industrial environments, or just anywhere in the world, wherever your mobile workers or contractors or business partners are. So obviously, securing all this data becomes a massive challenge. And unfortunately, this is where traditional database security, with all its massive complexity, which I tried to sketch on the slide, comes up short. Obviously, the very definition of what a database is nowadays is blurred: is your SharePoint server a database? Is your email server a database? Is your S3 bucket on AWS a database?
Sure they are. And for all those big data and cloud services and all other data stores, regardless of their format, underlying infrastructure, security model, or the external team of admins running that particular data source, you are still fully responsible for protecting your sensitive data stored in that data source. The question, of course, is: how do you deal with this huge complexity? The traditional approach is obviously to set up a firewall, an application firewall, or a privileged access management installation, an encryption tool, you name it: all of those, or any combination of those database security technologies you've just seen on the previous slides, around each separate database and other data source. But can you really afford it? Do you have enough resources, money, experts, and just time to do it? Probably not. And this is why there is an alternative, data-centric approach to the rescue, which has already established itself as a very promising, proven method to protect your data regardless of where it's being stored at the moment.
The concept behind it assumes that your data has to be, in a sense, smart. These are the foundational principles of data-centric security, which were formulated over 10 years ago by an American researcher, Rich Mogull. In a nutshell, your data has to take care of itself: it must be self-describing and self-defending, it must be, quote, accountable to your business policies, and it has to be protected in a similar, consistent manner regardless of the underlying technology, application, system, or whatever IT landscape we are talking about at the moment. Sounds good. So why hasn't it been implemented everywhere? The problem is that not all your data was created the same. There are certain prerequisites which you have to implement even before starting to think about consistent data protection.
Obviously, you have to know where your data is, so data discovery is really step number zero for every data protection strategy. You have to be able to answer the questions: what am I supposed to protect, where is it, and how sensitive is it, according to your own taxonomy, government regulations, industry standards, whatever. And of course, you have to know at all times what's going on with the data. Does it move around between systems? Is anyone accessing it at the moment? Is it your internal employee or an external partner, and so on. So you have to have continuous discovery, classification, and visibility to be able to implement consistent and ubiquitous data protection, along with the basic principle that you should never leave your sensitive data unprotected at any point in time, regardless of whether it's at rest, in transit, or in processing.
So how do you even approach this? How do you implement this interesting concept? There are in fact multiple potential approaches. If you've followed the market for the last 10 years or so, you've probably heard about zero-knowledge encryption. This was hailed as the ultimate solution to all your data protection woes some years ago. The idea is basically that you never leave your data unencrypted, and you should be the only person keeping the keys to that encrypted data. Sounds good; doesn't work. Why? Because, as I mentioned, the data has to move around and it has to be accessed by third parties. The next idea is so-called homomorphic encryption. It implies that the data can somehow remain encrypted while some interesting operations are performed on it without decryption. Again, it's a great academic concept, and there have been interesting developments in that area, but nothing ready to be converted into a real product yet. Fingers crossed.
The next step is information rights management (IRM). You've probably heard about it with regard to Office documents or Adobe PDF documents, for example, where each document is wrapped in an armor of sorts, which includes not just an encryption layer but also a built-in policy layer. This means that each piece of data becomes smart: it knows how it's supposed to be protected, regardless of whether it's being opened in a native application or a third-party one. Again, the problem here is that not all data is as smart; you cannot really put IRM on a database, right? Secure enclaves, or trusted execution environments (TEEs), are again a really interesting upcoming technology. The idea is that since your data obviously cannot be processed while encrypted, you put it into a really highly secure, isolated environment first, decrypt it there, do your job, encrypt it back, and only then take it out of that secure enclave.
Then you are fine. Again, it's a great concept, and it already works in very isolated environments, but it's very far from reaching truly ubiquitous, cross-platform status. Zero trust: you've definitely heard about this approach many times. Unfortunately, nobody knows for sure what zero trust actually is, so we are still very far from seeing it implemented consistently by everyone. And finally, we come to the last one, definitely not the ultimate one, but probably the most reasonable approach. Yes, we would say, we cannot protect all of our data, but at least we can make sure that those bits which are really sensitive can be desensitized, if you will: just those fields, or database columns, or individual records, like credit card numbers, names, addresses, you name it. Only those bits are kept secured; the rest just works as usual. It's not perfect, but at least it's compliant.
And it ensures that at least the very sensitive bits are never exposed to the bad outside world. When it comes to this desensitization, there are again multiple approaches, but they all boil down to the same principle: you take the sensitive bits of data out and you replace them, you substitute them with other bits of data which somehow serve as replacements. Some of those approaches are irreversible. The most irreversible one would be what's called anonymization under GDPR: not just taking the individual records and making them unreadable or unrecognizable, but actually adding some statistical noise to ensure that nobody can ever reconstruct even an approximate view of the dataset. It's useful for many use cases, like data mining, statistics, and so on, but that is not our focus for today. Today, we are going to be talking about various types of masking, meaning that we take bits of data and replace them with other bits of data, which can be format-preserving so that they won't break existing applications, which expect, for example, a credit card number to contain only numbers and not just some random characters. And here there are multiple ways to go.
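To make the masking idea concrete, here is a minimal sketch in Python (a hypothetical helper, not any vendor's implementation): all but the last four digits of a card number are replaced with random digits, while the length, the separators, and the all-digits format are preserved so that existing applications keep working.

```python
import random

def mask_pan(pan, keep_last=4, seed=None):
    """Irreversibly mask a card number (PAN), preserving its format.

    All but the last `keep_last` digits are replaced with random digits;
    separators such as spaces or dashes stay in their original positions.
    """
    rng = random.Random(seed)
    digits = [c for c in pan if c.isdigit()]
    n_mask = len(digits) - keep_last
    # Random replacement digits, then the original trailing digits.
    masked_digits = [str(rng.randrange(10)) for _ in range(n_mask)] + digits[n_mask:]
    # Re-insert any separators in their original positions.
    out, i = [], 0
    for c in pan:
        if c.isdigit():
            out.append(masked_digits[i])
            i += 1
        else:
            out.append(c)
    return "".join(out)

masked = mask_pan("4111-1111-1111-1234", seed=42)
```

Because the replacement digits are random rather than derived from a key, this masking cannot be reversed, which is exactly what distinguishes it from the format-preserving encryption and tokenization discussed next.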
You can implement format-preserving encryption, a type of encryption where your sensitive data still remains in your data stores but is inaccessible without the encryption key needed to decrypt it back. Or you replace each bit of data with a totally unrelated token, a piece of data which by itself has no relation to the original data at all. Then you need some kind of system, a third-party framework, to look up those tokens and get the original bits of data back when they are needed.
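The vault-based token lookup just described can be sketched as a small in-memory class (purely illustrative; a real system would persist and protect the mapping, scale it out, and guard access to it):

```python
import secrets

class TokenVault:
    """Minimal sketch of vault-based tokenization: each sensitive value
    is swapped for an unrelated random token, and the token-to-value
    mapping lives only inside the vault."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value):
        if value in self._value_to_token:      # same value -> same token
            return self._value_to_token[value]
        # Format-preserving: a random digit string of the same length.
        token = "".join(secrets.choice("0123456789") for _ in value)
        while token in self._token_to_value:   # avoid (unlikely) collisions
            token = "".join(secrets.choice("0123456789") for _ in value)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token):
        """Look up the original value; only the vault can do this."""
        return self._token_to_value[token]
```

Unlike format-preserving encryption, the token here is mathematically unrelated to the original value; stealing tokens without the vault yields nothing.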
And this is where we come to the biggest challenge. Since our data can be anywhere, not just on our own premises but hosted and processed by multiple third parties and multiple different applications developed by different teams, how do we make sure that each of those teams, each of those applications, can access the same data consistently, without any pauses, interruptions, or manual processes? Everything should just work as it should, without any scalability problems and so on. So how do you achieve ubiquity and continuity of your data protection? Here, a nice animation shows you that the same question arises for multiple systems with different technologies and different application stacks. This is where I am going to hand over to Felix, but before that, just a few key takeaways to reiterate. Yes, the modern IT infrastructure is too complex to rely on traditional infrastructure-based security like firewalls, database walls, and other existing technologies, and data-centric security is the alternative, because you only protect data once and can then access it securely everywhere. The only question is: how do you implement it consistently across different systems, both modern and future-looking ones, like the cloud, and totally backwards, legacy systems, like mainframes? How do you ensure that this is implemented consistently, without any scalability, fault tolerance, or performance issues? Well, you know what, there are solutions which can do just that. And with that, Felix, the stage is yours. Let's talk about it in detail.
Okay, thank you, Alexei, for giving a great overview of data-centric security, and thank you all for joining. My name is Felix Rosbach, as Alexei said, and I'm working with the product team here at comforte. We are essentially helping enterprises worldwide to protect their data, be compliant, and also monetize their data. Most companies, when looking at data-centric security as an approach, are asking themselves: does it make sense for me? How can I implement it? And basically, what is needed to implement data-centric security in my enterprise? So we are looking today at a real-world customer example. As far as I know, the audience is based in various different industries, and we took a payments processor as the example today. A payments processor is a really good example because, naturally, financial institutions like payments processors are really attractive targets for hackers.
One thing is that they process sensitive data, personal information of their users. On the other side, they also process payment information, and payment information is usually really sensitive because you can easily commit fraud with payments data. Mercury, as a payments processor, is processing payments for banks and for merchants, and therefore they have a lot of partners they have to care for. They're operating in 10 different markets in Europe and Africa, which also means a lot of different regulations they have to comply with. And of course, they're processing a huge amount of data: 1.5 million transactions a day, which is roughly 80 transactions per second. So they have a lot of data they have to protect, and also 6.9 million accounts they have to serve worldwide, which means a huge customer base.
The customers really want their payments to be processed securely and reliably, and that leads to their mission: basically, they want to deliver reliable services safely and securely. And of course, we all understand that with payments you really want to make sure that the customers' data doesn't get stolen and hackers can't use the data to commit any fraud. There are specific challenges for payment service providers, and Mercury essentially said that there were three different challenges they were tackling specifically. One big challenge for them was compliance: as a payment service provider, they had to comply with PCI DSS, the Payment Card Industry Data Security Standard, as they were processing payments. But they were also handling a lot of personal information, and as they are based in Europe, GDPR applies to their processing as well.
And of course, being in different countries and having a lot of different subsidiaries, other regulations could come up over time. So they had to make sure that whatever kind of solution they use to protect their data also complies with future regulations. And of course, the banks they are working with have their own security requirements, so they have to comply with those requirements too. So compliance and data privacy was a big challenge for Mercury when it comes to protecting their customers' data. And then there is trust and reputation: looking at the work they do for their customers, payments really is something that is all about trust, and anything they are doing shouldn't have any impact on end customers. In an ideal world, you don't even know that there's a payments processor taking care of your payments.
So trust and reputation was something that they were really focused on. Now, looking at their data flows: at some point information is created, and then you have PII data and PANs, primary account numbers. And of course, to monetize that data, they have to process it, so they have internal services doing something with the data in their processing environment. On average, statistics say that for every piece of data you have in your network, there are 10 copies all over the network. So sensitive data was everywhere in their system. They also have logs, analytics, reports, and backups, so the data is stored all over. And then, of course, you have customer service, you have the users that want to access the data, so the clear-text, sensitive data is there as well. And they had to send the data to third parties, for example to do the settlement at the end of the day.
So for Mercury, that was really a mess: they had sensitive data all over, and they didn't know how to really protect it. What they tried first was essentially a layered approach, implementing a lot of firewalls and making sure that nobody gets access to the data. But as Alexei said, you usually can't be 100% sure that nobody gets access. The layered approach only protects you from known attack methods, and you will never be sure that there isn't a patch that you forgot, or a vulnerability through which a hacker gets access to your system. What they also tried was encryption, but encryption comes with a lot of problems too, because to use the data you have to decrypt it, and when you decrypt the data, it's in the clear again, and then attackers can get access to that clear-text data.
And then the whole point of encryption is lost. Another issue with encryption is key management. Basically, the more complex your network is, the more you have to care about the different points where decryption or encryption is necessary. Then you have to distribute the keys, and you also have key rotation; you have to make sure that you change the keys from time to time. And looking at Marriott, which Alexei mentioned: when they got breached and the sensitive data was stolen, they were unsure whether the keys were stolen along with the sensitive data, so they weren't sure if the hackers got access to that data. Basically, Mercury was trying to solve that problem, and they came up with a few key requirements for a data security solution, to make sure that the solution works with their existing systems.
One thing that they wanted to achieve was sustainable compliance. As we said, they had to comply with various different regulations, so they also wanted to make sure that they comply with future regulations; that was something they required from a data-centric security solution. Also, as they had legacy systems in place and various different applications, they had to make sure that the solution they were looking for was able to plug in easily into their existing infrastructure. The third thing that was really important for them when looking for a security solution was reliability and scalability. Reliability is really all about their trust and their reputation, because if a solution breaks their system and it's not possible for them to process payments, that really must not happen. And of course, scalability matters for payments processors.
Usually the holiday season at the end of the year comes with a peak of payments, because a lot of people go shopping, and therefore they have to make sure that the solution scales with their needs. And of course, the last point seems logical: no impact on customers. Everybody wants that, but the more complex your system is, when you want to plug in a data-centric security solution, the more important it is to make sure that you don't need to shut down your system to switch from the unprotected to the protected state. And you also have to make sure that the data in the clear is available everywhere it is needed. So basically, they had to make sure that there's no impact on customers, that customers don't even notice that there's a security solution in place.
Okay, so let's look at how they solved the problem, starting with sustainable compliance. What does sustainable compliance mean? Basically, we are talking about two different regulations here, PCI DSS and GDPR. What they have in common is that they require strong data protection for any kind of sensitive data. PCI DSS says that you have to render the account number unreadable anywhere you store it, and GDPR takes a similar approach: when it comes to personal information, it requires you to pseudonymize or protect the data anywhere the data is located in your infrastructure. So this is something the regulations have in common. And of course, when we talk about sustainable compliance, that's not the only thing you want to do. You basically have to make sure that you keep systems out of scope, because it's not only about complying with all those regulations.
It's also about reducing the scope of your overall infrastructure, making sure that those regulations don't apply to as many systems as possible. If you only process pseudonymized or unreadable data on various systems, you don't even have to care about GDPR compliance for those systems. And a nice thing with GDPR, for example, is that it says you don't have to disclose a breach, you don't have to go out to the media, if only strongly protected data got stolen from your system. So that's a really good thing. And of course, sustainable compliance also means that if you implement new applications, if you implement new products, you have to make sure that they also comply. So your solution for compliance really needs to be flexible enough to also secure those new ideas, those new systems, if you want to undergo a digital transformation, for example.
At some point, they found out that tokenization is a really good approach to solve their problem. So let's just take a look at what tokenization does to your data. Looking at the business applications: at some point, the data is created, the data enters your system, and usually it consists of sensitive information. As we see here, this is an identity, and there are various different data formats: you have the first name, you have the last name, for example, and you can also have a credit card. What you want to do is protect that data as early as possible, as soon as it is created, because the longer you wait with protecting the data, the higher the chance that the data is in the clear when somebody gets access to it.
And of course, the later you protect the data, the higher the chance that the data gets copied before you are even able to protect it, and then you lose control over the clear data. So what you want to do is send the data to a protection mechanism, which then changes the data, for example with tokenization. What the protection mechanism is doing is exchanging the clear-text data with a token. Looking at the identity here, for example, the credit card is now a token, and it's worthless for attackers: it's not the real credit card number, but it looks similar. We call that format-preserving. It means that any system that uses the data from now on can use the token instead of the clear-text data without any problem.
What you can also do with tokenization is protect just the sensitive parts of the data. For example, here we see that the last four digits are still in the clear, and that's a great thing, because then you can use the token instead of the clear-text data to perform basic analytics, for example, or customer service can work with the token to identify a person by checking the last four digits of the credit card number. And as soon as you've protected the data, as soon as you've exchanged the clear-text data with the token, you can send it anywhere; you only send around the protected data. It can go to a database or a file, you can process it, and you don't really have to care, because the tokenization, the security, travels with that data. And the beauty of that solution is pretty clear: in all of your network, you only have secured, protected data.
But looking at that approach, a few questions come up. One question is: how do you integrate that? It's really important to have it close to your business applications. Another question people are asking is: how do you get back to the clear-text data? Because at some point you have to have the clear-text data available. Basically, Mercury was trying to find a solution here; they came to us and we helped them. One requirement they had when integrating the solution was that they didn't have to change their legacy applications, because changing legacy infrastructure usually creates a lot of work: you have to change the code, you have to run a big project, making sure that everything works as it was working before. So one thing we offered them, and one thing we offer with our solution, is transparent integration.
What happens here is that you don't have to change your application. A transparency layer is added, which takes care of the protection mechanism: it protects the data, changes it to tokens, and then the tokens are processed by the legacy application. Mercury was really happy about that, because they didn't have to change their legacy infrastructure. There's also another approach that we offer with our solution: an API-based approach. That means you can implement the tokenization via different APIs, C++, REST, .NET, just to name a few, and really deeply integrate it into your existing applications or any future applications. And of course, we also offer platform integration, which means you can integrate our solution with your CRM and ERP systems, making sure that users only get access to data if they are allowed to.
So based on user rights, you can decide whether a user sees the token or the clear-text data. Mercury was really happy with that, because they are now able to not only implement the solution in their existing environment, but also to implement any kind of new products or applications, making sure that they too only use tokenized data wherever needed. But integration into your applications isn't the only thing you want. You also have existing IT security infrastructure, and you have to make sure that a data-centric security solution integrates well with that infrastructure too. One thing that comes to mind pretty quickly is IAM integration, because you have an existing way to manage user rights and an existing way to take care of your users, what they're allowed to do and what they're allowed to see.
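The user-rights decision just described, token versus clear text, boils down to a policy check in front of de-tokenization. A minimal sketch, where the role name and the vault mapping are made up purely for illustration:

```python
# Toy token vault: token -> original value. In a real deployment this
# lookup lives inside the protection cluster, not in the application.
VAULT = {"7319440288551234": "4111111111111234"}

def reveal(token, user_roles):
    """Return the clear-text value only for users holding a privileged
    role (here the made-up role 'pci-detokenize'); everyone else keeps
    working with the token itself."""
    if "pci-detokenize" in user_roles:
        return VAULT.get(token, token)
    return token

# A customer-service agent without the role only ever sees the token:
assert reveal("7319440288551234", {"support"}) == "7319440288551234"
```

The point of deep IAM integration is that `user_roles` comes from the existing identity system rather than from yet another rights silo maintained separately for the security solution.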
If you now think about changing the data in a way that nobody can use it, that nobody can get access to the clear-text data unless they are allowed to, deep IAM integration is something you really want. We support that too, and Mercury is really happy that they're able to check whether a user has the right role, the right permission, to access clear-text data, and that they don't create another silo by having to manage all those rights for the data-centric security solution separately. The same goes for SIEM and monitoring integration. Large enterprises don't want another silo for monitoring, just another dashboard; it's really important that your data-centric security integrates into these systems as well. So basically, we come to the point where we ask ourselves: okay, we have the protection mechanism.
What does it look like? Is it a hardware thing, or what do I have to do to set up such a protection mechanism? What we offer here is a virtualized solution, a virtualized appliance, which enables you to set it up in any kind of environment. If you have on-premises systems and you want to make sure that you can protect the data on premises, you can set up the system on premises. But a lot of companies, especially when it comes to big data analytics, also want to use the cloud, or they want to use the protection mechanism at other locations, and with the virtualized appliance you can do the tokenization, the protection and de-protection, in the cloud as well. If you set up a similar appliance with the same configuration, you can send around your data, you can send it to the cloud, and you can de-tokenize there if you want.
So you don't have to send it back to the on-premises systems for any kind of protection or de-protection operation. The basic question now is: what does this look like, how is it set up, and why was that important for Mercury? Mercury runs its operations on premises and did not want to do the protection in the cloud, for example. So it was easy for them to set up the protection mechanism on premises, while staying flexible enough to use their data in the cloud. For example, if they start a big data analytics initiative, they can simply send the tokens instead of the cleartext data to the cloud and work with the tokens there. And that really is something beautiful, because you don't have to care about cloud security that much. If you send cleartext data to the cloud, the cloud is just another data center somewhere else.
And then you really have to make sure that nobody accesses those cleartext data sets, and as we see with a lot of recent breaches, configuring Amazon Web Services correctly is a real challenge. So a lot of companies are really happy when they can send tokenized data and use it tokenized, and if they need to de-protect it, they can set up a protection or de-protection device in the cloud. So what does it look like? Basically, instead of having a real server or a hardware solution that takes over the protection, we create a cluster of minimalistic protection nodes, and these protection nodes are hardened. If something goes wrong, the cluster makes sure that another protection node takes over. So this is a really reliable solution, and you don't have to care about the load balancing or the automatic failover, because the enterprise applications simply access the services via our integration.
Our integration then makes sure that the load balancing and the failover happen. And the beauty of this solution, having very hardened, small virtualized systems take over the protection, is that you can scale pretty easily: you just have to set up an additional protection node and the performance increases. It comes with another benefit, too, because if you want to set up a protection mechanism at another location, for example in the cloud, you just set up new protection nodes with the same configuration somewhere else, and it simply works as it does with the on-premises installation. So you really don't have to care about that. And if you take all that and summarize it, what does it mean in terms of reliability and scalability? For Mercury it was really important to have high performance, as we just discussed.
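The failover behavior described above, where the integration layer hides node outages from the application, can be sketched like this. The node names, the `call_node` stub, and the simulated availability set are all invented for illustration:

```python
# Toy sketch of client-side failover across a cluster of protection nodes:
# the integration layer tries each node in turn, so callers never deal with
# node outages themselves.

class NodeDown(Exception):
    pass

AVAILABLE = {"node-b", "node-c"}  # node-a is "down" in this simulation

def call_node(node: str, payload: str) -> str:
    # Stand-in for a network call to one protection node.
    if node in AVAILABLE:
        return f"tokenized({payload})@{node}"
    raise NodeDown(node)

def protect(payload: str, nodes: list) -> str:
    """First healthy node wins; failover is transparent to the caller."""
    for node in nodes:
        try:
            return call_node(node, payload)
        except NodeDown:
            continue  # try the next node in the cluster
    raise RuntimeError("all protection nodes unavailable")

print(protect("4111-1111-1111-1111", ["node-a", "node-b", "node-c"]))
```

Scaling then amounts to adding another entry to the node list, which matches the "just add a protection node" point above.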
They have about 80 transactions a second, so they have to make sure that any security solution they plug in doesn't add a lot of latency. With this solution you have high performance and minimal overhead; all your data is protected and de-protected very fast. You can also put it close to the enterprise application that needs to be protected, and with that you reduce the latency a lot. It's linearly scalable: just add a new protection node and you'll be fine. And of course, with the intelligent streaming and load distribution, you don't have to worry about all that; it just protects and de-protects as needed, and you're fine. Looking at the requirement of no impact on their customers, that really summarizes everything we just discussed: if it integrates easily, you don't have to worry about any kind of shutdown of your solution, so the customers don't even notice. And if you make sure it's scalable and reliable, you don't have to worry about customers noticing that something is wrong. So there's really no impact on your customers, and no impact on your existing infrastructure.
So coming back to the infrastructure of Mercury: they still have a point where sensitive data is created, for example a POS device. What we add to the infrastructure now is the tokenization system, and we protect the data at the earliest possible point. What does that mean? In the end, they are able to use the protected data throughout their whole network. Their internal services and processes work only with tokens instead of the plaintext data; their logs, analytics, reports, and backups only use the tokens. Of course, at some point a customer service agent or user has to access that data and work with it. And the good thing is, as we discussed, for a lot of use cases you can use the token instead of the cleartext data and work with the token, but if you have to access the cleartext data, you de-protect it.
Then the customer or the customer service agent is able to use the cleartext data. The good thing here is that you can do that based on user rights: you can decide on a case-by-case basis whether customer service sees the token or is allowed to see the cleartext data. So you de-protect only when absolutely necessary. And of course you still have outgoing communication; you can de-protect the data and send it to a third party without any problem. Looking at all that, what did Mercury achieve here? They had their requirements in terms of sustainable compliance, and the good thing is: if you only use tokens instead of the cleartext data and you plug in a new application, you don't have to worry about the compliance of this application, because it's also using the tokens and is therefore secure. And as tokenization is an approach that is accepted by PCI DSS, by GDPR, by all the regulations, you don't have to worry about other solutions, and you don't have to make sure the auditor understands the tokenization, because it's an approach that is well known.
Of course, they were looking for ease of integration with the legacy systems and with all the applications they have in place, and with all the different options to do that, whether it's transparent, via API, or by implementing our platform, you really don't have to worry about the integration of your applications.
When it comes to reliability and scalability, the way it is set up, virtual and as a cluster, is something Mercury was really happy with, because they can easily scale up if they need more. If they enter another market, for example, they just have to set up a new protection node and change the configuration a little, and then they are capable of doing the protection and de-protection as needed. And of course this all ends up in no impact on customers. They were really happy with that: they did the project, no customer even noticed that something had changed, and they implemented the security and plugged it right into their existing infrastructure. So thank you for your attention. There are a lot of other use cases; we have a lot of success stories available on our website. And the best thing you can do is set up a short conversation with us about your specific use case and discuss with us what you need. A 15-minute discussion really is a good thing; it saves you a lot of time, and we can work on a solution with you. So thank you, and let's go to the Q&A session.
Well, thanks, Felix, that was a pretty comprehensive, and I would even say quite impressive, presentation of how you can actually implement data-centric security in real life without producing any additional problems. As you mentioned, customers just never see that anything has changed at all. So we are now at the Q&A stage. Please submit your questions using the questions tool in the control panel. And the first one we have is: can you support multi-cloud scenarios?
Yeah, that's a good question, actually. So basically, as I said, of course you can support multi-cloud environments. The thing is, most companies don't want to risk vendor lock-in with the cloud solutions, and especially if you handle customer data, some of them don't want Amazon to have all their data. So they are looking at setting up their solutions in, or sending their data to, various different cloud environments. And as it's a virtual solution, you can set up the protection cluster or the protection mechanism in any kind of cloud you want, and you can work with the same configuration on various different clouds. Therefore you can send the data around from one cloud to another, and it still allows you to work with the data in various different environments. And the beauty of the solution, essentially, is that you don't need to de-protect the data before you send it to the cloud to make it work there. You just send the protected data, you just send the tokens, and therefore you just expand the protected area, if you will. So yeah, multi-cloud is absolutely possible.
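The multi-cloud point, that identical configuration at two sites makes tokens consistent across them, can be illustrated with a toy sketch. The `ProtectionNode` class and the keyed-hash scheme are assumptions for the example, not the product's actual mechanism:

```python
# Sketch: protection is "algorithm + configuration", so two sites configured
# identically (say, on-prem and a cloud region) produce the same token for the
# same value. The cloud can then join, filter, and analyze on tokens without
# ever receiving cleartext.

import hashlib
import hmac

class ProtectionNode:
    def __init__(self, secret: bytes):
        self.secret = secret  # identical config => interoperable tokens

    def protect(self, value: str) -> str:
        return hmac.new(self.secret, value.encode(), hashlib.sha256).hexdigest()

on_prem = ProtectionNode(b"shared-config")
cloud   = ProtectionNode(b"shared-config")

# A token minted on-prem matches what the cloud node computes independently:
assert on_prem.protect("alice@example.com") == cloud.protect("alice@example.com")
print("tokens consistent across sites")
```

De-protection in the cloud would additionally require a reversible algorithm with the same configuration, which this keyed-hash toy deliberately does not model.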
Right. I guess, in a way, you can compare the data-centric approach, or at least your implementation of it, to containers: what you do to data is the same as what containers did to applications, right? You have this additional layer of abstraction, which makes the whole solution not just cloud-neutral but environment-neutral. It completely doesn't matter whether it's on-prem or any virtualized or cloud environment; it works the same. Consistency is key here, and that's the beauty of it. Okay, next question: what about other types of sensitive information beyond just payment data?
Absolutely. That's a good one too.
I mean, you can do it, I assume, but what would be the benefits?
Yeah, so basically tokenization is something that really works with any type of data. Whether it's an email address, a social security number, a first name, or a last name, tokenization is really flexible. Our solution is able to do different things with the data. You can decide, for example, which parts to protect: you don't have to protect the whole thing, you can protect just the sensitive parts of it. You can also decide how to change the data. It doesn't really matter; as long as it is some kind of string, you can do whatever you want. Some customers have asked, for example, to keep the format of the data: keeping a number format for an account number, or just text for names, if you will.
But what you also can do is make tokens look like tokens. You can change the data in a way that lets you identify a token by the way it looks, for example. And what we also allow is to salt the data, which means we can identify different types of tokens. For some customers that's really important: they can see, okay, this token is coming from this system, that token is coming from another system, and then they can map the tokens to the different systems. And essentially, with that kind of flexibility, you're able to protect any kind of data set that is flying around in your network.
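A toy illustration of the format-shaped, system-tagged tokens described here: digits stay digits, separators keep their positions, and a prefix marks which system minted the token. The hash-derived mapping is purely illustrative (and not reversible), not the vendor's actual algorithm:

```python
# Hypothetical sketch of format-shaped tokens. The system_tag both salts the
# mapping and makes the token recognizable as coming from a given system.

import hashlib

def tokenize(value: str, system_tag: str = "") -> str:
    digest = hashlib.sha256((system_tag + value).encode()).hexdigest()
    out, i = [], 0
    for ch in value:
        if ch.isdigit():
            out.append(str(int(digest[i], 16) % 10))  # a digit stays a digit
            i += 1
        else:
            out.append(ch)  # dashes, spaces, letters keep their positions
    return (system_tag + "-" if system_tag else "") + "".join(out)

t = tokenize("4111-1111-1111-1111", system_tag="POS")
print(t)  # same dash layout as the input, prefixed with the source system
```

Because the layout survives, downstream systems that validate or display the field keep working, while the prefix tells operators which system a token came from.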
If I might follow this up with my own question on top of this: I've heard many times people talking about dealing with less structured, more free-text information, like names and addresses. As soon as you start encrypting those in the traditional way, you lose many application capabilities, like sorting, searching, filtering, and so on. Can your solution solve this problem as well?
So, yeah, basically a token is always a substitute for the real cleartext information. And you also have referential integrity, which means, if you set it up the way we do, one name always maps back to one single token. That's great, because then you can run analytics on that kind of data, and you also have the possibility to do basic processing with the data. So yes, that works. Does that answer your question?
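The referential integrity mentioned here, the same input always yielding the same token, is what keeps analytics working on protected data. A minimal sketch, using a keyed hash as a stand-in for the product's deterministic tokenization:

```python
# Sketch: deterministic tokens preserve referential integrity, so grouping and
# counting on tokens gives the same distribution as on cleartext. HMAC and the
# demo key are illustrative assumptions, not the actual mechanism.

import hashlib
import hmac
from collections import Counter

KEY = b"demo-key"  # illustrative secret

def token(value: str) -> str:
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

orders = ["Alice", "Bob", "Alice", "Alice"]      # customer per order
protected = [token(name) for name in orders]     # what the analytics side sees

# Group-by on tokens matches group-by on names: the top customer has 3 orders.
top_count = Counter(protected).most_common(1)[0][1]
print(top_count)  # 3
```

Sorting and prefix filtering need more than determinism (the token format has to be designed for it, as discussed below), but joins and aggregations come for free.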
Well, if, for example, you have an application which lists your customers, and you want to make sure a certain person only sees his own customers, or only the customers from, say, Germany, or only people whose names start with A, can you do that?
Yeah, you can do that.
Okay. So it's all about the flexibility of defining the token format. Right?
Exactly. You just have to define it in the right way to make sure it still lets you separate the results the way you want.
Okay. And following up on that, the next question: can you explain in detail the difference between format-preserving encryption and tokenization?
So, yeah, basically, if we take the term tokenization: tokenization means that you exchange a cleartext data set for a token, which is just a substitute for the cleartext data. And there are several ways to do that. One thing you really want to do is keep the format of the data set, because with classic encryption you can also take the data and change it, but then you end up with a long encrypted string, if you will. So tokenization exchanges the data set for a token, and there are several ways to do that. Table-based tokenization does it with lookup tables; what we offer, which we call classic tokenization, does it with a very specific algorithm, changing the data into a format where it's useless for an attacker. Format-preserving encryption does a similar thing.
It also protects the data in a format-preserving way, but uses encryption as the underlying technology. Essentially, we offer both with our protection mechanism: you can decide, based on your settings, whether you want to use table-based tokenization or format-preserving encryption. The drawback of format-preserving encryption compared to tokenization is that you still have keys, and a lot of regulations require you to do key rotation; with tokenization you don't have that problem. Basically, both solve the same problem with a similar result, but use a different approach or mechanism to do it.
And if I may elaborate on that a little from a different point of view: basically FPE, format-preserving encryption, is a method of protecting data in place, right? You just keep the data in the table, in the data source, whatever it was, only encrypted in a format-preserving way. With tokenization, you completely remove the sensitive data, put it elsewhere, and in its place you put a token, which in itself has absolutely no sensitive value at all. With encryption you have the benefit of keeping all the data together, so it's easier in theory, but with tokenization you avoid those compliance requirements which would actually make your life more complicated: key rotation and so on.
Exactly. And there's also a difference between vaultless tokenization and tokenization using a vault. With vaulted tokenization, you have a lookup table where the token maps back to the original data set; you really have to protect that table, and you have to send it to any other place that needs to de-protect, going back from the token to the original data. With vaultless tokenization, which is what we offer, it's essentially an algorithm: you don't have that tokenization vault doing the mapping between the token and the cleartext data, but you have an algorithm that is able to go back from the token to the cleartext. With that, you just have to use the algorithm somewhere else to de-protect, for example. So it's a better approach, if you will.
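The vaulted versus vaultless distinction can be contrasted in a few lines. The vault is a lookup table you must protect and replicate; the vaultless variant derives the token and its reversal from an algorithm plus a secret, shown here with a trivially reversible XOR purely for illustration, not as a real or secure algorithm:

```python
# Toy contrast of vaulted vs vaultless tokenization.

import secrets

# --- Vaulted: the token <-> cleartext mapping lives in a table ---
vault = {}

def vault_tokenize(value: str) -> str:
    tok = secrets.token_hex(8)
    vault[tok] = value  # the vault must travel wherever you de-protect
    return tok

def vault_detokenize(tok: str) -> str:
    return vault[tok]

# --- Vaultless: no table, just the algorithm + key at every site ---
KEY = 0x5A  # illustrative one-byte key; a real algorithm is far stronger

def vaultless_tokenize(value: str) -> str:
    return bytes(b ^ KEY for b in value.encode()).hex()

def vaultless_detokenize(tok: str) -> str:
    return bytes(b ^ KEY for b in bytes.fromhex(tok)).decode()

t = vaultless_tokenize("4111111111111111")
print(vaultless_detokenize(t))  # round-trips with no lookup table at all
```

The operational difference is exactly what the speaker describes: the vault dict is state that must be secured and synchronized, while the vaultless pair of functions only needs the shared configuration.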
Okay. And I think we have time left for one last question, and it's a really interesting one, a philosophical one, if you will: what's the importance of providing data protection across the entire infrastructure? Why not just protect at the database level?
Yeah, that's a good one. Basically, of course you can do tokenization just at the database level, and a lot of our customers take a phased approach: they start with the database, or they start with the application, and then do it phase by phase across their whole network. But the benefit of data-centric security really is that you can send the token to all places in your network and you don't have to worry about security, because the security travels with the data. So if you just do it at one single place, you really lose the benefit of not having to care about your data. If you protect it just in the database and de-protect it whenever it leaves the database or goes somewhere else, you really lose the benefit of data-centric security, if you will.
Yeah, I can see why. Why spend all the effort if you are only using it in one place instead of everywhere, which is the most important benefit of the whole approach? On the other hand, it all depends on how you define a database. As I mentioned, nowadays the very notion of what a database or data source is, is very fuzzy. Is SharePoint a database? Is S3 on Amazon a database? If you start counting, you won't have enough fingers on both hands to count how many different database types you're using within your business. So it all depends on how you understand the database level. Essentially, if you start thinking about your data as equally important regardless of where it's stored, you end up with data-centric protection, right? You cannot do it any other way. And I guess with that we have just reached the top of the hour. I would like to thank Felix for his participation in today's webinar, and of course thank all of our attendees for being with us, and those who will be watching the recording of the webinar later. Thank you again, hope to see you soon at one of our next events, and have a nice day.
Thanks, Alexei. Goodbye.
