Good morning, good evening, or good afternoon, depending upon where you are, and welcome to this webinar, Bringing Data Back Under Control. My name's Mike Small, and I'm a Senior Analyst with KuppingerCole. And I'm joined in this webinar today by my colleagues from ShardSecure: Pascal, who is Head of EMEA, and Julian Weinberger, who's the Field Chief Technical Officer. So as regards this webinar, you are muted centrally and we are controlling this; you don't need to mute and unmute yourself. We will be running some polls during the webinar, and we can discuss the results during the Q&A. The Q&A will be left until the end, but there is a question panel in the GoToWebinar panel, and you can ask questions at any time using that. And the webinar is being recorded, and both the recording and the presentation slides will be made available for you to download in the next few days. So we're going to start off with a poll question: how would you best describe where your organization's data is held and processed? Is it mostly in your organization's own systems and data center? Is it evenly split between your own data centers and public cloud services? Is it mostly in public cloud? So we should now be running the poll.
Okay, so the poll has now completed, and we can discuss the answers as we come to the end of the webinar. So in this webinar, basically we're going to start off with me talking about why you need to bring your data back under control. Then my colleagues from ShardSecure will describe their cost-effective approach to data security. And then finally, we'll have some questions and answers. So what's behind this problem? Well, the problem really comes from digital transformation: as we have got more and more powerful computers and greater and greater connectivity, it's become easier and easier for organizations to collect and exploit data. And to do that, in fact, we needed some new approaches to IT. The traditional IT system was very slow and difficult to change, whereas cloud services were agile and allowed you to quickly implement business-led change; the DevOps approach that has typically been involved in cloud is very flexible and easily adapts to business needs. And it's very responsive: if your idea, your application, your new business model takes off, then because of the just-in-time nature of the cloud, everything can be expanded quickly. And this compares with what it was like with on-premises systems, where you would have lead times and massive capital expenditure to deal with.
So if we look at this, this is great, but in fact it has led to three major concerns that organizations have, and these must be managed. Top of mind for many organizations is failure to comply, and we will talk about an example of how that has happened, but I think all of you have come across examples. The second thing is that compliance failure often follows a data breach. And the more you have put your business-critical data in your cloud and hybrid systems, the more damage it does if you lose it, and the more likely it is to be data that attackers desire, like intellectual property or personal data or some kind of regulated data. And finally, you're more exposed through this, since digital transformation tends to have done away with many of the basic manual processes and paper systems that you had, so that if you can't get at your IT systems, for example through ransomware, then you find that your business has closed down, which is not good.
So a lot of people believe that if they encrypt their data, then they're done; that encrypting your data is in fact going to protect you. And I think the Capital One data breach, which many of you may be aware of, illustrated how this just isn't true: encrypting data protects against certain risks, but it does not protect your data completely. For example, in the Capital One data breach, which involved a workload running in one of the major hyperscale cloud providers, a misconfigured web application firewall allowed the wrong kind of traffic through from the internet to the backend, where it should not have been able to get, which allowed command and control. And command and control was then able to exploit the excessive privileges that were assigned to a VM, which allowed that VM to access an encrypted S3 bucket containing the data. And the result of that was compliance failure and a large fine. So that was encrypted data. And there are many different scenarios that allow you to get at encrypted data, and typically that is why many of the attack vectors now involve theft of credentials, because once you get credentials, that tends to allow you to simply overcome encryption.
So we've all now got data protection paranoia, because data, and protecting that data, is actually fundamental to securing your enterprise and complying with rules and regulations. Now it's interesting that one of the other aspects of cloud is that compliance, or rather failure to comply, can deny you access to the data that you hold. And this has all been crystallized through the Schrems II judgment, which came from this gentleman, Maximilian Schrems, who was an Austrian legal student who didn't like his data being shared in the US by Facebook. Now basically what this elucidated is that, in order to comply, it is not sufficient to simply say you have a strong contract with the provider, because local laws can override contracts, and you need to take technical measures to deal with it. And as an example of the draconian nature of this, a European census agency, the Portuguese one, was given 12 hours' notice by the local data protection authority that it had to desist from using a cloud service which was based in the US, which is not good for your business.
Now if you look at this, why should you just be concerned about personal data? Because if you think about it, your business depends upon data: is protecting the personal data sufficient, or do you also need to protect your business data? Now there are not many examples that you can produce specific references to, but here is a court indictment in the US alleging that the Chinese People's Liberation Army were in fact trying to hack business data from the US. Another concrete example of that was some years ago, when RSA, the security company, were hacked, and the result was that the secrets for their secure token system were stolen. And it is reckoned that this cost them 70 million to rectify. So business data is also critical.
So what is needed is zero trust data protection. Now, zero trust has become one of these buzzwords, but it's a good word, because it basically means that you really have to make sure that you are taking care of your data everywhere: whether it's in transit, whether it's at rest, or whether it's being processed. All of those areas need to be protected, and it's up to you to ensure it; you can't simply trust that your cloud provider or your data processor is looking after it. So as soon as you hand your data over to a third party, it's potentially at risk. So what are the solutions to that? Well, you can actually look at the recommendations from the European Data Protection Board in response to the Schrems II judgment. And these apply not only to personal data; they're good advice for everything. So if you are going to process your data out of your control, or hold it out of your direct control, you can use encryption, but encryption is only good providing you look after the keys: you hold them, and you keep them secret from the system to which you export the data.
Pseudonymization is another way of dealing with this; that is another form of encryption which potentially allows processing once encrypted. And finally there is this other thing called split processing, which we're going to talk about in more depth in a moment. So let's just look at encryption. Encryption is really good providing you manage the keys. And the first thing that will happen is that your cloud service provider will say, hey, we look after your data because we automatically encrypt it. But that fails the test, because you don't have control of those keys, and either somebody could steal them from the service provider, or the service provider could be forced by local laws to hand over the data by giving away the keys. So what you need to do is to make sure that you still control the keys, and if you are going to do this efficiently, it means you're going to store your keys in a hardware security module, which gives you tamper-proof security. But managing those keys remains a risk, as does processing: unless you process the data in a trusted execution environment, there is a small chance that it could be stolen, because there are RAM snapshot tools that will be able to seize data while it is being processed.
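The principle of customer-controlled encryption described above can be sketched in a few lines. This is a minimal, illustrative sketch only: it uses a toy HMAC-based keystream instead of a real cipher, and all names are hypothetical. In practice you would use a vetted authenticated cipher such as AES-GCM, with the key generated and held in your own HSM, never shared with the provider.

```python
import hashlib
import hmac
import itertools
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream with HMAC-SHA256 in counter mode.
    Toy construction for illustration only; use a vetted AEAD in production."""
    out = b""
    for counter in itertools.count():
        if len(out) >= length:
            break
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce per message, prepended to the ciphertext.
    nonce = os.urandom(16)
    return nonce + bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

# The key never leaves your side; only ciphertext ever goes to the provider.
customer_key = os.urandom(32)  # in reality: generated and kept in your own HSM
blob = encrypt(customer_key, b"quarterly figures")
assert decrypt(customer_key, blob) == b"quarterly figures"
```

The point of the sketch is the trust boundary, not the cipher: because `customer_key` is generated locally, neither theft from the provider nor a legal demand to the provider can expose the plaintext.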
The second approach is pseudonymization. Now pseudonymization is another form of encryption where the data is transformed into a form that looks processable but supposedly cannot be reconstituted. And that's good, because if you do it correctly, you can actually do it in a way that can still be processed. But on the other hand, there are lots of weak ways of pseudonymizing data, and lots of tools out there that claim to do it but can easily be thwarted by someone having some kind of extra information. So pseudonymization is another approach. And the final approach is fragmentation and dispersal. This is an interesting approach, and I'm going to ask you to look carefully at the graphic. So the first thing that happens is we have a little piece of data, and what we're going to do is divide that data up into lots and lots of fragments.
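Before we get to the graphic, the pseudonymization idea just described can be sketched as keyed tokenization. This is an illustrative sketch under stated assumptions, not any specific product's method: identifiers are replaced with deterministic HMAC tokens so that joins and aggregation still work, and the weakness mentioned above applies, since anyone who obtains the secret, or who can enumerate a small identifier space, can link tokens back.

```python
import hashlib
import hmac

# Hypothetical secret; it must stay with the data exporter. Without it,
# tokens cannot be linked back to the original identifiers.
SECRET = b"held-by-the-data-exporter"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a deterministic keyed token (HMAC-SHA256).
    Determinism is what keeps the data processable after transformation."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

records = [{"name": "Alice", "spend": 120}, {"name": "Alice", "spend": 80}]
tokenized = [{"name": pseudonymize(r["name"]), "spend": r["spend"]} for r in records]

# Same person yields the same token, so per-person totals can still be
# computed on the processing side without ever seeing the real name.
assert tokenized[0]["name"] == tokenized[1]["name"]
assert tokenized[0]["name"] != "Alice"
```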
Now I've put colors in to show the fragments. Clearly, in the real world we wouldn't divide them up into words; we could even divide them up into sub-byte elements, but this is to illustrate the idea. So now we've got those fragments, and what we can do with them is send different groups of fragments to different places, so that in any one storage system you don't have the complete set of data. And you can do this in various clever ways which mean that you can recover the data providing you have enough of the different copies, so it gives an added element of resilience. It also has another remarkable side effect, which is that it is incredibly difficult to crack. Whilst there are various algorithms that have been known to attack encryption algorithms, actually being able to unfragment data is very, very difficult. So when you look at what this means, you can see that cloud service encryption is never sufficient on its own for your data in the cloud.
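The fragmentation-and-dispersal idea in the graphic can be sketched as byte striping across several stores. This is a deliberately simple illustration, not the real algorithm used by any product; production systems combine fragmentation with encryption and redundant copies, as discussed above.

```python
def fragment(data: bytes, n_locations: int) -> list[bytes]:
    """Stripe the data byte-by-byte across n stores, so no single store
    holds enough contiguous content to reconstruct the original."""
    return [data[i::n_locations] for i in range(n_locations)]

def reassemble(shards: list[bytes]) -> bytes:
    """Interleave the shards back into the original byte sequence."""
    n = len(shards)
    out = bytearray(sum(len(s) for s in shards))
    for i, shard in enumerate(shards):
        out[i::n] = shard
    return bytes(out)

secret = b"the new product launches in May"
shards = fragment(secret, 3)  # one shard per storage provider

# No single storage location contains the original data...
assert all(secret not in s for s in shards)
# ...but with access to all locations, the data is fully recoverable.
assert reassemble(shards) == secret
```

In a real deployment each shard would additionally be encrypted before dispersal, and redundancy (for example, erasure coding) would let you recover the data from only a subset of the locations.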
Customer-controlled encryption is basically okay for cloud storage, and it only works with software as a service providing you are able to bring the data back, under your own control, in order to process it. Pseudonymization supposedly deals with everything, but in fact there are few strong pseudonymization processes, and they tend to be expensive. But encryption plus data dispersal gives you the best of all worlds for all of the use cases except software as a service, where it may or may not be useful. So encryption plus data dispersal is a technique that you really need to think about.
Another consideration is what's known as perfect forward secrecy. This is the idea that if somebody steals a copy of your data today, would they be able to decrypt it in the future using a much more powerful computer, or if they could get hold of some more information? And with the advent of quantum computing, this is becoming a thing that everyone needs to consider, because quantum computing has been shown to be able to break certain kinds of algorithms, such as the RSA algorithm, which is based on the difficulty of factorizing large numbers into their prime factors, and to do that in very short periods of time, where with conventional computing it would have taken millions of years. So everyone in this potentially post-quantum world needs to think about: what data do you have, and how long do you need to retain it? How are you going to be able to meet your regulatory requirements in this post-quantum world? And start to build yourself a plan.
NIST has a post-quantum cryptography project, but it's interesting to note that dispersed and fragmented data is not considered to be at risk from quantum computing, because it is not fragmented according to an algorithm that is easily broken using a quantum computer. So with that, the summary of the challenge that we are meeting here is that organizations have gone through a process of digitalization which is increasing cyber risk: business continuity problems, data breaches and compliance failures. Businesses depend upon this data in order to function, and the consequences of data breaches or other forms of attack are severe. You need to respond to this by taking a zero trust approach to data protection: never trust, and always ensure that your data is protected everywhere. And the new game in town that we've been talking about here is data fragmentation and dispersal, which provides a uniquely flexible solution to the problems that I've set out. Now we've got a second poll here, and perhaps Oscar can start the poll off. This poll is: what tools and techniques does your organization use to protect its business-critical data? And you can select multiple answers. So do you use encryption where your organization holds and manages the encryption keys, or do you rely on your service provider's encryption? Are you using pseudonymization or anonymization, or are you using some other form of protection?
So thank you everyone for voting; the more of you that vote, the more we will be able to tell you at the end. So thank you very much for that. I'm now going to hand over to my colleagues from ShardSecure. So thank you.
My name is Julian Weinberger, I'm Field CTO, and I'm in the field every day, and we see the challenges Mike has mentioned around data. We've seen that data is really the center of attention nowadays, and therefore you have different stakeholders in the data, and they have different pain points, right? Like Mike said, a lot of the data pain points are actually around security: how do I make sure that my data is protected, no matter where it resides or is computed or where I really deal with it? But there's also a decent number of stakeholders, for example in the infrastructure team, right? They wanna make sure that the data is accessible, the data is there when it's needed, the data's backed up; you know, there's ransomware coming along and all of those issues. But then you also have other stakeholders who see data as a benefit, like machine learning professionals, right?
Business intelligence: the more data you have, the better. So they gather a lot of data and derive information from it, which creates another issue one way or another. All of that data in the end also needs to be stored somewhere. We are seeing data retention policies go up, right? You need more data, we're processing more data. So we've seen that a lot of those different things kind of get out of control, and people are trying to get a hold of it. And obviously, if you throw compliance and regulation frameworks into it, everything becomes even more complex in that specific scenario. I would love to get into what ShardSecure does to really address those challenges pointed out by Mike, and how we can help you to actually do that. So let me click ahead. I wanna dig down on what ShardSecure has to really protect the data and how we can help.
So ShardSecure has a very easy way of implementing data protection. I think a lot of the mechanisms Mike described have been around for a while: fragmentation has been around, pseudonymization has been around, encryption has been around forever, right? But it can be extremely tricky to implement, and sometimes we as an industry implement stuff and it becomes another pain point. And we are really in the business of resolving a pain point, not introducing a new one. So we make sure that the data protection is really easy: it's easy to implement, and it's transparent to anyone who uses data. That's very important, because if your users or your applications don't work, you don't get business out of it; they still need to work. We have an agentless approach. That means, in the past, a lot of data protection was like: you need to install an agent on your end device, and then you all of a sudden end up with the pain point of managing that agent on the end device, and incompatibilities, and all of those things.
So we make it very, very easy to do that with basically an abstraction layer approach; I'll show you in a bit. And that opens up a lot of opportunities to really easily adopt data protection. But really, the main focus is: keep the trust in the data. You as a data custodian, as a data owner, as a company: it's your data, and you should be in control no matter where it lies. That's especially important if we talk about data in, or hosted by, a third party, like it is in the cloud, right? So if you currently have, for example, a file server on-prem, it's fairly easy: it's your file server, it's your admin running it, right? You pay them. All of that is basically under one umbrella, and you can usually have a higher level of trust. If you move this out to a third party, whether it's a cloud provider, a service, whatever it is.
This trust becomes very tricky. And also, as Mike mentioned, there are some compliance regimes which really do not allow that. Schrems II would be one of them, for example, which gives you a lot of headache if you put European data into an American cloud provider, right? That needs to be tackled, and we can fairly easily do that. When we protect the data, we don't see it only as encrypting or pseudonymizing it; we also wanna ensure the very basics of integrity and availability. No matter which security event you've ever gone through, you've heard about confidentiality, integrity, availability, and it should be there, right? Your data should basically protect itself to a certain level. And that's what ShardSecure does. I'll show you in a bit an example of how exactly we do that, so it becomes a little bit easier to understand.
So ShardSecure, for organizations, is basically an abstraction layer which sits in between your applications, your servers, and your endpoints (basically whatever reads and writes data) and wherever your data is stored. A lot of the difficulty with the basics of data security or data protection comes in exactly here: how do I protect the data before it gets stored there, and when it's stored, how do I protect it there? This approach makes it very, very easy to implement, because neither the things on the left side nor the things on the right side need to be aware of it. It's all transparent, right? It looks the same to them; they can all do their regular tasks, but the data is protected. So when data is created or written on the left side, from a server or application or whatever it is, and then stored, we introduce basically data security.
So we fragment the data, as Mike explained; we can actually fragment it into multiple locations, and we protect the data before it even gets there. That means it will always be encrypted before it even gets to the storage location. That's very, very important as soon as we talk about the cloud, right? You wanna make sure that none of your data is exposed in clear text to any third party. I think one of the main benefits of ShardSecure has always been that this is a very easy approach, but it's very tricky to make it performant, and we have basically little to no performance drawback on this. So as data flows through, read and write, we introduce almost no performance drag; none of our production customers actually experience any performance drops, which is great, because you have a security algorithm which introduces the data security you need nowadays, it's easy to implement so no one knows that it's there, and performance-wise it doesn't introduce a hit. So you're pretty much in great shape. What is very important here is that, since we tackle the data before it even gets to storage, that really helps with all those issues you have around cloud adoption and where to put your sensitive data if it's with a third-party provider, like it is with Schrems II and GDPR, as Mike mentioned.
So I mentioned initially that we don't only protect the data from being read; we also introduce integrity and availability checks. Long story short, we sit in between, therefore we know what the original data is, and we also know, after we fragment the data up, where the data is going to be. So basically it means we have complete control and can introduce all these mechanisms to the data for you as a customer, right? So if there's any kind of unhealthy data residing in your storage (that could be, for example, a ransomware attack hit; that could be someone accidentally deleting something, which probably happens the most to all of us; it could be because there's data tampering going on, right?), all of those issues which are related not to the confidentiality of data, but more to: is it really the real data?
Is it what I need? We detect it, and then we self-heal. So what does the self-healing mean? It is basically a mechanism which reassembles the data when you read it. So an application, server or endpoint would read the data and wouldn't even know that you got affected by an attack. Obviously, we are gonna let you know; there are notifications built in that there was an attack and that we recovered from it. But it really gives you that easy way of implementing data security, even from an integrity and availability point of view. Another thing we have seen is that there are outages. As soon as you rely on a third party (well, there have been outages on-prem for a long time too, obviously; we've all been through it), their outages become your problem one way or another, right?
A lot of times it's a shared responsibility model, but it's not a shared accountability model; so if you have an outage, it's still your problem. And we also help with that. We obviously spread the data around; as Mike said, we fragment the data and disperse it, and that really helps in case there is an outage at a certain provider or a certain location, because we have the data spread out and we can actually make sure that we pull data from any of the healthy storage locations, if that's the case. And I wanna point this out again: it's all in real time. Your applications or your servers or your endpoints would not notice anything about it. It's a very straightforward approach.
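The self-healing and outage behavior just described can be sketched as a toy model: fragments are written to several independent stores with a checksum, and a read detects a tampered or missing copy and transparently falls back to a healthy one. This is an illustration of the concept under simplified assumptions (full replication, one checksum per fragment), not ShardSecure's actual mechanism.

```python
import hashlib

class DispersedStore:
    """Toy dispersal with redundancy: every fragment is written to several
    independent stores with a checksum, so reads can detect a corrupted or
    missing copy and fall back to a healthy one without the reader noticing."""

    def __init__(self, n_stores: int):
        self.stores = [dict() for _ in range(n_stores)]

    def write(self, key: str, fragment: bytes) -> None:
        record = (fragment, hashlib.sha256(fragment).hexdigest())
        for store in self.stores:  # replicate to every store
            store[key] = record

    def read(self, key: str) -> bytes:
        for store in self.stores:
            record = store.get(key)
            if record is None:
                continue  # simulates an outage: this store has no copy
            fragment, digest = record
            if hashlib.sha256(fragment).hexdigest() == digest:
                return fragment  # first healthy copy wins
        raise IOError("no healthy copy of fragment " + key)

ds = DispersedStore(3)
ds.write("frag-1", b"customer ledger, part 1")
ds.stores[0]["frag-1"] = (b"RANSOMWARE", "bogus")  # simulate tampering
del ds.stores[1]["frag-1"]                          # simulate an outage
assert ds.read("frag-1") == b"customer ledger, part 1"  # healed from store 3
```

A production system would use erasure coding rather than full replication, and would raise the built-in notifications mentioned above when a tampered copy is detected.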
If we look at how this really works and how it's integrated, it's fairly easy to understand. At the top here, you can see what I would call data consumers or data creators. So you have processes that read and write data, virtual machines that read and write data, applications too, microservices and containers, or Lambda functions; depending on where you are in the process of digitalization, you're pretty familiar with one or the other. But they all use different storage services, and that's kind of where the tricky part comes in: a virtual machine uses a different storage service than, for example, a Lambda function usually would. So we at ShardSecure make sure we provide these interfaces to those different data consumers and then introduce the actual data security. One very positive side effect of that is that we can actually take the data and then store it in the most cost-effective way.
If you've ever moved to the cloud with a lift and shift, you know that it's not as cheap as they promised initially, and one of the reasons is that you usually don't adopt the native services of the cloud, which would get you a cheaper way. So this is one of the things where we can also help you, with a more cost-effective approach to your data and the data storage. So as I mentioned initially, we at ShardSecure have looked at data and the problems it creates, and we really try to cater to each of the stakeholders of data to resolve those problems. So feel free to reach out if there are any questions. One thing we've actually noticed: a lot of times, responsibility or concern around data was up to the data custodian, basically your company, to resolve. But then we noticed that more and more this responsibility is passed on to whoever develops the applications, right?
So we see a lot of SaaS services pop up, which are applications, and that's a very tricky way to do it. If you are a SaaS provider, or you're developing applications for customers, and your customers have data concerns, right, you are now responsible for addressing those concerns. And there are a lot of market drivers in there, a lot of them actually mentioned by Mike too: how do I actually provide things like customer-managed keys, or encryption at rest, or tenant separation, right? If you have it in a SaaS, that's very important. And then there's the constant change of just maintaining that, but also the security landscape: we're seeing more regulations popping up basically every month. I'm residing in the US now, and I think 49 out of 50 states are gonna adopt a new compliance framework.
So you can see that you need to be very, very agile to get this in and basically make sure that whatever you build and whatever you provide to your customers is agile enough to adopt what comes out. Long story short, if you run an application team and you need to introduce data security to your application, it just takes work cycles, right? It's time and money you spend on implementing that. So we've seen that data security is not only a pain point for security professionals, who usually take care of infrastructures, where data resides, and applications, but also for the people who actually create the applications. Very, very important there. So if we look at why it is so complex to put into an application, here's a very simple abstraction of just encrypting a piece of data.
So, as Mike mentioned, there's something called the hardware security module, and key management; you need to implement that. Then every time you write or read data, you need to get a key, encrypt the data and then store the data, or the other way around. So your one step becomes three steps. And then if you implement encryption, you have the problem of key rotation. Key rotation is basically necessary because a key can be broken after a while, so you need to refresh the key. One of the problems with key rotation is that if you refresh the key, you need to get all the data and re-encrypt it, or create something like a key hierarchy. So all of that becomes quite a bit of workload. If you work in an application team, it's not easy to provide data security in your application; it's a lot of processes. And we've seen that we at ShardSecure can actually offload that fairly easily.
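The key hierarchy just mentioned is what makes rotation cheap: a data-encryption key (DEK) protects the bulk data, and a key-encryption key (KEK) wraps the DEK, so rotating the KEK only re-wraps one small key instead of re-encrypting everything. The sketch below uses a toy XOR "wrap" for fixed-length keys purely for illustration; real systems use a proper key-wrapping scheme such as AES-KW, with the KEK in an HSM.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Toy 'wrap' for equal-length keys. Stands in for real key wrapping
    (e.g. AES-KW); illustration only, not secure on its own."""
    return bytes(x ^ y for x, y in zip(a, b))

# One DEK encrypts the bulk data; only the wrapped DEK is stored with it.
dek = os.urandom(32)
kek_v1 = os.urandom(32)
wrapped = xor(dek, kek_v1)

# Key rotation: introduce KEK v2, unwrap with v1, re-wrap with v2.
# The terabytes encrypted under the DEK are never touched.
kek_v2 = os.urandom(32)
wrapped = xor(xor(wrapped, kek_v1), kek_v2)

assert xor(wrapped, kek_v2) == dek  # the new KEK still recovers the same DEK
```

This is the design choice behind envelope encryption: rotation touches one 32-byte value, not the data itself, which is why offloading key management removes most of the laundry list from the application team.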
So we take care of all the data security you need to do: basically the key stuff, how you provide resilience, how you provide data sovereignty even, everything which is data security. And the application team just needs to take care of reading and writing the data. So you see, we go from like a 20-point laundry list to literally just reading and writing data, which is very, very convenient for application teams to really introduce data security to their homegrown applications, or the ones they provide to a customer. At the end of the day, what it really does is get you to a higher level of security in your posture, but it also provides to your customers the functionality they want from an enterprise point of view. Very important here: we can also do the data fragmentation and dispersal of data in there. So all of that is obviously work, but the work goes away from your software engineers and application engineers, and you just offload it to a solution; it's basically drag and drop, and you're ready to go. So it's fairly, fairly easy.
So there are two different things I wanna mention: one of them is, obviously, if you are a data owner and you are concerned, we can definitely introduce this into your infrastructure; and if you have an application team, we can also introduce it into your applications. So let us know if you need any help. I'm gonna introduce you to my colleague Pascal now. Pascal is gonna take us home, and then we're more than happy to answer some questions in the Q&A.
Yeah, hello everyone, I'm Pascal, I'm the Head of EMEA, and I'm coming to the summary, to the end of the call today, and I wanna give you some highlights there. And that is on the next slide; thank you, Julian. So we have many use cases that we support, basically, and that is keeping your data under control, keeping your data safe, and your data sovereignty. Today we heard a lot from Julian about the special use cases of file-level protection, resiliency and cost optimization, but there are many more; if you're interested, you can reach out to us at any time. So I wanna summarize quickly on the data resiliency part: we help you with cloud ransomware, even with mitigating ransomware internally.
We keep your data safe during outages and during deletions, being able to reconstruct the data, whatever happens there. And on cost optimization, as you heard, we can do an easy data migration; we help you with your secure cold storage migration, to go into the cloud with less cost, to be able to use more cost-effective storage systems. And this is how we try to make your life easier and save you money, basically. And that brings me also to the next slide.
So we are in it together. What we are trying to do is to help you with very complex subjects, and we want to support you in that with our team's experience of over a hundred years in data security and data protection. And also, there are so many compliance requirements out there. The CISOs of today, and we know a lot of them, are struggling right now with all those compliance things: I mean, DORA is there, there's ISO 27001, there's GDPR, and there are many, many more, which makes their lives really, really complex. And we help to reduce that complexity by minimizing the compliance scope to their own infrastructure, basically, so they don't have to handle anything outside their organizations. And number three: the cloud makes the data protection subject more difficult, of course, but we have the experience to support you, showing you how European companies adopt cloud providers more easily, faster, and more securely. And then number four: go to the cloud easier and safer with new technology like ShardSecure, and keep your data under control. That is something where we are happy to help you with our experience, as mentioned.
So thank you very much for your time today, and I'm now handing back over for the wrap-up.
We now have the opportunity for some questions. I don't see any questions yet from the audience, so I'm going to ask some myself. One of the interesting things Julian mentioned is that the approach is very efficient. Now, one of the concerns about encryption and pseudonymization is that it is not only compute intensive, it also increases data size, among other things. So can you give us an idea of the relative performance of your system compared with, say, encryption and pseudonymization?
Yeah, I think it's fairly easy to reason about. It all depends, there are a lot of factors in performance, but I always like to compare how fast something is without encryption versus with it. If you look at, say, just encrypting a file, you often see that traditional encryption introduces an overhead of anywhere from 10 to 40 percent, depending on the file and how it's read, and you will always feel a performance drawback. In our setting we have a small performance hit at the write level but a performance increase at the read level, which means it makes up for itself; in production you don't really see a performance hit. But one of the issues with encryption has always been: if I introduce it, how do my databases react? How do my files react? How do my customers or users react? We have neither the performance hit nor the requirement to rework any workflows. That's another problem with pseudonymization, for example: you need to rework some of your workflows, because if data is suddenly encrypted where applications expect to read it in the clear, you need to let the applications know. So there is neither a real performance hit nor a large change to the way data flows, which makes it very easy to implement and run.
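The "with versus without" comparison Julian describes can be measured with simple timing scaffolding. The sketch below is generic and illustrative only: the XOR transform is just a stand-in for a real encryption step (swap in your actual protection layer), and the numbers will vary by system.

```python
import os
import time

def bench(fn, payload: bytes, runs: int = 10) -> float:
    """Average seconds per call of fn(payload)."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(payload)
    return (time.perf_counter() - start) / runs

def passthrough(data: bytes) -> bytes:
    # Baseline: a plain copy with no protection applied.
    return bytes(data)

def stand_in_cipher(data: bytes) -> bytes:
    # Placeholder transform; replace with your real encryption call.
    key = 0x5A
    return bytes(b ^ key for b in data)

payload = os.urandom(64 * 1024)  # 64 KiB sample workload
baseline = bench(passthrough, payload)
protected = bench(stand_in_cipher, payload)
print(f"relative overhead: {protected / baseline:.1f}x")
```

Running the same harness against a production data path, once with the protection layer and once without, gives the kind of relative overhead figure quoted above.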
Yeah, it's certainly true that one of the weak points of, for example, database encryption is that there is always a master account with a master password that holds the encryption keys, and everything gets encrypted that way. Is there anything like that with ShardSecure?
So we have a keyless approach, which means there isn't really a concept of encryption keys for the data, so you don't need that master key. A lot of problems in encryption come with the master key and the way you protect it; as you mentioned, the master key typically needs to live in a hardware security module. That is something we don't have. We use something called cryptographic permutation, which is essentially a keyless way of protecting your data, so all the overhead of key management and the master key basically goes away.
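To give a feel for the general idea of fragmenting data rather than encrypting it with a key, here is a deliberately simplified toy: bytes are dealt round-robin across several shards, so no single shard holds a contiguous, readable run of the original. This is not ShardSecure's cryptographic permutation (a real scheme adds randomization; a plain stride like this still leaks every n-th byte), just an illustration of reassembly depending on how the pieces relate rather than on a decryption key.

```python
def fragment(data: bytes, n_shards: int = 3) -> list:
    """Deal bytes round-robin across n_shards."""
    shards = [bytearray() for _ in range(n_shards)]
    for i, b in enumerate(data):
        shards[i % n_shards].append(b)
    return shards

def reassemble(shards: list) -> bytes:
    """Interleave the shards back into the original byte stream."""
    n = len(shards)
    out = bytearray(sum(len(s) for s in shards))
    for shard_idx, shard in enumerate(shards):
        for j, b in enumerate(shard):
            out[shard_idx + j * n] = b
    return bytes(out)

original = b"customer-record: Alice, account 12345"
parts = fragment(original)
assert reassemble(parts) == original
```

In this toy, an attacker holding one shard sees only every third byte; the real value of the approach is that there is no master key whose compromise unlocks everything at once.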
Yeah, good. So how do you protect against the case where somebody else buys ShardSecure, so that their data isn't readable by you and your data isn't readable by them?
So our product is deployed within the customer's control, meaning it runs either on-premises or in their own cloud. We do not have access, and a third party would not have access. We also have tenant separation built in: privileged admins within the application itself cannot read data they are not permitted to. You define which processes can read data, and only those processes can. So the first step is always to place control completely with the customer, so we don't even have theoretical access to it. That helps a lot.
Yeah, okay. So it isn't just something that does the same thing to everybody's data; there is something different in the way it treats each individual customer's data.
Oh yeah. Even if a customer used ShardSecure on the very same source data, the same file with the same content on the same day, the result would be different. There are randomizing processes in there that make sure of that. So you cannot take protected data, put it into a different cluster, and get it back; that obviously doesn't work.
Yeah, okay. You were talking earlier about the challenges of adapting applications, and this is true. A lot of people write an application first, and then somebody comes along and says you need to deal with encryption, so they hack something on, and there are terrible things people do, like embedding the keys in the code. So there is potentially a lot of work involved in adapting applications. Just remind us how easy it is for somebody to use ShardSecure with their applications without changing them.
Yeah, I think one of the struggles we've created is that we just roll the entire duty of data security onto the application team, when we as data security professionals should be guiding them. The other problem is that, if you've ever been part of a company developing software, every process takes time and therefore money: time to market, time to test and evaluate. So we make sure that if you have an application that reads and writes data, which is basically every application in the world, we can hook in transparently. As an application team you don't need to rewrite any of your code; it just hooks in and provides that level of security, which is far easier than implementing it yourself and then maintaining it yourself. And I think one of the problems we've seen is that companies have already implemented it, right?
Companies have implemented encryption over the years, and now, with new regulations coming in, they need a new way to implement it, with pseudonymization and data fragmentation, so they need to reinvent the wheel again. With our solution all of that is offloaded: the application team doesn't need to worry, and it's just a policy-driven approach to how you want to secure the data. It's much easier, takes a lot of work off the application engineers, and at the end of the day makes a better product, better code, and happier customers.
Yeah. You talked about using this across different providers. Can you just go over how you deploy it again?
Yeah. Our deployment model is that ShardSecure runs as virtual appliances in a cluster, and this cluster can reside on-premises, in a particular cloud provider, or in a hybrid environment where the cluster spans multiple cloud providers and your on-premises estate. All of it stays under the customer's control; we do not have any access, no matter where it runs. The virtual appliance itself is of course highly secured, because we're dealing with data security, but it can span multiple environments. The benefit is that even if you move data from one to the other, you can adapt easily: if you currently have on-premises data and you want to move it to the cloud, you can do that as well. So it's basically virtual appliances that can be installed on-premises, in the cloud, or in a hybrid scenario.
And does it depend on a certain virtualization environment, or is it open?
It's open. We can deploy as virtual appliances, which means you need a hypervisor for virtual machines one way or another, or we can also run in Kubernetes.
Oh, right, so you actually run in a container-based environment; that's interesting. Does it pose any specific problems if you have a containerized application?
No, I don't think it poses problems; if anything it brings benefits, because scaling is much easier. Scaling and rollout are easier than with VMs, so it becomes an easier rollout for your infrastructure team.
Yeah, okay.
We do provide scripts to do that automatic rollout in the containerized world, as you mentioned.
Yeah. It was interesting looking at your architecture, because you effectively sit, as you put it, as an intermediate layer that mimics the different storage systems. Now, what are the limits of that? You've got everything from network file systems through databases down to data lakes. Is there any limitation, or does it cover everything?
It doesn't have limitations in scale; that's one of the benefits, because we can put the data wherever we want, and we are that abstraction layer. There are benefits especially with anything stored on network storage, and that could be anything from an S3 bucket in Amazon or blob storage to a file share; they are all network storage in the end. So all of them are fairly easy, because the concept of communicating over the network is already there. The only one that was tricky, especially to do performantly, was locally attached storage: what if you currently have a virtual machine that writes to a disk on the same machine? Even there we came up with a way to hook in, presenting ourselves as a virtual disk, so we can still do it in a performant way. So yes, there are limitations sometimes; there's always an exception to the rule, and I'm realistic about that. But in, I'd say, 99 percent of cases there is no limitation on where it fits in. Databases need very fast, very frequent input and output, whereas file storage means larger data sets accessed less frequently, but we tackle both of them and we're quite aware of the difference.
So you can actually interwork with the big relational databases then?
Yes. A lot of our customers have SQL Server or Oracle; all of those are day-to-day work for us.
Yeah, that's good. It's interesting, because increasingly people want to use S3; it's one of the modern ways of doing it, because you have all this unstructured data, not Word documents but images and all kinds of things. So working happily with S3 is one of the key things.
Yeah. At the end of the day we make sure you get the most cost-efficient storage, without compromising performance, obviously. S3, or what's known as object storage, is usually the best offering within cloud providers; it's basically your best bang for the buck if you weigh performance against cost, and it gives you very good performance without a noticeable hit. The reason people have trouble adopting it is that it is not a file system or a locally attached disk; it's a particular API. So we can also act as a translation layer for that. What if your current solution doesn't support S3? As you mentioned, we can act as a translation layer in between, because we sit in between and can route the data wherever you want it to go. So it's pretty easy for us.
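The translation-layer idea, file-style calls on one side, object-store puts and gets on the other, can be sketched in a few lines. Everything here is hypothetical for illustration (the class names and in-memory backend are invented, not ShardSecure's interface or the real S3 API):

```python
class ObjectStore:
    """In-memory stand-in for an object-storage backend (put/get by key)."""
    def __init__(self):
        self._objects = {}

    def put_object(self, key: str, body: bytes) -> None:
        self._objects[key] = body

    def get_object(self, key: str) -> bytes:
        return self._objects[key]

class FileTranslationLayer:
    """Exposes file-style read/write calls and routes them to the
    object store, so the application needs no object-store-specific code."""
    def __init__(self, backend: ObjectStore, prefix: str = "files/"):
        self.backend = backend
        self.prefix = prefix

    def _key(self, path: str) -> str:
        return self.prefix + path.lstrip("/")

    def write(self, path: str, data: bytes) -> None:
        self.backend.put_object(self._key(path), data)

    def read(self, path: str) -> bytes:
        return self.backend.get_object(self._key(path))

store = ObjectStore()
fs = FileTranslationLayer(store)
fs.write("/reports/q1.csv", b"region,revenue\nEMEA,100\n")
assert fs.read("/reports/q1.csv").startswith(b"region")
```

The application keeps calling `read` and `write` on paths; only the shim knows the data actually lives behind an object-store API, which is the adoption gap described above.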
Okay. Well, we're coming towards the end, but one really interesting question I'm sure the audience would like to hear: can you give an example of a customer that has used it?
Yeah. One of our customers, one of the largest banks in the world, is using it for multiple reasons. One of them is protecting the data before it goes into the cloud; that was very important for them. No cleartext data could even reach the cloud; that was one of the restrictions. The other big factor for them, once confidentiality was handled, was protection from actual cloud outages. They had been burned by outages at some of the cloud providers that actually affected their business, and they wanted a transparent way of moving between cloud providers, and we're basically doing that for their underlying storage. Beyond that, the number one use case I've seen so far among early cloud adopters is machine learning datasets, right?
So you have machine learning datasets sitting in the cloud. A dataset by itself may not even contain PII, but it's very confidential because it has intelligence in it, so they're protecting it. Also very common is protecting your local file servers. We have multiple customers who basically want to prevent the file server admin from reading confidential data. And it's not only about personal data; it's also about things like IP rights or engineering plans. How do I prevent that? That's another thing we do: we insert ourselves between the endpoint and the file server and protect the data before it hits the file server.
Yeah, that's interesting. I think I tried to make the point that although people have become obsessed with personal data, personal data is not the whole problem. There is a lot more business data that is not PII: you talked about intellectual property, artificial intelligence training datasets, and a whole host of industrial data that is proprietary and fundamental.
Even the video feed as we're talking right now: we're seeing into each other's houses, right?
Yeah, definitely. I think the world has moved on from focusing on what's considered sensitive from a compliance point of view to what's actually business critical.
Business critical is the key thing, yeah. So I think we're just about coming up to the end now. Could I ask you both to say some final words? Pascal?
Go ahead, Pascal.
I just want to say thank you to everyone joining here; without you it wouldn't be possible for us to be here. And of course I'd be glad if you get in touch, so we can look more deeply into your environment and how we can help you.
Okay, thank you. Julian?
Yeah, thank you so much for inviting us, Mike, and for doing this with us. I think one thing we learned, especially from you, Mike, is that data security and data protection are still complex. So if you need help with it, reach out; we're more than happy to have a quick introduction call and give you advice. Sometimes that's all you really need, and we've been through it many times, so we can definitely help. Feel free to reach out if you have any questions about data security or data protection.
Okay, well, thank you very much, Julian Weinberger and Pascal Cornell of ShardSecure, and thank you to all the audience for joining and paying attention throughout. Thank you, and good evening.
Thank you. Bye. Thank you.