Webinar Recording

Urgent Need to Protect the Most Critical Business Assets: Data & People

Log in and watch the full video!

Data is widely considered the “new oil” because it has great value and provides direct and secondary revenue streams. But, also like oil, data can leak from organizations that depend on it. Data security and the prevention of data leakage are therefore imperative for business as well as regulatory compliance reasons.


Hello and welcome to our webinar today. I'm John Tolbert, Lead Analyst here at KuppingerCole Analysts. Today, our topic is the urgent need to protect the most critical business assets: data and people. And I'm joined by Kristin Brennan, the channel development manager at Safetica. Hello, Kristin. Hi, Kristin here. And I would like to welcome everybody who's tuned in today from all over the world. Thank you. So before we jump in, a little bit about some of our other upcoming live events. We have a couple of virtual events in February and March, on privileged access management and then zero trust, and then EIC will be hybrid in 2022, meaning it will be online and in Berlin this upcoming year, May 10th through the 13th. So we hope you can join us for that.
Some logistics info: everybody's muted, we control that, so there's no need to mute or unmute yourself. We're going to take Q&A at the end; you'll see a question-and-answer blank in the GoToWebinar control panel. You can type your questions in at any time and we'll take them at the end of our session. We're also doing a couple of polls, one at the beginning and one at the end of my session, and we'll talk about the results during Q&A. And then, yes, we're recording this, and the slides and the recording will be available probably tomorrow.
So I'm going to kick it off and talk about the information protection lifecycle and the role that DLP (data leakage prevention) kinds of tools can play in an overall security architecture. Then I'll turn it over to Kristin, and then we'll do the Q&A. So this is not really a surprise to anybody: information protection is difficult. Why? Well, it's a complex task. There are a lot of different kinds of data formats; there are structured versus unstructured data types, and then there's data within applications. It's widely distributed, you know, on desktops, servers, applications, cloud, mobile; data is everywhere. And there's a lack of consistency, integration, and even interoperability amongst different kinds of security tools. You know, you have security tools that might work at the network layer, or that might be appropriate on desktops, but they don't really work on mobile. So there are lots of different kinds of tools, and they don't all necessarily work well together. So of those three causes of our problem, which one do you think is the most challenging aspect of protecting information? Is it the complexity, the distributed nature of where data resides, or the lack of interoperability? We'll give you a few seconds to answer the poll question.
Okay, well, we will take a look at the results of that at the end. Thank you. So I like to go out and look at Hackmageddon.com every once in a while. These are kind of averages, you know, they update it monthly, but what is the nature of these different kinds of data leakage incidents? About 85% of the time it's cyber crime, around 13% it's espionage, cyberwarfare is only about 1%, and then other, maybe unidentified motivations are 1% also. So a few years ago we put together what we call the information protection lifecycle, which is just a way to sort of help understand how the different tools in the information security space can help protect information at different phases of its life. You know, information is created, or maybe discovered in an inventory; that's a great time to figure out what kind of data it is and apply classification or categorization.
Then after that, when it's in the prime of its useful life, you have to control access to it, secure it, and monitor it to make sure that it's being accessed properly; detect inappropriate accesses of information or attempts to exfiltrate it; contain and recover from incidents when they are discovered; and deceive. Deception is an interesting and very valid methodology for improving the monitoring and detection of nefarious attempts to get your information. We'll go into each one of these in a little bit more detail in the next few minutes. And then lastly, there's dispose: once information is no longer useful or you don't have to hold onto it, then you can either archive it or delete it.
So, acquire and assess: whenever data objects are created, or if you run a data inventory and you find information, that's a great time, for unstructured data at least, to apply some metadata tags that can be used for access control. They can also be used by DLP tools to make policy-based decisions about what can be done with the data, and DLP tools are great in many cases for running these kinds of inventories, discovering the data, and applying the metadata tags. For structured data in databases, sometimes it's not just a simple row or column that might constitute sensitive data; it may be groupings of those. So database security is more complicated in that sense, too, because it may be combinations of rows and columns, and you may have to implement tools like access control proxies in front of databases to make sure that inappropriate access doesn't happen.
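The tagging idea above can be sketched in a few lines. This is an illustrative sketch only; the keyword rules and tag names are hypothetical and not taken from any vendor's product.

```python
# Illustrative sketch: assign classification tags to unstructured text
# using simple keyword rules, so DLP policies can act on the tags later.
SENSITIVE_KEYWORDS = {
    "confidential": "internal",
    "ssn": "pii",
    "iban": "financial",
}

def classify(text: str) -> set:
    """Return the set of classification tags found in a document."""
    lowered = text.lower()
    return {tag for keyword, tag in SENSITIVE_KEYWORDS.items() if keyword in lowered}

tags = classify("CONFIDENTIAL: wire to IBAN DE89...")
```

A real DLP discovery engine would of course use far richer detection (regular expressions, dictionaries, OCR, machine learning), but the output is the same kind of metadata tags discussed above.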
So on the control access side, the XACML reference model I think is still a very good way to look at how authorization should proceed within applications, or even just going from, let's say, a desktop to the data store or collaboration system. A user makes a request that's intercepted by a PEP, a policy enforcement point, which then sort of intermediates and asks a policy decision point: should this user, coming from this device, looking for access to this file, be allowed to get it? The policy decision point makes those decisions at runtime, but it's getting information from the policy information point, which is where the predetermined access control policies and user information stores live. And those are in turn maintained and created at a policy administration point. So this model still works and is a good way of describing the overall access control methodology that many different systems use today.
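The PEP/PDP flow described above can be sketched as follows. This is a minimal illustration of the pattern, not real XACML tooling; the policy table, roles, and resource names are all made up for the example.

```python
# Minimal sketch of the XACML-style flow: a PEP intercepts a request and
# asks a PDP, which evaluates rules drawn from a PIP (here, a simple list).
POLICIES = [  # policy information point: predetermined access rules
    {"role": "analyst", "resource": "report.xlsx", "action": "read", "allow": True},
    {"role": "analyst", "resource": "report.xlsx", "action": "write", "allow": False},
]

def pdp_decide(role: str, resource: str, action: str) -> str:
    """Policy decision point: evaluate the request at runtime."""
    for rule in POLICIES:
        if (rule["role"], rule["resource"], rule["action"]) == (role, resource, action):
            return "Permit" if rule["allow"] else "Deny"
    return "Deny"  # default-deny when no rule matches

def pep_intercept(user: dict, resource: str, action: str) -> str:
    """Policy enforcement point: intermediates the user's request."""
    return pdp_decide(user["role"], resource, action)

decision = pep_intercept({"name": "alice", "role": "analyst"}, "report.xlsx", "read")
```

The default-deny fallback mirrors the principle of least privilege: any request not explicitly permitted by a policy is refused.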
By security we often mean encryption for confidentiality, and encryption needs to be present in many different places. So we'll start off by looking at encryption in transit. SSL, as you know, is outdated; there are plenty of attacks against it. I think everyone should be using at least TLS 1.2; TLS 1.3 is available and preferred. We have to consider the risks of implementation errors; there was the Heartbleed incident a few years ago. A strong, well-thought-out implementation is absolutely necessary; you can't just apply security without making sure that it's done quite right.
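The "at least TLS 1.2" recommendation above is easy to enforce in application code. A sketch using Python's standard-library ssl module:

```python
import ssl

# Sketch: build a client-side TLS context that refuses anything below
# TLS 1.2, in line with the recommendation above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3 and TLS 1.0/1.1
```

`create_default_context()` also enables certificate and hostname verification by default, which is part of the "well-thought-out implementation" point: downgrading those checks is a common implementation error.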
There are still, unfortunately, unencrypted protocols that are widely used around many enterprises, things like Telnet and FTP. At this point, it's a good recommendation to use encrypted transports, things like SSH and SFTP. There's application-level encryption; S/MIME for email is a great example of that. There are network-level encrypted tunnels; there's SSH and IPsec, and any communication on the network should be encrypted by one of those. And in places where you suspect unencrypted protocols may be used, use firewalls and access control lists on routers to drop those connections and enforce encryption by blocking unencrypted protocols at the firewall or network level. For encryption at rest, there are several different places where you can do that. There's whole-disk encryption, which is good, but that really only protects you from physical media loss, like somebody stealing a laptop and taking the drive out of it. There is file-level encryption, encrypting specific files based on a user-initiated action or a program that manages file-level encryption across your enterprise; that's a good way, especially if you're doing file sharing in a collaborative environment. There's database-level encryption; again, this can be quite granular, at the table, row, or column level. And then there's application encryption, things like S/MIME, making sure all the data in an application is encrypted.
Lastly, we've got encryption in use. This is probably the hardest part. There's homomorphic encryption, which is performing calculations on encrypted data, where even the results, as they're created, are encrypted. It's still mostly academic at this point; there aren't a lot of viable products out there that do it. There's format-preserving encryption, but this only works with a few applications; on things like input validation it can break other functions, and sometimes vendors, in an effort to be more compatible with other solutions in the space, kind of weaken that encryption. And it's fairly limited in the number of cloud services that support it. Then we have secure enclaves, which really are encryption in use; it's sort of carving out a secure place or work environment in which everything is encrypted. Organizations will probably set up things like SSH jump boxes to get into the secure enclave; once inside, the data is only decrypted within this specially protected environment, and everything else around it stays encrypted. But it's administratively very difficult to keep secure enclaves secure.
Then there is monitor and detect, and as you'll see, and as you probably know, it's just an alphabet soup of different kinds of monitoring tools. So here we see your infrastructure, and that can include everything from your on-premise servers and desktop machines to all the other security tools that can feed into the SIEM, the security incident and event management system. We have things like endpoint protection, endpoint detection and response, next-gen firewalls, web application firewalls, and various cloud providers. DLP and NDR, the data leakage prevention and network detection and response tools, should all be plumbed into the SIEM as well. Then there's SOAR, security orchestration, automation, and response. That is a system that can help organize and collect information from cyber threat intelligence sources and help make responses back to enterprise security tools based on what's discovered by the SIEM. So it can make changes in your cloud, your firewalls, and your endpoint protection and detection, and it can make policy-level changes for DLP and NDR as well.
So with the MITRE ATT&CK framework, we've seen a shift from, let's say, the Lockheed Martin kill chain, which was more focused on prevention. MITRE ATT&CK is more about detecting incidents that have happened and responding to them, and I think this is a pretty significant change. As you'll see here, prevention is still something that everyone in the security field will continue to focus on, but it mostly applies at the level of initial access or, let's say, execution of malware. The rest of it is about detecting it and responding to it.
And then we have contain and recover in the information protection lifecycle. This is eliminating damage once it's discovered and being able to restore. So how do we go about containment? There are lots of different mechanisms and techniques that enterprises use, things like network segmentation: everything should not just be placed on a flat, general-use network. You might separate things like your finance applications in an enterprise, and you probably want to separate off environments where you work on your trade-secret intellectual property, that sort of thing. Firewalls can be used in that network segmentation, as well as creating specific VLANs for that. Then there's zero trust, which we hear a lot about these days, because it's a really good packaging of the principle of least privilege; zero trust means authenticating and authorizing every access request in an environment.
And that looks at user information, the device they're coming from, what application or resource they're trying to access, and environmental conditions around that, like time of day and location. All those things can help reduce the risk of unintended disclosure. On the recovery side, as I was saying a minute ago about SOAR, you've got various tools in the enterprise: endpoint protection, detection and response, network detection and response, and XDR, which sort of combines all the other DR types of tools. SOAR helps with the automation and orchestration. And then there's the concept of playbooks within the DR tools plus SOAR that allows organizations to say, okay, if certain conditions are met, then do X and Y. Those things could be, for example, isolating a node if you suspect that it is compromised by malware, or stopping communications at the network level between different machines on different ports.
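The per-request evaluation described above (user, device, resource, environment) can be sketched as a simple risk scorer. The signals, weights, and threshold here are purely illustrative assumptions; real zero-trust engines use much richer telemetry.

```python
# Sketch: evaluate every access request against device, time-of-day, and
# location signals, authorizing only when accumulated risk stays low.
def authorize(request: dict) -> bool:
    risk = 0
    if not request.get("device_managed"):
        risk += 2          # unmanaged device raises risk
    hour = request.get("hour", 12)
    if hour < 6 or hour > 22:
        risk += 1          # unusual time of day
    if request.get("location") not in {"office", "vpn"}:
        risk += 2          # unexpected network location
    return risk < 3        # permit only low-risk requests

ok = authorize({"device_managed": True, "hour": 10, "location": "office"})
```

The same scoring step would run on every single request, never just once at login, which is the key difference from perimeter-based access control.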
So those are some of the containment options that are available in the tools that we have these days. Then on recovery: backups. We often talk about how backups need to be onsite for expedience, but offsite backups are necessary too. We know of a few high-profile cases where ransomware contaminated onsite backups. They need to be tested; you need to be able to test your restores and make sure that they work. And it's more than just the data that might be on collaboration systems or file shares. You have to be able to restore application configurations, server configs, cloud configs, even information about your OT, operational technology, environment. And again, these things need to be tested, the procedures as well as the data, to make sure that it all restores properly. And lastly, we need to think about internal and external incident response and communication as part of the overall recovery process.
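One simple building block of "test your restores" is verifying that restored data is byte-for-byte identical to what was backed up. A sketch using standard-library hashing (the sample payloads are invented for illustration):

```python
import hashlib

# Sketch: confirm a restored copy matches the original by comparing
# SHA-256 digests, one small piece of a backup verification procedure.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"customer ledger 2021"
restored = b"customer ledger 2021"
restore_ok = digest(original) == digest(restored)
```

In practice, digests of each backed-up object would be recorded at backup time and checked after a trial restore, which also catches the ransomware-contaminated-backup scenario mentioned above.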
So deception is sort of an active defensive measure that goes right along with monitor and detect. I think it's more than just the old honeypot; these aren't just standalone machines sitting outside of a perimeter somewhere. They look like production machines, and they're designed to draw attackers in, away from your real assets. You can even make the DDP (distributed deception platform) environment look less secure and therefore a little bit more attractive to attackers. Why would you want to do that? Well, it can help you discover attacks faster and understand what the attacker's TTPs are, the tactics, techniques, and procedures; it helps you watch what they may attempt to do against your simulated servers. And you know that if anything happens in a DDP, it's good, high-quality threat intel, because the attacker was intending to attack you, and this is what they would have done.
So the last phase of the lifecycle is dispose. Data has its useful life, and once you're at the end of that, which can be governed by different data retention laws and regulations (some kinds of information you have to retain for certain periods of time), it can become a burden to just keep information. So if you don't actively need it but have to store it for some reason, a legal reason, you can archive it; that way you can produce it later if you need to. But the principle of data minimization is a great thing in general. I mean, it lowers your storage costs, but it also lowers things like your exposure to GDPR fine risk. Privacy regulations require the right to be forgotten. So a good practice here is data minimization: don't collect and keep any information that you don't need.
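Retention-driven disposal as described above is mechanically simple once each record type has a retention period. A sketch with invented, illustrative periods (real periods come from the applicable laws and regulations):

```python
from datetime import date, timedelta

# Sketch of retention-driven disposal: flag records whose retention period
# has expired so they can be archived or deleted. Periods are illustrative.
RETENTION = {
    "invoice": timedelta(days=7 * 365),  # e.g. a multi-year legal hold
    "web_log": timedelta(days=90),       # short-lived operational data
}

def is_expired(kind: str, created: date, today: date) -> bool:
    return today - created > RETENTION[kind]

expired = is_expired("web_log", date(2021, 1, 1), date(2021, 6, 1))
```

Running such a check on a schedule is one concrete way to practice data minimization rather than letting everything accumulate.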
So, where DLP fits. Let's say you've got users coming from a laptop, a desktop, or a mobile device. You can install DLP; DLP usually comes as an agent and, potentially, sort of a policy decision point or enforcement server. If the user wants to do things like move information to a USB drive, an SD card, or out to the cloud, DLP can prevent that by commanding the agent not to allow writes to cloud resources or to SD or USB cards. CASB is a closely related topic to DLP. We can use a CASB to look for who's using what cloud services and what's going out there. It's very similar to DLP in terms of many of its capabilities, as far as discovering and then also controlling access. It can be used for monitoring and also for implementing controls; it can essentially serve, like I said, as a policy decision point and policy enforcement point anywhere from the device level to the cloud level.
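The agent-side decision described above can be sketched as a small egress check. The channel names and the rule (block any tagged file to these destinations) are illustrative assumptions, not any product's actual policy model.

```python
# Sketch of a DLP agent's egress decision: block transfers of classified
# files to removable media or unsanctioned cloud destinations.
BLOCKED_DESTINATIONS = {"usb", "sd_card", "personal_cloud"}

def allow_transfer(file_tags: set, destination: str) -> bool:
    if file_tags and destination in BLOCKED_DESTINATIONS:
        return False  # classified data may not leave via these channels
    return True

decision = allow_transfer({"pii"}, "usb")
```

Note how this reuses the classification metadata applied in the acquire-and-assess phase: tagging early is what makes the enforcement decision cheap later.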
So in that regard, I think DLP, data leakage prevention, tools are a very important component of the overall information protection lifecycle. They're necessary for controlling that access, for discovering information, and for helping categorize and classify what that information is, applying the appropriate metadata in cases where that can be done, and then later reading that metadata and making policy-based decisions on it. To wrap up here: there are a number of good solutions in the DLP space today, and we will be doing a Leadership Compass on DLP tools next year. DLP has been around for a while, and honestly, 10 or 15 years ago some organizations found that it was difficult to deploy and maintain. The good news is we believe that vendors have made really good strides to improve usability. Things like PII data breaches and insider threat are two of the primary drivers why enterprises of all kinds are looking for DLP tools today, and increasingly in the market, customers perceive DLP as being ready to meet the challenges that businesses have today. So to kind of wrap it all up together here: DLP can be an important component for information protection all across the lifecycle, and without data leakage prevention tools in place, there are likely significant gaps in your overall security posture.
So we will finish up my time with a second poll here. I'm curious: does your organization have DLP in place, or are you considering it in the near future? And our suggested answers here are A, yes, we have it deployed effectively; B, well, it's something we're planning for; or C, no, it's not really under consideration. So take a few minutes and answer that, and we will discuss the results at the beginning of our Q&A time. Okay, and with that, I would like to turn it over to Kristin from Safetica.
Thank you very much, John, for the introduction and overview of how DLP and insider threat protection fit into the overall information protection lifecycle. Now I would like to focus a little bit more on the topic of insider threat protection, and I'm going to start with the bad news first: insider threats are on the rise. Let's talk a little bit about what this means, and I would like to start with a few definitions. An insider is any person who has, or had, authorized access to or knowledge of an organization's resources. That's going to include personnel, facilities, systems, and, of course, data. The insider threat is the potential for that person, that insider, to use their authorized access or understanding of the organization to harm the organization. Now, this harm can include malicious, complacent, or unintentional acts that negatively affect the organization in any way. Of course, today the big topic is data, the result of such a threat being its leakage from the organization. According to Securonix's 2019 insider threat survey report, 73% of companies confirmed insider attacks are becoming more frequent.
Let's examine why this is happening. Verizon's 2021 data breach investigations report states that 45% of employees take work-related data with them when leaving the company. Now, that's quite a significant number if you consider that that's maybe roughly half of the employees, and we all know what can happen when sensitive data gets into the wrong hands. This can include financial information, company secrets, intellectual property, or even confidential personal data. Now imagine for a minute a, let's say, disgruntled employee who leaves the company and uploads vital client information to their personal cloud drive, or a contractor who unintentionally loses an unencrypted USB device containing design documents. The harm to reputation, profitability, and competitive advantage can of course be enormous for the affected organizations, not to mention the direct financial repercussions.
So it's no big surprise that data leaks caused by insiders fill the media. Roughly two thirds of companies are afraid of malicious employees, and the number of insider-caused cybersecurity incidents has increased by just under 50% in the previous three years; insider threats are the primary cause of 60% of data breaches. Now, how bad is the financial damage? According to the Ponemon Institute, an incident involving employee or contractor negligence can lead to costs of just over $300,000, while a malicious or criminal act can cost the company over $755,000. And we've seen about a 31% increase in these costs over the last two years. So it raises the question: why does it cost so much? The answer is, it just takes too long to discover it.
According to recent studies, 88% of companies cannot consistently detect insider threats, and 68% of data breaches take months to discover, with two months being the average time to contain an insider threat incident. Of these costs, about 90% go towards containment, remediation, incident response, and investigation. At this point, we should also keep some key market trends in mind. Data is increasingly important; in an increasingly digital world, data and people are what drive modern businesses, and protecting them is essential. Organizations also need to increasingly comply with regulations around data privacy and other compliance regulations, not only to protect their own employees, but also the customers and clients that rely on their products and services. And with the rise of the remote work era, which of course we've seen a lot of in the current COVID crisis, companies really need to gain even more control over their data.
And this is where an effective insider threat protection solution becomes essential, one that secures your data while supporting operational efficiency. Now, this graphic here aligns to the IPLC that John showed earlier. So if we start in the top left corner, data flow discovery and risk detection directly relates to acquire and assess. We can analyze all data created and moved by endpoints, detect and classify all files in transit, in use, and at rest, and we add information about the risk of the operation and the user's behavior. Moving to the top right corner: based on data classification and DLP rules, we protect data and prevent it from unauthorized collection and transfer out of the organization. We also add application and web filtering, and at the same time, we can educate the user by notifying him or her when he or she is about to violate the security policies, and why. In the bottom left-hand corner, you see workspace and behavior analysis. Now, this relates to what John was talking about with monitoring and detecting nefarious activities, next to the evaluation and logging of all data security events. In order to have true data security, you need to know what's going on in your environment and be alerted to suspicious or malicious behavior. This is enabled by the workspace and behavior analysis. And last but not least, we do all these activities to achieve business and regulatory compliance.
Now, to be able to protect your sensitive data, first you need to discover where it resides and flows. So, as I mentioned earlier, we detect and classify all files in transit, in use, and at rest, and add information about the risk. This is what we call insider threat detection. Data is classified based on content, including OCR for images and scanned PDFs; based on context metadata, so that means where and how it was created and moved; and based on user-based classification. It is this combination of various approaches, together with what we call smart scanning, that enables efficient data discovery and classification with a low impact on your endpoint and daily work. In other words, we don't interrupt your work or overload your CPU and memory. Using this classification, we apply both predefined and custom DLP policies that can be set on a very granular level.
We can enforce the security, but also educate the users by showing them notifications when they're about to break the company rules. In general, there are several modes to choose from for the protection part. We can be in silent mode: we log all the activities in the background and allow users to do whatever they want. We can show warning notifications, log the event, and still allow the user to proceed. We can ask the user to provide a one-time business justification, allow the operation, but immediately notify the admin. Or we can simply block the user's operation entirely. This data protection is executed based on the DLP policies mentioned before. Additionally, the information about the risk level of operations and individual users helps to identify potentially malicious insiders and prevent data leakage. You can, of course, approach them in the company, or you can tune the company's DLP policies.
At this point, usually the question arises: how do we actually assess this risk? Each event is assigned one of four risk levels based on various indicators that determine how likely it is to cause a data leak. The calculation of the risk level is based on our built-in detection rules, but we've also added a few more risk indicators into the equation. So, for example: the event perhaps occurred at an unusual time; the attachment contains what may classify as sensitive data; the file was perhaps uploaded to an unsafe web storage or file share; the email was sent outside of a whitelisted domain; or the file was perhaps uploaded to a potentially unsafe external storage device. In the end, the risk level is the combination of all of these indicators.
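One generic way to combine boolean indicators like these into four risk levels is a weighted score with cutoffs. To be clear, the weights and cutoffs below are invented for illustration; they are not Safetica's actual scoring model.

```python
# Sketch: combine boolean risk indicators into one of four risk levels,
# echoing the indicator examples above. Weights and cutoffs illustrative.
WEIGHTS = {
    "unusual_time": 1,
    "sensitive_content": 2,
    "unsafe_web_storage": 2,
    "non_whitelisted_domain": 1,
    "unsafe_external_device": 2,
}
LEVELS = ["low", "medium", "high", "critical"]

def risk_level(indicators: set) -> str:
    score = sum(WEIGHTS[i] for i in indicators)
    return LEVELS[min(score // 2, 3)]  # map the score onto four levels

level = risk_level({"unusual_time", "sensitive_content", "unsafe_web_storage"})
```

The point is simply that each indicator contributes to the event's final level, so several weak signals can add up to the same severity as one strong one.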
The user risk is something that tells us how risky the behavior of the user is compared to their peers in the company. We calculate this based on two factors: number one, the number of high-risk operations and other risk indicators the user initiated in the past month; and number two, the user's deviation from the, quote unquote, normal behavior of their peers in the company. At Safetica, we cover all communication channels: classic email, file shares, instant messaging, mass storage devices, and even traditional printing. We are multi-platform and cloud-ready. Safetica ONE uses a standard client-server architecture, so the server can be deployed on premise or in a virtualized environment, or we also have a lightweight, cloud-native alternative that covers the core security scenarios. We call it Safetica Connect, and I will talk about it a little bit later.
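The two-factor user risk idea (own activity plus deviation from peers) can be sketched generically as follows. The formula is a made-up illustration of the concept, not the vendor's actual calculation.

```python
# Sketch of peer-relative user risk: combine a user's own count of
# high-risk operations with how far they sit above the peer-group mean.
def user_risk(high_risk_ops: int, peer_ops: list) -> float:
    mean = sum(peer_ops) / len(peer_ops)
    deviation = max(high_risk_ops - mean, 0.0)  # only above-average counts
    return high_risk_ops + deviation

score = user_risk(high_risk_ops=9, peer_ops=[1, 2, 3])
```

A user whose monthly count matches their peers contributes no deviation term, so only behavior that stands out from the group raises the peer-relative part of the score.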
The solution must be installed on the server, with an SQL database, and the clients on the endpoints; the policies can be created in the console and applied to the endpoints. Now, it's interesting to note here that the policy enforcement at the endpoints works even if the server is in offline mode. That means you'll still be able to notify the user, block his or her operations, or let him or her justify them on the endpoints, even if the server is not connected to the internet. At the beginning, we established the problem: the rise of the insider threat. So essentially we have two approaches to the same problem. Safetica ONE, as I mentioned before, is a self-managed, expert data loss prevention and insider threat protection solution with a full range of scenarios and a focus on data security, workspace audit, and cost optimization. Safetica Connect, on the other hand, is a SaaS insider threat prevention solution. It's a cloud-native service, multi-tenant and MSP-ready, fast and easy to use, with coverage of the core data security scenarios and a focus on best practice and maximum automation.
Of course, no cybersecurity solution can effectively protect your organization if you're not able to implement it, integrate it into your IT security stack, and maintain it in the long run. So Safetica solutions are easy to use and quick to deploy, provide clear context for all data and automated risk analysis, have low hardware requirements for endpoints and servers, enable seamless integrations into the IT security stack, and allow for high environment flexibility. Now, Safetica is well known for its ease of implementation, integration, and administration. For example, Safetica ONE takes about a month to implement the discovery features; in another two months you can implement DLP data protection, and in four to five months you can implement advanced DLP configurations and enterprise integrations. In the case of Safetica ONE, these first months are mainly dedicated to regular checks, settings, and tuning of the DLP policies.
And of course, your local Safetica partner of choice will support you with this process, so you can set up and fine-tune your own company's data security posture. The lightweight solution Safetica Connect I mentioned earlier takes only one man-day to get the first security audit results, and within another two weeks you can expect to have a full data security audit. The time to a security assessment is also short. What that basically means is that within one month, in a fully guided process, you'll have enough insight into data, insider risks, and potential incidents to decide if this is the right insider threat protection solution for you.
Now let's see what our customers have to say about it. Excuse me. So this year, SoftwareReviews collected many user surveys, and they ranked us first place in ease of implementation among eight DLP vendors, with 82% customer satisfaction. If you see the dark green bars on top, those are the customers that are absolutely delighted with this capability. Now, as mentioned before, it's also very important for an insider threat protection solution to be able to integrate with other third-party tools. It has to integrate seamlessly into existing SIEMs; network security solutions such as firewalls, UTMs, and secure web or mail gateways; and data analytics tools like Power BI or Tableau; and it also offers a one-click integration with Microsoft 365.
Here are a few of our technology alliances. We are a Microsoft Gold Partner, Fortinet Fabric-Ready, and a global technology alliance partner of ESET, and we're also compatible with all of these security vendors. Now, of course, it's always nice to show logos, but what does our customer base say? We were ranked number one in terms of data integration, with 80% customer satisfaction. As mentioned before, data security and the peace of mind that comes with it are of course important to modern businesses, but so is business efficiency. Our solutions are designed to be as non-invasive to day-to-day workflows as possible.
So how do we achieve this? Well, we continuously optimize the clients installed on the endpoint devices. The usual impact of Safetica on the endpoint's performance is just below 3%, with some exceptions for OCR. We're also using pre-scanning to minimize the impact on end-user productivity and business as usual: pre-scanning helps to discover and classify data when the computer is idle or working on CPU-non-intensive tasks. Thanks to granular scanning control, you can configure when scanning should run and when it's not needed. Real-time scans have been optimized to provide results within three seconds on average. Besides the CPU, memory usage is also optimized: the client works in low-priority mode and reserves system resources for tasks run by the end user.
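The idle-time pre-scanning idea can be illustrated with a small sketch. This is not Safetica's actual code; the load-average idle check and the `classify` function are made-up stand-ins for real OS-level idle detection and a real content classifier:

```python
import os
import time

def cpu_is_idle(threshold=0.75):
    """Treat the host as idle when the 1-minute load average is well below
    the core count (Unix-only; a real agent would use proper OS APIs)."""
    load1, _, _ = os.getloadavg()
    return load1 < threshold * os.cpu_count()

def classify(path):
    """Toy classifier: flag files whose name hints at sensitive content."""
    name = os.path.basename(path).lower()
    keywords = ("salary", "contract", "ssn")
    return "sensitive" if any(k in name for k in keywords) else "normal"

def prescan(paths, is_idle=cpu_is_idle):
    """Classify files, but only make progress while the host is idle,
    so scanning yields to whatever the end user is doing."""
    results = {}
    for path in paths:
        while not is_idle():  # back off while the user needs the CPU
            time.sleep(1.0)
        results[path] = classify(path)
    return results
```

Injecting `is_idle` keeps the schedule testable: `prescan(files, is_idle=lambda: True)` scans immediately, while the default politely waits for idle cycles.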
And last but not least, with the override feature I mentioned earlier, you can entrust the user to decide whether blocking makes sense, allowing them to continue with a business-critical operation if a justification is provided. We see this in the screenshot on the right side, and since the action is logged, the admin can go back and question the user about it at any time, so it does not compromise security. From the backend infrastructure perspective, depending on available capacities and the size of the environment, Safetica ONE can be deployed to an available server, meaning it does not need a dedicated server. Safetica NXT is cloud-native, as we mentioned before, so it does not require any server infrastructure, but it currently only covers core security scenarios and has no DLP features at the moment; these will be introduced in the first half of next year.
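The override-with-justification flow can be sketched in a few lines. This is again a hypothetical illustration, not Safetica's API: block by default, let the user proceed when a justification is supplied, and log every override so an admin can review it later.

```python
import time

audit_log = []  # in a real deployment this would go to the management server

def attempt_action(user, action, blocked_by_policy, justification=None):
    """Return the outcome of a user action under a DLP policy:
    allowed, blocked, or allowed via a logged override."""
    if not blocked_by_policy:
        return "allowed"
    if justification:
        audit_log.append({
            "ts": time.time(),
            "user": user,
            "action": action,
            "justification": justification,
        })
        return "allowed-with-override"
    return "blocked"
```

Calling `attempt_action("alice", "usb-copy", True)` returns `"blocked"`, while supplying a justification returns `"allowed-with-override"` and leaves an audit entry behind, which is what keeps the override from becoming a security hole.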
We believe an effective insider threat protection solution should also provide clear context for all data. So we give insight into what files are leaving the company and provide clear context by showing when, where, how, and by whom files were used and sent out. Therefore you can identify how company data is used and where it is stored and sent. Based on that, we can provide you with a comprehensive overview of processes, risky events, and security incidents throughout your company, and with that information you can then implement the optimal DLP policies for your security posture. So, coming to the end of the presentation: what is the overall rating of Safetica by our customers?
According to the users' evaluation, you can see that Safetica is one of the best DLP solutions in customer satisfaction. Most of our interviewed users would recommend us, all of them plan to renew, and three-fourths are satisfied with the price-performance ratio. Here are a few of our satisfied customers. Once again, these are the things we believe are essential to an effective insider threat protection solution: coverage of the scenarios of DLP, insider threat protection, and support for regulatory compliance; a solution that's easy to use and quick to deploy, with low hardware requirements for endpoints and servers, providing a rapid time to value with easy-to-choose product tiers; a wide range of third-party integrations; and availability on all the main platforms, on-premise and cloud. So thank you very much, and I would hand back over to John at this point.
Thanks, Kristin. So we'll move into the Q&A section here in just a second. I encourage you, if you have thought of any questions over the last 40 minutes or so, to please enter them into the questions box in the GoToWebinar control panel. Before we launch into that, we'll take a look at our poll results. The first question I asked was: what do you think is the most challenging aspect of protecting information? About 50% say it's the complexity, almost 40% say it's the distributed nature, and 13% say it's the lack of interoperability. You know, all three kind of come together to make it more difficult than it certainly should be, I think, to protect data. The next question was: does your organization have DLP in place, or are you thinking about it? This one's interesting. A little more than a third each say yes, we have it deployed effectively, or it's under consideration, and a little under a third say it is planned. I think that's good news: two-thirds are either deploying or planning to deploy. Okay, thank you for participating in those polls.
So now let's go ahead and take a look at the questions. Let's see, first question: how much money does it make sense to spend per year on data security, when the average cost of a data loss caused by insiders is over $750,000? Well, I think that's an interesting question. Would you like to start with that one, Kristin?
Yeah, sure. Based on, let's say, best-practice ideas from the market, we would say probably about 15% of the yearly revenue for cybersecurity in general, because if you are targeted by an insider who becomes an external malicious actor, let's say, you could expect a ransom of perhaps about that sum.
Yeah. You know, a lot of organizations look at this as risk management, and that's true, but I think a lot of organizations may not really appreciate what the risks really are. You can quantify things like regulatory compliance and money lost in fines; those are a couple of examples. But there can even be an existential threat to the business if insiders, or outsiders for that matter, walk off with your key trade secrets: the things that your company does that nobody else does, your competitive advantage. If that goes out the virtual door, then you can lose business to the point of not being able to stay in business. So I think businesses need to plan carefully, not only for the possibility of fines associated with regulatory noncompliance, but also to make sure they're adequately protecting the information the organization uses to maintain its competitive advantage. The next question: there are numerous stats about the average cost of data breaches that range from hundreds of thousands to millions of US dollars. How are those costs calculated?
Yeah. So as I mentioned, about 90% of the costs go toward containment, remediation, incident response, and investigation, according to the Ponemon Institute. But as you just mentioned, it is not to be underestimated how much money a company can lose in competitive advantage, or by being put under pressure by the bank lowering their credit rating, making loans more expensive, or by losing customers. These are all part of the costs. So often in cybersecurity I think we only think about compliance fines, but there is much more to be considered when you lose your sensitive data.
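The kind of quantification being discussed here is often reduced to an annualized loss expectancy (ALE) calculation. The figures below are illustrative only, reusing the $750,000 average insider-incident cost mentioned earlier and an assumed frequency of one incident every five years:

```python
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """Classic risk-quantification formula: ALE = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# $750,000 per incident, assumed one incident every five years (ARO = 0.2)
ale = annualized_loss_expectancy(750_000, 0.2)
print(ale)  # 150000.0
```

Comparing an ALE figure like this against a proposed annual security budget is a crude but common sanity check for the "how much should we spend" question, though, as noted in the discussion, it cannot capture unrecoverable losses such as leaked trade secrets.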
Yeah, for sure. I think you should look at what it would cost to recreate the data: thinking about not only what it costs to secure it, but what would happen if you were to lose it. The numbers that we see from various places like the Ponemon Institute are great because they help organizations quantify the average costs that have been experienced around the industry. But if you're at the point where you have lost some very critical piece of sensitive information, the cost can be far higher than what is shown in any of these estimates. I mean, if you lose the secret sauce that makes your product special in the marketplace, you can't really recover that. So if you're going to estimate the cost per data breach, you also need to think about what it would cost to recreate that information, or to create something new that would preserve your business, in case something critical was lost. Let's see the next question here: I noticed you put a lot of stress on implementation, integration, and maintenance; usually vendors put the security features first. Why are you taking a different approach?
Well, John, I'm actually going to quote you, because I wrote this down earlier: you said you cannot apply security without making sure it's done right. We all know that cybersecurity is a big topic, and you really cannot have a secure solution if you cannot seamlessly integrate it into your stack. If you cannot implement and maintain it, then the super-fancy security features do no good. Of course, for us security comes first, but we want to have a solution that's also easy to implement and to use.
Yeah. Having a lot of tools in place can be useful if they're deployed properly, but any place where an implementation hasn't been done properly can undermine that. Say you're rolling out new network equipment: a lot of times that comes with default usernames and passwords that are pretty well known, which is why implementation best practices can make a difference. If you don't do it right, then you sort of negate any value you would have gotten from implementing the tool in the first place. So I think we've reached the end of our questions here. I would like to say thanks to Kristin and Safetica for joining us today, and thanks to everyone who has attended and who will watch the recording later. Thank you for your time; I hope it was useful to you. Any final thoughts, Kristin?
I just want to say thank you to everybody, and wish everybody either a nice evening or a nice rest of the day, depending on their time zones. Thank you very much for joining today.
Thanks everyone.