Get a model and recommendations to quantify cyber security risks, including the costs of fines, contractual compensations, service credits, and loss of income. The use of heatmaps with qualitative criteria and arbitrary cocktails of threat and control-efficiency data prevents the sound planning of IT services and corporate defense. Learn from a demo of Monte Carlo simulations in a native MS Excel model that can be used for everything from comparing service providers to calculating the coverage of cyber insurance. This session will not only help you avoid consultancy money pits but also justify cyber security investments.
Cyber risk isn't just a technical problem but a strategic one. Cyber risk quantification enables CISOs to quantify the financial benefit of their cyber security strategy, communicate with the executive board at eye level, and get the buy-in they need. Join this session to learn how other companies are finally getting full transparency on their cyber exposure, ensuring not only that they're making the right investments in cyber security but also that those investments deliver the right ROI.
In this workshop, we will show you how to implement a risk-class-based approach within access management with little effort in order to achieve the highest level of control, compliance, and transparency in your own organization. All the necessary rules and templates (e.g., for password management, connection guidelines for the protocols used, and authorizations) are based on best practices, the BSI risk class model, and the requirements of ISO 27001.
Continual high-profile cyber incidents demonstrate beyond a doubt that cyber risks exist, but most organizations struggle to quantify cyber risk in a useful way. There is an urgent need for IT security leaders to find a common way to express cyber risk in monetary terms that business leaders understand, enabling effective risk management and security investment.
Thank you so much for the invitation to share some tips on how we can adjust our processes to the new ISO. I will finish with a demo of a Monte Carlo simulation tool for quantifying cyber security risk based on the new ISO. It is very important to understand that in the new framework, the new list of controls has only four categories, and we can adjust our risk taxonomy to the categories into which the controls are now consolidated: organizational controls, people controls, physical controls, and technological controls. These are the four groups, domains, or categories into which the new ISO organizes the controls. The main change from the previous version is in threat intelligence: we are now more focused on collecting information from internal and external sources, such as losses, incidents, and attacks. As the previous speaker said, that is very important, and we now also have better tools for a proper analysis.
Another control that was added concerns how we can delete data safely in order to comply with privacy regulations: under the right to be forgotten, data must be properly deleted, and there is a new control based on this requirement. How can we ensure that the data can no longer be recovered, for instance data spread across all environments or across every kind of replication? We usually lack visibility on the data that we need to remove.
Another challenge that the ISO helps to solve is the exit plan for outsourcing, particularly what we need to do for cloud-service contracts that have shared responsibilities. We need to foresee how the responsibilities will be executed: which controls we, as a company, are able to perform and able to monitor for the vendors. Who is responsible for what is usually a long conversation when we have shared-responsibility models in the contracts. And then, what are we going to do if the vendor is unable to perform those controls and we need to recover the service? In that direction there is a lot of attention on exit plans, and this is always a challenge. Then there is the question of how we can improve continuity: NIS 2 is now much more focused on business continuity.
The ISO recognizes that the world is moving from confidentiality risk to continuity risk in terms of the regulatory focus, and it asks us to define proper recovery time objectives and a business impact assessment.
The same applies when we are defining capacity and performance for technical and application services. Another practical point when implementing these controls is how we manage secure configurations: the policy that sets the configuration values we need to apply to the security parameters of any device, any hardware, any software. What is our standard, parameter by parameter, and how are we going to verify that these parameters are actually enforced on all the devices and hardware we are using?
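The parameter-by-parameter check described above can be sketched in a few lines. This is a minimal illustration, not part of the speaker's tooling; the baseline parameters and device values are invented for the example.

```python
# Hypothetical sketch: compare a device's configuration against a secure
# configuration baseline, parameter by parameter. All names and values
# are illustrative, not taken from any real standard.

BASELINE = {
    "min_password_length": 14,
    "tls_min_version": "1.2",
    "ssh_root_login": "disabled",
}

def find_deviations(device_config: dict) -> dict:
    """Return every parameter whose value differs from the baseline."""
    return {
        param: {"expected": expected, "actual": device_config.get(param)}
        for param, expected in BASELINE.items()
        if device_config.get(param) != expected
    }

laptop = {"min_password_length": 8, "tls_min_version": "1.2",
          "ssh_root_login": "disabled"}
print(find_deviations(laptop))
# {'min_password_length': {'expected': 14, 'actual': 8}}
```

In practice the baseline would come from a hardening standard and the device configurations from an inventory or endpoint-management tool; the point is that enforcement is checked value by value, not assumed.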
The ISO also covers controls related to data leakage prevention: how we can follow, detect, prevent, escalate, and sometimes block transactions when they are moving data outside our perimeter. How we monitor the gateways is becoming very critical, and it is not only about having a rule; it is about acting on that rule, reviewing what kind of data is trying to move across different perimeters so that we do not lose control, and separating legitimate movement from malware that is trying to exfiltrate data.
The ISO also adds more requirements related to web filtering, asking us to block websites that host malware. That is not a big issue; we usually have very good tools in this area, but it is still a change.
The ISO also asks us to mask personal data, which is very important for the production environment. Data scrambling and masking are becoming more and more important, as are implementing secure coding practices and addressing any anomaly in the use of networks and network services. Now we have a better set of risks.
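One common way to implement the masking requirement is deterministic pseudonymization, so that masked copies stay consistent across tables. The sketch below is an illustration of that idea with an invented helper and salt, not the speaker's method.

```python
# Illustrative sketch of deterministic data masking: the same input always
# maps to the same pseudonym, so joins between masked tables still work.
# The function name and salt are hypothetical examples.
import hashlib

def mask_email(email: str, salt: str = "demo-salt") -> str:
    """Replace the local part of an e-mail address with a stable pseudonym."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256((salt + local).encode()).hexdigest()[:8]
    return f"user_{digest}@{domain}"

print(mask_email("alice@example.com"))  # e.g. user_xxxxxxxx@example.com
```

A real deployment would keep the salt secret and apply the same treatment to every column containing personal data, not just e-mail addresses.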
In fact, the number of risks has been reduced. Let me share some thoughts on how we can address the quantification of our security risk, which is very important for criminal liability laws and corporate defense. We cannot tell a prosecutor, or a vendor, or a client that we are not properly controlling some process because there is a "red risk" or a "green risk". That means nothing; it is just bias, a malpractice demonstrated by science more than a decade ago. We need to move to models.
We need to be able to link the objectives to all the IT assets, services, and processes, and we need to know which approach the organization will take to address risk. If you are addressing IT risk coming from assets, you should not also run assessments based on services and processes, because otherwise there will be double counting. You need to choose whether you follow the assets, the services built from different groups of assets, or the processes and deliveries that consume different types of services. That is a big change.
You need a clear strategy for how you want to start, and then set your goals in terms of confidentiality, integrity, and availability. For the goals you define, you identify vulnerabilities, and the vulnerabilities can be linked to the new ISO controls: a control is a reaction to the risk, and the lack of a control is the vulnerability. If there is a threat and there is a goal, that creates the perimeter of a risk to assess. So you need a taxonomy of those vulnerabilities, for which you can use the new ISO standard.
You also need a list of threats and threat vectors that you are able to accommodate. Just because you are not encrypting some data does not mean there is a risk; a broken control is not a risk. What you may have is that, because of it, if there is a threat and there is an asset you want to protect, say unencrypted information in France, you may face a hacking attack with vectors such as man-in-the-middle, scanning, reconnaissance, and many other things. I recommend using the MITRE ATT&CK list for the taxonomy.
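The point that a risk only exists as the combination of an asset, a protection objective, a threat, and a vulnerability can be made explicit as a data structure. This is a minimal sketch of that idea; the field values echo the example above and are purely illustrative.

```python
# Sketch: a risk record is the combination of asset, objective, threat,
# and vulnerability (the missing or broken control). A broken control by
# itself is not a risk. All field values are illustrative.
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    objective: str      # confidentiality / integrity / availability
    threat: str         # threat vector, e.g. from a taxonomy like MITRE ATT&CK
    vulnerability: str  # the lacking control

risk = Risk(
    asset="customer database (France)",
    objective="confidentiality",
    threat="man-in-the-middle attack",
    vulnerability="data at rest not encrypted",
)
print(risk)
```

Modeling it this way makes the double-counting problem visible too: each risk is anchored to exactly one asset (or service, or process), never to several levels at once.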
My tip here is to start from the quantified objectives for the IT assets, not when the IT assets are already in the organization. Risk assessment should be decision making; it should be planning. You assess risk in order to decide what to do with a project, a contract, or an outsourcing, or whether a strategy of going to the cloud or changing environments is a good idea or not. You need to be proactive and embed risk management into decision making before those services and processes are created.
Otherwise it is too late to say that there is a risk. Another area where cybersecurity should be very focused is back testing: you need to come back to your model and be able to tell how well it assessed that some risk would happen. So you look at the prevalence of threats and the losses across different distributions and say, okay, I was expecting this type of loss here, that is the exposure I calculated, and these are the losses I actually had; there is a reference. You need to question whether the model is right. I am talking about models.
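A minimal version of the back test described above is to compare the realized average loss against the exposure the model predicted. The tolerance band and figures here are invented for illustration; a real back test would use proper statistical tests on the whole loss distribution.

```python
# Sketch of a naive back test: does the realized average annual loss fall
# within a tolerance band around the model's expected annual loss?
# The tolerance of +/-50% is an arbitrary illustrative choice.

def backtest(expected_annual_loss: float,
             observed_losses: list,
             tolerance: float = 0.5) -> bool:
    """Return True if the realized mean loss is within the tolerance band."""
    realized = sum(observed_losses) / len(observed_losses)
    return abs(realized - expected_annual_loss) <= tolerance * expected_annual_loss

print(backtest(100.0, [80.0, 120.0, 110.0]))  # mean ~103 -> True
print(backtest(100.0, [300.0, 400.0]))        # mean 350  -> False
```

When the check fails repeatedly, the inputs (threat prevalence, loss distributions, self-assessments) need recalibrating, which is exactly the feedback loop the speaker is asking for.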
There are a lot of initiatives related to data quality here: data coming from threat analysis, control assessments, and questionnaires. Assessing is not just completing a form, so the more data you add to the model, such as self-assessment data, the more you need to be sure that it finally produces a clear scenario, and then, once you can start measuring losses, how accurate the model was. The standard approach is to use different scenarios, at least best, base, and worst. You can link them to the number of records when it comes to confidentiality or integrity, or to the hours the service is interrupted when you are considering availability risks. You also need to account for confidence levels. For some decisions you already have plenty of information to know the threat frequency and how likely the vulnerability is to materialize, so you are very confident that the number is right; for new decisions, however, confidence may be low.
So you need to add this variable into the risk modeling: how well you know those assumptions, in order to foresee how the risk is going to materialize. We have also seen, for instance, the recent case related to the Russian operation against the food company; it is a nice case. You have an IT breach that then has a reputational impact: you are going to lose clients, and you are also going to lose competitive data. My recommendation is to include in the IT risk assessment the secondary impacts related to fines, compensations, disputes, compliance, contract risk, and fraud that may happen because IT risks are not addressed, and also the reputational cost, the income loss, and the opportunity loss. Finally, let me recommend a technique that is also getting a lot of attention: calibration. For some technologies, contracts, and activities we have no idea what is going to happen; we lack data, or the data we have is of poor quality in terms of losses.
In those cases we can do a calibration in which we involve experts who can foresee the effect of the risk, and we calibrate their responses. A model is extremely important. Quantifying IT risk is extremely important when you are setting the coverage for cyber insurance, and when you are planning IT projects and need to know how much a project is going to cost and when it is going to end. When you are budgeting the cyber security services and the security function, you need to know whether the investment in the cyber security area is paying off against the risk the organization carries. When you are assessing different alternatives to protect an asset, in terms of risk and cost, which is the best alternative? The same applies when you are assessing different outsourcing service providers: is this supplier, this outsourcing provider, good enough for the risk it will create, or do I need to pay a premium?
Do I need to go to another supplier that is more expensive but whose risks are going to be lower? Which is the best supplier to choose? So you need to quantify, and you need to have a model. Let me finish very briefly with a model. Basically, you create a scenario: let's say you will have a forced recovery because a hack has affected a system. You set the perimeter of the risk, and then you ask: how much is the maximum loss going to be? How much will it cost to recover this data in the worst scenario? Let's say 500. And in the best scenario, where not so many records are compromised and you don't have to re-perform any calculations, maybe you can say 200. Then, how sure are you? You can say, okay, I am 95% sure that the cost of this incident will be between 200 and 500.
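Turning an expert's best/worst range plus a confidence level into a distribution is a standard move, and can be sketched as fitting a lognormal whose symmetric confidence interval matches the stated range. This is an illustrative reconstruction, not the speaker's Excel formulas; the 200/500 figures are the ones used above.

```python
# Sketch: fit a lognormal (mu, sigma) so that [low, high] is a symmetric
# two-sided confidence interval at the stated confidence level. A lower
# confidence yields a wider (more uncertain) distribution.
import math
from statistics import NormalDist

def lognormal_from_interval(low, high, confidence=0.95):
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.96 for 95%
    mu = (math.log(low) + math.log(high)) / 2        # log-midpoint -> median
    sigma = (math.log(high) - math.log(low)) / (2 * z)
    return mu, sigma

# "95% sure the cost lies between 200 and 500":
mu, sigma = lognormal_from_interval(200, 500)
print(round(math.exp(mu), 1), round(sigma, 3))  # median 316.2, sigma 0.234
```

Note that the same [200, 500] range stated with only 60% confidence would produce a larger sigma, which is exactly the "confidence level" variable the talk asks to feed into the model.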
Finally, you do the same with the probability: how often do you expect to have such a restoration, because some threats are affecting this asset and this objective? You can say, okay, in the worst scenario I expect one event every 20 years, so that is 5%. And in the best scenario, where you are lucky because you are not the target of a hacking attack, maybe you say one incident every 33 years, roughly 3%. Then you also have the confidence: how sure are you that you will have one incident every 20 to 33 years? You can say, okay, I'm 80% sure. What does 80% sure mean?
It means that you accept a 20% chance that incidents will be either more frequent than once every 20 years or less frequent than once every 33 years. Finally, you get your lognormal curves giving the value of the risk across all the scenarios. And if you want to cover it with a cyber insurance policy, or set aside a reserve to cover this risk, you can decide to cover, say, 60% of the cases, not the worst 30% but the 60% that are most common, and you need to set apart 150 in reserve each year.
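The mechanics of the Excel model the speaker describes can be sketched in a few lines: a frequency assumption draws whether an incident happens in a given year, a lognormal severity draws its cost, and repeating this over many simulated years gives the loss distribution. All figures below are the illustrative 200/500 and 20/33-year numbers from the talk; the sketch does not reproduce the speaker's exact spreadsheet or its 143 result.

```python
# Rare-event Monte Carlo sketch: at most one incident per simulated year,
# with probability event_prob; severity drawn from lognormal(mu, sigma).
import math
import random
import statistics

def simulate_annual_loss(event_prob, mu, sigma, n_years=20000, seed=7):
    rng = random.Random(seed)
    return [
        rng.lognormvariate(mu, sigma) if rng.random() < event_prob else 0.0
        for _ in range(n_years)
    ]

# Illustrative assumptions in the spirit of the talk:
event_prob = (1 / 20 + 1 / 33) / 2               # ~4% chance of an incident/year
mu = (math.log(200) + math.log(500)) / 2         # severity centred on 200-500
sigma = (math.log(500) - math.log(200)) / (2 * 1.96)

losses = simulate_annual_loss(event_prob, mu, sigma)
expected = statistics.fmean(losses)                    # expected annual loss
tail_99 = statistics.quantiles(losses, n=100)[98]      # 99th-percentile year
print(round(expected, 1), round(tail_99, 1))
```

From the simulated distribution you can then read off whatever percentile matches your appetite, whether for an insurance deductible or an annual reserve, which is the decision the speaker walks through.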
This is how the methodology works. Sorry, 143 is the figure. And please let me know if you would like to have this model; I am more than happy to share it with you. It is a native Excel file, running all the Monte Carlo simulations in real time, and it uses a lognormal distribution in which we can be, let's say, pessimistic: there is a long tail of events that could expose the organization to high impacts. That is the tool.
This is what I wanted: to inspire you in 20 minutes. And I think that's all.