As you have certainly already heard, Norsk Hydro, one of the world’s largest aluminum manufacturers and the second biggest hydropower producer in Norway, suffered a massive cyberattack earlier today. According to a very short statement issued by the company, the attack has impacted operations in several of its business areas. To maintain the safety and continuity of its industrial processes, many of the operations had to be switched to manual mode.
The details of the incident are still sparse, but according to the statement at the company’s press conference, it may have been hit by a ransomware attack. Researchers are currently speculating that the malware was most likely LockerGoga, a strain that hit the French engineering company Altran Technologies back in January. This particular strain is notable for having been signed with a valid digital certificate, although the certificate has since been revoked. Also, only a few antimalware products are currently able to detect and block it.
It appears that the IT people at Norsk Hydro are currently trying to contain the fallout from the attack, including asking their employees not to turn on their computers and even shutting down the corporate website. Multiple shifts are working manually at the production facilities to ensure that there is no danger to people’s safety and to minimize financial impact.
We will hopefully see more details about the incident later, but what can we learn from Norsk Hydro’s initial response? First and foremost, we have another confirmation that this kind of incident can happen to anybody. No company, regardless of its industry, size, or security budget, can assume that its business or industrial networks are immune to such attacks, or that it already has controls in place that defend against all possible security risks.
Second, here we have another textbook example of how not to handle public relations during a security incident. We can assume that a company of that scale has at least some kind of plan for worst-case scenarios like this – but does it go beyond playbooks for security experts? Have the company’s executives ever been trained to prepare for this level of media attention? And whose idea was it anyway to limit public communications to a Facebook page?
Studies in other countries (like this report from the UK government) indicate that companies are shockingly unprepared for such occasions, with many lacking even a basic incident response plan. However, even having one on paper does not guarantee that everything will go according to it. The key to effective incident management is preparation, and this should include awareness among all the people involved, clearly defined roles and responsibilities, and access to external experts if needed – but above all else, practice!
KuppingerCole’s top three recommendations would be the following:
- Be prepared! You must have an incident response plan that covers not just the IT aspects of a cyberattack but also the organizational, legal, financial, and public relations means of dealing with its fallout. It is essential that the company’s senior executives are involved in its design and rehearsals, since they will be front and center of any actual operation.
- Invest in the right technologies and products to reduce the impact of cyber incidents as well as those that prevent them from happening in the first place. Keep in mind, however, that no security tool vendor can do the job of assessing the severity and likelihood of your own business risks for you. Also, always have a backup set of tools and even “backup people” ready to ensure that essential business operations can continue even during a full shutdown.
- You will need help from specialists in multiple areas, ranging from cyber forensics to PR, and most companies do not have all those skills internally. Look for partnerships with external experts – and do it before an incident occurs.
If you need neutral and independent advice, we are here to assist you as well!
#RSAC2019 is in the history books, and thanks to the expansion of the Moscone Center, there was ample space in the expo halls to house vendor booths more comfortably. In fact, there seemed to be a record number of exhibitors this year. As always, new IAM and cybersecurity products and services make their debut at RSAC.
Despite the extra room, it can be difficult for the security practitioner and executive to navigate the show floor. Some plan ahead and make maps of which booths to visit; others walk from aisle 100 to the end. It can take a good deal of time to peruse and discover what’s new. But most difficult of all is digesting what we’ve seen and heard, considering it in a business context, and prioritizing possible improvement projects.
Security practitioners tend to hit the booths of vendors they have worked with, those with competing products, and others in their areas of specialty, including startups. For example, an identity architect will likely keep on walking past the “next gen” anti-malware and firewall booths but will stop at the booth offering a new identity proofing service. If a product does something novel or perhaps better than their current vendor’s product, they’ll know it and be open to it, even if it’s a small vendor and it means managing another product or service.
Executives gravitate toward the stack vendors in the front and middle, ignoring the startups on the sides and back. [It’s also increasingly likely execs will have meetings with specific vendors in the hotels surrounding Moscone, and not even set foot in the halls.] Why? IT execs and particularly CISOs are concerned with reducing complexity as well as securing the enterprise. A few stack vendors with consolidated functionality are easier to manage than dozens of point solutions.
Who is right? Well, it depends. Sometimes both, sometimes neither. It depends on knowing your cyber risk in relation to your business and understanding which technology enhancements will decrease your cyber risk and by approximately how much. Oftentimes practitioners and executives disagree on the cyber risk analysis and priorities set as a result.
Risk is the conjunction of consequence and likelihood. At RSAC and other conferences we hear anecdotes of consequences and see products that reduce the likelihood and severity of those consequences. Executives and practitioners alike have to ask, “are the threats addressed by product X something we realistically face?”. If not, implementing it won’t reduce your cyber risk. Or, if there are two or more similar products, which one offers the greatest possible risk reduction?
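This "consequence times likelihood" reasoning can be made concrete with a back-of-the-envelope calculation. The sketch below uses the common annualized loss expectancy formulation; all the dollar figures and incident rates are hypothetical, chosen purely for illustration.

```python
# Risk = consequence x likelihood, expressed as annualized loss
# expectancy (ALE). All numbers below are hypothetical.

def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = expected loss per incident x expected incidents per year."""
    return single_loss * annual_rate

# Hypothetical scenario: a ransomware outage costing $1.2M per incident,
# estimated to occur once every five years (rate = 0.2 per year).
ale_baseline = annualized_loss_expectancy(1_200_000, 0.2)      # 240000.0

# A control that halves the likelihood halves the ALE; if the control
# costs less than the reduction, it is a candidate worth prioritizing.
ale_with_control = annualized_loss_expectancy(1_200_000, 0.1)  # 120000.0
risk_reduction = ale_baseline - ale_with_control               # 120000.0
```

The same arithmetic also answers the "which of two similar products?" question: the one with the larger expected ALE reduction per dollar spent wins, given your organization's actual threat likelihoods.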
The biggest risk is that the decision-makers don’t truly understand the threats and risks they face. There are cases where SMBs have built defenses against zero-day APTs that will never come their way yet have neglected to automate patch management or user de-provisioning. In other cases, a few big enterprises have naively dismissed the possibility that they could be the target of corporate or foreign state espionage and failed to protect against such attacks.
The riskiest time for organizations is the period when executive leadership changes, and for 12-18 months afterward, or even longer. If an organization brings in a CIO or CISO from a different industry, it takes time for that person to learn the lay of the land and the unique challenges under which the organization operates. Long-held strategies and roadmaps get re-evaluated and changed. Mid-level managers and practitioners may leave during this time. The organization’s overall cybersecurity posture is weakened during the transition period. Adversaries know this too.
Risk is a difficult subject for humans to grasp. No one gets it right all the time. Risk involves processing probabilities, and our brains didn’t really evolve to do that well. For an excellent in-depth look at that subject, read Leonard Mlodinow’s book The Drunkard’s Walk.
External risk assessments and benchmarks can be good mechanisms for overcoming these circumstances: when tech teams and management disagree on priorities, when one or more parties is unsure of the likelihood of threats and risks, and when executive leadership changes. Having an objective view from advisors experienced in your particular industry can facilitate the re-alignment of tactics and strategies that can reduce cyber and overall risk. For information on the types of assessments and benchmarking KuppingerCole offers, see our advisory offerings.
Trust has somehow become a marketing buzzword recently. There is a lot of talk about “redefining trust”, “trust technologies” or even “trustless models” (the latter usually applied to Blockchain, of course). To me, this has always sounded… weird.
After all, trust is the foundation of the very society we live in, the key notion underlying the “social contract” that allows individuals to coexist in a mutually beneficial way. For businesses, trust has always been the combined result of two crucial driving forces – reputation and regulation. Gaining a trustworthy reputation takes time, but ruining it can be instantaneous – and it is usually in a business’s best interest not to cheat its customers, or at least not to get caught (and that’s exactly where regulation comes into play!). Through a lengthy process of trial and error, we have more or less figured out how to maintain trust in traditional “tangible” businesses. And then the Digital Transformation happened.
Unfortunately, the dawn of the digital era has not only enabled many exciting new business models but also completely shattered the existing checks and balances. On one hand, the growing complexity of IT infrastructures and the resulting skills shortage have made sensitive digital data much more vulnerable to cyberattacks and breaches. On the other hand, unburdened by regulations and free from public scrutiny, many companies have decided that the lucrative business of hoarding and reselling personal information is worth more than any moral obligation towards their customers. In a way, the digital transformation has brought the Wild West mentality back to modern business – complete with gangs of outlaws, bounty hunters, and snake oil peddlers…
All this has led to a substantial erosion of public trust – between another high-profile data breach and another political scandal about harvesting personal data, people no longer know whom to trust. From banks and retailers to social media and tech companies – this “trust meltdown” isn’t just bad publicity; it leads to substantial brand damage and financial losses. The recent introduction of strict data protection regulations like GDPR, with their massive fines for privacy violations, is a sign that legislation is finally catching up, but will compliance alone fix the trust issue? What other methods and technologies can companies utilize to restore their reputations?
Well, the first and foremost measure is always transparency and open communication with customers. And this isn’t just limited to breach disclosure – on the contrary, companies must demonstrate their willingness to improve data protection and educate customers about the hidden challenges of the “digital society”. Another obvious approach is simply minimizing personal data collection from customers and implementing proper consent management. Sure, this is already one of the primary stipulations of regulations like GDPR, but compliance isn’t even the primary benefit here: for many companies, the cost savings on data protection and the reputation improvements alone will already outweigh the potential (and constantly dwindling) profits from collecting more PII than necessary.
Finally, we come to the notion of security and privacy “by design”. This term has also become a buzzword for security vendors eager to sell you another data protection or cybersecurity solution. Again, it’s important to stress that just purchasing a security product does not automatically make a business more secure and thus more trustworthy. However, incorporating certain security- and privacy-enhancing technologies into the very fabric of your business processes may, in fact, bring noticeable improvements, and not just to your company’s public reputation.
Perhaps the most obvious example of such a technology is encryption. It’s ubiquitous, cheap to implement, and gives you a warm feeling of safety, right? Yes, but making encryption truly comprehensive and end-to-end, ensuring that it covers all environments from databases to cloud services, and, last but not least, that the keys are managed properly is no easy challenge. However, to make data-centric security the foundation of your digital business, you would need to go deeper still. Without identity, modern security simply cannot fulfill its potential, so you’ll need to add dynamic centralized access control to the mix. And then security monitoring and intelligence with a pinch of AI. Thus, step by step, you’ll eventually reach the holy grail of modern IT – Zero Trust (wait, weren’t we going to boost trust, not get rid of it? Alas, that’s the misleading nature of many popular buzzwords nowadays).
For software development companies, investing in security by design can look complicated at first, too. From source code testing to various application hardening techniques to API security – writing secure applications is hard, and modern technologies like containers and microservices make it even harder, don’t they? This could not be further from the truth, however: modern development methodologies like DevOps and DevSecOps in fact focus on reducing the strain on programmers through intelligent automation, unified architectures across hybrid environments, and a better experience for users, who are learning to appreciate programs that do not break under high load or cyberattacks.
But it does not even have to be that complicated. Consider Consumer Identity and Access Management platforms, for example. Replacing a homegrown user management system with such a platform not only dramatically improves the experience for your current and potential customers – with built-in privacy and consent management features, it also gives users better control over their online identities, boosting their trust considerably. And in the end, you get to know your customers better while reducing your own investments into IT infrastructure and operations. It can’t really get better than this.
You see, trust, privacy, and security don’t have to be a liability and a financial burden. With an open mind and a solid strategy, even the harshest compliance regulations can be turned into new business enablers, cost-saving opportunities and powerful messages to the public. And we are always here to support you on this journey.
The Wrong Click: It Can Happen to Anyone of Us
The dream of being able to create systems that can simulate human thought and behaviour is not new. Now that this dream appears to be coming closer to reality there is both excitement and alarm. Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race”. Should we be alarmed by these developments and what in practice does this mean today?
The origins of today’s AI (Artificial Intelligence) can be traced back to the seminal work on computers by Dr Alan Turing. He proposed an experiment that became known as the “Turing Test”, to define the standard for a machine to be called "intelligent". A computer could only be said to "think" if a human was not able to distinguish it from a human being through a conversation with it.
The theoretical work that underpins today’s AI and ML (Machine Learning) was developed in the 1940s and 1950s. The early computers of that era were slow and could only store limited amounts of data, which restricted what could practically be implemented. This has now changed – the cloud provides the storage for vast amounts of data and the computing power needed for ML.
The theoretical basis for ML stems from work published in 1943 by Warren McCulloch and Walter Pitts on a computational model for neural networks based on mathematics and algorithms called threshold logic. Artificial neural networks provide a framework for machine learning algorithms to learn based on examples without being formally programmed. This learning needs large amounts of data and the significant computing power which the cloud can provide.
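The threshold logic McCulloch and Pitts described can be illustrated with a few lines of code: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. This is a toy sketch of the concept only; the specific weights and thresholds below are illustrative choices, not taken from the 1943 paper.

```python
# A toy McCulloch-Pitts threshold unit: it "fires" (returns 1) when the
# weighted sum of its binary inputs reaches the threshold.

def threshold_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND:
assert threshold_neuron([1, 1], [1, 1], 2) == 1
assert threshold_neuron([1, 0], [1, 1], 2) == 0

# Lowering the threshold to 1 turns the same unit into logical OR:
assert threshold_neuron([0, 1], [1, 1], 1) == 1
assert threshold_neuron([0, 0], [1, 1], 1) == 0
```

Modern artificial neural networks replace the hard threshold with smooth activation functions and, crucially, learn the weights from examples rather than having them set by hand, which is where the demand for large data sets and computing power comes from.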
Analysing the vast amount of data now available in the cloud creates its own challenges, and ML provides a potential solution to these. Normal statistical approaches may not be capable of spotting patterns that a human would see, and programming individual analyses is laborious and slow. ML provides a way to supercharge the human ability to analyse data. However, it changes the development cycle from programming to training based on curated examples overseen by human trainers. Self-learning systems may provide a way around the programming bottleneck. However, the training-based development cycle creates new challenges around testing, auditing and assurance.
ML has also provided a way to enhance algorithmic approaches to understanding visual and auditory data. It has for example enabled facial recognition systems as well as chatbots for voice-based user interactions. However, ML is only as good as the training and is not able to provide explanations for the conclusions that it reaches. This leads to the risk of adversarial attacks – where a third party spots a weakness in the training and exploits this to subvert the system. However, it has been applied very successfully to visual component inspection in manufacturing where it is faster and more accurate than a human.
One significant challenge is how to avoid bias – there are several reported examples of bias in facial recognition systems. Bias can come from several sources. There may be insufficient data to provide a representative sample. The data may have been consciously or unconsciously chosen in a way that introduces bias. This latter is difficult to avoid since every human is part of a culture which is inherently founded on a set of shared beliefs and behaviours which may not be the same as in other cultures.
Another problem is one of explanation – ML systems are not usually capable of providing an explanation for their conclusions. This makes training ML doubly difficult because when the system being trained gets the wrong answer it is hard to figure out why. The trainer needs to know this to correct the error. In use, an explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify why a decision was taken in a court of law.
A third problem is that ML systems do not have what most people would call “common sense”. This is because currently each is narrowly focussed on one specialized problem. Common sense comes from a much wider understanding of the world and allows a human to recognize and discard what may appear to be a logical conclusion because, in the wider context, it is clearly stupid. This was apparent when Microsoft released a chatbot that was supposed to train itself but did not recognize mischievous behaviour.
Figure: AI, Myths, Reality and Challenges
In conclusion, AI systems are evolving, but they have not yet reached the state portrayed in popular science fiction. ML is ready for practical application, and major vendors offer tools to support this. The problems to which AI can readily be applied today can be described in two dimensions – the scope of knowledge required and the need for explanation. Note that the need for explanation is related to the need for legal justification or where the potential consequences of mistakes are high.
Organizations are recommended to look for applications that fit the green area in the diagram and to use caution when considering those that would lie in the amber areas. The red area is still experimental and should only be considered for research.
For more information on this subject attend the AI track at EIC in Munich in May 2019.
Hype topics are important. They are important for vendors, startups, journalists, consultants, analysts, IT architects and many more. The problem with hypes is that they have an expiration date. Who remembers 4GL or CASE tools as an exciting discussion topic in IT departments? Well, exactly, that's the point...
From that expiration date on, they either have to be put to some very good purpose within a reasonable period of time, or they turn out to be hot air. There have been quite a few hype topics lately. Think for example of DevOps, Machine Learning, Artificial Intelligence, IoT, Containers and Microservices, Serverless Computing, and the Blockchain. All of these will be evaluated against their impact in the real world. The Blockchain can even be called a prototype for hype topics. The basic concept of trust in hostile environments through technology and the implementation of cryptocurrencies laid the groundwork for an unparalleled hype. However, there are still no compelling new implementations of solutions using this technology that any IT-savvy hype expert could refer to immediately.
This week I attended the Berlin AWS Summit as an analyst for KuppingerCole. Many important (including many hype) topics, which have now arrived in reality, were looked at in the keynotes, combined with exciting success stories and AWS product and service offerings. These included migration to the cloud, big data, AI and ML, noSQL databases, more AI and ML, containers and microservices, data lakes and analytics, even more AI and ML and much more that is available for immediate use in the cloud and "as a service" to today's architects, developers and creators of new business models.
But if you were inattentive for just a moment, you could have missed the first appearance of the Blockchain topic: at the bottom of the presentation slide about databases, in the column "Purpose-Built", you could find "Document DBs", "Key-Value", "In-Memory", "Time Series" and Graph databases, as well as "Ledger: Amazon QLDB".
The word "Blockchain" did not even appear – a clear technological and conceptual categorization.
Behind this first dry mention is the concept of QLDB as a fully managed ledger solution in the AWS cloud, announced on the next presentation slide as "a transparent, immutable, cryptographically verifiable transaction log owned by a central trusted authority" – which many purists will not even think of as a Blockchain. Apart from that, AWS also provides a preview of a fully managed Blockchain based on Hyperledger Fabric or Ethereum.
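The core idea behind an "immutable, cryptographically verifiable transaction log" can be sketched in a few lines: each record embeds a hash of its predecessor, so silently altering any past entry breaks every subsequent link. To be clear, this is a conceptual illustration only – it says nothing about how Amazon QLDB is actually implemented.

```python
# A minimal hash-chained ledger: tampering with history is detectable
# because each entry's hash covers the previous entry's hash.
# Conceptual sketch only, not an implementation of Amazon QLDB.
import hashlib
import json

def append(ledger, record):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    ledger.append({"record": record, "prev": prev,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger):
    prev = "0" * 64
    for entry in ledger:
        body = json.dumps({"record": entry["record"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"tx": "alice->bob", "amount": 10})
append(ledger, {"tx": "bob->carol", "amount": 4})
assert verify(ledger)

ledger[0]["record"]["amount"] = 1_000_000  # tamper with history...
assert not verify(ledger)                  # ...and verification fails
```

A "centrally owned" ledger like this differs from a Blockchain mainly in governance, not cryptography: one trusted authority appends entries instead of a decentralized consensus of untrusting parties, which is precisely why purists hesitate to call it a Blockchain.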
This development, which has of course already manifested itself in several comparable offerings from competitors, is not the end, but probably only the beginning of the real Blockchain hype. It proves that there is demand for these conceptual and technological building blocks and that this technology is here to stay.
This corresponds directly and strikingly accurately to the development depicted in the trend compass for Blockchain and Blockchain Identity that Martin Kuppinger presented in this video blog post. Less hype, less investment volume, but much better understood.
Figure: The Trend Compass - Blockchain Hype
Like every good hype topic that is getting on in years, it has lost a bit of its striking attractiveness to laymen, but gained in maturity for IT, security and governance professionals. In practice, however, it can now play a central role in the choice of the adequate tools for the right areas of application. And we will for sure need trust in hostile environments through software, technology and processes in the future.
The QLDB product offered by AWS and the underlying concept cited above are certainly not the only possible and meaningful form of Blockchain – or of a decentralized, distributed, and public digital ledger in general. But for an important class of applications of this still disruptive technology, another efficient and cost-effective implementation for real life (beyond the hype) becomes available. Having the Blockchain available in such an accessible form will potentially drive it, in a maturing market, toward the upper right sector of the trend compass – an established technology with substantial market volume – even if it might not be explicitly called "Blockchain" in every context.
An organization’s need to support communication and collaboration with external parties such as business partners and customers is just as essential a technical foundation today as it has been in the past. Web Access Management and Identity Federation are two vital and inseparable technologies that organizations can use to manage access to and from external systems, including cloud services, consistently. While the core Web Access Management and Identity Federation technologies have been well established for years, organizations still need a strategic approach to address the growing list of requirements that can support a Connected and Intelligent Enterprise.
New IT challenges are driving the shift in IT from a traditional, internal-facing approach towards an open IT infrastructure supporting this Connected and Intelligent Enterprise. At the core of these changes is the need to become more agile in an increasingly complex and competitive business environment. Because of this, business models have to adapt more rapidly, and organizations need to react more quickly to new attack vectors that are continually changing. Having a Connected Enterprise means that organizations have to deal with more and larger user populations than ever before. Given these new challenges, the technologies that help to support this complex and changing landscape include Cloud, Mobile, Social and Intelligent Computing.
As the changing workforce looks to work from anywhere on any device, organizations face a growing need to manage mobile devices. Among the other relevant technologies are new types of cloud-based directory services as well as various other kinds of Cloud services, including Cloud Identity Services that give flexibility and control for both internal and external identities. Support for social logins such as Facebook, Google+, etc., is also needed and is now considered standard for established Cloud Service Providers today. In addition to the foundational Access Management and Identity Federation capabilities, improvements to authentication and authorization technologies such as risk- and context-based Access Management, sometimes called “adaptive” authentication and authorization, are needed too.
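The idea behind risk- and context-based ("adaptive") authentication can be sketched as a scoring function: signals from the login context feed a risk score, and the score determines how much authentication friction to apply. The signal names, weights, and thresholds below are hypothetical examples, not taken from any specific product.

```python
# A simplified sketch of adaptive authentication: contextual signals
# produce a risk score, and the score selects the auth requirement.
# All weights and thresholds are hypothetical.

def risk_score(known_device: bool, usual_country: bool,
               off_hours: bool, failed_attempts: int) -> int:
    score = 0
    if not known_device:
        score += 30   # unrecognized device
    if not usual_country:
        score += 40   # login from an unusual geolocation
    if off_hours:
        score += 10   # outside normal working hours
    score += min(failed_attempts, 5) * 5  # recent failed logins
    return score

def auth_requirement(score: int) -> str:
    if score < 30:
        return "password"      # low risk: single factor suffices
    if score < 60:
        return "password+mfa"  # medium risk: step-up authentication
    return "deny"              # high risk: block and alert

# Familiar device, usual country, business hours -> low friction:
assert auth_requirement(risk_score(True, True, False, 0)) == "password"
# New device, off hours -> step-up to MFA:
assert auth_requirement(risk_score(False, True, True, 0)) == "password+mfa"
# New device from an unusual country -> deny outright:
assert auth_requirement(risk_score(False, False, False, 0)) == "deny"
```

Real products evaluate far richer signals (device fingerprints, IP reputation, behavioral biometrics) and often use learned models rather than fixed weights, but the decision structure is the same: context in, authentication requirement out.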
Figure: Overall Leadership rating for the Access Management and Federation market segment
In the market segment of Web Access Management and Identity Federation, KuppingerCole is seeing an evolutionary shift in vendor solutions towards the support of the Connected and Intelligent Enterprise in various degrees. In the latest Web Access Management and Identity Federation Leadership Compass, we evaluated 15 vendors in this market segment as depicted here in this overall leadership chart. So, when considering your organizational requirements for Web Access Management and Identity Federation, you should also think about how your IT infrastructure is connecting and intelligently adapting on-premise IT to the outer world in its many different and changing ways.
To get the latest information on the market, that includes detailed technical descriptions of the leading solutions, see our most recent Web Access Management and Identity Federation Leadership Compass.
Blockchain - Just a Hype?
Beyond the new data privacy regulations: how to improve customer understanding and the customer experience?
When it comes to state-of-the-art sales and marketing, customer experience (CX) is a highly important topic. Creating and analyzing outstanding customer journeys while considering attractive and suitable marketing touchpoints are seen as key to success when it comes to omnichannel marketing.
The customer experience depends on many factors, all of which have to be considered in terms of strategic and operational marketing. A key topic is the individualization of various marketing touchpoints. Individual content, recommendations, user interfaces, and product offers can lead to a win-win situation: resulting in improved customer satisfaction and marketing success.
Artificial intelligence is a recent megatrend facilitating the customer experience—enabling advanced profiling, predictions, and continuous improvements to be made. The emerging Internet of Things (IoT) is opening the door to consumer’s living rooms, enabling “everywhere marketing.”
But what about privacy? Undoubtedly, data protection must be part of the story—whether you like it or not. Individual customer journeys depend on the processing of personally identifiable information (PII), which is restricted by regulations in various countries, such as GDPR.
In any case, complying with the relevant legislation is mandatory. Creating an outstanding customer experience while balancing privacy and marketing is a challenge.
Consumers are well aware of the issue of privacy, e.g., due to the latest news about social network data leaks. Furthermore, new technologies such as AI and IoT are seen as critical points of concern by many consumers, as they are not yet fully aware of how these technologies handle their privacy.
Nevertheless, as long as customers are convinced that their data is in safe hands and they see an added value of providing it, they will give consent to process their PII. Transparency and customer understanding are thus essential in this context; this can be achieved e.g., by providing context-oriented information in addition to mandatory privacy policies, or by providing easily configurable privacy centers.
In the end, it can even be a marketing opportunity to convince consumers and customers to provide PII – not only because your company has a high reputation related to its core business, but also because it is seen as a champion of privacy. This can lead to sustainable trust—a typical marketing goal.
KuppingerCole provides extensive research and advisory helping privacy and marketing stakeholders to combine the best of their two worlds.
Our research documents give insights into many topics to be considered and balanced when it comes to creating trustful customer experiences, such as marketing automation, consumer identity management, artificial intelligence, privacy, security, and governance.
We would be pleased to welcome you to experience a customer journey with KuppingerCole—offering many fresh insights in terms of data privacy and CX.
According to the Ponemon Institute, cyber incidents that take over 30 days to contain cost $1M more than those contained within 30 days. However, less than 25% of organizations surveyed globally say that their organization has a coordinated incident response plan in place. In the UK, only 13% of businesses have an incident management process in place, according to a government report. This shows a shocking lack of preparedness, since it is a question of when, not if, your organization will be the target of a cyber-attack.
Last week on January 24th I attended a demonstration of IBM’s new C-TOC (Cyber Tactical Operations Centre) in London. The C-TOC is an incident response centre housed in an 18-wheel truck. It can be deployed in a wide range of environments, with self-sustaining power, an on-board data centre and cellular communications to provide a sterile environment for cyber incident response. It is designed to provide companies with immersion training in high pressure cyber-attack simulations to help them to prepare for and to improve their response to these kinds of incidents.
The key to managing incidents is preparation. There are three phases to a cyber incident: the events that lead up to it, the incident itself, and what happens afterwards. Prior to the incident, the victim may have missed opportunities to prevent it. When the incident occurs, the victim needs to detect what is happening and to manage and contain its effects. After the incident, the victim needs to respond in a way that not only manages the cyber-related aspects but also deals with potential customer issues as well as reputational damage.
Prevention is always better than cure, so it is important to continuously improve your organization’s security posture, but you still need to be prepared to deal with an incident when it occurs.
The so-called Y2K (Millennium) bug is an example of an incident that was so well managed that some people believe it was a myth. In fact, I, like many other IT professionals, spent the turn of the century in a bunker, ready to help any organization experiencing the problem. I am glad to say that the biggest problem I encountered came the next morning, when I returned to my hotel and had to climb six flights of stairs because the lifts had been disabled as a precaution. Many pieces of software contained the error, and it was only through recognizing the problem, rigorously preparing to remove the bug, and planning to deal with it where it arose that major problems were averted.
In the IBM C-TOC I participated in a cyber response challenge involving a fictitious international financial services organization called “Bane and Ox”. This organization has a cyber security team and a so-called “Fusion Centre” to manage cyber security incident response. The exercise started with an HR onboarding briefing welcoming me into the team.
We were then taken through an unfolding cyber incident and asked to respond to events as they occurred: phone calls from the press, attempts to steal money via emails exploiting the situation, a ransom demand, physical danger to employees, customers claiming that their money was being stolen, a data leak, and an attack on the bank’s ATMs. I then underwent a TV interview about the bank’s response to the event, with hostile questioning by the news reporter. Not a pleasant experience!
According to IBM, organizations need a clear statement of the “Commander’s Intent”. This ensures that everyone works towards a common goal that can still be understood under pressure, when difficult decisions must be made. IBM gave the example that the D-Day Commander’s Intent statement was “Take the beach”.
The next priority is to collect information. “The first call is the most important”, whether it comes from the press, a customer or an employee. You need to get the details, check them, and determine the credibility of the source.
You then need a process to determine where the problems lie, to take corrective action, and to inform regulators and other parties as necessary. This is not easy unless you have planned and prepared in advance. Everyone needs to know what they must do, and management cover is essential to ensure that resources and budget are available as needed. It may also be necessary to allow deviation from normal business processes.
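To make this concrete, the intake-and-verify step can be modelled in a few lines of code. The following Python sketch is purely illustrative (the class names, credibility levels and the escalation rule are my own assumptions, not any IBM or KuppingerCole methodology): it captures the details of that all-important first call, tracks whether they have been corroborated, and applies a toy rule for when regulators should be informed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Credibility(Enum):
    """How far the reported details have been verified."""
    UNVERIFIED = "unverified"
    CORROBORATED = "corroborated"


@dataclass
class IncidentReport:
    """Records the details of the 'first call' about a suspected incident."""
    source: str                      # e.g. "press", "customer", "employee"
    details: str
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    credibility: Credibility = Credibility.UNVERIFIED

    def corroborate(self, evidence: str) -> None:
        """Upgrade credibility once the details have been checked."""
        self.credibility = Credibility.CORROBORATED
        self.details += f"\n[corroborated] {evidence}"


def requires_regulator_notification(report: IncidentReport,
                                    involves_personal_data: bool) -> bool:
    """Toy escalation rule: notify regulators only for corroborated
    incidents that involve personal data (cf. breach-reporting duties
    such as those under the GDPR)."""
    return (involves_personal_data
            and report.credibility is Credibility.CORROBORATED)
```

In practice the escalation logic would live in a playbook rather than code, but even a sketch like this forces the team to decide in advance who verifies a report and what triggers notification.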
Given the previously mentioned statistics on organizational preparedness for cyber incidents, many organizations need to take urgent action. The preparation needed involves many parts of the organization, not just IT; it must be supported at board level and involve senior management. Sometimes the response will require faster decision making, with the ability to bypass normal processes, and only senior management can ensure that this is possible. An effective response needs planning, preparation and, above all, practice.
- Obtain board level sponsorship for your incident response approach;
- Identify the team of people / roles that must be involved in responding to an incident;
- Ensure that it is clear what constitutes an incident and who can invoke the response plan;
- Make sure that you can contact the people involved when you need to;
- You will need external help – set up the agreement for this before you need it;
- Planning, preparation and practice can avoid pain and prosecution;
- Practice, practice and practice again.
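The checklist above can even be turned into a simple readiness self-assessment. The sketch below is a hypothetical illustration (the field names and the six-month practice threshold are my assumptions, not a formal standard): each checklist item becomes a field of a plan, and a gap report lists whatever is still missing.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ResponsePlan:
    """One field per item of the preparation checklist."""
    board_sponsor: str = ""                      # board-level sponsorship
    team_roles: List[str] = field(default_factory=list)  # roles to involve
    incident_definition: str = ""                # what constitutes an incident
    invokers: List[str] = field(default_factory=list)    # who can invoke it
    contact_list_verified: bool = False          # can you reach everyone?
    external_help_contract: bool = False         # agreement set up in advance
    last_exercise_days_ago: int = 9999           # when was it last practised?


def readiness_gaps(plan: ResponsePlan) -> List[str]:
    """Return a list of unmet checklist items; empty means ready."""
    gaps = []
    if not plan.board_sponsor:
        gaps.append("no board-level sponsor")
    if not plan.team_roles:
        gaps.append("response roles not identified")
    if not plan.incident_definition:
        gaps.append("no definition of what constitutes an incident")
    if not plan.invokers:
        gaps.append("nobody authorised to invoke the plan")
    if not plan.contact_list_verified:
        gaps.append("contact details not verified")
    if not plan.external_help_contract:
        gaps.append("no agreement for external help")
    if plan.last_exercise_days_ago > 180:
        gaps.append("plan not practised in the last six months")
    return gaps
```

Running `readiness_gaps(ResponsePlan())` on an empty plan returns all seven gaps; a fully populated and recently exercised plan returns an empty list.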
KuppingerCole Advisory Note: GRC Reference Architecture – 72582 provides some advice on this area.