Blog posts by Martin Kuppinger

OpenC2 – Standards for Faster Response to Security Incidents

Recently, I came across a rather new and interesting standardization initiative, driven by the NSA (U.S. National Security Agency) and several industry organizations, both Cyber Defense software vendors and system integrators. OpenC2 describes itself as “a forum to promote global development and adoption of command and control” and states the following vision:

The OpenC2 Forum defines a language at a level of abstraction that will enable unambiguous command and control of cyber defense technologies. OpenC2 is broad enough to provide flexibility in the implementations of devices and accommodate future products and will have the precision necessary to achieve the desired effect.

The reasoning behind it is that an effective prevention, detection, and immediate response to cyber-attacks requires not only isolated systems, but a network of systems of various types. These functional blocks must be integrated and coordinated, to act in a synchronized manner, and in real-time, upon attacks. Communication between these systems requires standards – and that is what OpenC2 is working on.

This topic aligns well with Real-Time Security Intelligence, an evolving area of software solutions and managed services that KuppingerCole has been analyzing for several years. The main software and service offerings in that area are Security Intelligence Platforms (SIP) and Threat Intelligence Services. SIPs provide advanced analytical capabilities for identifying anomalies and attacks, while Threat Intelligence Services deliver information about newly detected incidents and attack vectors.

For moving from prevention (e.g. traditional firewalls) to detection (e.g. SIPs) to response, OpenC2 can play an important role, because it allows taking standardized actions based on a standardized language. This allows, for example, a SIP system to coordinate with firewalls for changing firewall rules, with SDNs (Software Defined Networks) for isolating systems targeted by the attacks, or with other analytical systems for a deeper level of analysis.
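To make the idea concrete, here is a minimal sketch of what such a standardized command might look like. The JSON shape (an action plus a target) follows early OpenC2 drafts; the exact field names used here are an assumption and may differ in the final specification.

```python
import json

def make_openc2_command(action, target_type, target_value):
    """Build a minimal OpenC2-style command as a Python dict.

    This is a sketch of the command shape discussed in early OpenC2
    drafts (an action applied to a target); field names are assumed
    and may differ from the final specification.
    """
    return {"action": action, "target": {target_type: target_value}}

# Example: ask a firewall to block traffic from a suspicious network.
command = make_openc2_command("deny", "ipv4_net", "198.51.100.0/24")
encoded = json.dumps(command)
```

A SIP that detects an attack could emit such a command to any OpenC2-aware device, which then translates it into its own rule syntax – the point of the standard being that the sender does not need to know each vendor's native interface.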

OpenC2 thus is a highly interesting initiative that can become an important building block in strengthening cyber defense. I strongly recommend looking at that initiative and, if your organization is among the players that might benefit from such language, actively engaging therein.

Follow-Up on “Managing the User's Consent Life Cycle: Challenges, GDPR Compliance and (Business) Rewards”

The GDPR continues to be a hot topic for many organizations, especially those that store and process customer data. A core requirement for compliance with the GDPR is the concept of “consent,” which is fairly new for most data controllers. New with the GDPR is that parties processing personally identifiable information must ask the user for his/her consent to do so and let the user revoke that consent at any time, as easily as it was given.

During the KuppingerCole webinar held on April 4th, 2017 and supported by iWelcome, several questions from attendees were left unanswered due to the huge number of questions and a lack of time to answer them all.

Several questions centered on the term “Purpose,” which is key for data processing, but many more interesting questions came up that we think are worth following up on here. Corne van Rooij answers some of the questions that couldn’t be addressed live during the webinar.

Q: Is purpose related to your specific business, or to more generic things like marketing, user experience management, research, etc.?

Corne van Rooij: Purpose is referring to “the purpose of the processing” and should be specific, explicit and legitimate. “Marketing” (or any other generic thing) is not specific enough; it should state what kind of marketing actions, like profiling or specifically tailored offerings.

Q: Is it true that data collection purely for the fulfillment of contractual obligations and selling a product doesn't require consent?

Corne van Rooij: Yes, that is true, however, keep in mind that data minimisation requires you only to collect data you will actually need to use for the fulfillment of the contract. The collection of extra data or ‘future use’ of data that is not mandatory to fulfill the contract does not fall under this and needs additional consent or another legal basis (Article 6) like “compliance with a legal obligation.“

Q: It appears consent is changing from a static to a dynamic concept.  How can a company manage numerous consent request programs and ensure the right consent is requested at the right time and in the right context?

Corne van Rooij: A very good question and remark. Consent needs its own life cycle management, as it will change over time unless your business is very static itself. The application (e.g. the eBusiness portal) should check whether the proper consent is in place and trigger a consent request if not, or trigger an update (of consent or scope) if needed. If the consent status ‘travels’ with the user when he accesses the application/service, let’s say in an assertion, then the application/service can easily check and trigger (or ask itself) for consent or a scope change, and register the consent back in the central place that sent the assertion in the first place, thus closing the loop. Otherwise, the application needs to check the consent (via an API call) before it can act, ask for consent if needed, and write it back (via an API call).
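The check/ask/write-back loop described in this answer can be sketched in a few lines. The store and function names here are illustrative only, not a real product API:

```python
# A minimal sketch of consent life cycle checking: the application
# consults a central consent registry before acting, asks the user if
# consent is missing, and writes the answer back (closing the loop).

consent_store = {}  # central registry: (user, purpose) -> bool

def has_consent(user, purpose):
    return consent_store.get((user, purpose), False)

def record_consent(user, purpose):
    consent_store[(user, purpose)] = True

def access_service(user, purpose, ask_user):
    # Check consent before acting; if missing, ask the user and
    # register the result back in the central place.
    if not has_consent(user, purpose):
        if not ask_user(user, purpose):
            return "denied"
        record_consent(user, purpose)
    return "granted"

# A user who grants consent when asked gets access, and the consent
# is now on record for subsequent requests.
result = access_service("alice", "newsletter", lambda u, p: True)
```

In a federated setup, the same check would run against the consent status carried in the assertion, with the write-back going to the identity provider that issued it.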

Q: How does the new ePR, published on 10 January 2017, impact consent?

Corne van Rooij: The document published on 10th January 2017 is a proposal for a new E-Privacy Regulation. If it comes into force, it will not change the implications of the GDPR on ‘consent’; it covers other, complementary topics that can require consent. The proposal updates E-Privacy rules in line with market developments and the GDPR and covers topics like cookies and unsolicited communication. It is the update of an already existing EU Directive, last amended in 2009.

Q: If I understand it correctly, I can't collect any data from a first-time visitor to an eCommerce website, and I will first have to give him the possibility to identify himself in order to get into the consent flow?

Corne van Rooij: No, this is actually not true. You can collect data, e.g. based on certain types of cookies for which permission is not required (following ePR rules) and that data could be outside GDPR if you can’t trace it back to an actual individual. If you let him/her register himself for e.g. a newsletter and you ask personal information, then it falls under the GDPR. However, you might be able to stay away from asking consent if you can use another legal basis stated in article 6 for lawful processing. Let’s say a person wants to subscribe to an online magazine, then you only need the email address, and as such, that is enough to fulfill “the contract.” If you ask more, e.g. name, telephone number, etc., which you don’t actually need, then you need to use consent and have to specify a legitimate purpose.

Q: For existing user (customer) accounts, is there a requirement in GDPR to cover proof of previously given consent?

Corne van Rooij: You will have to justify that the processing of the personal data you keep is based on one of the grounds of Article 6, “Lawfulness of processing.” If your legal basis is consent, you will need proof of this consent, and if consent was given in several steps, proof of all these consents has to be in place.

Q: Please give more detailed information on how to handle all already acquired data from customers and users.

Corne van Rooij: In short: companies need to check what personal data they process and have in their possession. They must then delete or destroy the data when the legal basis for processing is no longer there, or when the purpose for which the data was obtained or created has been fulfilled.

If the legal basis or the purpose has changed, the data subject needs to be informed, and new consent might be necessary.  Also, when proof of earlier given consent is not available, the data subject has to be asked for consent again.

Q: So there is no need to erase already acquired user/customer/consumer data, as long as it is not actively used? E.g. for already provisioned customer data, especially where the use of personal data had already been agreed to by accepting agreements before? Is there a need to renew the request for data use when the GDPR goes live?

Corne van Rooij: There is a difference between “not actively used” and “no legal basis or allowed purpose for using it.” If it’s the latter, you need to remove the data or take action to meet the GDPR requirements. Processing that is necessary for the performance of a contract can comply with Article 6, as the GDPR is, for most of these things, not new: there was already a lot of national legislation in place, based on the EU Directive, which also covers the topic of the lawfulness of processing.

Q: How long are you required to keep the consent information, after the customer has withdrawn all consents and probably isn't even your customer anymore?

Corne van Rooij: We advise that you keep proof of consent for as long as you keep the personal data. This is often long after consent is withdrawn, as companies have legal obligations to keep data under, for instance, business and tax laws.

GDPR as an Opportunity to Build Trusted Relationships with Consumers

During the KuppingerCole webinar held on March 16th, 2017 and supported by ForgeRock, several questions from attendees were left unanswered due to the huge number of questions and a lack of time to cover them all. Here are answers to questions that couldn’t be answered live during the webinar.

Q: How does two factor authentication play into GDPR regulations?

Karsten Kinast: Two factor authentication does not play into GDPR at all.

Martin Kuppinger: While two factor authentication is not a topic of GDPR, it e.g. plays a major role in another upcoming EU regulation, the PSD2 (revised Payment Services Directive), which applies to electronic payments.

Q: How do you see North American companies adhering to GDPR regulations? Do you think it will take a fine before they start incorporating the regulations into their privacy and security policies?

Eve Maler: As I noted on the webinar itself, from my conversations, these companies are even slower than European companies (granting Martin's point that European companies are far from full awareness yet) to "wake up". It seems like a Y2K phenomenon for our times. We at ForgeRock spend a lot of time working with digital transformation teams, and we find they have much lower awareness than risk teams. So, we encourage joint stakeholder conversations, so that those experienced in the legal situation and those experienced in A/B testing of user experience flows can get together and do better at building trusted digital relationships!

Karsten Kinast: My experience is that North American companies are adhering better and preparing more intensely for the upcoming GDPR than companies elsewhere. So I don’t think it will take fines, because they have already started preparing.

Q: Sometimes, there seems to be a conflict between the “right to be forgotten” and practical requirements, e.g. for clinical trial data. Can consent override the right to be forgotten?

Karsten Kinast: While there might be a consent, the consent can be revoked. Thus, using consent to override the right to be forgotten will not work in practice.

Q: The fines for violating the GDPR can be massive: up to €20 million or 4% of annual group revenue, whichever is higher. Can the fines be paid over a period of time, or be compensated by e.g. training?

Karsten Kinast: If the fine is imposed, it commonly will be in cash and in one payment.

Q: Where to learn more on consent life cycle management?

Eve Maler: Here are some resources that may be helpful:

  • My recent talk at RSA on designing a new consent strategy for digital transformation, including a proposal for a new classification system for types of permission
  • Information on the emerging Consent Receipts standard
  • Recent ForgeRock webinar on the general topic of data privacy, sharing more details about our Identity Platform and its capabilities

Martin Kuppinger: From our perspective, this is both an interesting and a challenging area. Organizations must find ways to gain consent without losing their customers. This will only work when the value of the service is demonstrated to customers and consumers. On the other hand, it also offers the opportunity to differentiate from others by demonstrating a good balance between the data collected and the value provided.

Q: Who is actually responsible for trusted digital relationships in the enterprise? Is this an IAM function?

Eve Maler: Many stakeholders in an organization have a role to play in delivering on this goal. IAM has a huge role to play, and I see consumer- and customer-facing identity projects more frequently sitting in digital transformation teams. It's my hope that the relatively new role of Chief Trust Officer will grow out of "just" privacy compliance and external evangelism to add more internal advocacy for transparency and user control.

Martin Kuppinger: It depends on the role of the IAM team in the organization. If it has the more traditional, administration- and security-focused role, this most commonly will be an IAM function. However, the more IAM moves towards an entity that understands itself as a business enabler, closely working with other units such as marketing, the better IAM is positioned to take such a central role.

Q: How big a role does consent play in solving privacy challenges overall?

Eve Maler: One way to look at it, GDPR-wise, is that it's just one-sixth of the legal bases for processing personal data, so it's a tiny part -- but we know better, if we remember that we're human beings first and ask what we'd like done if it were us in the user's chair! Another way to look at it is that asking for consent is something of an alternative to one of the other legal bases, "legitimate interests". Trust-destroying mischief could be perpetrated here. With the right consent technology and a comprehensive approach, it should be possible for an enterprise to ask for consent -- offer data sharing opportunities -- and enable consent withdrawal more freely, proving its trustworthiness more easily.

The Role of Artificial Intelligence in Cyber Security

Over the last few weeks, I’ve read a lot about the role AI, or Artificial Intelligence (or should I rather write “Artificial” Intelligence?), will play in Cyber Security. There is no doubt that advanced analytical technologies (frequently subsumed under the AI term), such as pattern matching, machine learning, and many others, are already affecting Cyber Security. However, the emphasis here is on “already”. It would be wrong to say “nothing new under the sun”, given that there is a lot of progress in this space. But it is just as wrong to ignore the evolution of the past couple of years.

At KuppingerCole, we started looking at what we call Real Time Security Intelligence (RTSI) a couple of years back. We published our first report on this topic back in May 2014 and covered it in our predictions for 2014. The topic was also covered in a session at EIC 2014, and we published a series of blog posts on it during that year.

There is no doubt that advanced analytical technologies will help organizations in their fight against cyber-attacks, because they help in detecting potential attacks at an earlier stage, as well as enabling the identification of complex attack patterns that span various systems. AI also might help, such as in IBM Watson for Cyber Security, to provide a better understanding of cyber risks by collecting and analyzing both structured and unstructured information. Cognitive Security solutions such as IBM Watson for Cyber Security are part of the AI evolution in the field of cyber-security. But again: The journey started a couple of years ago, and we are just in the very early stages.

So why this hype now? Maybe it is because of achieving a critical mass of solutions. More and more companies have entered the field in recent years. Maybe it is because of some big players actively entering that market. At the beginning, most of the players were startups (and many of these rooted in Israel). Now, large companies such as IBM have started pushing the topic, gaining far more awareness in public. Maybe it is because of AI in Cyber Security being the last hope for a solution that helps the good guys win in their fight against cyber criminals and nation-state attackers (hard to say where the one ends and the other starts).

Anyway: In 2017 and beyond, we will see not only more solutions in the market and advancements in that field of technology, but also a strong increase in awareness of “AI in Cyber Security” as well as of the field of Real Time Security Intelligence. This is, regardless of all skepticism regarding the use of terms and regarding hypes, a positive evolution.

PSD II, Adaptive Authentication, and Multi-Factor Authentication

The upcoming updated Payment Services Directive (PSD II) will, among other changes, require Multi-Factor Authentication (MFA) for all payments above 10€ that are done electronically. This is only one major change PSD II brings (another is the mandatory open APIs), but one that is heavily discussed and criticized, e.g. by software vendors, by credit card companies such as VISA, and by others.

It is interesting to look at the published material. The major point is that it only talks about MFA, without going into specifics. The regulators also point out clearly that an authentication based on one factor in combination with Risk-Based Authentication (RBA) is not sufficient. RBA analyzes the transactions, identifies risk based on, e.g., the amount, the geolocation of the IP address, and other factors, and requests a second means or factor if the risk rating is above a threshold.

That leads to several questions. One is what level of MFA is required. Another is what this means for Adaptive Authentication (AA) and RBA in general. The third is whether and how this will affect credit card payments or services such as PayPal, which commonly still rely on one factor for authentication.

First, let me clarify some terms. MFA stands for Multi Factor Authentication, i.e. all approaches involving more than one factor. The most common variant is Two Factor Authentication (2FA), i.e. the use of two factors. There are three factors: Knowledge, Possession, Biometrics – or “what you know”, “what you have”, “what you are”. For each factor, there might be various “means”, e.g. username and password for knowledge, a hard token or a phone for possession, fingerprint and iris for biometrics.

RBA defines authentication that, as described beforehand, analyzes the risk involved in authentication and subsequent interaction and transactions and might request additional authentication steps depending on the risk rating.

Adaptive Authentication, on the other hand, is a combination of what sometimes is called “versatile” authentication with RBA. It combines the ability to use various means (and factors) for authentication in a flexible way. In that sense, it is adaptive to the authenticator that someone has. The other aspect of adaptiveness is RBA, i.e. adapting the required level of authentication to the risk. AA can be MFA, but it also – with low risk – can be One Factor Authentication (1FA).
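As an illustration of the RBA step-up logic described above, the following sketch requests a second factor only when a simple risk score crosses a threshold. The weights and threshold are invented for illustration, and under PSD II the two-factor baseline would apply to in-scope payments regardless:

```python
# Sketch of risk-based authentication (RBA): score each transaction
# from a few signals and step up to a second factor above a threshold.
# Weights and threshold are illustrative, not from any regulation.

def risk_score(amount_eur, ip_country, home_country):
    score = 0
    if amount_eur > 100:          # large amounts raise risk
        score += 2
    if ip_country != home_country:  # unusual geolocation raises risk
        score += 3
    return score

def required_factors(amount_eur, ip_country, home_country, threshold=3):
    # Plain RBA: low risk may allow one factor (1FA); high risk
    # triggers a step-up to a second factor (2FA).
    if risk_score(amount_eur, ip_country, home_country) >= threshold:
        return 2
    return 1

low = required_factors(20, "DE", "DE")    # low risk: one factor
high = required_factors(500, "US", "DE")  # high risk: step up to two
```

In an AA system, the same decision would additionally choose among the means the customer actually has (versatility), rather than prescribing a fixed second authenticator.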

Based on these definitions, it becomes clear that the statement “PSD II does not allow AA” is wrong. It is just as wrong to say that “PSD II permits RBA as a substitute for MFA”. The point simply is: using AA (i.e. flexible authenticators plus RBA) or RBA without versatility complies with the PSD II requirements only if at least two factors of authentication (2FA) are used.

And to put it more clearly: AA, i.e. versatility plus RBA, absolutely makes sense in the context of PSD II – to fulfill the regulatory requirements of MFA in a way that adapts to the customer and to mitigate risks beyond the baseline MFA requirement of PSD II.

MFA by itself is not necessarily secure. You can use a four-digit PIN together with the device ID of a smartphone and end up with 2FA – there is knowledge (PIN) and possession (a device assigned to you). Obviously, this is not very secure, but it is MFA. Thus, there should be (and most likely will be) additional requirements that lead to a certain minimum level of MFA for PSD II.
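The factor-counting argument can be made explicit: classify each authentication means by its factor and check whether at least two distinct factors are present. The mapping below is illustrative:

```python
# Sketch: classify means into the three factors (knowledge,
# possession, biometrics) and test whether a combination counts as
# MFA. The mapping of means to factors is illustrative.

FACTOR_OF = {
    "password": "knowledge",
    "pin": "knowledge",
    "device_id": "possession",
    "hard_token": "possession",
    "fingerprint": "biometrics",
}

def is_mfa(means):
    # MFA requires means from at least two distinct factors.
    return len({FACTOR_OF[m] for m in means}) >= 2

weak_2fa = is_mfa(["pin", "device_id"])  # two factors, yet weak
not_mfa = is_mfa(["password", "pin"])    # two means, but one factor
```

This makes the point of the paragraph above visible: a PIN plus a device ID formally passes the two-factor test even though it offers little security, which is why minimum-strength requirements beyond mere factor counting are needed.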

For providers, following a consistent AA path makes sense. Flexible use of authenticators, supporting what customers prefer and already have, helps increase convenience and reduce the cost of deploying authenticators and the subsequent logistics – and it will help keep retention rates high. RBA as part of AA also helps to further mitigate risks beyond 2FA, whatever the authentication might look like.

The art in the context of PSD II will be to balance customer convenience, authentication cost, and risk. There is a lot of room for doing so, particularly with the uptake in biometrics and standards such as the FIDO Alliance standards which will help payment providers in finding that balance. Anyway, payment providers must rethink their authentication strategies now, to meet the changing requirements imposed by PSD II.

While this might be simple and straightforward for some, others will struggle. Credit card companies are more challenged, particularly in countries such as Germany where the PIN of credit cards is rarely used. However, the combination of a PIN with a credit card works for payments – if the possession of the credit card is proven, e.g. at a POS (Point of Sale) terminal. For online transactions, things become more complicated due to the lack of proof of possession of the credit card. Even common approaches such as entering the credit card number, the security number from the back of the card (CVV, Card Verification Value), and the PIN will not help, because all of these could be means of knowledge – I know my credit card number, my CVV, and my PIN, and even the bank account number that is sometimes used in RBA by credit card processors. Moving to MFA here is a challenge that isn’t easy to solve.

The time is fast approaching for all payment providers to define an authentication strategy that complies with the PSD II requirements of MFA, as fuzzy as these still are. Better definitions will help, but it is obvious that there will be changes. One element that is a must is moving towards Adaptive Authentication, to support various means and factors in a way that is secure, compliant, and convenient for the customer.

Do You Need a Better IAM System to Meet the GDPR Requirements?

GDPR, the EU General Data Protection Regulation, is increasingly becoming a hot topic. That does not come as a surprise, given that the EU GDPR has a very broad scope, affecting every data controller (the one who “controls” the PII) and data processor (the one who “processes” the PII) dealing with data subjects (the persons) residing in the EU – even when the data processors and data controllers are outside of the EU.

Among the requirements of EU GDPR are aspects such as the right to be forgotten, the right to edit the PII stored about oneself, or the “consent per purpose” principle, which requires informed consent per purpose of use of PII, in contrast to today’s typical “this site uses cookies and we will do whatever we want with the data collected” style of consent.

Notably, the definition of PII is very broad in the EU. It is not only about data that is directly mapped to the name and other identifiers. If a bit of data can be used to identify the individual, it is PII.

There are obvious effects on social networks, on websites where users are registered, and on many other areas of business. The EU GDPR will also massively affect the emerging field of CIAM (Consumer/Customer Identity and Access Management), where full support for EU GDPR-related features, such as flexible consent handling, becomes mandatory.

However, will the EU GDPR also affect the traditional, on-premise IAM systems with their focus on employees and contractors? Honestly, I don’t see that impact. I see it, as mentioned beforehand, for CIAM. I clearly see it in the field of Enterprise Information Protection, by protecting PII-related information from leaking and managing access to such information. That also affects IAM, which might need to become more granular in managing access – but there are no new requirements arising from the EU GDPR. The need for granular management of access to PII might finally lead to a renaissance (or naissance?) of Dynamic Authorization Management (think ABAC). It is far easier to handle complex rules for accessing such data based on flexible, granular, attribute-based policies. We will need better auditing procedures. However, with today’s Access Governance and Data Governance, a lot can be done – and what can’t be done well needs other technologies, such as Access Governance in combination with Dynamic Authorization Management, or Data Governance that works well for Big Data. Likewise, Privilege Management for better protecting systems that hold PII is mandatory as well.
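As a sketch of the attribute-based approach, access to PII can be decided from attributes of the subject, the resource, and the action rather than from static roles. The attributes and the policy below are purely illustrative:

```python
# Minimal attribute-based access control (ABAC) check for PII access.
# The policy: non-PII is readable by anyone; HR staff may read
# employee PII; only the record owner may update their own record.
# Attribute names and the policy itself are illustrative only.

def abac_permit(subject, resource, action):
    if not resource.get("contains_pii"):
        return True
    if action == "read" and subject.get("department") == "HR":
        return True
    if action == "update" and subject.get("user_id") == resource.get("owner_id"):
        return True
    return False

hr_read = abac_permit({"department": "HR"}, {"contains_pii": True}, "read")
outsider = abac_permit({"department": "Sales"}, {"contains_pii": True}, "read")
```

Changing who may see PII then becomes a policy edit rather than a re-engineering of role assignments, which is exactly the flexibility granular PII access management calls for.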

But for managing access to PII of employees and contractors, common IAM tools provide sufficient capabilities. Consent is handled as part of work contracts and other generic rules. Self-service interfaces for managing the data stored about an employee are a common feature.

The EU GDPR is important. It will change a lot. But for the core areas of today’s IAM, i.e. Identity Provisioning and Access Governance, there is little change.

Accenture to acquire French IAM System Integrator Arismore

Just before Christmas Accenture Security announced the acquisition of French IAM system integrator Arismore, a company with about 270 employees and an estimated turnover of €40M. This makes Arismore a leading IAM system integrator in France, while also being involved in IT transformation initiatives.

The acquisition follows other deals such as the acquisition of Everett by PWC earlier in 2016.

Arismore is of specific interest because it also owns a subsidiary, Memority, which launched an IDaaS offering back in 2014. Memority is one of the various IDaaS offerings that are largely based on COTS software, but offered as a service. In contrast to some others, it was not built as a cloud service from scratch.

Anyway, such a service fits into the strategy of companies such as Accenture, which are moving from consultancy offerings towards service offerings, such as the Accenture Velocity platform.

The acquisition is thus another indicator of the change in the consulting and system integration market, where former SIs and consultancies are moving towards service offerings – when more and more software is used as a cloud-based service, the traditional system integration business obviously will shrink over time.

However, Memority is still only a small part of the deal. Being strong in security is another requirement of the large consultancies, with security being one of the fastest growing business areas. Thus, the acquisition of Arismore by Accenture delivers value in two areas: More services and more security.

Is Your Software GDPR-Compliant? Is That the Right Question?

I hear this question being asked more and more of vendors and of us analysts: whether a vendor’s software is GDPR-compliant. However, it is the wrong question. The correct question is: “Does the software allow my organization to fulfill the regulatory requirements of EU GDPR?” Even for cloud services, this (as “Does the service allow…”) is the main question, unless PII is processed by the cloud service itself.

If an enterprise implements a software package, it still has the obligation to comply with EU GDPR. It is the data controller. If it uses a cloud service, much of this remains the tenant’s responsibility. However, the role of the data processor – the one processing the data on behalf of the data controller – is broader than ever before. Even someone who provides “only” storage that is used for storing PII is a data processor in the context of EU GDPR.

An interesting facet of this discussion is the “Privacy by Design” requirement of EU GDPR. Software (and services) used for handling PII must follow the principle of privacy by design. Thus, a data controller must choose software (or services) that follow these principles. One might argue that one could also choose an underlying software or service without support for privacy by design (whatever that specifically means) and configure or customize it so that it meets these requirements. The open question is whether a software or service must support privacy by design out of the box – and thus, in consequence, all EU GDPR requirements that apply to what the software does – or whether it is sufficient that the software can be configured or customized to do so. But as my colleague Dave Kearns states: “The whole point of ‘privacy by design’ is that it is in the product from the beginning, not added on later.”

That is interesting when looking again at the initial question. One answer might be that all features required to fulfill the regulatory requirements of EU GDPR must be built into software and services that are used for handling PII data in the scope of EU GDPR. The other might be that it is sufficient if the software or service can be configured or customized to do so.

In essence, the question – when choosing software and services – is whether they support the EU GDPR requirements, starting from the abstract privacy-by-design principles to the concrete requirements of handling consent per purpose and many of the other requirements. It is not about software being compliant with EU GDPR, but about providing the support required for an organization to fulfill the requirements of EU GDPR. Looking at these requirements, there is a lot to do in many areas of software and services.

It’s not about security vs. safety – it is about security and safety

If you’ve ever been involved in discussions between IT Security people and OT (Operational Technology – everything that runs in manufacturing environments) people – the latter not only security folks – you have probably observed that such discussions tend not to be fruitful, because they start with a fundamental misunderstanding between the two parties.

IT security people think about security first, which is essentially about protecting against cyber-attacks and internal attackers, and about the “CIA” triad – confidentiality, integrity, and availability. OT people don’t think about security first, even if they are OT security people. They think first about safety – the physical safety of humans and machines – about reliability, and about availability.

Understanding this dichotomy is essential, because there are different requirements, but also a different history in both areas. OT has always focused on safety, reliability, and availability of production environments. Physical harm to humans, but also damage to machines, due to software issues (such as a non-working patch) is unacceptable. Mistakes in production are unacceptable, because they can lead to massive liability issues and cost. And availability is key for manufacturing. A production line not working can cause very high cost in a very short period of time. In fact, that is where high availability is really critical, far more than for most IT systems, even the ones that are considered critical.

Unfortunately, the world is changing rapidly. Buzzwords such as “Industry 4.0” or “Smart Manufacturing” stand for that change – the change from an isolated to a massively connected world of manufacturing. The quintessence of these changes is that manufacturing environments become connected; and they increasingly become connected bi-directionally, unless regulations prohibit this. The golden rule to keep in mind here is simple: “Once something is connected, it is under attack.” Computer search engines that scan for everything including IoT devices (and including Industrial IoT or IIoT devices), automated attacks, advanced attacks against manufacturing environments: The risk for these connected environments is massive.

Thus, it is time to overcome the dichotomy between security and safety. We need to figure out new ways of both connecting manufacturing environments and protecting them against attacks, while keeping them safe, reliable, and available. The answer to this challenge can’t be to leave everything as it is – that will not work. Outdated operating systems, a lack of regular patches, a lack of fine-grained security models in OT equipment: none of this will work anymore. On the other hand, it will take years, probably even decades, to modernize all these environments.

Thus, we need to find a mix of newer, more modern approaches that combine security by design with the specific requirements of OT environments, while protecting all the legacy equipment – with unidirectional firewalls, with privilege management technologies that protect shared administrative accounts, and with advanced analytical tools that identify potential attacks.

However, we will only succeed when both groups, the IT and the OT people, end their culture of mutual misunderstanding and start working on joint initiatives – and that must begin with defining a common vocabulary, and with accepting that the requirements of both groups are not only valid but mandatory. Let’s start working together.

Cognitive Security – the next big thing in security?

There are good reasons for the move towards “Cognitive Security”. The skills gap in Information Security is among the most compelling ones: we simply don’t have enough skilled people. If we can make computers step in here, we might close that gap.

On the other hand, a lot of what is being labeled “Cognitive Security” is still far away from truly advanced, “cognitive” technology – marketing tends to exaggerate. That said, there is a growing number of examples of advanced approaches, such as IBM Watson, which focuses on filtering unstructured information and delivering exactly what an Information Security professional needs.

A challenge we must not ignore is that these technologies are based on what is called “machine learning”. The machines must learn before they can do their job. That is no different from humans: an experienced security expert first needs experience. With machines, however, this leads to two challenges.

One is that machines, when used in Information Security, first must learn about incidents and attacks. In other words: they can only identify attacks after learning about them. Potentially, that means some attacks must occur before the machine can identify and protect against them. There are ways to address this. Machines can share their “knowledge” far better than humans can, so the time until they can react to new attacks can be shortened massively. Furthermore, the more “cognitive” the machines behave, the better they might detect new attacks by identifying analogies and similarities in patterns, without knowing the specific attack.
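To make that last point concrete, here is a minimal, purely illustrative sketch of pattern-similarity detection. Everything in it is invented for the example – the feature encoding, the two “known attack” vectors, and the threshold – but it shows the basic idea: an event that was never seen in exactly this form can still be flagged because its pattern resembles a known attack.

```python
# Toy sketch of similarity-based attack detection. Assumes events are already
# encoded as numeric feature vectors (a big assumption in practice).
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical learned patterns, as (failed logins, MB transferred out, ports probed):
KNOWN_ATTACKS = {
    "brute_force": [50, 1, 2],
    "exfiltration": [1, 500, 1],
}

def score_event(event, threshold=0.95):
    """Return the best-matching known attack pattern, or None if nothing is similar."""
    best = max(KNOWN_ATTACKS, key=lambda name: cosine_similarity(event, KNOWN_ATTACKS[name]))
    if cosine_similarity(event, KNOWN_ATTACKS[best]) >= threshold:
        return best
    return None
```

An event such as `[45, 2, 3]` matches the brute-force pattern despite not being identical to it, while a port-scan-like event such as `[2, 3, 40]` matches nothing the system has learned – which is exactly the limitation described above.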

On the other hand, training the machines bears the risk that they learn the wrong things. Attackers might even systematically train cognitive security systems to exhibit wrong behavior. Botnets might be used for sophisticated “training” before the actual attacks occur.
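This poisoning risk can be illustrated with a deliberately simplistic sketch (all numbers are invented): a detector learns an anomaly threshold from “normal” traffic, and an attacker drip-feeds inflated but benign-looking samples into the training data until a real attack no longer stands out.

```python
# Toy illustration of training-data poisoning against a naive anomaly detector
# that learns its threshold as mean + 2 standard deviations of "normal" data.
import statistics

def learn_threshold(samples):
    """Anomaly threshold learned from training data: mean + 2 * stdev."""
    return statistics.mean(samples) + 2 * statistics.stdev(samples)

clean_traffic = [100, 110, 95, 105, 90, 100]   # genuine normal traffic volumes
poison = [400, 420, 410, 430]                  # attacker-injected "normal" samples
attack_volume = 390                            # the real attack, later on

clean_threshold = learn_threshold(clean_traffic)            # roughly 114 here
poisoned_threshold = learn_threshold(clean_traffic + poison)
```

Trained on clean data, the detector flags the attack (390 is far above the learned threshold); trained on the poisoned data, the threshold has been pushed up so far that the same attack passes as normal.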

While there is strong potential for Cognitive Security, we are still in the very early stages of its evolution. Still, I see great potential in these technologies – not in replacing humans, but in complementing them. Systems can run advanced analyses on masses of data and help find the few needles in the haystack, the signs of severe attacks. They can help Information Security professionals make better use of their time by focusing on the most likely traces of attacks.

Traditional SIEM (Security Information and Event Management) will be replaced by such technologies – an evolution that is already underway, applying Big Data and advanced analytical capabilities to the field of Information Security. We at KuppingerCole call this Real-Time Security Intelligence (RTSI). RTSI is a first step on the journey towards Cognitive Security. Given that security is among the most complex challenges to solve, and that attacks cause massive damage, this is one of the fields where the evolution of cognitive technologies will take place. It is not as popular as playing Go or chess, but it is a huge market with massive demand. Today, we can observe the first examples of “Cognitive Security”. By 2025, such solutions will be mainstream.
