Blog posts by Martin Kuppinger
During the KuppingerCole webinar held on March 16th, 2017, supported by ForgeRock, several attendee questions were left unanswered due to the large number of questions and a lack of time to cover them all. Here are answers to the questions that couldn’t be answered live during the webinar.
Q: How does two factor authentication play into GDPR regulations?
Karsten Kinast: Two factor authentication does not play into GDPR at all.
Martin Kuppinger: While two factor authentication is not a topic of GDPR, it e.g. plays a major role in another upcoming EU regulation, the PSD2 (revised Payment Services Directive), which applies to electronic payments.
Q: How do you see North American companies adhering to GDPR regulations? Do you think it will take a fine before they start incorporating the regulations into their privacy and security policies?
Eve Maler: As I noted on the webinar itself, from my conversations, these companies are even slower than European companies (granting Martin's point that European companies are far from full awareness yet) to "wake up". It seems like a Y2K phenomenon for our times. We at ForgeRock spend a lot of time working with digital transformation teams, and we find they have much lower awareness than risk teams do. So, we encourage joint stakeholder conversations so that those experienced in the legal situation and those experienced in A/B testing of user experience flows can get together and do better on building trusted digital relationships!
Karsten Kinast: My experience is that North American companies are adhering better and preparing more intensely for the upcoming GDPR than companies elsewhere. So, I don’t think it will take fines, because they have already started preparing.
Q: Sometimes, there seems to be a conflict between the “right to be forgotten” and practical requirements, e.g. for clinical trial data. Can consent override the right to be forgotten?
Karsten Kinast: While there might be a consent, the consent can be revoked. Thus, using consent to override the right to be forgotten will not work in practice.
Q: The fines for violating the GDPR regulations can be massive, up to €20 million or 4% of the annual group revenue, whichever is higher. Can the fines be paid over a period of time or offset by, e.g., training?
Karsten Kinast: If the fine is imposed, it commonly will be in cash and in one payment.
Q: Where to learn more on consent life cycle management?
Eve Maler: Here are some resources that may be helpful:
- My recent talk at RSA on designing a new consent strategy for digital transformation, including a proposal for a new classification system for types of permission
- Information on the emerging Consent Receipts standard
- Recent ForgeRock webinar on the general topic of data privacy, sharing more details about our Identity Platform and its capabilities
Martin Kuppinger: From our perspective, this is an area that is both interesting and challenging. Organizations must find ways to gain consent without losing their customers. This will only work when the value of the service is demonstrated to customers and consumers. On the other hand, this also presents an opportunity to differentiate from others by demonstrating a good balance between the data collected and the value provided.
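To make consent life cycle management a bit more concrete, here is a minimal, hypothetical sketch of a per-purpose consent store in which consent can be granted, checked, and revoked. All class and method names are illustrative assumptions, not taken from any product or from the Consent Receipts standard mentioned above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One consent decision by one data subject for one purpose."""
    subject_id: str
    purpose: str
    granted_at: datetime
    revoked_at: Optional[datetime] = None

class ConsentStore:
    """Tracks consent per (data subject, purpose) pair.

    Illustrative only: a real system would also need audit trails,
    versioned purposes, and proof of the consent wording shown.
    """
    def __init__(self):
        self._records = {}  # (subject_id, purpose) -> ConsentRecord

    def grant(self, subject_id, purpose):
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, granted_at=datetime.now(timezone.utc))

    def revoke(self, subject_id, purpose):
        # Consent can always be revoked; keep the record for auditing.
        rec = self._records.get((subject_id, purpose))
        if rec and rec.revoked_at is None:
            rec.revoked_at = datetime.now(timezone.utc)

    def has_consent(self, subject_id, purpose):
        rec = self._records.get((subject_id, purpose))
        return rec is not None and rec.revoked_at is None
```

Note that consent is tracked per purpose, which matches the GDPR notion that consent for one processing purpose does not imply consent for another.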
Q: Who is actually responsible for trusted digital relationships in the enterprise? Is this an IAM function?
Eve Maler: Many stakeholders in an organization have a role to play in delivering on this goal. IAM has a huge role to play, and I see consumer- and customer-facing identity projects more frequently sitting in digital transformation teams. It's my hope that the relatively new role of Chief Trust Officer will grow out of "just" privacy compliance and external evangelism to add more internal advocacy for transparency and user control.
Martin Kuppinger: It depends on the role of the IAM team in the organization. If it has the more traditional, administration- and security-focused role, this most commonly will be an IAM function. However, the more IAM evolves into an entity that understands itself as a business enabler, closely working with other units such as marketing, the better IAM is positioned to take on such a central role.
Q: How big a role does consent play in solving privacy challenges overall?
Eve Maler: One way to look at it, GDPR-wise, is that it's just one-sixth of the legal bases for processing personal data, so it's a tiny part -- but we know better, if we remember that we're human beings first and ask what we'd like done if it were us in the user's chair! Another way to look at it is that asking for consent is something of an alternative to one of the other legal bases, "legitimate interests". Trust-destroying mischief could be perpetrated here. With the right consent technology and a comprehensive approach, it should be possible for an enterprise to ask for consent -- offer data sharing opportunities -- and enable consent withdrawal more freely, proving its trustworthiness more easily.
Over the last few weeks I’ve read a lot about the role AI, or Artificial Intelligence (or should I rather write “Artificial” Intelligence?), will play in Cyber Security. There is no doubt that advanced analytical technologies (frequently subsumed under the AI term), such as pattern matching, machine learning, and many others, are already affecting Cyber Security. However, the emphasis here is on “already”. It would be wrong to say “nothing new under the sun”, given that there is a lot of progress in this space. But it is just as wrong to ignore the evolution of the past couple of years.
At KuppingerCole, we started looking at what we call Real Time Security Intelligence (RTSI) a couple of years back. We published our first report on this topic back in May 2014 and covered the topic in our predictions for 2014. The topic was covered in a session at EIC 2014. And we published a series of blogs on that topic during that year.
There is no doubt that advanced analytical technologies will help organizations in their fight against cyber-attacks, because they help in detecting potential attacks at an earlier stage, as well as enabling the identification of complex attack patterns that span various systems. AI also might help, such as in IBM Watson for Cyber Security, to provide a better understanding of cyber risks by collecting and analyzing both structured and unstructured information. Cognitive Security solutions such as IBM Watson for Cyber Security are part of the AI evolution in the field of cyber-security. But again: The journey started a couple of years ago, and we are just in the very early stages.
So why this hype now? Maybe it is because of achieving a critical mass of solutions. More and more companies have entered the field in recent years. Maybe it is because of some big players actively entering that market. At the beginning, most of the players were startups (and many of these rooted in Israel). Now, large companies such as IBM have started pushing the topic, gaining far more awareness in public. Maybe it is because of AI in Cyber Security being the last hope for a solution that helps the good guys win in their fight against cyber criminals and nation-state attackers (hard to say where the one ends and the other starts).
Anyway: We will see not only more solutions in the market and advancements in that field of technology in 2017 and beyond, but we will see a strong increase in awareness for “AI in Cyber Security” as well as the field of Real Time Security Intelligence. This is, regardless of all skepticism regarding the use of terms and regarding hypes, a positive evolution.
The upcoming updated Payment Services Directive (PSD II) will, among other changes, require Multi-Factor Authentication (MFA) for all electronic payments above 10€. This is only one major change PSD II brings (another major change is the mandatory open APIs), but it is one that is heavily discussed and criticized, e.g. by software vendors, by credit card companies such as VISA, and by others.
It is interesting to look at the published material. The major point is that it only talks about MFA, without going into specifics. The regulators also point out clearly that an authentication based on one factor in combination with Risk-Based Authentication (RBA) is not sufficient. RBA analyzes the transactions, identifies risk based on, e.g., the amount, the geolocation of the IP address, and other factors, and requests a second means or factor if the risk rating is above a threshold.
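As a rough illustration of how RBA of the kind described above works, the sketch below scores a transaction on a few signals and requests a second factor when the score crosses a threshold. The signals, weights, and threshold are purely illustrative assumptions, not any regulator's or vendor's actual policy.

```python
def risk_score(amount_eur, ip_country, home_country, device_known):
    """Toy risk scoring over a few transaction signals (illustrative weights)."""
    score = 0.0
    # Higher amounts carry more risk.
    if amount_eur > 500:
        score += 0.4
    elif amount_eur > 50:
        score += 0.2
    # Geolocation of the IP address differs from the customer's home country.
    if ip_country != home_country:
        score += 0.3
    # Transaction comes from a device not seen before.
    if not device_known:
        score += 0.3
    return score

def requires_second_factor(amount_eur, ip_country, home_country,
                           device_known, threshold=0.5):
    """Step-up decision: request an additional factor above the threshold."""
    return risk_score(amount_eur, ip_country, home_country,
                      device_known) >= threshold
```

For example, a small domestic payment from a known device would pass with one factor, while a large foreign payment from an unknown device would trigger a step-up. As the article notes, under PSD II this kind of risk-only gating on top of a single factor is not considered sufficient on its own.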
That leads to several questions. One question is what level of MFA is required. Another is what this means for Adaptive Authentication (AA) and RBA in general. The third is whether and how this will affect credit card payments or services such as PayPal, which commonly still rely on one factor for authentication.
First, let me clarify some terms. MFA stands for Multi Factor Authentication, i.e. all approaches involving more than one factor. The most common variant is Two Factor Authentication (2FA), i.e. the use of two factors. There are three factors: Knowledge, Possession, Biometrics – or “what you know”, “what you have”, “what you are”. For each factor, there might be various “means”, e.g. username and password for knowledge, a hard token or a phone for possession, fingerprint and iris for biometrics.
RBA defines authentication that, as described above, analyzes the risk involved in authentication and subsequent interactions and transactions, and might request additional authentication steps depending on the risk rating.
Adaptive Authentication, on the other hand, is a combination of what sometimes is called “versatile” authentication with RBA. It combines the ability to use various means (and factors) for authentication in a flexible way. In that sense, it is adaptive to the authenticator that someone has. The other aspect of adaptiveness is RBA, i.e. adapting the required level of authentication to the risk. AA can be MFA, but it also – with low risk – can be One Factor Authentication (1FA).
Based on these definitions, it becomes clear that the statement “PSD II does not allow AA” is wrong. It is equally wrong to say “PSD II permits RBA” without qualification. The point simply is: using AA (i.e. flexible authenticators plus RBA) or RBA without versatility complies with the PSD II requirements only if at least two factors for authentication (2FA) are used.
And to put it more clearly: AA, i.e. versatility plus RBA, absolutely makes sense in the context of PSD II – to fulfill the regulatory requirements of MFA in a way that adapts to the customer and to mitigate risks beyond the baseline MFA requirement of PSD II.
MFA by itself is not necessarily secure. You can use a four-digit PIN together with the device ID of a smartphone and end up with 2FA – there is knowledge (PIN) and possession (a device assigned to you). Obviously, this is not very secure, but it is MFA. Thus, there should be (and most likely will be) additional requirements that lead to a certain minimum level of MFA for PSD II.
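The "at least two distinct factors" rule can be made concrete with a small sketch. The mapping of individual means to factors below is an illustrative assumption following the article's own examples; note that it classifies the weak PIN-plus-device-ID combination as MFA, exactly as the text describes.

```python
# Illustrative mapping of authentication means to the three factors:
# knowledge ("what you know"), possession ("what you have"),
# biometrics ("what you are").
FACTOR_OF_MEANS = {
    "password":    "knowledge",
    "pin":         "knowledge",
    "hard_token":  "possession",
    "device_id":   "possession",
    "fingerprint": "biometrics",
    "iris":        "biometrics",
}

def is_mfa(means_presented):
    """True if the presented means cover at least two distinct factors.

    Two means of the same factor (e.g. password plus PIN) do not count
    as MFA, because they both fall under "knowledge".
    """
    factors = {FACTOR_OF_MEANS[m] for m in means_presented}
    return len(factors) >= 2
```

So `["pin", "device_id"]` qualifies as (weak) 2FA, while `["password", "pin"]` does not, since both are knowledge. This is why a minimum quality level for the factors, beyond mere factor counting, is likely to be required.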
For providers, following a consistent AA path makes sense. Flexible use of authenticators to support what customers prefer and already have increases convenience and reduces the cost of deploying authenticators and the subsequent logistics – and it helps keep retention rates high. RBA as part of AA also helps to further mitigate risks beyond 2FA, whatever the authentication might look like.
The art in the context of PSD II will be to balance customer convenience, authentication cost, and risk. There is a lot of room for doing so, particularly with the uptake in biometrics and standards such as the FIDO Alliance standards which will help payment providers in finding that balance. Anyway, payment providers must rethink their authentication strategies now, to meet the changing requirements imposed by PSD II.
While this might be simple and straightforward for some, others will struggle. Credit card companies are more challenged, particularly in countries such as Germany, where the PIN of credit cards is rarely used. The combination of a PIN with a credit card works for payments if the possession of the credit card is proven, e.g. at a POS (Point of Sale) terminal. For online transactions, things become more complicated due to the lack of proof of possession of the credit card. Even common approaches such as entering the credit card number, the security number from the back of the card (CVV, Card Verification Value), and the PIN will not help, because all of these can be means of knowledge – I know my credit card number, my CVV, my PIN, and even the bank account number that is sometimes used in RBA by credit card processors. Moving to MFA here is a challenge that isn’t easy to solve.
The time is fast approaching for all payment providers to define an authentication strategy that complies with the PSD II requirements of MFA, as fuzzy as these still are. Better definitions will help, but it is obvious that there will be changes. One element that is a must is moving towards Adaptive Authentication, to support various means and factors in a way that is secure, compliant, and convenient for the customer.
GDPR, the EU General Data Protection Regulation, is increasingly becoming a hot topic. That does not come as a surprise, given that the EU GDPR has a very broad scope, affecting every data controller (the one who “controls” the PII) and data processor (the one who “processes” the PII) dealing with data subjects (the persons) residing in the EU – even when the data processors and data controllers are outside of the EU.
Notably, the definition of PII is very broad in the EU. It is not only about data that is directly mapped to the name and other identifiers. If a bit of data can be used to identify the individual, it is PII.
There are obvious effects on social networks, on websites where users are registered, and on many other areas of business. The EU GDPR will also massively affect the emerging field of CIAM (Consumer/Customer Identity and Access Management), where full support for EU GDPR-related features, such as flexible consent handling, becomes mandatory.
However, will the EU GDPR also affect the traditional, on-premise IAM systems with their focus on employees and contractors? Honestly, I don’t see that impact. I see it, as mentioned before, for CIAM. I clearly see it in the field of Enterprise Information Protection, i.e. in protecting PII-related information from leaking and in managing access to such information. That also affects IAM, which might need to become more granular in managing access – but no new requirements arise from the EU GDPR here. The need for granular management of access to PII might finally lead to a renaissance (or naissance?) of Dynamic Authorization Management (think ABAC). It is far easier to handle complex rules for accessing such data based on flexible, granular, attribute-based policies. We will need better auditing procedures. However, with today’s Access Governance and Data Governance, a lot can be done – and what can’t be done well needs other technologies, such as Access Governance in combination with Dynamic Authorization Management, or Data Governance that works well for Big Data. Likewise, Privilege Management for better protecting the systems that hold PII is mandatory as well.
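As an illustration of what such attribute-based policies (ABAC) might look like, here is a minimal sketch in which a policy is a list of conditions over subject, resource, and context attributes, and access is granted only if every condition holds. The policy conditions and attribute names are hypothetical examples, not any standard's schema.

```python
def evaluate(policy, subject, resource, context):
    """Grant access only if every attribute condition in the policy holds."""
    return all(check(subject, resource, context) for check in policy)

# A hypothetical policy for access to PII records: the requester's
# department must own the data, the access purpose must be one the
# requester is cleared for, and the request must come from the
# corporate network.
pii_policy = [
    lambda s, r, c: s["department"] == r["owning_department"],
    lambda s, r, c: r["purpose"] in s["allowed_purposes"],
    lambda s, r, c: c["network"] == "corporate",
]
```

The appeal for GDPR-style requirements is that a single policy like this replaces many static entitlements, and purpose limitation becomes an explicit attribute of every access decision rather than something baked into role definitions.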
But for managing access to PII of employees and contractors, common IAM tools provide sufficient capabilities. Consent is handled as part of work contracts and other generic rules. Self-service interfaces for managing the data stored about an employee are a common feature.
The EU GDPR is important. It will change a lot. But for the core areas of today’s IAM, i.e. Identity Provisioning and Access Governance, there is little change.
Just before Christmas Accenture Security announced the acquisition of French IAM system integrator Arismore, a company with about 270 employees and an estimated turnover of €40M. This makes Arismore a leading IAM system integrator in France, while also being involved in IT transformation initiatives.
The acquisition follows other deals such as the acquisition of Everett by PWC earlier in 2016.
Arismore is of specific interest because it also owns a subsidiary, Memority, which launched an IDaaS offering back in 2014. Memority is one of the various IDaaS offerings that are largely based on COTS software, but offered as a service. In contrast to some others, it was not built as a cloud service from scratch.
Anyway, such a service fits into the strategy of companies such as Accenture, which are moving from consultancy offerings towards service offerings, such as the Accenture Velocity platform.
The acquisition is thus another indicator of the change in the consulting and system integration market, where former SIs and consultancies are moving towards service offerings – when more and more software is used as a cloud-based service, the traditional system integration business obviously will shrink over time.
However, Memority is still only a small part of the deal. Being strong in security is another requirement of the large consultancies, with security being one of the fastest growing business areas. Thus, the acquisition of Arismore by Accenture delivers value in two areas: More services and more security.
More and more, I hear this question being asked of vendors and of us analysts: is a vendor’s software GDPR compliant? However, it is the wrong question. The correct question is: “Does the software allow my organization to fulfill the regulatory requirements of the EU GDPR?” Even for cloud services, this (as “Does the service allow…”) is the main question, unless PII is processed by the cloud service itself.
If an enterprise implements a software package, it still has the duty of complying with the EU GDPR – it is the data controller. If it uses a cloud service, much of this remains tenant responsibility. However, the role of the data processor – the one processing the data on behalf of the data controller – is broader than ever before. Even someone who provides “only” storage that is used for storing PII is a data processor in the context of the EU GDPR.
An interesting facet of this discussion is the “Privacy by Design” requirement of the EU GDPR. Software (and services) used for handling PII must follow the principle of privacy by design. Thus, a data controller must choose software (or services) that follow these principles. One might argue that one could also choose underlying software or a service without support for privacy by design (whatever this specifically means) and configure or customize it so that it meets these requirements. The open question is whether a software or service must support privacy by design out-of-the-box, and thus in consequence all EU GDPR requirements that apply to what the software does, or whether it is sufficient that the software can be configured or customized to do so. But as my colleague Dave Kearns states: “The whole point of ‘privacy by design’ is that it is in the product from the beginning, not added on later.”
That is interesting when looking again at the initial question. One answer might be that all features required to fulfill the regulatory requirements of EU GDPR must be built into software and services that are used for handling PII data in the scope of EU GDPR. The other might be that it is sufficient if the software or service can be configured or customized to do so.
In essence, the question – when choosing software and services – is whether they support the EU GDPR requirements, starting from the abstract privacy-by-design principles to the concrete requirements of handling consent per purpose and many of the other requirements. It is not about software being compliant with EU GDPR, but about providing the support required for an organization to fulfill the requirements of EU GDPR. Looking at these requirements, there is a lot to do in many areas of software and services.
If you’ve ever been involved in discussions between IT security people and OT (Operational Technology, everything that runs in manufacturing environments) people – the latter not only security specialists – you have probably observed that such discussions tend not to be fruitful, because they start with a fundamental misunderstanding between the two parties.
IT security people think about security first, which is essentially about protecting against cyber-attacks and internal attackers, and about the “CIA” triad: confidentiality, integrity, and availability. OT people don’t think about security first, even if they are OT security people. They first think about safety, i.e. the physical safety of humans and machines, as well as reliability and availability.
Understanding this dichotomy is essential, because there are different requirements, but also a different history, in both areas. OT has always focused on safety, reliability, and availability of production environments. Physical harm to humans, but also damage to machines, due to software issues (such as a non-working patch) is unacceptable. Mistakes in production are unacceptable, because they can lead to massive liability issues and cost. And availability is key for manufacturing: a production line that is not working can cause very high cost in a very short period of time. In fact, this is where high availability is really critical, far more than for most IT systems, even the ones considered critical.
Unfortunately, the world is changing rapidly. Buzzwords such as “Industry 4.0” or “Smart Manufacturing” stand for that change – the change from an isolated to a massively connected world of manufacturing. The quintessence of these changes is that manufacturing environments become connected; and they increasingly become connected bi-directionally, unless regulations prohibit this. The golden rule to keep in mind here is simple: “Once something is connected, it is under attack.” Computer search engines that scan for everything including IoT devices (and including Industrial IoT or IIoT devices), automated attacks, advanced attacks against manufacturing environments: The risk for these connected environments is massive.
Thus, it is time to overcome the dichotomy between security and safety. We need to figure out new ways of both connecting and protecting manufacturing environments against attacks, while keeping them safe, reliable, and available. The answer to this challenge can’t be to leave everything as it is. That will not work. Outdated operating systems, a lack of regular patches, a lack of fine-grained security models in OT equipment – all this will not work anymore. On the other hand, it will take years, probably even decades, to modernize all these environments.
Thus, we need to find a mix of new, more modern approaches that combine security by design with the specific requirements of OT environments, while protecting all the old stuff – with unidirectional firewalls, with privilege management technologies to protect shared administrative accounts, with advanced analytical tools to identify potential attacks.
However, we will only succeed when both groups, the IT and the OT people, end their culture of not understanding each other and start working on joint initiatives – and that must start by defining a common understanding of the vocabulary, but also understanding that the requirements of both groups are not only valid but mandatory. Let’s start working together.
There are good reasons for the move towards “Cognitive Security”. The skill gap in Information Security is amongst the most compelling ones. We just don’t have enough skilled people. If we can make computers step in here, we might close that gap.
On the other hand, a lot of what we see being labeled “Cognitive Security” is still far away from truly advanced, “cognitive” technologies. Marketing tends to exaggerate. Still, there is a growing number of examples of advanced approaches, such as IBM Watson, which focuses on filtering unstructured information and delivering exactly what an Information Security professional needs.
A challenge we must not ignore is that these technologies are based on what is called “machine learning”. The machines must learn before they can do their job. That is no different from humans: an experienced security expert first needs experience. For machines, however, this leads to two challenges.
One is that machines, if used in Information Security, first must learn about incidents and attacks. In other words: they can only identify attacks after learning. Potentially, that means some attacks must occur before the machine can identify and protect against them. There are ways to address this. Machines can share their “knowledge” better than humans can. Thus, the time until they can react to attacks can be massively shortened. Furthermore, the more “cognitive” the machines become, the better they might detect new attacks by identifying analogies and similarities in patterns, without knowing the specific attack.
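As a simplified illustration of detection by similarity rather than exact signatures, the sketch below flags an event when its feature vector closely resembles any previously learned attack pattern. How events are turned into feature vectors, and the threshold value, are assumptions for illustration only.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def looks_like_known_attack(event_vector, known_attack_vectors, threshold=0.9):
    """Flag an event if it closely resembles any learned attack pattern.

    The learned vectors could be shared between installations, which is
    one way the "machines sharing knowledge" idea shortens the time
    until a new deployment can react to attacks it has never seen.
    """
    return any(cosine_similarity(event_vector, known) >= threshold
               for known in known_attack_vectors)
```

Because similarity is graded rather than exact, a variant of a known attack can still score above the threshold, which is the simplest form of the "analogies and similarities in patterns" behavior described above.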
On the other hand, training the machines bears the risk that they learn the wrong things. Attackers even might systematically train cognitive security systems in wrong behavior. Botnets might be used for sophisticated “training”, before the concrete attacks occur.
While we are still in the very early stages of this evolution, I see a strong potential in these technologies – not in replacing humans, but in complementing them. Systems can run advanced analysis on masses of data and help find the few needles in the haystack, the signs of severe attacks. They can help Information Security professionals make better use of their time by focusing on the most likely traces of attacks.
Traditional SIEM (Security Information and Event Management) will be replaced by such technologies – an evolution that is already under way, applying Big Data and advanced analytical capabilities to the field of Information Security. We at KuppingerCole call this Real Time Security Intelligence (RTSI). RTSI is a first step on the journey towards Cognitive Security. Given that security is amongst the most complex challenges to solve and that attacks cause massive damage, this is one of the fields where the evolution in cognitive technologies will take place. It is not as popular as playing Go or chess, but it is a huge market with massive demand. Today, we can observe the first examples of “Cognitive Security”. By 2025, such solutions will be mainstream.
Martin Kuppinger talks about firewalls and the fact that they are not really dead.
Today, Ping Identity announced the acquisition of UnboundID. The two companies have already been partnering for a while, with a number of joint customers. After the recent acquisition of Ping Identity by Vista Equity Partners, a private equity firm, this first acquisition by Ping Identity can be seen as a result of the company’s new setup. The initial announcement by Vista Equity Partners already stated that both organic and inorganic growth – the latter now happening with UnboundID – is planned.
The acquisition of UnboundID is interesting from two perspectives. One concerns the capabilities of the UnboundID Platform to manage identity data at scale and to capture, store, sync, and aggregate data from a variety of sources such as directories, CRM systems, and others. The other involves the capabilities UnboundID provides for multi-channel customer engagement, for example an analytics engine for analyzing customer behavior trends.
Combined with the proven strength of Ping Identity in the Identity Federation and Access Management market, this allows the companies to extend their offering particularly towards the currently massively growing market of CIAM (Customer Identity and Access Management). Furthermore, the technical platform that Ping Identity provides is complemented with an underlying large scale directory and synchronization service.
Due to the fact that both companies have been working closely together for a while, we expect that existing and new customers will benefit rapidly from Ping Identity’s expanded offering.
Today, the Cyber Defence Center (CDC) or Security Operations Center (SOC) is at the heart of enterprise security management. It is used to monitor and analyze security alerts coming from the various systems across the enterprise and to take actions against detected threats. However, the rapidly growing number and sophistication of modern advanced cyber-attacks make running a SOC an increasingly challenging task even for the largest enterprises with their fat budgets for IT security. The overwhelming number of alerts puts a huge strain even on the best security experts, leaving just minutes [...]