Security Intelligence Platforms (SIP) are universal and extensible security analytics solutions that offer a holistic approach to maintaining complete visibility and management of the security posture across the whole organization. Only by correlating both real-time and historical security events from logs, network traffic, endpoint devices and even cloud services, and enriching them with the latest threat intelligence data, does it become possible to identify previously unknown advanced security threats quickly and reliably, to respond to them in time and thus minimize the damage.
In a sense, they are “next-generation SIEM solutions” based on Real-Time Security Intelligence (RTSI) technologies, which provide substantial improvements over traditional SIEMs in both functionality and efficiency:
- Performing real-time or near real-time detection of security threats without relying on predefined rules and policies;
- Correlating both real-time and historical data across multiple sources enables detecting malicious operations as whole events, not separate alerts;
- Dramatically decreasing the number of alarms by filtering out statistical noise, eliminating false positives and providing clear risk scores for each detected incident;
- Offering a high level of automation for typical analysis and remediation workflows, thus significantly improving the work efficiency for security analysts;
- Integrating with external Threat Intelligence feeds in industry standards like STIX/TAXII to incorporate the most recent security research into threat analysis.
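The alarm-reduction idea above can be illustrated with a minimal sketch (the thresholds, weights and sample data here are hypothetical, not any vendor's actual algorithm): events are scored by how far they deviate from a learned baseline, statistical noise is filtered out, and only ranked, high-risk alerts reach the analyst.

```python
from statistics import mean, stdev

def risk_score(value, baseline):
    """Score an event by its deviation from a learned baseline (z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    z = abs(value - mu) / sigma
    return min(100.0, z * 25.0)   # map deviation to a bounded 0-100 risk score

def rank_alerts(events, baseline, threshold=50.0):
    """Filter out statistical noise and rank the rest by risk score."""
    scored = [(risk_score(v, baseline), name) for name, v in events]
    return sorted((s for s in scored if s[0] >= threshold), reverse=True)

baseline = [100, 110, 95, 105, 98, 102]            # e.g. logins per hour
events = [("steady traffic", 104), ("login spike", 400), ("minor blip", 108)]
print(rank_alerts(events, baseline))               # only the spike survives
```

Real products use far richer models, but the principle is the same: the analyst sees one scored alert instead of three raw events.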
Another key aspect of many SIP products is the incorporation of Incident Response Platforms. Designed for orchestrating and automating incident response processes, these solutions not only dramatically simplify a security analyst’s job of analyzing and containing a breach, but also provide predefined and highly automated workflows for managing the legal and even PR consequences of a security incident, to reduce possible litigation costs, compliance fines and damage to brand reputation. Modern SIP products either directly include incident response capabilities or integrate with third-party products, finally delivering a full end-to-end security operations and response solution.
By dramatically reducing the number of incidents that require interaction with an analyst and by automating forensic analysis and decision making, next-generation SIPs can help address the growing shortage of skilled information security professionals. As opposed to traditional SIEMs, next-generation SIPs should not require a team of trained security experts to operate, relying instead on actionable alerts understandable even to business users, thus making them accessible even to smaller companies that previously could not afford to operate their own SOC.
Now, what about the future developments in this area? First of all, it’s worth mentioning that the market continues to evolve, and we expect its further consolidation through mergers and acquisitions. New classes of security analytics solutions are emerging, targeting new markets like the cloud or the Internet of Things. On the other hand, many traditional security tools like endpoint or mobile security products are incorporating RTSI technologies to improve their efficiency. In fact, the biggest obstacle for wider adoption of these technologies is no longer the budget, but rather the lack of awareness that such products already exist.
However, the next disruptive technology that promises to change the way Security Operations Centers are operated seems to be Cognitive Security. Whereas Real-Time Security Intelligence can provide security analysts with better tools to improve their efficiency, it still relies on humans to perform the actual analysis and make informed decisions about each security incident. Applying cognitive technologies (the thing closest to the artificial intelligence as we know it from science fiction) to the field of cybersecurity promises to overcome this limitation.
Technologies for language processing and automated reasoning not only help to unlock vast amounts of unstructured “dark security data”, which until now were not available for automated analysis; they actually promise to let AI do most of the work that a human analyst must perform now: collect context information, define a research strategy, pull in external intelligence and finally make an expert decision on how to respond to the incident in the most appropriate way. Supposedly, the analyst would only have to confirm the decision with a click of a mouse.
Sounds too good to be true, but the first products incorporating cognitive security technologies are already appearing on the market. The future is now!
I have to admit that I find the very concept of a Security Operations Center extremely… cinematic. As soon as you mention it to somebody, they would probably imagine a large room reminiscent of the NASA Mission Control Center – with walls lined with large screens and dozens of security experts manning their battle stations. From time to time, a loud buzzer informs them that a new security incident has been discovered, and a heroic team starts running towards the viewer in slow motion…
Of course, in reality most SOCs are much more boring-looking, but still this cliché image from action movies captures the primary purpose of an SOC perfectly – it exists to respond to security breaches as quickly as possible in order to contain them and minimize the losses. Unfortunately, looking back at the last decade of SOC platform development, it becomes clear that many vendors have been focusing their efforts elsewhere.
Traditional Security Information and Event Management (SIEM) platforms, which have long been the core of security operations centers, have come a long way and become really good at aggregating security events from multiple sources across organizations and providing monitoring and alerting functions. But when it comes to analyzing a discovered incident, making an informed decision about it and finally mitigating the threat, the security expert’s job is still largely manual and time-consuming, since traditional SIEM solutions offer few automation capabilities and usually do not support two-way integration with security devices like firewalls.
Another major problem is the sheer number of security events a typical SOC is receiving daily. The more deperimeterized and interconnected modern corporate networks become, the more open they are for new types of cyberthreats, both external and internal, and the number of events collected by a SIEM increases exponentially. Analysts no longer have nearly enough time to analyze and respond to each alert. The situation is further complicated by the fact that an overwhelming majority of these events are false positives, duplicates or otherwise irrelevant. However, a traditional SIEM offers no way to differentiate them from real threats, drowning analysts in noise and leaving them only minutes to make an informed decision about each incident.
All this leads to the fundamental problem the IT industry is now facing: because of the immense complexity of setting up and operating a security operations center, which requires a large budget and a dedicated team of security experts, many companies simply cannot afford it, and even those who can are continuously struggling with the shortage of skilled workers to manage their SOC. In the end, even for the best-staffed security operations centers, the average response time to a security incident is measured in days if not weeks, not even close to the ultimate goal of dealing with incidents in real time.
In recent years, this has led to the emergence of a new generation of security solutions based on Real-Time Security Intelligence. Such tools utilize Big Data analytics technologies and machine learning algorithms to correlate large amounts of security data, apply threat intelligence from external sources, detect anomalies in activity patterns and provide a small number of actionable alarms clearly ranked by their risk scores. Such tools promise to dramatically reduce the time to mitigate a breach by performing data analysis in real time, eliminating statistical noise and false positives and, last but not least, providing a high degree of automation to make the security analyst’s job easier.
Although KuppingerCole has been promoting this concept for quite a few years already, the first real products appeared only a couple of years ago, and since then the market has evolved and matured at an incredible rate. Back in 2015, when KuppingerCole attempted to produce a Leadership Compass on RTSI solutions, we failed to find enough vendors for a meaningful rating. In 2017, however, we could easily identify over 25 Security Intelligence Platform solutions offered by a variety of vendors, from large veteran players known for their SIEM products to newly established innovative startups.
To be continued...
Vault 7, WikiLeaks’ recently published plethora of documents and files from internal CIA resources, has created quite some excitement and noise, and it has even been compared with Edward Snowden’s NSA revelations.
My opinion: this is complete nonsense. Whereas Edward Snowden disclosed information on the methods and extent of the NSA’s mass surveillance activities, which nobody outside the walls of the NSA would have thought possible, these latest collections of CIA-authored configuration files and documents describing exploits and methods for penetrating end-user devices are not much more than a joke. The Vault 7 documents show that the CIA is doing exactly what we think it is doing and what secret services are supposed to do. Yes, they may be a bit more “cyber” than we thought they would be at this time, but that is it. No zero-day exploits, not a single piece of real news. And not at all a reason to rethink cybersecurity.
Looking at WikiLeaks’ press release about Vault 7, one of the headlines says: “CIA malware targets Windows, OSX, Linux, routers”. What shocking news for all of us! We should immediately throw all our gadgets away, switch off (better: unplug) our TVs and fridges and call Assange to guide us through the evil reality of cyber, grateful to him as a hero of the 21st century who is so much more important than Guardian-style real journalism. My recommendation: don’t feel alienated by such hogwash. Ignore it.
OK, maybe one thing does come to my mind while clicking through the contents: some of the Vault 7 files show that the CIA’s cyber activities are very well staffed and that they collaborate with the British MI5. But our German BND isn’t mentioned anywhere. This worries me a little bit, as it could well be that our guys are being left behind...
The European Banking Authority released the final draft of the Regulatory Technical Specifications for PSD2 this week. It contains several improvements and clarifications, but there are still a few areas that fall short of industry expectations.
After the release of the initial drafts, the EBA received a multitude of comments and discussion from many organizations and software vendors. One of the top concerns was the mandate for Strong Customer Authentication (SCA), which is traditionally defined in terms of something you have, something you know, or something you are. Originally it was conceived to apply to any transaction over €10. The limit has been raised to €30, which is better, but still less than the recommended €50.
The revision also takes into account the innovations and benefits of risk-adaptive authentication. Risk-adaptive authentication encompasses several functions, including user behavioral analytics (UBA), two- or multi-factor authentication (2FA or MFA), and policy evaluation. Risk-adaptive authentication platforms evaluate a configurable set of real-time risk factors against pre-defined policies to determine a variety of outcomes. The policy evaluation can yield permit, deny, or “step-up authentication” required.
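As a sketch of that policy evaluation (the factor names, weights and thresholds here are invented for illustration, not taken from any particular product), a risk-adaptive engine combines real-time signals into one of the three outcomes:

```python
def evaluate_policy(ctx):
    """Evaluate real-time risk factors against pre-defined policies and
    return one of three outcomes: permit, deny, or step-up authentication."""
    if ctx.get("known_bad_ip"):
        return "deny"                 # hard policy: always deny
    risk = 0
    if ctx.get("new_device"):
        risk += 30
    if ctx.get("geo_mismatch"):       # location differs from the usual pattern
        risk += 40
    if ctx.get("amount", 0) > 500:    # high-value transaction
        risk += 20
    return "step-up" if risk >= 50 else "permit"

print(evaluate_policy({"new_device": True, "amount": 600}))  # step-up
```

A real deployment would feed dozens of such signals, often weighted by machine-learned models, into the same three-way decision.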
PSD2 RTS stipulates that banks (Account Servicing Payment Service Providers, or ASPSPs) must consider the following transactional fraud risk detection elements on a per-transaction basis:
- lists of compromised or stolen authentication elements;
- the amount of each payment transaction;
- known fraud scenarios in the provision of payment services;
- signs of malware infection in any sessions of the authentication procedure.
The first three items are commonly examined in many banking transactions today. The prescription to look for signs of malware infection is somewhat vague and technically difficult to achieve. Is the bank responsible for knowing the endpoint security posture of all of its clients? If so, is it also responsible for helping remediate malware on clients?
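Those four per-transaction checks could be wired together along these lines (a sketch with invented field names and an arbitrary amount threshold; the RTS itself does not prescribe an implementation):

```python
def fraud_signals(txn, compromised_elements, fraud_patterns):
    """Collect the RTS transaction-risk signals raised by one transaction."""
    signals = []
    if txn["auth_element"] in compromised_elements:      # stolen/compromised element
        signals.append("compromised-element")
    if txn["amount"] > 10_000:                           # unusually large amount
        signals.append("high-amount")
    if any(pattern(txn) for pattern in fraud_patterns):  # known fraud scenarios
        signals.append("known-fraud-scenario")
    if txn.get("session_malware_flag"):                  # malware signs in session
        signals.append("malware-in-session")
    return signals

# One known fraud scenario: many rapid transfers to a brand-new payee.
rapid_new_payee = lambda t: (t.get("payee_age_days", 999) < 1
                             and t.get("transfers_last_hour", 0) > 5)

txn = {"auth_element": "token-42", "amount": 12_500,
       "transfers_last_hour": 8, "payee_age_days": 0}
print(fraud_signals(txn, {"token-42"}, [rapid_new_payee]))
```

The hard part, as noted above, is populating the malware flag: the bank rarely has reliable visibility into the client endpoint.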
Furthermore, in promoting “continuous authentication” via risk-adaptive authentication, the EBA states that the following must be monitored:
- the previous spending patterns of the individual payment service user;
- the payment transaction history of each of the payment service provider’s payment service users;
- the location of the payer and of the payee at the time of the payment transaction, provided that the access device or the software is provided by the payment service provider;
- the abnormal behavioural payment patterns of the payment service user in relation to the payment transaction history;
- in case the access device or the software is provided by the payment service provider, a log of the use of the access device or the software provided to the payment service user and the abnormal use of the access device or the software.
The requirements described above, from the PSD2 RTS document, are very much a “light” version of risk-adaptive authentication and UBA. These attributes are useful in predicting the authenticity of the current user of the services. However, there are additional attributes that many risk-adaptive authentication vendors commonly evaluate that would add value to the notion and practice of fraud risk reduction. For example:
- IP address
- Time of day/week
- Device ID
- Device fingerprint
- Known compromised IP/network check
- User attributes
- User on new device check
- Jailbroken mobile device check
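Several of these attributes boil down to device recognition. A common approach, sketched here with a hypothetical attribute set, is to hash the device’s observable characteristics into a fingerprint and compare it against the devices already seen for that user:

```python
import hashlib

def device_fingerprint(attrs):
    """Derive a stable fingerprint from client attributes such as
    user agent, screen size, and time zone (a hypothetical set)."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def is_new_device(attrs, known_fingerprints):
    """'User on new device' check: an unseen fingerprint raises the risk."""
    return device_fingerprint(attrs) not in known_fingerprints

attrs = {"user_agent": "Mozilla/5.0", "screen": "1920x1080", "tz": "UTC+1"}
known = {device_fingerprint(attrs)}
print(is_new_device(attrs, known))                       # False: seen before
print(is_new_device({**attrs, "tz": "UTC-5"}, known))    # True: new fingerprint
```

Commercial products use far more attributes and fuzzy matching so that a browser update does not look like a brand-new device, but the principle is the same.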
Now that limited risk analytics are included in the PSD2 paradigm, the requirement for SCA is reduced to at least once per 90 days. This, too, is in line with the way most modern risk-adaptive authentication systems work.
The PSD2 RTS leaves in place “screen-scraping” for an additional 18 months, a known bad practice that current Third Party Providers (TPPs) use to extract usernames and passwords from HTML forms. This practice is not only subject to Man-in-the-Middle (MITM) attacks, but also perpetuates the use of low assurance username/password authentication. Given that cyber criminals now know that they only have a limited amount of time to exploit this weak mechanism, look for an increase in attacks on TPPs and banks using screen-scraping methods.
In summary, the final draft of PSD2 RTS does make some security improvements, but omits recommending practices that would more significantly and positively affect security in the payments industry, while leaving in place the screen-scraping vulnerability for a while longer.
Big data analytics is getting more and more powerful and more affordable at the same time. Probably the most important data within any organisation is knowledge of and insight into its customers’ profiles, and many specialized vendors target these organisations. It is obvious why: the identification of customers across devices and accounts, deep insight into their behaviour and the creation of rich customer profiles come with many promises. The adjustment, improvement and refinement of existing product and service offerings, alongside the design of new products as customer demand changes, are surely some of those promises.
Dealing with sensitive data is a challenge for any organisation. Dealing with personally identifiable information (PII) of employees or customers is even more challenging.
Recently I have been in touch with several representatives of organisations and industry associations who presented their view on how they plan to handle PII in the future. The potential of leveraging customer identity information today is clearly understood. A hot topic is of course the GDPR, the General Data Protection Regulation issued by the European Union. While many organisations aim at being compliant from day one (May 25, 2018) onward, it is quite striking that there are still organisations around that don’t consider this important. Some consider their pre-GDPR data protection, with a few amendments, as sufficient and subsequently don’t have a strategy for implementing adequate measures to achieve GDPR-compliant processes.
To repeat just a few key requirements: data subject (i.e. customer, employee) rights include timely and complete information about personal data being stored and processed, including a justification of the lawful basis for doing so. Processes for consent management and reliable mechanisms for implementing the right to be forgotten (deletion of PII in case it is no longer required) need to be integrated into new and existing systems.
It is true: in Europe, and especially in Germany, data protection legislation and regulations have always been challenging. But with the upcoming GDPR, things are changing dramatically. And they are also changing for organisations outside the EU in case they process data of European citizens.
National legislation will fill in details for some aspects deliberately left open within the GDPR. Right now this seems to weaken or “verschlimmbessern” (improve for the worse, as we say in German) several practical aspects of it across the EU member states. Quite some political lobbying is currently going on, and criticism grows, e.g. over the German plans. Nevertheless, at its core, the GDPR is a regulation that will apply directly to all EU member states (and quite logically also beyond). It will apply to personal data of EU citizens and to data being processed by organisations within the EU.
Some organisations fear that compliance with the GDPR is a major drawback in comparison to organisations, e.g. in the US, which deal with PII under presumably lesser restrictions. But this is not necessarily true, and it is changing as well, as this example shows: the collection of viewing data, through software installed on 11 million “smart” consumer TVs without their owners’ consent or even their knowledge, led to a payment of $2.2 million by the manufacturer of these devices to the (American!) Federal Trade Commission.
Personal data (and the term is defined very broadly in the GDPR) is processed in many places, e.g. in IoT devices or in the smart home, in mobile phones, in cloud services or connected desktop applications. Getting to privacy by design and security by design as core principles should be considered as a prerequisite for building future-proof systems managing PII. User consent for the purposes of personal data usage while managing and documenting proof of consent are major elements for such systems.
GDPR and data protection do not mean the end of Customer Identity Management. Quite the contrary: the GDPR needs to be understood as an opportunity to build trusted relationships with consumers. The benefits and promises described above can still be achieved, but they come at quite a price and with substantial effort, as all this must be well executed (i.e. compliant). But this is the real business opportunity as well.
Being a leader, a forerunner and the number one in identifying business opportunities, in implementing new business models and in occupying new market segments is surely something worth striving for. But being the first to fail visibly and obviously in implementing adequate measures, e.g. for maintaining the newly defined data subject rights, should be considered something that needs to be avoided.
KuppingerCole will cover this topic extensively in the coming months with webinars and seminars. And, one year before it comes into effect, the GDPR will be a major focus at the upcoming EIC 2017 in May in Munich as well.
Consumer identity and access management solutions are bringing value to the organizations which implement them, in terms of higher numbers of successful registrations, customer profiling, authentication variety, identity analytics, and marketing insights. Many companies with deployed CIAM solutions are increasing revenue and brand loyalty. Consumers themselves have better experiences interacting with companies that have mature CIAM technologies. CIAM is a rapidly growing market segment.
CIAM systems typically collect (or at least attempt to collect) the following attributes about consumers: Name, email address, association with one or more social network accounts, age, gender, and location. Depending on the service provider, CIAM products may also pick up data such as search queries, items purchased, items browsed, and likes and preferences from social networks. Wearable technology vendors may collect locations, physical activities, and health-related statistics, and this data may be linked to consumers’ online identities in multiple CIAM implementations. To reduce fraud and unobtrusively increase the users’ authentication assurance levels, some companies may also acquire users’ IP addresses, device information, and location history.
Without the EU user’s explicit consent, all of this data collection will violate the EU’s General Data Protection Regulation (GDPR) from May 2018 on. Penalties for violation can be up to €20M or 4% of global annual revenue, whichever is higher.
Consider a few definitions from the GDPR:
(1) ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
(2) ‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction;
(4) ‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;
(10) ‘third party’ means a natural or legal person, public authority, agency or body other than the data subject, controller, processor and persons who, under the direct authority of the controller or processor, are authorised to process personal data;
(11) ‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;
This means that companies that are currently deriving benefit from CIAM must:
- Perform a privacy data assessment
- Create new privacy policies as needed
- Plan to clean and minimize user data already resident in systems
- Implement the consent gathering mechanisms within their CIAM solutions
If your deployed CIAM solution is not yet fully GDPR compliant, talk with your vendor about their product roadmaps. Find out when they will release a GDPR compliant version, and determine how to work that into your own release schedule.
If your organization is considering deploying CIAM in the near future, make sure that GDPR compliant consent mechanisms and storage schemes are on your RFP requirements list.
This article is not intended to provide detailed technical or legal advice. For more information, see the full text of GDPR at the link above, and visit www.kuppingercole.com. Over the next few months, we will examine other aspects of GDPR and what it entails for business, IAM, and IT infrastructure.
Over the last few weeks I’ve read a lot about the role AI or Artificial Intelligence (or should I better write “Artificial” Intelligence?) will play in Cyber Security. There is no doubt that advanced analytical technologies (frequently subsumed under the AI term), such as pattern matching, machine learning, and many others, are already affecting Cyber Security. However, the emphasis here is on “already”. It would be wrong to say “nothing new under the sun”, given that there is a lot of progress in this space. But it is just as wrong to ignore the evolution of the past couple of years.
At KuppingerCole, we started looking at what we call Real-Time Security Intelligence (RTSI) a couple of years back. We published our first report on this topic in May 2014, covered it in our predictions for 2014 and in a session at EIC 2014, and published a series of blog posts on it during that year.
There is no doubt that advanced analytical technologies will help organizations in their fight against cyber-attacks, because they help in detecting potential attacks at an earlier stage, as well as enabling the identification of complex attack patterns that span various systems. AI also might help, such as in IBM Watson for Cyber Security, to provide a better understanding of cyber risks by collecting and analyzing both structured and unstructured information. Cognitive Security solutions such as IBM Watson for Cyber Security are part of the AI evolution in the field of cyber-security. But again: The journey started a couple of years ago, and we are just in the very early stages.
So why this hype now? Maybe it is because of achieving a critical mass of solutions. More and more companies have entered the field in recent years. Maybe it is because of some big players actively entering that market. At the beginning, most of the players were startups (and many of these rooted in Israel). Now, large companies such as IBM have started pushing the topic, gaining far more awareness in public. Maybe it is because of AI in Cyber Security being the last hope for a solution that helps the good guys win in their fight against cyber criminals and nation-state attackers (hard to say where the one ends and the other starts).
Anyway: We will see not only more solutions in the market and advancements in that field of technology in 2017 and beyond, but we will see a strong increase in awareness for “AI in Cyber Security” as well as the field of Real Time Security Intelligence. This is, regardless of all skepticism regarding the use of terms and regarding hypes, a positive evolution.
On December 29th, the FBI together with CERT finally released a Joint Analysis Report on the cyber-attacks on the US Democratic Party during the US presidential election. Every organization, whether based in the US or not, would do well to read this report and to ensure that it takes account of the report’s recommendations. Once released into the wild, the tactics, techniques, and procedures (TTPs) used by state actors are quickly taken up and become widely used by other adversaries.
This report is not a formal indictment of a crime as was the case with the alleged hacking of US companies by the Chinese filed in 2014. It is however important cyber threat intelligence.
Threat intelligence is a vital part of cyber-defence and cyber-incident response, providing information about the threats, TTPs, and devices that cyber-adversaries employ; the systems and information that they target; and other threat-related information that provides greater situational awareness. This intelligence needs to be timely, relevant, accurate, specific and actionable. This report provides such intelligence.
The approaches described in the report are not new. They involve several phases and some have been observed using targeted spear-phishing campaigns leveraging web links to a malicious website that installs code. Once executed, the code delivers Remote Access Tools (RATs) and evades detection using a range of techniques. The malware connects back to the attackers who then use the RAT tools to escalate privileges, search active directory accounts, and exfiltrate email through encrypted connections.
Another attack process uses internet domains with names that closely resemble those of targeted organizations to trick potential victims into entering legitimate credentials. A fake webmail site that collects user credentials when they log in is a favourite. This time, a spear-phishing email tricked recipients into changing their passwords through a fake webmail domain. Using the harvested credentials, the attacker was able to gain access and steal content.
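Defenders can flag such look-alike domains automatically. One simple sketch is to compute the edit distance between observed domains and the organization’s legitimate ones (the distance threshold of 2 is an arbitrary choice for illustration):

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_lookalike(domain, legit_domains, max_distance=2):
    """Flag domains that are close to, but not exactly, a protected domain."""
    return any(0 < edit_distance(domain, d) <= max_distance
               for d in legit_domains)

print(is_lookalike("exarnple-webmail.com", ["example-webmail.com"]))  # True
```

Production tooling also checks homoglyphs (e.g. Cyrillic look-alike letters) and newly registered certificates, but even this basic check catches common typosquats.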
Sharing Threat Intelligence is a vital part of cyber defence and OASIS recently made available three foundational specifications for the sharing of threat intelligence. These are described in Executive View: Emerging Threat Intelligence Standards - 72528. Indicators of Compromise (IOCs) associated with the cyber-actors are provided using these standards (STIX) as files accompanying the report.
There are several well-known areas of vulnerability that are consistently used by cyber-attackers. These are easy to fix but are, unfortunately, still commonly found in many organizations’ IT systems. Organizations should take immediate steps to detect and remove these from their IT systems:
- SQL Injection - input from a user field is not checked for escape characters before inclusion in an SQL statement.
- Cross Site Scripting - Software fails to neutralize user input before it is placed in output that is used as a web page.
- Excessive or unnecessary administrative privileges – that enable the adversaries to extend their control across multiple systems and applications.
- Unpatched server vulnerabilities - may allow an adversary access to critical information, including any websites or databases hosted on the server.
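The first two fixes are straightforward to demonstrate. In this sketch (standard library only), a parameterized query keeps attacker input out of the SQL grammar, and escaping neutralizes input before it lands in a web page:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

malicious = "alice' OR '1'='1"

# Vulnerable pattern (do NOT do this): string concatenation lets the
# unchecked quote characters rewrite the query itself.
# query = "SELECT email FROM users WHERE name = '" + malicious + "'"

# Safe: the placeholder treats the input strictly as data, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?",
                    (malicious,)).fetchall()
print(rows)   # [] -- the injection attempt matches no user

# Neutralizing user input before it is placed in a web page prevents XSS.
print(html.escape("<script>alert(1)</script>"))
```

The same pattern applies in every language and database driver: bind parameters for SQL, context-aware escaping for HTML output.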
The majority of these attacks exploit human weaknesses in the first stage. While technical measures can and should be improved, it is also imperative to provide employees, associates and partners training on how to recognize and respond to these threats.
The report describes a set of recommended mitigations and best practices. Organizations should consider these recommendations and take steps to implement them without delay. KuppingerCole provides extensive research on securing IT systems and on privilege management in particular.
The upcoming updated Payment Services Directive (PSD II) will, among other changes, require Multi-Factor Authentication (MFA) for all payments above €10 which are done electronically. This is only one major change PSD II brings (another major one being the mandatory open APIs), but one that is heavily discussed and criticized, e.g. by software vendors, by credit card companies such as VISA, and by others.
It is interesting to look at the published material. The major point is that it only talks about MFA, without going into specifics. The regulators also point out clearly that an authentication based on one factor in combination with Risk-Based Authentication (RBA) is not sufficient. RBA analyzes the transactions, identifies risk based on, e.g., the amount, the geolocation of the IP address, and other factors, and requests a second means or factor if the risk rating is above a threshold.
That leads to several questions. One question is what level of MFA is required. Another is what this means for Adaptive Authentication (AA) and RBA in general. The third question is whether and how this will affect credit card payments or services such as PayPal, that commonly still rely on one factor for authentication.
First, let me clarify some terms. MFA stands for Multi-Factor Authentication, i.e. all approaches involving more than one factor. The most common variant is Two-Factor Authentication (2FA), i.e. the use of two factors. There are three factors: Knowledge, Possession, Biometrics – or “what you know”, “what you have”, “what you are”. For each factor, there might be various “means”, e.g. a username and password for knowledge, a hard token or a phone for possession, a fingerprint or iris scan for biometrics.
RBA denotes authentication that, as described above, analyzes the risk involved in authentication and subsequent interactions and transactions and might request additional authentication steps depending on the risk rating.
Adaptive Authentication, on the other hand, is a combination of what sometimes is called “versatile” authentication with RBA. It combines the ability to use various means (and factors) for authentication in a flexible way. In that sense, it is adaptive to the authenticator that someone has. The other aspect of adaptiveness is RBA, i.e. adapting the required level of authentication to the risk. AA can be MFA, but it also – with low risk – can be One Factor Authentication (1FA).
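The interplay of versatility and RBA can be made concrete with a short sketch. The classification of means into factors and the risk tiers are assumptions for illustration only:

```python
# Illustrative Adaptive Authentication decision: versatility + RBA.
# The means-to-factor mapping and risk tiers are assumptions for this sketch.

FACTOR_OF_MEANS = {               # means -> factor ("know" / "have" / "are")
    "password":    "knowledge",
    "otp_app":     "possession",
    "fingerprint": "biometrics",
}

def select_means(enrolled: list[str], risk: str) -> list[str]:
    """Pick authentication means from what the user has enrolled.

    Low risk may allow a single factor (1FA); elevated risk requires
    means covering at least two distinct factors (2FA).
    """
    if risk == "low":
        return enrolled[:1]                     # 1FA is acceptable
    chosen, factors = [], set()
    for means in enrolled:
        factor = FACTOR_OF_MEANS[means]
        if factor not in factors:
            chosen.append(means)
            factors.add(factor)
        if len(factors) >= 2:
            return chosen                       # 2FA reached
    raise ValueError("user cannot satisfy 2FA with enrolled means")
```

The point of the sketch is the adaptiveness: the same user is challenged differently depending on the risk rating, and the concrete means are chosen from whatever authenticators that user actually has.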
Based on these definitions, it becomes clear that the statement “PSD II does not allow AA” is wrong. It also is wrong that “PSD II permits RBA”. The point simply is: Using AA (i.e. flexible authenticators plus RBA) or RBA without versatility is only in compliance with the PSD II requirements if at least two factors for authentication (2FA) are used.
And to put it more clearly: AA, i.e. versatility plus RBA, absolutely makes sense in the context of PSD II – to fulfill the regulatory requirements of MFA in a way that adapts to the customer and to mitigate risks beyond the baseline MFA requirement of PSD II.
MFA by itself is not necessarily secure. You can use a four-digit PIN together with the device ID of a smartphone and end up with 2FA – there is knowledge (PIN) and possession (a device assigned to you). Obviously, this is not very secure, but it is MFA. Thus, there should be (and most likely will be) additional requirements that lead to a certain minimum level of MFA for PSD II.
For providers, pursuing a consistent AA strategy makes sense. Flexible use of authenticators to support what customers prefer and already have helps increase convenience and reduce the cost of deploying authenticators and the subsequent logistics – and it will help in keeping retention rates high. RBA as part of AA also helps to further mitigate risks, beyond 2FA, whatever form the authentication takes.
The art in the context of PSD II will be to balance customer convenience, authentication cost, and risk. There is a lot of room for doing so, particularly with the uptake in biometrics and standards such as the FIDO Alliance standards which will help payment providers in finding that balance. Anyway, payment providers must rethink their authentication strategies now, to meet the changing requirements imposed by PSD II.
While this might be simple and straightforward for some, others will struggle. Credit card companies are more challenged, particularly in countries such as Germany where the PIN of credit cards is rarely used. However, the combination of a PIN with a credit card works for payments – if the possession of the credit card is proven, e.g. at a POS (Point of Sale) terminal. For online transactions, things become more complicated due to the lack of proof of possession of the card. Even common approaches such as entering the credit card number, the security number from the back of the card (CVV, Card Verification Value), and the PIN will not help, because all of these are means of knowledge – I know my credit card number, my CVV, and my PIN, and even the bank account number that sometimes is used in RBA by credit card processors. Moving to MFA here is a challenge that isn’t easy to solve.
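The argument above – that card number, CVV, and PIN together are still single-factor – boils down to counting distinct factors rather than distinct means. A minimal sketch, with an assumed classification of means:

```python
# Sketch: count distinct authentication factors among submitted means.
# The classification of means below is an assumption for illustration.

FACTORS = {
    "card_number":         "knowledge",
    "cvv":                 "knowledge",
    "pin":                 "knowledge",
    "card_present_at_pos": "possession",
    "fingerprint":         "biometrics",
}

def is_mfa(means: list[str]) -> bool:
    """True only if the submitted means span at least two distinct factors."""
    return len({FACTORS[m] for m in means}) >= 2

# Card number + CVV + PIN are all knowledge -> still one factor:
# is_mfa(["card_number", "cvv", "pin"]) -> False
# Proven possession of the card at a POS terminal plus PIN -> MFA:
# is_mfa(["card_present_at_pos", "pin"]) -> True
```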
The time is fast approaching for all payment providers to define an authentication strategy that complies with the PSD II requirements of MFA, as fuzzy as these still are. Better definitions will help, but it is obvious that there will be changes. One element that is a must is moving towards Adaptive Authentication, to support various means and factors in a way that is secure, compliant, and convenient for the customer.
GDPR, the EU General Data Protection Regulation, is increasingly becoming a hot topic. That does not come as a surprise, given that the EU GDPR has a very broad scope, affecting every data controller (the one who “controls” the PII) and data processor (the one who “processes” the PII) dealing with data subjects (the persons) residing in the EU – even when the data processors and data controllers are outside of the EU.
Notably, the definition of PII is very broad in the EU. It is not only about data that is directly mapped to a name or other identifiers. If a piece of data can be used to identify an individual, it is PII.
There are obvious effects on social networks, on websites where users are registered, and on many other areas of business. The EU GDPR also will massively affect the emerging field of CIAM (Consumer/Customer Identity and Access Management), where full support for EU GDPR-related features, such as flexible consent handling, becomes mandatory.
However, will the EU GDPR also affect the traditional, on-premise IAM systems with their focus on employees and contractors? Honestly, I don’t see that impact. I see it, as mentioned beforehand, for CIAM. I clearly see it in the field of Enterprise Information Protection, in protecting PII-related information from leaking and managing access to such information. That also affects IAM, which might need to become more granular in managing access – but there are no new requirements arising from the EU GDPR.

The need for granular management of access to PII might finally lead to a renaissance (or naissance?) of Dynamic Authorization Management (think ABAC). It is far easier to handle complex rules for accessing such data based on flexible, granular, attribute-based policies. We will need better auditing procedures. However, with today’s Access Governance and Data Governance, a lot can be done – and what can’t be done well needs other technologies such as Access Governance in combination with Dynamic Authorization Management, or Data Governance that works well for Big Data. Likewise, Privilege Management for better protecting systems that hold PII is mandatory as well.
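To illustrate why attribute-based policies make granular PII access easier to express, here is a minimal ABAC sketch. The attribute names and the policy itself are invented for this example, not taken from any product or regulation:

```python
# Minimal attribute-based access control (ABAC) sketch for PII access.
# Attribute names and the policy rules are illustrative assumptions.

def pii_access_allowed(subject: dict, resource: dict, action: str) -> bool:
    """Evaluate one attribute-based policy:

    - anyone may read their own record (self-service);
    - HR staff may read PII of employees in their own legal entity;
    - everything else is denied.
    """
    if action != "read":
        return False
    if subject["user_id"] == resource["owner_id"]:
        return True                      # self-service access to own data
    return (subject["department"] == "HR"
            and subject["legal_entity"] == resource["legal_entity"])

hr_clerk = {"user_id": "u1", "department": "HR", "legal_entity": "DE"}
record   = {"owner_id": "u2", "legal_entity": "DE"}
```

The policy lives in one place and is evaluated per request against attributes of the subject and the resource – exactly the property that makes this model attractive for granular PII access, compared to maintaining static entitlements per system.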
But for managing access to PII of employees and contractors, common IAM tools provide sufficient capabilities. Consent is handled as part of work contracts and other generic rules. Self-service interfaces for managing the data stored about an employee are a common feature.
The EU GDPR is important. It will change a lot. But for the core areas of today’s IAM, i.e. Identity Provisioning and Access Governance, there is little change.
Today, the Security Operations Center (SOC) is at the heart of enterprise security management. It is used to monitor and analyze security alerts coming from the various systems across the enterprise and to take actions against detected threats. However, the rapidly growing number and sophistication of modern advanced cyber-attacks make running a SOC an increasingly challenging task even for the largest enterprises with their fat budgets for IT security. The overwhelming number of alerts puts a huge strain even on the best security experts, leaving just minutes for them to decide whether an [...]