Recently, I came across a rather new and interesting standardization initiative, driven by the NSA (U.S. National Security Agency) and several industry organizations, including both cyber defense software vendors and system integrators. OpenC2 describes itself as “a forum to promote global development and adoption of command and control” and has the following vision:
The OpenC2 Forum defines a language at a level of abstraction that will enable unambiguous command and control of cyber defense technologies. OpenC2 is broad enough to provide flexibility in the implementations of devices and accommodate future products and will have the precision necessary to achieve the desired effect.
The reasoning behind it is that effective prevention, detection, and immediate response to cyber-attacks require not just isolated systems, but a network of systems of various types. These functional blocks must be integrated and coordinated to act upon attacks in a synchronized manner and in real time. Communication between these systems requires standards – and that is what OpenC2 is working on.
This topic aligns well with Real-Time Security Intelligence, an evolving area of software solutions and managed services that KuppingerCole has been analyzing for a few years already. The main software and service offerings in that area are Security Intelligence Platforms (SIP) and Threat Intelligence Services. SIPs provide advanced analytical capabilities for identifying anomalies and attacks, while Threat Intelligence Services deliver information about newly detected incidents and attack vectors.
In moving from prevention (e.g. traditional firewalls) to detection (e.g. SIPs) to response, OpenC2 can play an important role, because it allows taking standardized actions expressed in a standardized language. This allows, for example, a SIP to coordinate with firewalls to change firewall rules, with SDNs (Software Defined Networks) to isolate systems targeted by attacks, or with other analytical systems for a deeper level of analysis.
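To make this concrete, here is a minimal sketch of what such a standardized command might look like, expressed as a Python dictionary. The action/target/actuator structure follows the pattern of the draft OpenC2 language, but the exact field names and values are assumptions for illustration, not the final specification:

```python
import json

# Illustrative sketch of an OpenC2-style "deny" command that a SIP might
# send to a firewall actuator. Field names are hypothetical, modeled on
# the draft language's action/target/actuator pattern.
command = {
    "action": "deny",                      # standardized verb
    "target": {
        "ip_connection": {                 # what to act upon
            "src_addr": "198.51.100.23",   # hypothetical attacker address
            "protocol": "tcp",
            "dst_port": 443,
        }
    },
    "actuator": {"network_firewall": {}},  # which class of device should act
}

print(json.dumps(command, indent=2))
```

The point is that the SIP does not need to know each firewall vendor's proprietary API; any actuator that speaks the standardized language can interpret the same command.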
OpenC2 thus is a highly interesting initiative that can become an important building block in strengthening cyber defense. I strongly recommend looking at the initiative and, if your organization is among the players that might benefit from such a language, actively engaging in it.
The GDPR continues to be a hot topic for many organizations, especially those that store and process customer data. A core requirement for compliance with the GDPR is the concept of “consent,” which is fairly new for most data controllers. Under the GDPR, parties processing personally identifiable information must ask the user for his/her consent to do so and let the user revoke that consent at any time, as easily as it was given.
During the KuppingerCole webinar held on April 4th, 2017 and supported by iWelcome, several questions from attendees were left unanswered due to the huge number of questions and a lack of time to answer them all.
Several questions centered on the term “Purpose,” which is key for data processing, but a number of other interesting questions came up as well, which we think are important to follow up on here. Corne van Rooij answers some of the questions that couldn’t be answered live during the webinar.
Q: Is “purpose” related to your specific business, or to more generic things like marketing, user experience management, research, etc.?
Corne van Rooij: Purpose refers to “the purpose of the processing” and should be specific, explicit and legitimate. “Marketing” (or any other generic term) is not specific enough; it should state what kind of marketing actions, such as profiling or specifically tailored offerings.
Q: Is it true that data collection purely for the fulfillment of contractual obligations and selling a product doesn't require consent?
Corne van Rooij: Yes, that is true. However, keep in mind that data minimisation requires that you collect only the data you actually need for the fulfillment of the contract. The collection of extra data, or ‘future use’ of data that is not mandatory to fulfill the contract, does not fall under this and needs additional consent or another legal basis (Article 6), such as “compliance with a legal obligation.”
Q: It appears consent is changing from a static to a dynamic concept. How can a company manage numerous consent request programs and ensure the right consent is requested at the right time and in the right context?
Corne van Rooij: A very good question and remark. Consent needs its own life cycle management, as it will change over time unless your business is very static itself. The application (e.g. the eBusiness portal) should check whether the proper consent is in place, trigger for consent if not, or trigger for an update (of consent or scope) if needed. If the consent status ‘travels’ with the user when he accesses the application/service, let’s say in an assertion, then the application/service can easily check and trigger (or ask itself) for consent or a scope change, and register the consent back in the central place that sent the assertion in the first place – a closed loop. Otherwise, the application needs to check the consent (via an API call) before it can act, ask for consent if needed, and write it back (via an API call).
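The check–ask–write-back loop described in this answer can be sketched in a few lines. `ConsentStore` and its methods are hypothetical stand-ins for the central consent service's API (in a real deployment these would be REST calls to the consent registry):

```python
# Minimal sketch of the closed consent loop: check consent before acting,
# trigger the consent dialog if it is missing, write the answer back.
# All names here are illustrative, not a real product's API.

class ConsentStore:
    """In-memory stand-in for a central consent registry."""
    def __init__(self):
        self._consents = {}  # (user, purpose) -> granted?

    def has_consent(self, user, purpose):
        return self._consents.get((user, purpose), False)

    def record_consent(self, user, purpose, granted):
        self._consents[(user, purpose)] = granted


def access_service(store, user, purpose, ask_user):
    """Check consent before processing; ask and write back if missing."""
    if not store.has_consent(user, purpose):
        granted = ask_user(user, purpose)      # e.g. a UI consent dialog
        store.record_consent(user, purpose, granted)  # close the loop
        if not granted:
            return "denied"
    return "processing allowed"


store = ConsentStore()
print(access_service(store, "alice", "profiling", lambda u, p: True))
```

Because the result is always written back to the central store, the next application the user visits sees the current consent status and does not need to ask again.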
Q: How does the new ePR, published on 10 January 2017, impact consent?
Corne van Rooij: The document published on 10th January 2017 is a proposal for a new E-Privacy Regulation. If it comes into force, it will not impact the implications of the GDPR on ‘consent’; it covers other, complementary topics that can require consent. The proposal updates E-Privacy rules in line with market developments and the GDPR, and covers topics like cookies and unsolicited communication. It is the update of an already existing EU Directive that dates back to 2009.
Q: If I understand it correctly, I can't collect any data for a first-time visitor to an eCommerce website, and I will first have to give him the possibility to identify himself in order to get into the consent flow?
Corne van Rooij: No, this is actually not true. You can collect data, e.g. based on certain types of cookies for which permission is not required (following ePR rules), and that data could be outside the GDPR if you can’t trace it back to an actual individual. If you let him/her register for e.g. a newsletter and you ask for personal information, then it falls under the GDPR. However, you might be able to avoid asking for consent if you can use another legal basis stated in Article 6 for lawful processing. Let’s say a person wants to subscribe to an online magazine; then you only need the email address, and as such, that is enough to fulfill “the contract.” If you ask for more, e.g. name, telephone number, etc., which you don’t actually need, then you need to use consent and have to specify a legitimate purpose.
Q: For existing user (customer) accounts, is there a requirement in GDPR to cover proof of previously given consent?
Corne van Rooij: You will have to justify that the processing of the personal data you keep is based on one of the grounds of Article 6, “Lawfulness of processing.” If your legal basis is consent, you will need proof of this consent, and if consent was given in several steps, proof of all these consents has to be in place.
Q: Please give more detailed information on how to handle all already acquired data from customers and users.
Corne van Rooij: In short: companies need to check what personal data they process and have in their possession. They must then delete or destroy the data when the legal basis for processing is no longer there, or when the purpose for which the data was obtained or created has been fulfilled.
If the legal basis or the purpose has changed, the data subject needs to be informed, and new consent might be necessary. Also, when proof of earlier given consent is not available, the data subject has to be asked for consent again.
Q: So there is no need to erase already acquired user/customer/consumer data, as long as it is not actively used – e.g. for already provisioned customer data, especially where the use of personal data had already been agreed to by accepting agreements before? Is there a need to renew the request for data use when the GDPR goes live?
Corne van Rooij: There is a difference between “not actively used” and “no legal basis or allowed purpose for using it.” If it’s the latter, you need to remove the data or take action to meet the GDPR requirements. Processing that is necessary for the performance of a contract could comply with Article 6; the GDPR is, for most of these things, not new. There was already a lot of national legislation in place based on the EU Directive, which also covers the topic of the lawfulness of processing.
Q: How long are you required to keep the consent information, after the customer has withdrawn all consents and probably isn't even your customer anymore?
Corne van Rooij: We advise that you keep proof of consent for as long as you keep the personal data. This is often long after consent is withdrawn, as companies have legal obligations to keep data under, for instance, business and tax laws.
Ransomware attacks have increased in popularity, and many outlets predict that ransomware will be a $1 billion business this year. Ransomware is a form of malware that either locks users’ screens or encrypts users’ data, demanding that a ransom be paid for the return of control or for decryption keys. Needless to say, paying the ransom only emboldens the perpetrators and perpetuates the ransomware problem.
Ransomware is not just a home user problem; in fact, many businesses and government agencies have been hit. Healthcare facilities have been victims. Even police departments have been attacked and lost valuable data. As one might expect, protecting against ransomware has become a top priority for CIOs and CISOs in both the public and private sectors.
Much of the cybersecurity industry has, in recent years, shifted focus to detection and response rather than prevention. However, in the case of ransomware, detection is pretty easy because the malware announces its presence as soon as it has compromised a device. That leaves the user to deal with the aftermath. Once infected, the choices are to:
- pay the ransom and hope that malefactors return control or send decryption keys (not recommended, and it doesn’t always work that way)
- wipe the machine and restore data from backup
Restoration is sometimes problematic if users or organizations haven’t been keeping up with backups. Even if backups are readily available, time will be lost in cleaning up the compromised computer and restoring the data. Thus, preventing ransomware infections is preferred. However, no anti-malware product is 100% effective at prevention. It is still necessary to have good, tested backup/restore processes for cases where anti-malware fails.
Most ransomware attacks arrive as weaponized Office docs via phishing campaigns. Disabling macros can help, but this is not universally effective, since many users need legitimate macros. Less commonly, ransomware can also come from drive-by downloads and malvertising.
Most endpoint security products have anti-malware capabilities, and many of these can detect and block ransomware payloads before they execute. All end-user computers should have anti-malware endpoint security clients installed, preferably with up-to-date subscriptions. Servers and virtual desktops should be protected as well. Windows platforms are still the most vulnerable, though there is an increasing amount of ransomware for Android. It is important to remember that Apple’s iOS and Mac devices are not immune to ransomware, or malware in general.
If you or your organization do not have anti-malware packages installed, there are some no-cost anti-ransomware specialty products available. They do not appear to be limited-time trial versions, but are instead fully functional. Always check with your organization’s IT management staff and procedures before downloading and installing software. All the products below are designed for Windows desktops:
The links, in alphabetical order by company name, are provided as resources for readers’ consideration rather than as recommendations.
Ransomware hygiene encompasses the following short-list of best practices:
- Perform data backups
- Disable Office macros by default if feasible
- Deliver user training to avoid phishing schemes
- Use anti-malware
- Develop breach response procedures
- Don’t pay ransom
The ongoing Digital Transformation has already made a profound impact not just on enterprises, but on our whole society. By adopting technologies such as cloud computing, mobile devices and the Internet of Things, enterprises strive to unlock new business models, open up new communication channels with their partners and customers and, of course, save on their capital investments.
For more and more companies, digital information is no longer just another means of improving business efficiency, but in fact their core competence and intellectual property.
Unfortunately, the Digital Transformation does not only enable a whole range of business prospects; it also exposes the company's most valuable assets to new security risks. Since those digital assets are nowadays often located somewhere in the cloud, with an increasing number of people and devices accessing them anywhere at any time, the traditional notion of a security perimeter ceases to exist, and traditional security tools cannot keep up with new, sophisticated cyberattack methods.
In recent years, the IT industry has been busy developing various solutions to this massive challenge; however, each new generation of security tools, be it Next Generation Firewalls (NGFW), Security Information and Event Management (SIEM) or Real-Time Security Intelligence (RTSI) solutions, has never entirely lived up to expectations. Although they do offer significantly improved threat detection and automation capabilities, their “intelligence level” is still not even close to that of a human security analyst, who still has to operate these tools to perform forensic analysis and make informed decisions quickly and reliably.
All this has led to a massive lack of skilled workforce to man all those battle stations that comprise a modern enterprise’s cyber defense center. There are simply not nearly enough humans to cope with the vast amounts of security-related information generated daily. The fact that the majority of this information is unstructured and thus not available for automated analysis by computers makes the problem much more complicated.
Well, the next big breakthrough promising to overcome this seemingly unsolvable problem comes from the realm of science fiction. Most people are familiar with so-called cognitive technologies from books or movies, where they are usually referred to as “Artificial Intelligence.” Although true “strong AI” comparable to a human brain may remain purely theoretical for quite some time, various cognitive technologies (like speech recognition, natural language processing, computer vision and machine learning) have found practical uses in many fields already. From Siri and Alexa to market analysis and law enforcement: these technologies are already in use.
More relevant for us at KuppingerCole (and hopefully for you as well) are potential applications for identity management and cybersecurity.
A cognitive security solution can utilize natural language processing to analyze both structured and unstructured security information the way human analysts currently do. This is not limited to pattern or anomaly recognition, but extends to proper semantic interpretation and logical reasoning based on evidence. Potentially, this may save not days but months of work for an analyst, who would ideally only need to confirm the machine’s decision with a mouse click. Similarly, continuous learning, reasoning and interaction can significantly improve existing dynamic policy-based access management solutions. Taking into account not just simple factors like geolocation and time of day, but complex business-relevant cognitive decisions, will increase operational efficiency, provide better resilience against cyber-threats and, last but not least, improve compliance.
Applications of cognitive technologies for Cybersecurity and IAM will be a significant part of this year’s European Identity & Cloud Conference. We hope to see you in Munich on May 9-12, 2017!
During the KuppingerCole webinar held on March 16th, 2017, supported by ForgeRock, several questions from attendees were left unanswered due to the huge number of questions and a lack of time to cover them all. Here are answers to questions that couldn’t be answered live during the webinar.
Q: How does two factor authentication play into GDPR regulations?
Karsten Kinast: Two factor authentication does not play into GDPR at all.
Martin Kuppinger: While two factor authentication is not a topic of the GDPR, it plays a major role in, for example, another piece of upcoming EU legislation, PSD2 (the revised Payment Services Directive), which applies to electronic payments.
Q: How do you see North American companies adhering to GDPR regulations? Do you think it will take a fine before they start incorporating the regulations into their privacy and security policies?
Eve Maler: As I noted on the webinar itself, from my conversations, these companies are even slower than European companies (granting Martin's point that European companies are far from full awareness yet) to "wake up". It seems like a Y2K phenomenon for our times. We at ForgeRock spend a lot of time working with digital transformation teams, and we find they have much lower awareness vs. risk teams. So, we encourage joint stakeholder conversations so that those experienced in the legal situation and those experienced in A/B testing of user experience flows can get together and do better on building trusted digital relationships!
Karsten Kinast: My experience is that North American companies are adhering better and preparing more intensely for the upcoming GDPR than companies elsewhere. So, I don’t think it will take fines, because they have already started preparing.
Q: Sometimes, there seems to be a conflict between the “right to be forgotten” and practical requirements, e.g. for clinical trial data. Can consent override the right to be forgotten?
Karsten Kinast: While there might be consent, that consent can be revoked. Thus, using consent to override the right to be forgotten will not work in practice.
Q: The fines for violating the GDPR can be massive: up to €20 million or 4% of annual group revenue, whichever is higher. Can the fines be paid over a period of time, or compensated by e.g. training?
Karsten Kinast: If the fine is imposed, it commonly will be in cash and in one payment.
Q: Where can one learn more about consent life cycle management?
Eve Maler: Here are some resources that may be helpful:
- My recent talk at RSA on designing a new consent strategy for digital transformation, including a proposal for a new classification system for types of permission
- Information on the emerging Consent Receipts standard
- Recent ForgeRock webinar on the general topic of data privacy, sharing more details about our Identity Platform and its capabilities
Martin Kuppinger: From our perspective, this is both an interesting and a challenging area. Organizations must find ways to gain consent without losing their customers. This will only work when the value of the service is demonstrated to customers and consumers. On the other hand, this also offers the opportunity to differentiate from others by demonstrating a good balance between the data collected and the value provided.
Q: Who is actually responsible for trusted digital relationships in the enterprise? Is this an IAM function?
Eve Maler: Many stakeholders in an organization have a role to play in delivering on this goal. IAM has a huge role to play, and I see consumer- and customer-facing identity projects more frequently sitting in digital transformation teams. It's my hope that the relatively new role of Chief Trust Officer will grow out of "just" privacy compliance and external evangelism to add more internal advocacy for transparency and user control.
Martin Kuppinger: It depends on the role of the IAM team in the organization. If it is the more traditional, administration- and security-focused role, this will most commonly be an IAM function. However, the more IAM moves towards an entity that understands itself as a business enabler, closely working with other units such as marketing, the better IAM is positioned to take such a central role.
Q: How big a role does consent play in solving privacy challenges overall?
Eve Maler: One way to look at it, GDPR-wise, is that it's just one-sixth of the legal bases for processing personal data, so it's a tiny part -- but we know better, if we remember that we're human beings first and ask what we'd like done if it were us in the user's chair! Another way to look at it is that asking for consent is something of an alternative to one of the other legal bases, "legitimate interests". Trust-destroying mischief could be perpetrated here. With the right consent technology and a comprehensive approach, it should be possible for an enterprise to ask for consent -- offer data sharing opportunities -- and enable consent withdrawal more freely, proving its trustworthiness more easily.
Security Intelligence Platforms (SIP) are universal and extensible security analytics solutions that offer a holistic approach towards maintaining complete visibility and management of the security posture across the whole organization. Only by correlating both real-time and historical security events from logs, network traffic, endpoint devices and even cloud services, and enriching them with the latest threat intelligence data, does it become possible to identify previously unknown advanced security threats quickly and reliably, to respond to them in time, and thus minimize the damage.
They are in a sense “next generation SIEM solutions” based on RTSI technologies, which provide substantial improvements over traditional SIEMs both in functionality and efficiency:
- Performing real-time or near real-time detection of security threats without relying on predefined rules and policies;
- Correlating both real-time and historical data across multiple sources to detect malicious operations as whole events, not separate alerts;
- Dramatically decreasing the number of alarms by filtering out statistical noise, eliminating false positives and providing clear risk scores for each detected incident;
- Offering a high level of automation for typical analysis and remediation workflows, thus significantly improving the work efficiency for security analysts;
- Integrating with external Threat Intelligence feeds in industry standards like STIX/TAXII to incorporate the most recent security research into threat analysis.
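As a toy illustration of the noise filtering and risk scoring listed above, the following sketch collapses duplicate alerts and surfaces only incidents above a risk threshold. The weights, event types, and threshold are invented for illustration; real SIP products use far richer behavioral models:

```python
from collections import defaultdict

# Hypothetical raw alert stream, as a traditional SIEM might emit it.
RAW_ALERTS = [
    {"host": "web01", "type": "port_scan"},
    {"host": "web01", "type": "port_scan"},       # duplicate -> collapsed
    {"host": "db02",  "type": "malware_beacon"},
    {"host": "hr03",  "type": "failed_login"},
]

# Illustrative per-event risk weights (assumptions, not real scores).
WEIGHTS = {"port_scan": 20, "failed_login": 10, "malware_beacon": 80}

def score_incidents(alerts, threshold=50):
    """Aggregate alerts per host into risk scores; drop the noise."""
    risk = defaultdict(int)
    seen = set()
    for a in alerts:
        key = (a["host"], a["type"])
        if key in seen:          # de-duplicate repeated alerts
            continue
        seen.add(key)
        risk[a["host"]] += WEIGHTS.get(a["type"], 5)
    # only incidents above the threshold reach the analyst
    return {h: s for h, s in risk.items() if s >= threshold}

print(score_incidents(RAW_ALERTS))  # only db02 surfaces
```

Four raw alerts collapse into a single actionable, risk-ranked incident, which is the essence of the improvement over rule-based SIEM alerting.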
Another key aspect of many SIP products is the incorporation of Incident Response Platforms. Designed for orchestrating and automating incident response processes, these solutions not only dramatically simplify a security analyst’s job of analyzing and containing a breach, but also provide predefined and highly automated workflows for managing the legal and even PR consequences of a security incident, reducing possible litigation costs, compliance fines and brand reputation losses. Modern SIP products either directly include incident response capabilities or integrate with third-party products, finally implementing a full end-to-end security operations and response solution.
By dramatically reducing the number of incidents that require interaction with an analyst and by automating forensic analysis and decision making, next generation SIPs can help address the growing lack of skilled people in information security. As opposed to traditional SIEMs, next generation SIPs should not require a team of trained security experts to operate, relying instead on actionable alerts understandable even to business people – making them accessible even to smaller companies, which previously could not afford to operate their own SOC.
Now, what about the future developments in this area? First of all, it’s worth mentioning that the market continues to evolve, and we expect its further consolidation through mergers and acquisitions. New classes of security analytics solutions are emerging, targeting new markets like the cloud or the Internet of Things. On the other hand, many traditional security tools like endpoint or mobile security products are incorporating RTSI technologies to improve their efficiency. In fact, the biggest obstacle for wider adoption of these technologies is no longer the budget, but rather the lack of awareness that such products already exist.
However, the next disruptive technology that promises to change the way Security Operations Centers are operated seems to be Cognitive Security. Whereas Real-Time Security Intelligence can provide security analysts with better tools to improve their efficiency, it still relies on humans to perform the actual analysis and make informed decisions about each security incident. Applying cognitive technologies (the thing closest to the artificial intelligence as we know it from science fiction) to the field of cybersecurity promises to overcome this limitation.
Technologies for language processing and automated reasoning not only help unlock vast amounts of unstructured “dark security data,” which until now were not available for automated analysis; they actually promise to let the AI do most of the work that a human analyst must perform now: collect context information, define a research strategy, pull in external intelligence and finally make an expert decision on how to respond to the incident in the most appropriate way. Supposedly, the analyst would only have to confirm the decision with a click of a mouse.
Sounds too good to be true, but the first products incorporating cognitive security technologies are already appearing on the market. The future is now!
I have to admit that I find the very concept of a Security Operations Center extremely… cinematic. As soon as you mention it to somebody, they would probably imagine a large room reminiscent of the NASA Mission Control Center – with walls lined with large screens and dozens of security experts manning their battle stations. From time to time, a loud buzzer informs them that a new security incident has been discovered, and a heroic team starts running towards the viewer in slow motion…
Of course, in reality most SOCs are much more boring-looking, but still this cliché image from action movies captures the primary purpose of an SOC perfectly – it exists to respond to security breaches as quickly as possible in order to contain them and minimize the losses. Unfortunately, looking back at the last decade of SOC platform development, it becomes clear that many vendors have been focusing their efforts elsewhere.
Traditional Security Information and Event Management (SIEM) platforms, which have long been the core of security operations centers, have come a long way and become really good at aggregating security events from multiple sources across organizations and providing monitoring and alerting functions. But when it comes to analyzing a discovered incident, making an informed decision about it and finally mitigating the threat, security experts’ work is still largely manual and time-consuming, since traditional SIEM solutions offer few automation capabilities and usually do not support two-way integration with security devices like firewalls.
Another major problem is the sheer number of security events a typical SOC is receiving daily. The more deperimeterized and interconnected modern corporate networks become, the more open they are for new types of cyberthreats, both external and internal, and the number of events collected by a SIEM increases exponentially. Analysts no longer have nearly enough time to analyze and respond to each alert. The situation is further complicated by the fact that an overwhelming majority of these events are false positives, duplicates or otherwise irrelevant. However, a traditional SIEM offers no way to differentiate them from real threats, drowning analysts in noise and leaving them only minutes to make an informed decision about each incident.
All this leads to the fundamental problem the IT industry is now facing: because of the immense complexity of setting up and operating a security operations center, which requires a large budget and a dedicated team of security experts, many companies simply cannot afford one, and even those that can are continuously struggling with the lack of skilled workforce to manage their SOC. In the end, even for the best-staffed security operations centers, the average response time to a security incident is measured in days if not weeks – not even close to the ultimate goal of dealing with incidents in real time.
In recent years, this has led to the emergence of a new generation of security solutions based on Real-Time Security Intelligence. Such tools utilize Big Data analytics technologies and machine learning algorithms to correlate large amounts of security data, apply threat intelligence from external sources, detect anomalies in activity patterns and provide a small number of actionable alarms clearly ranked by their risk scores. Such tools promise to dramatically reduce the time to mitigate a breach by performing data analysis in real time, eliminating statistical noise and false positives and, last but not least, providing a high degree of automation to make the security analyst’s job easier.
Although KuppingerCole has been promoting this concept for quite a few years already, the first real products appeared only a couple of years ago, and since then the market has evolved and matured at an incredible rate. Back in 2015, when KuppingerCole attempted to produce a Leadership Compass on RTSI solutions, we failed to find enough vendors for a meaningful rating. In 2017, however, we could easily identify over 25 Security Intelligence Platform solutions offered by a variety of vendors, from large veteran players known for their SIEM products to newly established innovative startups.
To be continued...
Vault 7, Wikileaks’ recently published plethora of documents and files from internal CIA resources, has created quite some excitement and noise, and it has even been compared with Edward Snowden’s NSA revelations.
My opinion: this is complete nonsense. Looking at what Edward Snowden did – disclosing the methods and extent of the NSA’s mass surveillance activities, which nobody outside the walls of the NSA would have thought possible – these latest collections of CIA-authored configuration files and documents describing exploits and methods for penetrating end user devices are not much more than a joke. The Vault 7 documents show that the CIA is doing exactly what we think it is doing and what secret services are supposed to do. Yes, they may be a bit more "cyber" than we thought they would be at this time, but that is it. No zero-day exploits, not a single piece of real news. And not at all a reason to rethink cybersecurity.
Looking at Wikileaks’ press release about Vault 7, one of the headlines says: "CIA malware targets Windows, OSX, Linux, routers". Huh, what shocking news for all of us. We should immediately throw all our gadgets away, switch off (better: unplug) our TVs and fridges and call Assange to guide us through the evil reality of cyber, and be grateful to him as a hero of the 21st century, who is so much more important than Guardian-style real journalism. My recommendation: don’t feel alienated by such kibosh. Ignore it.
Ok, maybe one thing does come to mind while clicking through the contents: some of the Vault 7 files show that the CIA’s cyber activities are very well staffed and that they collaborate with the British MI5. But our German BND isn’t mentioned anywhere. This worries me a little bit, as it could well be that our guys are being left behind...
The European Banking Authority (EBA) released the final draft of the Regulatory Technical Standards (RTS) for PSD2 this week. It contains several improvements and clarifications, but there are still a few areas that fall short of industry expectations.
After the release of the initial drafts, the EBA received a multitude of comments from many organizations and software vendors. One of the top concerns was the mandate for Strong Customer Authentication (SCA), traditionally defined as requiring at least two of the following: something you have, something you know, and something you are. Originally it was conceived to apply to any transaction over €10. The limit has been raised to €30, which is better, but still less than the recommended €50.
The revision also takes into account the innovations and benefits of risk-adaptive authentication. Risk-adaptive authentication encompasses several functions, including user behavioral analytics (UBA), two- or multi-factor authentication (2FA or MFA), and policy evaluation. Risk-adaptive authentication platforms evaluate a configurable set of real-time risk factors against pre-defined policies to determine a variety of outcomes. The policy evaluation can yield permit, deny, or “step-up authentication” required.
PSD2 RTS stipulates that banks (Account Servicing Payment Service Providers, or ASPSPs) must consider the following transactional fraud risk detection elements on a per-transaction basis:
- lists of compromised or stolen authentication elements;
- the amount of each payment transaction;
- known fraud scenarios in the provision of payment services;
- signs of malware infection in any sessions of the authentication procedure.
The first three elements are commonly examined in many banking transactions today. The prescription to look for signs of malware infection is somewhat vague and technically difficult to achieve. Is the bank responsible for knowing the endpoint security posture of all of its clients? If so, is it also responsible for helping remediate malware on clients?
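As a sketch, the four transactional risk elements above could feed a per-transaction evaluation yielding the permit/deny/step-up outcomes described earlier. All names, the policy logic, and the €30 threshold here are illustrative assumptions, not taken from the RTS:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    # Illustrative per-transaction signals mirroring the four RTS elements
    uses_compromised_element: bool   # authentication element on a compromised/stolen list
    amount_eur: float                # amount of the payment transaction
    matches_fraud_scenario: bool     # resembles a known fraud scenario
    malware_signs: bool              # signs of malware in the authentication session

def evaluate(tx: Transaction, amount_limit_eur: float = 30.0) -> str:
    """Return 'deny', 'step_up', or 'permit' (hypothetical policy)."""
    # Hard indicators deny outright
    if tx.uses_compromised_element or tx.matches_fraud_scenario:
        return "deny"
    # Elevated risk or amounts above the SCA exemption limit trigger step-up
    if tx.malware_signs or tx.amount_eur > amount_limit_eur:
        return "step_up"
    return "permit"
```

A real implementation would of course draw these signals from fraud feeds and session telemetry rather than boolean flags; the point is only the shape of the decision.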
Furthermore, in promoting “continuous authentication” via risk-adaptive authentication, the EBA lists the following factors to be considered:
- the previous spending patterns of the individual payment service user;
- the payment transaction history of each of the payment service provider’s payment service users;
- the location of the payer and of the payee at the time of the payment transaction, provided that the access device or the software is provided by the payment service provider;
- the abnormal behavioural payment patterns of the payment service user in relation to the payment transaction history;
- in case the access device or the software is provided by the payment service provider, a log of the use of the access device or the software provided to the payment service user and the abnormal use of the access device or the software.
The requirements described above, from the PSD2 RTS document, are very much a “light” version of risk-adaptive authentication and UBA. These attributes are useful in predicting the authenticity of the current user of the services. However, there are additional attributes that many risk-adaptive authentication vendors commonly evaluate that would add value to the notion and practice of fraud risk reduction. For example:
- IP address
- Time of day/week
- Device ID
- Device fingerprint
- Known compromised IP/network check
- User attributes
- User on new device check
- Jailbroken mobile device check
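To illustrate how risk-adaptive authentication vendors typically combine such attributes, here is a hypothetical weighted risk score over a handful of the attributes listed above. The attribute names, weights, and cut-offs are invented for illustration, not taken from any vendor or from the RTS:

```python
# Hypothetical weights for a subset of the risk attributes listed above
RISK_WEIGHTS = {
    "new_device": 25,         # user on a device not seen before
    "jailbroken_device": 30,  # jailbroken/rooted mobile device
    "compromised_ip": 40,     # IP/network on a known-compromised list
    "unusual_time": 10,       # outside the user's usual time of day/week
    "geo_mismatch": 20,       # location inconsistent with the user's history
}

def risk_score(signals: dict) -> int:
    """Sum the weights of all risk signals that fired."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def decide(signals: dict, step_up_at: int = 30, deny_at: int = 70) -> str:
    """Map the aggregate score to permit / step_up / deny (illustrative cut-offs)."""
    score = risk_score(signals)
    if score >= deny_at:
        return "deny"
    if score >= step_up_at:
        return "step_up"
    return "permit"
```

For example, a known user at an unusual time on a new device (score 35) would be stepped up, while a request from a known-compromised network on a jailbroken device (score 70) would be denied outright.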
Now that limited risk analytics are included in the PSD2 paradigm, the SCA requirement is relaxed to at least once per 90 days. This, too, is in line with the way most modern risk-adaptive authentication systems work.
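The 90-day rule can be sketched as a simple freshness check on the user's last strong authentication. The function name and the idea of a stored timestamp are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Maximum age of the last strong customer authentication under the relaxed rule
SCA_MAX_AGE = timedelta(days=90)

def sca_required(last_sca: datetime, now: datetime) -> bool:
    """True if strong customer authentication must be repeated."""
    return now - last_sca >= SCA_MAX_AGE
```

In practice this check would be combined with the risk-based evaluation, so that a high-risk transaction triggers SCA regardless of how recently the user last authenticated strongly.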
The PSD2 RTS leaves in place “screen-scraping” for an additional 18 months, a known bad practice that current Third Party Providers (TPPs) use to extract usernames and passwords from HTML forms. This practice is not only subject to Man-in-the-Middle (MITM) attacks, but also perpetuates the use of low assurance username/password authentication. Given that cyber criminals now know that they only have a limited amount of time to exploit this weak mechanism, look for an increase in attacks on TPPs and banks using screen-scraping methods.
In summary, the final draft of PSD2 RTS does make some security improvements, but omits recommending practices that would more significantly and positively affect security in the payments industry, while leaving in place the screen-scraping vulnerability for a while longer.
Big data analytics is getting more powerful and more affordable at the same time. Probably the most important data within any organisation is knowledge of and insight into its customers' profiles, and many specialized vendors target these organisations. And it is obvious why: the identification of customers across devices and accounts, deep insight into their behaviour, and the creation of rich customer profiles come with many promises. The adjustment, improvement and refinement of existing product and service offerings, as well as the design of new products as customer demand changes, are surely some of those promises.
Dealing with sensitive data is a challenge for any organisation. Dealing with personally identifiable information (PII) of employees or customers is even more challenging.
Recently I have been in touch with several representatives of organisations and industry associations who presented their views on how they plan to handle PII in the future. The potential of leveraging customer identity information is clearly understood today. A hot topic is of course the GDPR, the General Data Protection Regulation issued by the European Union. While many organisations aim at being compliant from day one (May 25, 2018) onward, it is quite striking that there are still organisations around that don't consider this important. Some regard their pre-GDPR data protection, with a few amendments, as sufficient and subsequently have no strategy for implementing adequate measures to achieve GDPR-compliant processes.
To repeat just a few key requirements: data subject (customer, employee) rights include timely and complete information about personal data being stored and processed, together with a lawful justification for doing so. Processes for consent management and reliable mechanisms for implementing the right to be forgotten (deletion of PII when it is no longer required) need to be integrated into new and existing systems.
It is true: in Europe, and especially in Germany, data protection legislation and regulations have always been challenging. But with the upcoming GDPR, things are changing dramatically. And they are changing for organisations outside the EU as well, in case they process data of European citizens.
National legislation will fill in details for some aspects deliberately left open in the GDPR. Right now this seems to weaken, or “verschlimmbessern” (make worse by trying to improve, as we say in German), several practical aspects of it across the EU member states, and quite some political lobbying is currently going on; criticism grows e.g. over the German plans. Nevertheless, at its core the GDPR is a regulation that applies directly in all EU member states (and, quite logically, beyond). It applies to the personal data of EU citizens and to data processed by organisations within the EU.
Some organisations fear that compliance with the GDPR is a major drawback compared to organisations, e.g. in the US, which deal with PII under presumably lesser restrictions. But this is not necessarily true, and it is changing as well, as this example shows: the collection of viewing data through software installed on 11 million "smart" consumer TVs, without their owners' consent or even their knowledge, led to a $2.2 million payment by the manufacturer of these devices to the (American!) Federal Trade Commission.
Personal data (and the term is defined very broadly in the GDPR) is processed in many places, e.g. in IoT devices and the smart home, in mobile phones, in cloud services, and in connected desktop applications. Adopting privacy by design and security by design as core principles should be considered a prerequisite for building future-proof systems that manage PII. Obtaining user consent for the purposes of personal data usage, while managing and documenting proof of that consent, is a major element of such systems.
GDPR and data protection do not mean the end of Customer Identity Management. On the contrary: the GDPR needs to be understood as an opportunity to build trusted relationships with consumers. The benefits and promises described above can still be achieved, but they come at quite a price and require substantial effort, as this must be well executed (i.e. compliant). But that is the real business opportunity as well.
Being a leader, a forerunner, the number one in identifying business opportunities, in implementing new business models, and in occupying new market segments is surely something worth striving for. But being the first to fail visibly and obviously in implementing adequate measures, e.g. for maintaining the newly defined data subject rights, should be considered something that needs to be avoided.
KuppingerCole will cover this topic extensively in the coming months with webinars and seminars. And, one year before it comes into effect, the GDPR will also be a major focus at the upcoming EIC2017 in May in Munich.
Today, the Cyber Defence Center (CDC) or Security Operations Center (SOC) is at the heart of enterprise security management. It is used to monitor and analyze security alerts coming from the various systems across the enterprise and to take actions against detected threats. However, the rapidly growing number and sophistication of modern advanced cyber-attacks make running a SOC an increasingly challenging task even for the largest enterprises with their fat budgets for IT security. The overwhelming number of alerts puts a huge strain even on the best security experts, leaving just minutes [...]