It seems there is simply no end to the long series of Facebook’s privacy blunders. This time, a security researcher has stumbled upon an unprotected server hosting several huge databases containing the phone numbers of 419 million Facebook users from different countries. Judging by the screenshot included in an article by TechCrunch, this looks like another case of a misconfigured MongoDB server exposed to the Internet without any access controls. Each record in those databases contains a Facebook user’s unique ID, which can easily be linked to an existing profile, along with that user’s phone number. Some records also contained additional data such as name, gender, or location.
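For context, the kind of misconfiguration described above is typically a MongoDB instance started with authentication disabled and bound to all network interfaces. A minimal hardening sketch for `mongod.conf` (illustrative only, not a complete security configuration) looks like this:

```yaml
# mongod.conf - minimal hardening sketch (adapt to your deployment)
security:
  authorization: enabled   # require authenticated, authorized clients
net:
  bindIp: 127.0.0.1        # listen only on localhost, never 0.0.0.0
  port: 27017
```

With `authorization` enabled and the listener bound to localhost (or a private network segment behind a firewall), an anonymous scan of the public Internet would not have found this data.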
Facebook has denied that it has anything to do with those databases, and there is no reason to doubt that; the sheer negligence of the case rather points to a third party lacking even basic security competence, perhaps a former Facebook marketing partner. This is far from the first case of user data being harvested off Facebook by unscrupulous third parties, perhaps the biggest being the notorious Cambridge Analytica scandal of early 2018. After that, Facebook disabled access to users’ phone numbers for all its partners, so the data leaked this time is perhaps not the most current.
Still, the huge number of affected users and the company’s apparent inability to find any trace of the perpetrators clearly indicate that Facebook hasn’t done nearly enough to protect its users’ privacy in recent times. Until further details emerge, we can only speculate about the leak itself. What we can do today, however, is try to figure out what users can do to protect themselves from this leak and to minimize the impact of similar data breaches in the future.
First of all, the most common advice, “don’t give your phone number to Facebook and the like,” is obviously not particularly helpful. Many online messaging services (like WhatsApp or Telegram) use phone numbers as primary user identifiers and simply won’t work without them. Others (like Google, Twitter, or even your own bank) rely on phone numbers to perform two-factor authentication. Second, for hundreds of millions of people around the world, this advice comes too late: their numbers are already at the disposal of spammers, hackers, and other malicious actors. And those actors have a few lucrative opportunities to exploit them…
Besides the obvious use of these phone numbers for unsolicited advertising, they can be used to expose people who use pseudonyms on social media and to link those accounts to real people, whether to suppress political dissent or simply to further improve online user tracking. Alas, the only sensible method of preventing this breach of privacy is to use a separate, dedicated phone number for your online services, which can be cumbersome and expensive (not to mention that it had to be done before the leaks!).
Unfortunately, in some countries (including the USA), leaked phone numbers can also be used for SIM swap attacks, where a fraudster tricks a mobile operator into issuing them a new SIM card with the same number, effectively taking full control over your “mobile identity”. With that card, they can pose as you in a phone call, intercept text messages containing one-time passwords, and thus easily take over any online service that relies on your mobile number as the means of authentication.
Can users do anything to prevent SIM swap attacks? Apparently not, at least not until mobile operators are forced by governments to collaborate with police and banks on fighting this type of fraud. Again, the only sensible way to minimize its impact is to move away from phone-based (not-so-)strong authentication methods and adopt a more modern MFA solution: for example, invest in a FIDO2-based hardware key like a YubiKey, or at least switch to an authenticator app like Authy. And if your bank still offers no alternative to SMS OTP, maybe today is the right time to switch to another bank.
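To see why authenticator apps are an improvement over SMS: the codes are generated locally from a shared secret and the current time, so there is nothing for a SIM swapper to intercept in transit. A minimal sketch of the TOTP algorithm (RFC 6238) that such apps implement, using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, interval=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of elapsed time steps since the epoch
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at a digest-derived offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Both the server and the app compute the same code independently; only clock drift and the shared secret matter. (Real deployments should use a vetted library rather than hand-rolled crypto, of course.)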
Remember, in the modern digital world, your phone number is the key to your online identity. Keep it secret, keep it safe!
The other day I found a notebook on a train. It was in a compartment on the seat of a first-class car. The compartment was empty, no more passengers to see, no luggage, nothing.
And no, it wasn't a laptop or tablet, it was a *notebook*. One made of paper, very pretty, with the name of a big consulting company printed on it. So, it was either a promotional gift or one that employees use. Two thirds of it had been used, which could be seen from the edge of the paper.
Everyone knows these notebooks, from simple A4 college pads with cheap ballpoint pens to expensive, leather-bound prestige models combined with an equally expensive writing device such as a fountain pen.
They serve as brain extensions in meetings, for planning and conducting conversations. They contain details about the owner. And they contain sketches, meeting minutes, information about contact persons (--> GDPR), your business, the business of your partners. You can find sales figures, business plans, product developments, vulnerability analyses and architectural plans. The private mobile phone number of the important point of contact, the passwords to company infrastructure along with computer addresses. Confidential and critical data is thoughtlessly recorded on paper and then elaborated on the way home on the train, at home on the couch or the next day in the office.
Everyone worries about the loss of their computer or of the still ubiquitous, unencrypted USB stick. Rightly so. And today you also have to think about the cloud, because it bears a multitude of risks, which you have to address consistently, comprehensively and correctly (and yes, we can help you with that, but that's not the point here).
However, leakage of sensitive data does not necessarily require a nation state hacker or a violation of the confidentiality of credentials. Clumsiness, haste and forgetfulness can sometimes be enough. And that's why you should be particularly concerned about your paper notes.
You can encrypt a USB stick (yes, you can). You can encrypt whole computers, too. Your corporate laptop should be, anyway, and the encryption of your private computers and data carriers is your own personal responsibility. Most mobile phones and tablets today come with biometrics and also with potential encryption.
But this notebook is still beautiful and has so many free pages, so on to the next meeting? Let me ask you: what is written in your current notebook? Would you have wanted me to read all that on the train? Got a bad conscience now? Rightly so.
Paper cannot be encrypted. So there are only two main approaches, data avoidance and data deletion, to mitigate these risks: give the next promotional notebook to a child for drawing (--> avoidance), and destroy all the notebooks you still have (and possibly still use) with your home or office shredder (--> deletion). Anything still important can be scanned beforehand and stored safely and, of course, encrypted.
I did not open this notebook and instead handed it over to the conductor and thus to the Deutsche Bahn "lost and found" service. But we can't expect everyone to handle it that way.
As a recommendation for the future: for all notes that go beyond your private poems (and perhaps, for your own self-protection, include those as well), use mechanisms that meet your company’s security requirements. Paper notebooks certainly don’t.
Artificial intelligence (AI) and machine learning tools are already disrupting other professions. Journalists are concerned about automation being used to produce basic news and weather reports. Retail staff, financial workers, and some healthcare staff are also in danger, according to the US public policy research organization Brookings.
However, it may come as a surprise to learn that Brookings also reports that lawyers have a 38% chance of being replaced by AI services soon. AI is already being used to conduct paralegal work: due diligence, basic research, and billing services. A growing number of AI-based legal platforms are available to assist in contract work, case research, and other time-consuming but important back-office legal functions. These platforms include LawGeex, RAVN, and the IBM Watson-based ROSS Intelligence.
While these may threaten lower-end legal positions, they would free up lawyers to spend more time analyzing results, thinking, and advising their clients with deeper research to hand. Jobs may well be added as law firms seek to hire AI specialists to develop in-house applications.
What about adding AI into the criminal justice system, however? This is where the picture becomes more complicated and raises ethical questions. There are those who advocate using AI to select potential jurors. They argue that AI could gather data about jurors, including accident history, whether they have served before and the verdicts of those trials, and, perhaps more controversially, a juror’s political affiliations. AI could also be used to analyze facial reactions and body language to indicate how a potential juror feels about an issue, revealing a positive or negative bias. Proponents of AI in jury selection say it could optimize the process, facilitating greater fairness.
Others worry that rushing into such usage could have the opposite effect. Song Richardson, Dean of the University of California-Irvine School of Law, says that people often view AI and algorithms as objective without considering the origins of the data used in the machine-learning process. “Biased data is going to lead to biased AI. When training people for the legal profession, we need to help future lawyers and judges understand how AI works and its implications in our field,” she told Forbes magazine.
A good example would be autonomous vehicles. Where does the legal blame lie for an accident? With the driver, the car company, the software vendor, or another third party? These are questions that are best answered by human legal experts who understand the impact of AI and IoT on our changing society.
Perhaps a good way to illustrate the difference between human thinking and AI is that AI usually wins at the game of Go because, while it plays according to the formal rules of Go, it does so in a way no human would ever choose.
If AI oversaw justice, it might very well “play by the rules” too, but this would likely involve a strict interpretation of the law in every case, with no room for the nuance and consideration that experienced human lawyers and judges possess. Our jails might fill up very quickly!
Assessing guilt or innocence, cause and motive in criminal cases requires empathy and instinct as well as experience, something only humans can provide. At the same time, it is not unknown for skilled lawyers to win acquittals for guilty parties thanks to their own charisma, theatrics, and the resources available to them. Greater involvement of AI could potentially lead to a more fact-based and logical criminal justice system, but it’s unlikely robots will take the place of prosecution or defence lawyers in a courtroom. At some point, AI may well be used in court, but its reasoning would still have to be weighed and checked with a tool like IBM Watson OpenScale to validate its results.
For the foreseeable future, AI in the legal environment is best used to enhance research, and even then, we should not trust it blindly, but understand what happens, whether the results are valid and, as far as possible, how they are achieved.
The wider ethical debate around AI in law should not prevent us from using it right now in those areas where it will bring immediate benefit and open up new legal services and applications. Today, AI could benefit those seeking legal help. Time-saving AI-based research tools will drive down the cost of legal services, making them accessible to those on lower incomes. It is not hard to envisage AI-driven, cloud-based legal services that provide advice to consumers without any human involvement, offered either by startups or as add-ons by traditional legal firms.
For now, the impact of AI on the legal profession is undeniably positive if it reduces costs and frees up lawyers to do more thinking and communicating with clients. And with further development, it may soon play a more high-level role in legal environments in tandem with human legal experts.
It’s not been a good couple of weeks for Apple. The company that likes to brand itself as superior to rivals in its approach to security has been found wanting. Early in August, it was forced to admit that contractors had been listening in to conversations on its Siri network. It has now temporarily stopped the practice, claiming that only “snippets” of conversations were captured, in order to improve the service.
At the end of last week, a much more serious security and privacy threat was made public. Google researchers revealed that hackers had been planting monitoring implants into iPhones for years, affecting thousands of users per week. The hacking operation, which started in 2017, used several websites to deliver malware onto iPhones. Users did not have to interact with the site: just visiting was enough. From there, criminals were able to siphon passwords and chat histories from WhatsApp, iMessage, and Telegram, bypassing the encryption designed to protect the integrity of these messaging apps. According to the researchers, the attackers used five different exploits across 14 pieces of malware.
This is undoubtedly a major incident. It severely undermines Apple’s reputation for securing users’ devices and the personal data residing on them. In an age where all tech companies face criticism over misuse of customer data, it comes as a body blow to Apple’s security management expertise, an area in which it has consistently portrayed itself as superior.
What is worse is the revelation that Apple was made aware of the flaw in the iPhone in February this year. Apple did release a patch for the flaw, but why did it not make a much more urgent public announcement back in February to warn all iPhone users to update their iOS software immediately? This is Apple’s real failure: trying to make everyone believe it has the best security controls but not delivering. It’s not the first time that Apple’s culture of secrecy has undermined security, as a previous blog by Martin Kuppinger illustrates.
Not surprisingly, others were making hay at Apple’s expense on social media last week. “This is a huge find by Google’s team,” said Alex Stamos, Facebook’s former security chief and now a researcher at Stanford University, while Marcus Hutchins, a security researcher who helped stop the WannaCry attack in 2017, wrote: “Maybe I’m missing something, but it feels like Apple should have found this themselves.”
Apple did not fail to patch, but it failed to act swiftly and communicate the flaw adequately, and now it finds itself on the back foot. Was all this the result of hubris or carelessness? Either way, it’s not a good look as the company gears up to launch the iPhone 11 and promote its new credit card as a secure alternative to conventional bank cards. As ever, the best advice for users of iPhones, or any device, is to check regularly that you have the most up-to-date operating system installed.
Imperva, a US-based cybersecurity company known for its web application security and data protection products, has disclosed a breach of its customer data. According to the announcement, a subset of the customers of its cloud-based Web Application Firewall solution (formerly known as Incapsula) had their data exposed, including email addresses, password hashes, API keys, and SSL certificates.
Adding insult to injury, this breach seems to be of the worst kind: it happened long ago, probably in September 2017, and went unnoticed until a third party notified Imperva a week ago. Even though the investigation is still ongoing and not many details have been revealed yet, the company did the right thing by providing prompt full disclosure along with recommended security measures.
Still, what can we learn, or at least guess, from this story? First and foremost, even the leading cybersecurity vendors are not immune (or should I say, “impervious”?) to hacking and data breaches, not only exposing their own corporate infrastructures and sensitive data, but also creating unexpected attack vectors against their customers. This is especially critical for SaaS-based security solutions, where a single data leak may give a hacker convenient means to attack multiple other companies using the service.
More importantly, however, this highlights the critical importance of having monitoring and governance tools in place in addition to traditional protection-focused security technologies. After all, having an API key for a cloud-based WAF gives a hacker ample opportunity to silently modify its policies, weakening or completely disabling protection of the application behind it. If the customer has no means of detecting these changes and reacting quickly, they will inevitably end up being the next target.
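One practical way to detect such silent policy tampering, independently of the vendor, is to record a known-good digest of each policy document and compare it on a schedule. A minimal sketch in Python (the policy here is a generic JSON-like dict for illustration, not Imperva’s actual API schema):

```python
import hashlib
import json

def policy_digest(policy: dict) -> str:
    """Canonical SHA-256 digest of a JSON-like policy document."""
    # sort_keys makes the digest stable regardless of key ordering
    canonical = json.dumps(policy, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def detect_drift(baseline_digest: str, current_policy: dict) -> bool:
    """True if the current policy no longer matches the recorded baseline."""
    return policy_digest(current_policy) != baseline_digest
```

The baseline digest should be stored outside the WAF service itself; otherwise an attacker holding the API key could update both the policy and its recorded fingerprint.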
Having access to the customer’s SSL certificates opens even broader opportunities for hackers: application traffic can be exposed to various Man-in-the-Middle attacks or even silently diverted to a malicious third party for all kinds of misuse: from data exfiltration to targeted phishing attacks. Again, without specialized monitoring and detection tools in place, such attacks may go unnoticed for months (depending on how long your certificate rotation cycles are). Quite frankly, having your password hashes leaked feels almost harmless in comparison.
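Getting started with certificate-change monitoring does not require specialized tooling either: pin the SHA-256 fingerprint of the certificate you actually deployed and compare it against what servers present. A simplified sketch (the PEM parsing here is deliberately naive; production code should use a proper X.509 library such as `cryptography`):

```python
import base64
import hashlib

def cert_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of a PEM-encoded certificate (naive PEM parsing)."""
    body = "".join(
        line for line in pem.strip().splitlines()
        if "CERTIFICATE" not in line      # skip BEGIN/END marker lines
    )
    der = base64.b64decode(body)          # the DER bytes are what gets hashed
    return hashlib.sha256(der).hexdigest()

def detect_rotation(pinned_fingerprint: str, observed_pem: str) -> bool:
    """True if the observed certificate no longer matches the pinned one."""
    return cert_fingerprint(observed_pem) != pinned_fingerprint
```

Run such a check against your own endpoints on a schedule, and an attacker quietly serving a leaked or substituted certificate stops being invisible.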
So, does this mean that Imperva’s Cloud WAF should no longer be trusted at all? Of course not, but the company will surely have to work hard to restore its product’s reputation after this breach.
Does it mean that SaaS-based security products, in general, should be avoided? Again, not necessarily, but additional risks of relying on security solutions outside of your direct control must be taken into account. Alas, finding the right balance between complexity and costs of an on-premises solution vs. scalability and convenience of “security from the cloud” has just become even more complicated than it was last week.
The bottom line is that although 100% security is impossible to achieve, even with a multi-layered security architecture, the difference between a good and a bad strategy lies in properly identifying the business risks and investing in appropriate mitigation controls. Without continuous monitoring and governance in place, you will inevitably end up finding out about a data breach long after it has occurred, and you’ll be extremely lucky if you learn about it from your security vendor rather than from the morning news.
The ultimate security of an organization, and thus its residual risk, depends on the proper mix of complementary components within an IT security portfolio. Gaps in safeguarding of sensitive systems must be identified and eliminated. Functional overlaps and ineffective measures must give way to more efficient concepts. The KuppingerCole Analysts Portfolio Compass Advisory Services offers you support in the evaluation and validation of existing security controls in your specific infrastructure, with the aim of designing a future-proof and cost-efficient mix of measures. Learn more here or just talk to us.
Reports of a data breach at Mastercard began surfacing in Germany early last week, with Sueddeutsche Zeitung (in German) among the first news outlets to report on the loss. As is often the case in major corporate breaches, the company was slow to react officially. On Monday it said only that it was aware of an “issue”. The next day the company had someone to blame: a third-party provider that, it said, had lost data including usernames, addresses, and email addresses, but no credit card details.
By Wednesday, however, this statement was proved incorrect when persons unknown uploaded an Excel file with full credit card numbers to the Internet, albeit without CVV codes or expiration dates. However, a credit card number combined with names and addresses is still a highly valued and dangerous item on the dark web. It took until the end of the week for Mastercard to admit that 90,000 customers had been affected and to report the incident to the German Data Protection Authority (DPA). Mastercard confirmed that a third party running its German rewards program Priceless Specials had been attacked.
The company said that the breach had no connection to Mastercard’s payment transaction network, and it was “taking every possible step to investigate and resolve the issue,” including informing and supporting cardholders. The company shut down the German Specials website.
There are two lessons from this breach. First, it took Mastercard five days to fully admit it had been attacked. Not only does this potentially contravene the GDPR, which requires notification within 72 hours, but more importantly it left customers without any information and unsure of their exposure. This suggests a failure or absence of incident response management policies and processes at Mastercard, which should be put into action at the first sign of a potential breach. It cannot be emphasised enough that companies must scrupulously prepare for disasters and incidents, including PR and executive response strategies, to avoid telling conflicting stories.
Secondly, the fact that the breach occurred at a service provider proves once again that oversight and due diligence are essential when confidential data is at stake. GDPR quite clearly states that the data controller remains responsible for a breach from a third-party provider. And this case is a perfect example of how Mastercard may be judged to have failed in this regard when the DPA investigates.
Last week, VMware announced its intent to acquire Carbon Black, one of the leading providers of cloud-based endpoint security solutions. This announcement follows earlier news of its acquisitions of Pivotal, a software development company known for its Cloud Foundry cloud application platform, and Bitnami, a popular application delivery service. The combined value of these acquisitions would reach five billion dollars, so this looks like a major upgrade of VMware’s long-term cloud strategy.
Looking back at the company’s 20-year history, one cannot but admire VMware’s enormous influence on the very foundation and development of cloud computing, yet its relationship with the cloud has been quite uneven. As a pioneer in hardware virtualization, VMware basically laid the technological foundation for scalable and manageable computing infrastructures, first in on-premises datacenters and later in the public cloud. Over the years, the company has dabbled in IaaS and PaaS services as well, but those attempts weren’t particularly successful: the Cloud Foundry platform was spun out as a separate company in 2013 (the very same Pivotal that VMware is about to buy back now!), and the vCloud Air service was sold off in 2017.
This time however the company seems quite resolute to try it again. Why? What has changed in recent years that may give VMware another chance? Quite a lot, to be fair.
First of all, the cloud is no longer a buzzword: most businesses have already figured out its capabilities and potential limitations, outlined their long-term strategies, and are now working on integrating cloud technologies into their business goals. Becoming cloud-native is no longer an answer to all problems; nowadays it always raises the next question: which cloud is good enough for us?
Second, developing modern applications, services, or other workloads specifically for the public cloud to fully unlock all its benefits is not an easy job: old-school development tools and methods, legacy on-premises applications (many of which run on VMware-powered infrastructure, by the way), and strict compliance regulations limit the adoption rate. The “lift and shift” approach is usually frowned upon, but many companies have no alternative: the best they can hope for is a way to make their applications work the same way in every environment, both on-prem and in any of the existing clouds.
Last but not least, the current state of cloud security leaves a lot to be desired, as numerous data breaches and embarrassing hacks of even the largest enterprises indicate. Even though cloud service providers are working hard to offer numerous security tools for their customers, implementing and managing dozens of standalone agents and appliances without leaving major gaps between them is a challenge few companies can master.
This is what VMware’s new vision is aiming at: an integrated platform for developing, running, and securing business applications that work consistently across every on-premises environment, mobile device, and major cloud, with proactive security built directly into the unified platform instead of being bolted onto it in many places. VMware’s own infrastructure technologies, which can now run natively on the AWS and Azure clouds, combined with Pivotal’s Kubernetes-powered application platform and Carbon Black’s cloud-native security analytics, which can now monitor every layer of the computing stack, are expected to provide the foundation for such a platform in the very near future.
How quickly and consistently VMware will be able to deliver on this promise remains to be seen, of course. Hopefully, third time’s a charm!
After the recent Capital One breach, some commentators have suggested that cloud security is fundamentally flawed. Like many organizations today, Capital One uses Amazon Web Services (AWS) to store data, and it was this data that was targeted and successfully stolen.
In the case of Capital One it was process, not technology, that failed. The company failed on three points to secure its data properly using the extended tool sets that AWS provides. It relied only on the default encryption settings in AWS, suggesting a lack of product knowledge or complacency in security teams. The Access Control policies had not been properly configured and allowed anonymous access from the web. Finally, the breach was not discovered until four months after it happened because Capital One had not turned on the real-time monitoring capabilities in AWS. This last point would put the company in a tricky position if any of the data belonged to EU citizens – in this case it looks like only US citizens were affected.
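The access-control failure described above is detectable with very little code. As an illustration (a simplified check on a bucket policy document, not a substitute for AWS’s own Access Analyzer or S3 Block Public Access settings), the following sketch flags policies that grant Allow permissions to the anonymous principal:

```python
def has_public_access(policy: dict) -> bool:
    """Flag Allow statements whose principal is the anonymous wildcard '*'."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        if principal == "*":                    # shorthand anonymous form
            return True
        if isinstance(principal, dict):
            for value in principal.values():    # e.g. {"AWS": "*"} or lists
                values = value if isinstance(value, list) else [value]
                if "*" in values:
                    return True
    return False
```

Running a check like this across every bucket in an account, as part of routine configuration governance, is exactly the kind of process control that was missing here.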
The lesson from the incident isn’t that cloud security is not up to the job. Certainly, putting data in the cloud without protection is foolish but modern cloud platforms such as AWS and Azure, for example, have advanced configuration controls to defend robustly against breach attempts. The cloud is here to stay; the digital transformation essential to modern business depends on it. To suggest we curtail its usage because of security concerns is avoiding our responsibility and ability to secure it with the tools at our disposal.
To learn how KuppingerCole Analysts can assist you in establishing a compliant and secure cloud strategy, please download our Advisory Services brochure.
A new strain of Sodinokibi ransomware is being used against companies in the United States and Europe. Already notable for a steep increase in the ransoms demanded ($500,000 on average), the malware can now activate itself, bypassing the need for users to click a phishing link, for example. In addition, the Financial Times reports that criminals are targeting Managed Service Providers (MSPs) to find backdoors into their clients’ data, as well as attacking companies directly. “They are getting into an administration system, finding lists of client privileged credentials and then installing Sodinokibi on all the clients’ systems,” the report warns.
Ransomware has proven to be highly effective for cyber criminals, as many companies have no alternative but to pay up after they have been locked out of their own systems. This is particularly true of smaller companies who often have no cyber insurance to cover their losses. Criminal hackers have also become more ruthless – sometimes refusing to unlock systems even after the ransom has been paid.
But the sophistication of this new strain of Sodinokibi and the inflated ransom demands tell us that the criminal developers and distributors have raised the bar. The ransomware does not need to find vulnerabilities, as it gains “legitimate” access to data through stolen credentials. Left unchecked, Sodinokibi threatens to be as damaging as its notorious predecessor, Petya.
Even Managed Security Service Providers (MSSPs) are not immune. According to reports, one such MSSP was attacked through an unpatched version of the Webroot Management Console, enabling attackers to spread the ransomware to all its clients. Webroot responded by sending out a warning email to all its customers, saying it had logged out everyone and activated mandatory two-factor authentication.
Webroot’s warning email after one of its MSSP customers was attacked by Sodinokibi
Notwithstanding the fact that any MSSP’s clients should expect it to take robust and regular proactive security steps as part of an SLA, this shows that diligent use of IAM and authentication controls can do much to prevent ransomware from doing its worst. But it is privileged accounts that are the true nectar for cyber criminals, as these unlock so many doors to critical data and services. This is why PAM (Privileged Account Management) is essential in today’s complex, hybrid organizations, and if this responsibility is outsourced to MSPs or MSSPs, it is doubly important. (For more on PAM, please see our recent Leadership Compass and Whitepaper research documents.)
The success of any ransomware, which is not a complex piece of code in itself, depends on a lack of preparedness by organizations and a lack of due diligence in patching systems to prevent it from reaching its intended targets. In the case of Sodinokibi, its new ability to execute unaided makes this more important than ever.
When too many users have access to critical data and systems, it makes life much easier for ransomware. A properly configured and up-to-date PAM platform, whether on premises or at an MSP, will do much to stop this and prevent the situation found at many organizations, where privileged accounts and admins often have too much access. Best practice for today’s enterprise environments is to issue credentials for single tasks, strictly time-limited, and to make two-factor authentication the default for privileged accounts. This would stop ransomware from spreading too far into an organization. Another useful concept for MSPs and MSSPs is fully automated administration of client services with well-tested runbooks, and no personalized access to the systems at all.
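The single-task, time-limited credential pattern mentioned above can be sketched very simply. The example below is a toy illustration of the idea only; real PAM products add vaulting, rotation, session recording, and approval workflows on top:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivilegedGrant:
    token: str         # opaque one-time credential
    task: str          # the single task this grant is scoped to
    expires_at: float  # absolute expiry timestamp (epoch seconds)

def issue_grant(task: str, ttl_seconds: int = 900) -> PrivilegedGrant:
    """Issue a credential scoped to one task that expires after ttl_seconds."""
    return PrivilegedGrant(secrets.token_urlsafe(32), task,
                           time.time() + ttl_seconds)

def is_valid(grant: PrivilegedGrant, task: str, now: float = None) -> bool:
    """A grant is usable only for its own task and only before expiry."""
    current = time.time() if now is None else now
    return grant.task == task and current < grant.expires_at
```

Because every credential expires on its own and unlocks exactly one task, stolen admin credentials lose most of their value to an attacker, which is precisely what makes this pattern effective against credential-driven ransomware.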
Of course, a management platform should itself be patched to stop any form of ransomware from reaching those credentials in the first place - patches for the vulnerabilities exploited by Sodinokibi are widely available - but as we have seen, organizations cannot rely on that happening. Given what happened with the Webroot platform, there is a strong argument for organizations to host IAM on premises, at least for privileged account management, so that they retain control over patch management. A robust IAM and PAM solution will prevent “access creep” by ensuring the consistent application of rules and policies across an organization. After all, hackers can’t demand a ransom if they can’t get access to your critical systems.
The EU’s European Banking Authority (EBA) issued clarifications about what constitutes Strong Customer Authentication (SCA) back in late June. The definition states that two or more of the following categories are required: inherence, knowledge, and possession. These are often interpreted as something you are, something you know, and something you have, respectively. We have compiled and edited the following table from the official EBA opinion:
| Inherence elements | Compliant with SCA? |
|---|---|
| Hand and face geometry | Yes |
| Retina and iris scanning | Yes |
| Behavioral biometrics, including keystroke dynamics, heart rate or other body movement patterns that uniquely identify PSUs (Payment Service Users), and mobile device gyroscopic data | Yes |
| Information transmitted using EMV 3-D Secure 2.0 | No |

| Knowledge elements | Compliant with SCA? |
|---|---|
| Password, passphrase, or PIN | Yes |
| Knowledge-based authentication (KBA) | Yes |
| Memorized swiping path | Yes |
| Email address or username | No |
| Card details (including CVV codes on the back) | No |

| Possession elements | Compliant with SCA? |
|---|---|
| Possession of a device evidenced by an OTP generated by, or received on, a device (hardware/software token generator, SMS OTP) | Yes |
| Possession of a device evidenced by a signature generated by a device (hardware or software token) | Yes |
| Card or device evidenced through a QR code (or photo TAN) scanned from an external device | Yes |
| App or browser with possession evidenced by device binding, such as through a security chip embedded into a device, a private key linking an app to a device, or the registration of a web browser linking that browser to a device | Yes |
| Card evidenced by a card reader | Yes |
| Card with possession evidenced by a dynamic card security code | Yes |
| App installed on the device | No |
| Card with possession evidenced by card details (printed on the card) | No |
| Card with possession evidenced by a printed element (such as an OTP list, e.g. "Grid Cards") | No |
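The core rule from the opinion, that a transaction must present elements from at least two distinct categories, can be expressed as a simple check. This is an illustrative sketch, not part of any PSD2 implementation; the element-to-category mapping below is a small hand-picked subset drawn from the table:

```python
# Map a few example authentication elements to their SCA categories
# (a subset of the compliant entries in the table above).
CATEGORY = {
    "fingerprint": "inherence",
    "face_geometry": "inherence",
    "pin": "knowledge",
    "password": "knowledge",
    "otp_device": "possession",
    "card_reader": "possession",
}


def satisfies_sca(elements):
    """Return True if the presented elements span two or more distinct categories."""
    categories = {CATEGORY[e] for e in elements}
    return len(categories) >= 2


assert satisfies_sca(["pin", "otp_device"])      # knowledge + possession: compliant
assert not satisfies_sca(["pin", "password"])    # two knowledge elements: not compliant
assert satisfies_sca(["fingerprint", "password", "otp_device"])  # all three categories
```

The key point the check captures is that doubling up within one category (for example, a password plus a PIN) does not constitute SCA; the elements must come from different categories.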
The list and details about implementations are subject to change. Check the EBA site for updates. KuppingerCole will also follow and provide updates and interpretations.
The EBA appears to be rather generous in what can be used for SCA, especially considering the broad range of biometric types on the list. However, a recent survey by GoCardless indicates that not all consumers trust and want to use biometrics, and these attitudes vary by country across the EU.
Although KBA is still commonly used, it should be deprecated due to the ease with which fraudsters can obtain KBA answers. The acceptance of smart cards or other hardware tokens is unlikely to make much of an impact, since most consumers aren’t going to carry special devices for authenticating and authorizing payments. Inclusion of behavioral biometrics is probably the most significant and useful clarification on the list, since it allows for frictionless and continuous authentication.
In paragraph 13, the EBA opinion opened the door for possible delays in SCA implementation: “The EBA therefore accepts that, on an exceptional basis and in order to avoid unintended negative consequences for some payment service users after 14 September 2019, CAs may decide to work with PSPs and relevant stakeholders, including consumers and merchants, to provide limited additional time to allow issuers to migrate to authentication approaches that are compliant with SCA…”
Finextra reported this week that the UK Financial Conduct Authority has announced an extension to March 2021 for all parties to prepare for SCA. The Central Bank of Ireland is following a similar course of delays. Given that various surveys place merchants' awareness of and readiness for PSD2 SCA at between 40% and 70%, it is not surprising to see such extensions. In fact, the Competent Authorities in more member states will likely follow suit.
While these moves are disappointing in some ways, they are also realistic. Complying with SCA provisions is not a simple matter: many banks and merchants still have much work to do, including modernizing their authentication and CIAM infrastructures to support it.