Blog posts by John Tolbert

Top 5 Work from Home Cybersecurity Recommendations for Enterprises

As the business world moves rapidly to enable work-from-home (WFH), enterprise IT teams need to shift resources and priorities to ensure that remote workers are protected. We are already seeing malicious actors adapt their campaigns to target remote workers. My colleague Alexei Balaganski has published a list of recommendations for small businesses.

The Situation

  • Check Point reports that 4,000 coronavirus-related domains have been registered since January 2020, of which 3% are malicious and another 5% suspicious. Phishing attacks aimed at capturing remote workers' credentials are increasing.
  • VPNs are under attack. Many companies use VPNs to provide remote access to on-premises computing resources. US-CERT reports that attackers are finding and exploiting VPN vulnerabilities as a way into organizational resources.
  • WFH does not mean users should send sensitive information to their personal accounts, but it's happening. Enterprises need to retain as much control as possible. Even if your organization allows BYOD, devices that handle company information have to be protected. As Matthias notes, a quick move to the cloud may be a good course of action, but it must be managed properly, with security in mind.

Recommendations

  • MFA ASAP – turn on Multi-Factor Authentication now. VPNs and webmail are easy targets when protected only by password authentication. Enable MFA for all applications as soon as practical. FIDO is an excellent standard for MFA that increases phishing resistance and preserves privacy. Provide simple, illustrated guidelines on how to use MFA (see the TOTP sketch after this list for how a common second factor works).
  • Endpoint Protection – every device needs anti-malware capabilities. Keep endpoint clients up to date. Provide simple, illustrated guidelines to your users on how to check that protection is enabled and how to turn it on.
  • Patch everything – turn on automatic patching. Some organizations still prefer to do in-house OS and application patch testing, but for remote workers this is no longer practical. If your users are on personal devices, urge them to allow automatic patching. Patch your VPNs. Patch your mail servers. If you're using SaaS mail, opt for any extra screening available.
  • Security Training – warn your WFH workers that they are at increased risk of phishing and other attacks. Update your training and increase the frequency of reminders. Provide short videos explaining the most important challenges.
  • Update or deploy DLP (Data Leakage Prevention) and CASB (Cloud Access Security Broker) solutions. As work becomes more distributed in response to this crisis, it will become more difficult to identify and protect information. If your organization uses these types of solutions, they may need to be tuned to accommodate a massive relocation of workers. If it does not, consider them as a potentially strong risk mitigation strategy. Deploying these kinds of tools won't happen overnight, but now is the time to evaluate them.
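
To make the MFA recommendation concrete, here is a minimal Python sketch of how a time-based one-time password (TOTP, RFC 6238) is computed; this is the algorithm behind common authenticator apps. It is illustrative only, and the base32 secret is a well-known test value, not a real credential.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval  # 30-second time step (the moving factor)
    msg = struct.pack(">Q", counter)        # 8-byte big-endian counter, per RFC 4226
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Example only; never hard-code real secrets:
print(totp("JBSWY3DPEHPK3PXP"))

The server and the authenticator app share the secret at enrollment and each compute the code independently; possession of the device holding the secret becomes the second factor.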

There are many other possible actions to take, but these five are a good place to start to reduce risks of data breaches. For solutions reviews and comparisons, see our research. For actionable guidance, our team of advisors can assist you with developing tactics and strategies.

Malicious Actors Exploiting Coronavirus Fears

Security researchers are discovering a number of malicious attacks designed to exploit public fears around COVID-19, more commonly just called coronavirus. The attacks to date take two major forms: a map that looks legitimate but downloads malware, and various document attachments that purport to provide health and safety information related to COVID-19.

The coronavirus heat map may look legitimate, in that it takes information from Johns Hopkins University's page, which is itself clean. However, nefarious actors have created a malware package for sale on the dark web, deployed at "corona-virus-map.com", which uses the AZORult malware to steal credentials and credit card information. Links to sites bearing this malware have been spread through email.

The second type of attack also arrives via email. These messages contain attachments that look like official information on how to prevent coronavirus, complete with stolen pictures and logos. Some download trojans and other malware; others ask victims to verify email addresses and passwords, which the attackers then capture.

Unfortunately, such attacks and scams are likely to continue in the weeks ahead.

Recommendations

KuppingerCole’s advice is:

  1. Beware of phishing. Remind users not to click suspicious links and attachments. Make enterprise users and friends and family aware of these scams.
  2. Use email security gateways. If you’re using a SaaS-delivered email service, opt for any additional security screening if available.
  3. Use anti-malware products on all endpoints. Keep subscriptions current.

Find out what IT should avoid in times of crisis.

For more information on anti-malware, see our list of publications on the subject.

External Sources

https://www.grahamcluley.com/coronavirus-map-used-to-spread-malware/

https://krebsonsecurity.com/2020/03/live-coronavirus-map-used-to-spread-malware/

https://www.pcrisk.com/removal-guides/17270-corona-virus-map-com-trojan

https://blog.malwarebytes.com/social-engineering/2020/02/battling-online-coronavirus-scams-with-facts/

https://nakedsecurity.sophos.com/2020/02/05/coronavirus-safety-measures-email-is-a-phishing-scam/

https://www.kaspersky.com.au/blog/coronavirus-used-to-spread-malware-online/25737/

High Assurance MFA Options for Mobile Devices

In recent years much of the focus in the authentication space has been on MFA, mobile devices, and biometrics. Many technical advances have been made which also serve to increase usability and improve consumer experiences. There are a few reasons for this.

MFA

Multi-factor authentication is the number one method for reducing ATO (account takeover) fraud and preventing data breaches. We all know that password authentication is weak and the easiest way in for malicious actors. MFA has been mandated by security policy in many organizations and government agencies for years. MFA is now also required in the consumer space by regulation: EU PSD2, for example, calls for Strong Customer Authentication (SCA = 2FA plus risk-adaptive authentication) for financial app customers.

Mobile devices

Smartphones are commonplace, and studies show that consumers tend to protect them even better than their wallets. People have become used to using a phone as a second factor via SMS OTP, although that method has known security problems. Phone + PIN is a reasonable second-factor method. Mobile push notifications are also an accepted paradigm. Increasingly we see mobile apps for authenticating users, generally built using SDKs from authentication service providers. In some cases these SDKs allow the use of security features such as the GlobalPlatform Secure Element (SE) and Trusted Execution Environment (TEE).

Mobile biometrics

Apple’s Touch ID and Face ID brought mobile biometrics into the mainstream. Samsung and other Android vendors also offer native capabilities. More advanced third-party mobile biometric apps are available that add behavioral/passive and other modalities such as voice recognition. From the standpoint of False Acceptance Rate (FAR, the measure of how often an impostor can gain unauthorized access), Apple reports an impressive one in a million for Face ID and one in 50,000 for Touch ID. Though these numbers look great, mobile biometrics remain susceptible to presentation attacks, despite vendors' use of liveness detection methods.

High identity assurance solutions for mobile

Back in 2014, US NIST released SP 800-157, which provided guidelines for the Derived PIV Credential. PIV cards are Smart Cards that are used by some US government and other agencies for in-person and electronic authentication. The process for obtaining a PIV card is rigorous, which is why it is considered a high assurance credential. NIST SP 800-157 was designed to provide an alternative way of using PIV credentials with mobile devices: rather than using card readers, a parallel (not a copy) set of credentials, including keys and certificates, can be issued to and installed on mobile devices. These implementations require the use of security mechanisms such as the SE and TEE. A number of vendors provide compliant solutions.

This Derived PIV Credential approach would also work well in the private sector. Many companies use Smart Cards or other hardware tokens for authentication today. Keys and certificates generated by an enterprise PKI could likewise be issued in parallel to employees’ devices. Moreover, the ability to combine high assurance credentials on mobile with FIDO2 opens many possibilities; for example, using Smart Card strength credentials stored on phones to authenticate to laptops, desktops, and web-based applications.

Recommendation

KuppingerCole believes that companies or other organizations that are looking to modernize IAM solutions in general and authentication services in particular should consider these options. High assurance mobile MFA solutions are suited for organizations that:

  1. Have high identity assurance level requirements, either by policy or regulation
  2. Have existing investments and expertise in PKI
  3. Issue mobile devices to their workforces
  4. Have existing UEM or EMM solutions in place

High assurance mobile MFA can be deployed alongside current Smart Card or hardware token (PKI) infrastructure, allowing for a controlled, phased rollout. To maintain separation but preserve compatibility, a new intermediate CA (certificate authority) can be installed under the root to issue parallel keys and certificates for mobile devices, as sketched below. In this scenario, there is no need to “rip and replace”.
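
Here is a hedged sketch of the parallel-issuance idea, using the pyca/cryptography Python package: an intermediate CA key signs a certificate for a key pair that, in a real deployment, would be generated inside the device's SE/TEE and issued under NIST SP 800-157 processes. All names and validity periods are invented for illustration.

from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def name(cn: str) -> x509.Name:
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

# Intermediate CA key; in practice generated and held in an HSM under the enterprise root
ca_key = ec.generate_private_key(ec.SECP256R1())

# Device key pair; in a real deployment generated inside the phone's Secure Element
device_key = ec.generate_private_key(ec.SECP256R1())

cert = (
    x509.CertificateBuilder()
    .subject_name(name("employee-device-0042"))           # hypothetical device identity
    .issuer_name(name("Example Corp Mobile Issuing CA"))  # hypothetical intermediate CA
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(ca_key, hashes.SHA256())                        # the intermediate CA signs
)
print(cert.subject.rfc4514_string())

A production profile would also add extensions (key usage, basic constraints, CRL distribution points) per the enterprise certificate policy; they are omitted here for brevity.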

In the long run, high assurance mobile MFA solutions utilizing the FIDO and WebAuthn protocols can increase usability, decrease the costs associated with issuing and replacing Smart Cards or hard tokens, and promote interoperability between mobile and traditional computing devices, web apps, and web services.

For organizations that don’t currently have PKI-based IAM solutions, there is no need to build out CAs and issue certificates to mobile devices. In this case, it would be more efficient to implement FIDO2 authentication. A pure FIDO solution provides similar benefits, such as unique key pair generation on a per-application basis and standards-based communication protocols, without the weight of PKI. Many FIDO authenticators available today can provide strong authentication assurance, but only the processes and PKI described in the previous sections can provide high identity assurance.

The authentication market has a plethora of options today. The time is right to upgrade to strong MFA and risk-adaptive authentication. The challenges reside in understanding your business and regulatory environments and choosing the right mix of authenticators, risk analytics capabilities, and management tools.

For more information or assistance in evaluating high assurance mobile MFA solutions or other authentication services, see https://www.kuppingercole.com/advisory.

Applying the Information Protection Life Cycle and Framework to CCPA

The California Consumer Privacy Act (CCPA) became effective on January 1, 2020, and enforcement is slated to start by July 1, 2020. CCPA is a complex regulation that bears some similarities to the EU GDPR; for more information on how the two compare, see our webinar. Both regulations deal with how organizations handle PII (Personally Identifiable Information). CCPA intends to empower consumers by giving them the choice to disallow onward sales of their PII by organizations that hold that information. A full discussion of what CCPA entails is out of scope here. In this article, I want to focus on how our Information Protection Life Cycle (IPLC) and Framework can help organizations prepare for CCPA.

What is considered PII under CCPA?

Essentially, anything that can be used to identify individuals or households of California residents. A summarized list (drawn from the text of the law) includes:

  • Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, IP address, email address, account name, SSN, driver’s license number, passport number, or other similar identifiers.
  • Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.
  • Biometric information.
  • Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet Web site, application, or advertisement.
  • Geolocation data.
  • Professional or employment-related information.
  • Education information, defined as information that is not publicly available.
  • Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.

The list of data types that are designated as PII by CCPA is quite extensive.

How does a company or organization that is subject to CCPA go about protecting this information from unauthorized disclosure?

The IPLC offers a place to start. Discovery/classification is the first phase in the IPLC. You have to understand what kinds of information you have in order to know whether you’re subject to CCPA (or any other pertinent regulations). As with GDPR, a Data Protection Impact Assessment (DPIA) type exercise is a good first step. Organizations that hold, sell, or process California resident PII need to conduct data inventories to discover what kinds of PII they may have. There are automated tools that can greatly improve your chances of finding all such data across disparate systems, from on-premises applications and databases to cloud-hosted repositories and apps. Many of these tools can be quite effective, thanks to the well-known formats of PII. For example, Data Leakage Prevention (DLP) and Data Classification tools have been finding and categorizing data objects such as SSNs, credit card numbers, email addresses, and driver’s license numbers for years.
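
As a minimal illustration of how such pattern-based discovery works, the Python sketch below scans text for a few well-known PII formats: SSNs, email addresses, and candidate credit card numbers filtered with the Luhn checksum. Real DLP products add context analysis, many more data types, and tuned validators; these patterns are deliberately simplified.

import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum: filters out digit runs that merely look like card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def classify(text: str) -> dict:
    hits = {}
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if label == "card":
            matches = [m for m in matches if luhn_ok(m)]
        if matches:
            hits[label] = matches
    return hits

# Sample values only (the SSN and card number are standard test values)
print(classify("Contact jdoe@example.com, SSN 078-05-1120, card 4111 1111 1111 1111"))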

DLP and classification tools generally provide two ways of applying those classifications to data objects:

  • Metadata tagging – adding data about the data to the object itself to signify what type it is and how it should be handled by applications and access control / encryption systems. This method works well for unstructured data objects such as XML, Office documents, PDFs, media files, etc. In some cases, the metadata tags can be digitally signed and encrypted too for additional security and non-repudiation.
  • Database registration – adding database elements (additional tables, or columns and rows) to databases to indicate which rows, columns, or cells constitute certain data types. This is usually needed for applications that have SQL or NoSQL back-ends that contain PII, since metadata tagging will not work. This approach is more cumbersome and may require database access proxies (or API gateways) to mediate access and integrate with centralized attribute-based access control (ABAC) systems.

Thus, we see that the first phase of the IPLC and the tool types related to it (Discovery/Classification) are the place to begin preparing for CCPA enforcement. For additional information on these kinds of tools and more guidance on CCPA and GDPR, see https://plus.kuppingercole.com/. Also, watch our blog in the days ahead, as we will be publishing more about CCPA and how to prepare.

More SEs + TEEs in Products = Improved Security

GlobalPlatform announced in Q4 2019 that more than 1 billion TEE (Trusted Execution Environment) compliant devices shipped in 2018, a 50% increase over the previous year. Moreover, 6.2 billion SEs (Secure Elements) were shipped in 2018, bringing the total number of SEs manufactured since 2010 to over 35 billion.

This is good news for cybersecurity and identity management. TEEs are found in most Android-based smartphones and tablets. A TEE is a secure area of the processor architecture and OS that isolates programs from the Rich Execution Environment (REE), where most applications execute. Some of the most important TEE characteristics include:

  • All code executing in the TEE has been authenticated
  • Integrity of the TEE and confidentiality of the data therein are assured by isolation, cryptography, and other security mechanisms
  • The TEE is designed to resist known remote and software attacks, as well as some hardware attacks.

See Introduction to Trusted Execution Environments for more information.

A Secure Element (SE) is a tamper-resistant component which is used in a device to provide the security, confidentiality, and multiple application environments required to support various business models. Such a Secure Element may exist in any form factor such as UICC, embedded SE, smartSD, smart microSD, etc. See Introduction to Secure Elements for more information.

GlobalPlatform has functional and security certification programs, administered by independent labs, to ensure that vendor products conform to its standards.

These features make TEEs the ideal place to run critical apps and apps that need high security, such as mobile banking, authentication, biometric processing, and mobile anti-malware apps. SEs are the components in which PKI keys and certificates, FIDO keys, or biometric templates used for strong or multi-factor authentication should be securely stored.

The FIDO Alliance has partnered with GlobalPlatform on security specifications. FIDO has three levels of authenticator certification, and using a TEE is required for Level 2 and above. For example:

  • FIDO L2: UAF implemented as a Trusted App running in an uncertified TEE
  • FIDO L2+: FIDO2 using a keystore running in a certified TEE
  • FIDO L3: UAF implemented as a Trusted App running in a certified TEE using SE

See FIDO Authenticator Security Requirements for more details.

KuppingerCole recommends as a best practice that all such apps be built to run in a TEE and store credentials in the SE. This architecture provides the highest security level, ensuring that unauthorized apps can neither access the stored credentials nor interfere with the operation of the trusted app. The combination also presents a Trusted User Interface (TUI), which prevents other apps from recording or tampering with user input, for example where PIN authentication is used.

In recent Leadership Compasses, we have asked whether vendor products for mobile and IoT can utilize the TEE and, where key and certificate storage is required, whether vendor products can store those data assets in the SE. Our recent Leadership Compasses show which vendors use SEs and TEEs.

In addition to mobile devices, GlobalPlatform specifications pertain to IoT devices. IoT adoption is growing, and a myriad of security concerns have arisen due to the generally insecure nature of many types of IoT devices. GlobalPlatform’s IoTopia initiative directly addresses these concerns by working to build a comprehensive framework for designing, certifying, deploying, and managing IoT devices in a secure way.

KuppingerCole will continue to follow developments by GlobalPlatform and provide insights on how these important standards can help organizations improve their security posture.

The 20-Year Anniversary of Y2K

The great non-event of Y2K happened twenty years ago. Those of us in IT at that time weren’t partying like it was 1999; we were standing by, making sure the systems we were responsible for could handle the date change. Fortunately, the hard work of many paid off, and the entry into the 21st century was smooth. Many things have changed in IT over the last 20 years, but many others are pretty similar.

What has changed?

  • Pagers disappeared (that’s a good thing)
  • Cell phones became smartphones
  • IoT devices began to proliferate
  • The cloud appeared and became a dominant computing architecture
  • CPU power and storage have vastly increased
  • Big data and data analytics emerged
  • More computing power has led to the rise of Machine Learning in certain areas
  • Cybersecurity, identity management, and privacy grew into discrete disciplines to meet the exponentially growing threats
  • Many new domain- and geographic-specific regulations
  • Attacker TTPs have changed and there are many new kinds of security tools to manage
  • Businesses and governments are on the path to full digital transformation

What stayed (relatively) the same?

  • Patching is still important, now for security rather than Y2K functionality
  • Identity remains an attack and fraud vector
  • Malware, though it has evolved dramatically into many forms, is a persistent and growing threat
  • IT is still a growing and exciting field, especially in the areas of cybersecurity and identity management
  • There aren’t enough people to do all the work

What will we be working on in the years ahead?

  • Securing operational tech and IoT
  • Using and securing AI & ML
  • Blockchain
  • Cybersecurity, Identity, and Privacy

What are the two constants we have to live with in IT?

  • Change
  • Complexity

Though we may not have big, industry-wide dates like Y2K to work toward, cybersecurity, identity, and privacy challenges will always need to be addressed. With methodologies like Agile, DevOps, and SecDevOps driving ever-faster release cycles, these challenges will only continue to accelerate.

Check KC Plus for regular updates on our research into these ever-changing technologies, and please join us for EIC (The European Identity and Cloud Conference) in Munich in May 2020.

The Information Protection Life Cycle and Framework

The Information Protection Life Cycle (IPLC) and Framework describes the phases, methods, and controls associated with the protection of information. Though other IT and cybersecurity frameworks exist, none specifically focus on the protection of information across its useful life. The IPLC documents three stages in the life of information and six categories of controls that can be applied to secure it.

Stages in the life of information

Information is created, used, and (sometimes) disposed of when it is no longer needed or valid. Information can be actively created, such as when you start a new document, add records to a database, take photos, or post blogs. Information is also passively created when users and devices digitally interact with one another and with applications; passively generated information often takes the form of log files, telemetry, or records added to databases without the explicit action of users. During its useful life, information can be analyzed and modified in various ways by users, devices, and applications. After a certain point, information may cease to be useful, perhaps due to inaccuracies, inconsistencies, migration to new platforms, or incompatibility with new systems, or because the regulatory mandates to store it have lapsed. When information is no longer useful, it needs to be disposed of by archival or deletion, depending on the case.

The types of controls applicable to information protection at each phase are briefly described below.

Discovery and classification

To properly protect information, it must first be discovered and classified. The company picnic announcement is not as sensitive or valuable as the secret sauce in your company’s flagship product. Information can be discovered and classified at the time of creation and as a result of data inventories. Thanks to GDPR’s Data Protection Impact Assessments (DPIAs), such inventories are now conducted more commonly.

Classification schemes depend on the industry, regulatory regimes, types of information, and a host of other factors. Classification mechanisms depend on the format. For structured data in databases, tools may add rows/columns/tables for tracking cell-level sensitivity. For unstructured data such as documents in file systems, metadata can be applied (“tagged”) to individual data objects.

Access Control

Access control must be granular: only authorized users on trusted devices should be able to read, modify, or delete information. Access control systems evaluate attributes of users, devices, and resources in accordance with pre-defined policies. Several access control standards, tools, and token formats exist. Access control can be difficult to implement across an enterprise due to the disparate kinds of systems involved, from on-premises to mobile to IaaS to SaaS apps. It is still on the frontier of identity management and cybersecurity.
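
To show the concept in miniature, here is a hedged Python sketch of an attribute-based access control (ABAC) decision: a policy rule lists required attribute values over user, device, and resource, in the spirit of (though far simpler than) XACML. The attribute names and the rule itself are invented for illustration.

from dataclasses import dataclass

@dataclass
class Request:
    user: dict
    device: dict
    resource: dict
    action: str

# Default deny: access is granted only if some rule matches all of its conditions
POLICY = [
    {   # finance staff may read confidential records from managed, patched devices
        "action": "read",
        "user": {"department": "finance", "mfa": True},
        "device": {"managed": True, "patched": True},
        "resource": {"classification": "confidential"},
    },
]

def rule_permits(rule: dict, req: Request) -> bool:
    if rule["action"] != req.action:
        return False
    for part in ("user", "device", "resource"):
        attrs = getattr(req, part)
        if any(attrs.get(k) != v for k, v in rule[part].items()):
            return False
    return True

def decide(req: Request) -> str:
    return "Permit" if any(rule_permits(r, req) for r in POLICY) else "Deny"

req = Request(
    user={"department": "finance", "mfa": True},
    device={"managed": True, "patched": True},
    resource={"classification": "confidential"},
    action="read",
)
print(decide(req))  # Permit; change any attribute and the default deny applies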

Encryption, Masking, and Tokenization

These controls protect the confidentiality and integrity of information in transit and at rest. Encryption tools are widely available but can be hard to deploy and manage. Interoperability is often a problem.

In many cases, masking means irreversible substitution or redaction. For personally identifiable information (PII), pseudonymization is often employed to allow access to underlying information while preserving privacy. In the financial space, vaulted and vaultless tokenization are techniques that essentially issue privacy-respecting tokens in place of personal data. This enables one party to the transaction to assume and manage the risk while allowing the other parties to avoid storing and processing PII or payment instrument information.
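
The two tokenization styles can be sketched in a few lines of Python: a vaulted approach stores a random token-to-value mapping, while a vaultless approach derives the token from the value with a keyed function (HMAC here). Real products use format-preserving encryption and hardware-protected keys; treat this as a conceptual toy, with all values invented.

import hashlib
import hmac
import secrets

# Vaulted: random token; the mapping lives in a protected "vault"
vault = {}

def tokenize_vaulted(pan: str) -> str:
    token = secrets.token_hex(8)  # no mathematical relation to the PAN
    vault[token] = pan            # only the vault can reverse the mapping
    return token

# Vaultless: token derived from the value under a secret key (HSM-held in practice)
KEY = secrets.token_bytes(32)

def tokenize_vaultless(pan: str) -> str:
    return hmac.new(KEY, pan.encode(), hashlib.sha256).hexdigest()[:16]

pan = "4111111111111111"  # standard test card number
t1 = tokenize_vaulted(pan)
t2 = tokenize_vaultless(pan)
print(t1, vault[t1])                      # reversible, but only via the vault
print(t2, tokenize_vaultless(pan) == t2)  # deterministic, not reversible without the key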

Detection

Sometimes attackers get past other security controls. It is necessary to put tools in place that can detect signs of nefarious activity at the endpoint, server, and network layers. At the endpoint level, all users should be running current Endpoint Protection (EPP, or anti-malware) products. Some organizations may benefit from EDR (Endpoint Detection & Response) agents. Servers should be similarly outfitted and should also send event logs to SIEM (Security Information and Event Management) systems. For networks, some organizations have used Intrusion Detection Systems (IDS), which are primarily rule-based and prone to false positives. Next-generation Network Threat Detection & Response (NTDR) tools have the advantage of using machine learning (ML) algorithms to baseline network activity and better alert on anomalous behavior. Each type of solution has pros and cons, and all require knowledgeable and experienced analysts to run them effectively.

Deception

This is a newer approach to information protection, derived from the old notion of honeypots. Distributed Deception Platforms (DDPs) deploy virtual resources designed to look attractive to attackers to lure them away from your valuable assets and into the deception environment for the purposes of containment, faster detection, and examination of attacker TTPs (Tools, Techniques, and Procedures). DDPs help reduce MTTR (Mean Time To Respond) and provide an advantage to defenders. DDPs are also increasingly needed in enterprises with IoT and medical devices, as they are facing more attacks and the devices in those environments usually cannot run other security tools.

Disposition

When information is no longer valid and does not need to be retained for legal purposes, it should be removed from active systems. This may include archival or deletion, depending on the circumstances. The principle of data minimization is a good business practice to limit liability.

Conclusions

KuppingerCole will further develop the IPLC concept and publish additional research on the subject in the months ahead. Stay tuned! In the meantime, we have a wealth of research on EPP and EDR, access control systems, and data classification tools at KC PLUS.

Need for Standards for Consumable Risk Engine Inputs

As cybercrime and concerns about cybercrime grow, tools for preventing and interdicting cybercrime, specifically for reducing online fraud, are proliferating in the marketplace. Many of these new tools bring real value, in that they do in fact make it harder for criminals to operate, and such tools do reduce fraud.

Several categories of tools and services compose this security ecosystem. On the supply side there are various intelligence services. The forms of intelligence provided may include information about:

  • Users: Users and associated credentials, credential and identity proofing results, user attributes, user history, behavioral biometrics, and user behavioral analysis. Output format is generally a numerical range.
  • Devices: Device type, device fingerprint from Unified Endpoint Management (UEM) or Enterprise Mobility Management (EMM) solutions, device hygiene (operating system patch versions, presence and versions of anti-malware and/or UEM/EMM clients, and Remote Access Trojan detection results), Mobile Network Operator information (SIM, IMEI, etc.), jailbreak/root status, and device reputation. Output format is usually a numerical range.
  • Cyber Threat: IP and URL blacklisting status and mapped geo-location reputation, if available. STIX and TAXII are standards used for exchanging cyber threat intel. Besides these standards, many proprietary exchange formats exist as well.
  • Bot and Malware Detection: Analysis of session and interaction characteristics to assess the likelihood of manipulation by bots or malware. Output format can be Boolean, or a numerical range of probabilities, or even text information about suspected malware or botnet attribution.

Risk-adaptive authentication and authorization systems are the primary consumers of these types of intelligence. Conceptually, risk-adaptive authentication and authorization functions can be standalone services or can be built into identity and web access management solutions, web portals, VPNs, banking apps, consumer apps, and many other kinds of applications.

Depending on the technical capabilities of the authentication and authorization systems, administrators can configure risk engines to evaluate one or more of these different kinds of intelligence sources in accordance with policies. For example, consider a banking application. In order for a high-value transaction (HVT) to be permitted, the bank requires a high assurance that the proper user is in possession of the proper registered credential, and that the requested transaction is intended by this user. To accomplish this, the bank’s administrators subscribe to multiple “feeds” of intelligence which can be processed by the bank’s authentication and authorization solutions at transaction time.

The results of a runtime risk analysis that yields ‘permit’ may be interpreted as “yes, there is a high probability that the proper user has authenticated using a high assurance credential from a low risk IP/location, the request is within previously noted behavioral parameters for this user, and the session does not appear to be influenced by malware or botnet activity.”

This is great for the user and for the enterprise. However, it can be difficult for administrators to implement, because there are few standards for representing the results of intelligence gathering and risk analysis. The numerical ranges mentioned above vary from service to service. Some vendors provide scores from 0 to 99 or 999; others range from -100 to 100. What do the ranges mean? How can the scores be normalized across vendors? Does a score of 75 from intel source A mean the same as 750 from intel source B?

Perhaps there is room for a little more standardization. What if a few attribute name/value pairs were introduced and ranges limited, to improve interoperability and make implementation easier for policy authors? Consider the following claims set, which could be translated into formats such as JWT, SAML, or XACML:

{
    "iss": "IntelSource",
    "iat": 1565823456,
    "exp": 1565823457,
    "aud": "RiskEngine",
    "sub": "wcoyote@example.com",
    "UserAssuranceLevel": "93",
    "DeviceAssuranceLevel": "86",
    "BotProbability": "08"
}

The above example* shows an Issuer of “IntelSource”, with timestamp and expiry, an Audience of “RiskEngine”, a Subject (user ID), and three additional attributes: “UserAssuranceLevel”, “DeviceAssuranceLevel”, and “BotProbability”. These new attributes are composites of the information types listed above for each category. Ranges for all three attributes are 0-99. In this example, the user looks legitimate. Low user and device assurance levels and/or a high bot probability would make the transaction look like a fraud attempt.
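
A consuming risk engine could then apply policy over these normalized attributes. The hedged Python sketch below decodes such a token with the PyJWT library and applies an invented threshold policy for high-value transactions; the thresholds, the shared secret, and the step-up band are illustrative assumptions, not part of any standard.

import jwt  # PyJWT

SECRET = "demo-shared-secret"  # illustrative; a real feed would use asymmetric keys

def assess(token: str) -> str:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"], audience="RiskEngine")
    user = int(claims["UserAssuranceLevel"])
    device = int(claims["DeviceAssuranceLevel"])
    bot = int(claims["BotProbability"])
    # Invented policy: permit an HVT only with strong user and device assurance
    # and low bot probability; step up authentication in the grey zone.
    if user >= 90 and device >= 80 and bot <= 10:
        return "Permit"
    if user >= 70 and bot <= 25:
        return "Step-up authentication"
    return "Deny"

token = jwt.encode(
    {
        "iss": "IntelSource",
        "aud": "RiskEngine",
        "sub": "wcoyote@example.com",
        "UserAssuranceLevel": "93",
        "DeviceAssuranceLevel": "86",
        "BotProbability": "08",
    },
    SECRET,
    algorithm="HS256",
)
print(assess(token))  # Permit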

KuppingerCole believes that standardization of a few intelligence attributes as well as normalization of values may help with implementation of risk-adaptive authentication and authorization services, thereby improving enterprise cybersecurity posture.

*Thanks to http://jwtbuilder.jamiekurtz.com/ for the JWT sample.

EU EBA Clarifies SCA and Implementation Exceptions

The EU’s European Banking Authority (EBA) issued clarifications about what constitutes Strong Customer Authentication (SCA) back in late June. The definition states that two or more of the following categories of elements are required: inherence, knowledge, and possession. These are often interpreted as something you are, something you know, and something you have, respectively. We have compiled and edited the following summary from the official EBA opinion:

Inherence elements (compliant with SCA?):

  • Fingerprint scanning – Yes
  • Voice recognition – Yes
  • Vein recognition – Yes
  • Hand and face geometry – Yes
  • Retina and iris scanning – Yes
  • Behavioral biometrics, including keystroke dynamics, heart rate or other body movement patterns that uniquely identify PSUs (Payment Service Users), and mobile device gyroscopic data – Yes
  • Information transmitted using EMV 3-D Secure 2.0 – No

Knowledge elements (compliant with SCA?):

  • Password, passphrase, or PIN – Yes
  • Knowledge-based authentication (KBA) – Yes
  • Memorized swiping path – Yes
  • Email address or username – No
  • Card details (including the CVV code on the back) – No

Possession elements (compliant with SCA?):

  • Possession of a device evidenced by an OTP generated by, or received on, the device (hardware/software token generator, SMS OTP) – Yes
  • Possession of a device evidenced by a signature generated by the device (hardware or software token) – Yes
  • Card or device evidenced through a QR code (or photo TAN) scanned from an external device – Yes
  • App or browser with possession evidenced by device binding, such as a security chip embedded in the device, a private key linking the app to the device, or registration of the web browser linking it to the device – Yes
  • Card evidenced by a card reader – Yes
  • Card with possession evidenced by a dynamic card security code – Yes
  • App installed on the device – No
  • Card with possession evidenced by card details (printed on the card) – No
  • Card with possession evidenced by a printed element (such as an OTP list, e.g. “grid cards”) – No

The list and details about implementations are subject to change. Check the EBA site for updates. KuppingerCole will also follow and provide updates and interpretations.

The EBA appears to be rather generous in what can be used for SCA, especially considering the broad range of biometric types on the list. However, a recent survey by GoCardless indicates that not all consumers trust and want to use biometrics, and these attitudes vary by country across the EU.

Although KBA is still commonly used, it should be deprecated due to the ease with which fraudsters can obtain KBA answers. The acceptance of smart cards or other hardware tokens is unlikely to make much of an impact, since most consumers aren’t going to carry special devices for authenticating and authorizing payments. Inclusion of behavioral biometrics is probably the most significant and useful clarification on the list, since it allows for frictionless and continuous authentication.

In paragraph 13, the EBA opinion opened the door for possible delays in SCA implementation: “The EBA therefore accepts that, on an exceptional basis and in order to avoid unintended negative consequences for some payment service users after 14 September 2019, CAs may decide to work with PSPs and relevant stakeholders, including consumers and merchants, to provide limited additional time to allow issuers to migrate to authentication approaches that are compliant with SCA…”

Finextra reported this week that the UK Financial Conduct Authority has announced an extension to March 2021 for all parties to prepare for SCA. The Central Bank of Ireland is following a similar course. Given that various surveys place merchant awareness of and readiness for PSD2 SCA between 40 and 70%, such extensions are not surprising. In fact, the Competent Authorities in more member states are likely to follow suit.

While these moves are disappointing in some ways, they are also realistic. Complying with SCA provisions is not a simple matter: many banks and merchants still have much work to do, including modernizing their authentication and CIAM infrastructures to support it.

For more information, see our list of publications about PSD2. This is also a featured topic at our upcoming Digital Finance World conference, which will be held in Frankfurt, Germany in September.

