KuppingerCole Blog

The Sweet Spot for Blockchains: Registries

A couple of days ago, DIACC (Digital ID & Authentication Council of Canada), together with IBM Canada and the Province of British Columbia, released information about a PoC (Proof of Concept) for moving corporate registrations to a blockchain-based register. The PoC, which used Hyperledger Fabric, covered both the corporate registry of a single province and registries spanning multiple jurisdictions.

Such registries, be they corporate registries, land registers, or other types of decentralized ledgers, are the sweet spot for blockchains. Registration is decentralized. The registries and ledgers must be tamper-resistant. The data must be sequenced and, in many cases, time-stamped. All these are fundamental characteristics of blockchains. Simply put: it is far easier to construct solutions for such registries and ledgers based on blockchain technology than on traditional technologies such as relational databases.
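To make these characteristics concrete, here is a minimal, hypothetical Python sketch of a hash-chained, time-stamped registry. It illustrates only the underlying idea of tamper evidence and sequencing; it is not based on the Hyperledger Fabric PoC mentioned above, and all names in it are invented.

```python
# Minimal sketch (not production code): a hash-chained, time-stamped registry.
# Illustrates tamper evidence, sequencing, and time-stamping only.
import hashlib
import json
import time

def add_entry(chain, record):
    """Append a registry record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "sequence": len(chain),
        "timestamp": time.time(),
        "record": record,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; tampering with any earlier entry breaks the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["hash"] != hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest():
            return False
        if i > 0 and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

registry = []
add_entry(registry, {"company": "Example Corp", "jurisdiction": "BC", "action": "incorporate"})
add_entry(registry, {"company": "Example Corp", "jurisdiction": "BC", "action": "change of directors"})
print(verify(registry))  # True; altering any past field makes this False
```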

Here, the use case and the solution fit perfectly well. Given that improving such registries and other ledgers can have a massive economic and societal impact, it also proves the value of blockchains well beyond cryptocurrencies. Blockchains are here to stay, and I expect to see a strong uptake of use cases around registers and ledgers in the next couple of years – beyond PoCs, towards practical, widely deployed and used implementations. Honestly, every current investment in IT solutions for such registers should be revisited, evaluating whether stopping it and restarting based on blockchains isn't the better choice nowadays. In most cases, it will turn out that blockchains are the better choice.

A Short History of EIC - Europe's Leading Event on Digital ID & Transformation

More than 12 years ago, the first EIC attracted an already surprisingly large number of practitioners dealing with directory services, user provisioning and single sign-on, as well as vendors, domain experts, thought leaders and analysts. I remember Dick Hardt giving an incredibly visionary keynote on "User-Centrism - The Solution to the Identity Crisis?" at EIC 2007 - a topic which is still highly relevant. Or the legendary keynote panel back in 2008 on the question of whether there is a difference between the European way of doing IAM and the rest of the world, moderated by KuppingerCole's Senior Analyst Dave Kearns. In 2009, Kim Cameron of Microsoft gave a keynote on his Claims Based Model, which eventually came true, even if in an unexpected way... And look at Eve Maler's keynote on "Care and Feeding of Online Relationships", an early and mature vision of Customer Identity. She held that keynote not in 2017 - it was already back in 2009! EIC 2010 brought further remarkable keynotes: "Extending the Principles of Service-Oriented Security to Cloud Computing" by John Aisien, as well as André Durand's "Identity in the Cloud - Finding Calm in the Storm".

Now, for the more recent EIC conferences from 2011 to 2017, let me give you a selection of my personal favorite keynotes, even if I would change this selection every time I watch videos from past EIC sessions:

  • Doc Searls - "Free Customers - The New Platform" (EIC 2012): Doc actually gave a preview of GDPR before politicians started working on it
  • Martin Kuppinger's opening keynote back in 2014 on the key trends we talk about today
  • Patrick Parker's (EmpowerID) famous keynote on "IAM Meat and Potatoes Best Practices", describing what can go wrong in an IAM project and feeding a great EIC tradition of keynotes with high practical relevance
  • Amar Singh on "Heartbleed, NSA & Trust"
  • Mia Harbitz on IAM, Governance and Forced Migration at EIC 2016
  • Dr. Emilio Mordini's great talk "In a Wilderness of Mirrors: Do we still need Trust in the Online World?"
  • a great analyst / former analyst keynote panel on "Shaping the Future of Identity & Access Management" with Dan Blum, Gerry Gebel, Ian Glazer, Martin Kuppinger and Eve Maler, moderated by Prof. Dr. Sachar Paulus

EIC 2017 just blew my mind. So many great contributions make it impossible to choose. Maybe, to give you an idea, look at these three: William Mougayar - "State of Business in Blockchains", Richard Struse - "Let Them Chase Our Robots" and Joni Brennan - "Accelerating Canada's Digital ID Ecosystem Toward a More Trusted Global Digital Economy". Well, I can't but add another one: Balázs Némethi of Taqanu Bank on "Financial Inclusion & Disenfranchised Identification". Ok, one more: Daniel Buchner (Microsoft) on "Blockchain-Anchored Identity: A Gateway to Decentralized Apps and Services" - because it is so relevant.

Come and join EIC 2018 and enjoy another 120 hours of great and relevant content with speakers from all around the world. Or propose your own talk through the call-for-speakers feature on this website.

See you in Munich!
Joerg Resch, Head of EIC Agenda

General Data Protection Regulation – Rather an Evolution Than Revolution

Guest post by Tim Maiorino, Counsel of Osborne Clarke

The newest piece of EU legislation on data protection is the General Data Protection Regulation (GDPR), which will be enforceable from May 25th, 2018. It will bring several important changes, altering the requirements of data protection law in the European Union.

The GDPR will replace the EU Directive on Data Protection and, by extension, all transposing national regulation. The GDPR's objective is to harmonise data protection legislation across the EU and to “protect the fundamental rights of natural persons to the protection of their personal data”, while promoting the free movement of data within the EU.

An examination of the GDPR and its rules is unavoidable for any business handling personal information, as it provides a uniform standard for data protection throughout the EU and is directly applicable in all member states. Content-wise, German law has served as a role model for the GDPR. Although it replaces most of the current data protection laws across the EU, the changes – though far-reaching – do not override the fundamental principles of the current regime. Rather, the GDPR preserves the basic principles while implementing stricter and more extensive rules.

The most significant changes concern the scope and applicability, data governance and allocation of responsibilities, data subjects' rights (facilitation and expansion), and sanctions, which include heavy fines. The fines in case of non-compliance may be up to EUR 20 million or 4 % of the worldwide turnover of the previous financial year, whichever is higher.

Therefore, further action is required by businesses regarding the handling of personal data.

1. Interacting with Data Subjects

The GDPR establishes detailed requirements for both internal and external facing processes and policies, which can be divided into several steps.

Firstly, the internal processes should be identified and any possible information on the future use of personal data gathered.

Secondly, current policies and any external measures taken or information given with regard to the collection and use of personal data should be identified. You should be aware of the circumstances and the point at which data is collected, and of what information the data subject is given on the use of their data. It is also necessary to identify the method used to obtain the data subject's consent, where applicable, and which channels data subjects use to file access requests.

After that, discrepancies between internal and external processes can be recognised. It is essential to validate that the external-facing policies match the internal processes and the actual use of data. Controllers thus need to be aware of what the data subject (which includes staff, if their data is concerned) is told about how their data is used. When all internal and external processes and policies are known, they can be updated to comply with the detailed requirements of the GDPR. Only if you know your policies and processes will you be able to ascertain the necessary steps to meet the extended requirements and to provide guidance and training to your representatives and staff.

2. Managing Compliance

You might now consider this more trouble than it's worth, but there are several ways to facilitate compliance with the GDPR.

For example, there is the option (and sometimes the obligation) to appoint a Data Protection Officer who will, among other things, monitor and work towards compliance with the GDPR. The contact details of the Data Protection Officer should be published and provided to the Data Protection Authorities.

Any data controller should undertake impact assessments and privacy by design as required. All existing processing operations should be identified and current record-keeping arrangements reviewed.

With regard to external policies, controllers can draw on industry codes of conduct, which may give their employees guidance on handling certain situations. Preparing templates for external notifications in case of data breaches may also be helpful.

3. Processors and Transfers

Generally, where data is used, it will also be transferred to third parties or processors. As this is regulated by the GDPR, controllers should be aware of and map their (international) data flow.

As the requirements for data transfers change, standard form contracts and addenda need to be updated and/or prepared. Updates of procurement processes are also required, and procurement and IT teams need to be trained accordingly to identify potential issues. Moreover, it is always advisable for customers and suppliers to work together to address changes and potential issues, as well as to conduct customer and supplier audits to safeguard both parties' interests.

Although, essentially, it evolves the current data protection law, the GDPR brings several important changes to the obligations of data controllers and processors and to the corresponding rights of the data subjects.

4. Conclusion

Hence, not only are data protection and the requirements it sets out for businesses dealing with any type of personal information lifted to a significantly higher (meaning: more detailed, stricter and even more complex) level; the prominence of the GDPR also brings significantly more attention to the topic, and potential breaches are sanctioned more strictly than ever before.

Tomorrow’s Customer Journey Starts In The Buyer's Head

Guest post by Christian Goy, Co-founder and Managing Director of Behavioral Science Lab

The world of customer journeys is a terrible mess. The linear path to purchase does not exist. “Predictable shopping patterns, once so fundamental to marketing and advertising strategy, have gone by the wayside. Persona- and demography driven strategies now fall short – the winners in this new era are the brand and retailers who’ve put a plan in place to meet actual shoppers anywhere along their path to purchase,” says BazaarVoice.

Even though marketers claim to understand, use and predict consumers’ shopping patterns with (1) web and mobile analytics, (2) social analytics, (3) media analytics, (4) customer journey analytics, or (5) voice of the customer analytics, most marketers still do not know why buyers bought their brand or their competitors’. 

Take Jessica for example.  She is looking for a new tote – “a gift for herself,” she would argue. She needs something bigger for her personal items, as well as what her kids need. She wants something that can help her stay organized, is durable, practical and looks good.

She starts her journey on Google search; moves quickly to Pinterest, Instagram, and then reviews a few product sites that sell the kind of bag she is looking for. Two items look promising. She reads a couple of customer reviews and a week later looks for them in her favorite department store. Unfortunately, the store does not carry either of the two products which were at the top of her list. So she goes back online and finds a retailer’s web site that shows a real person holding the tote she wants and photos of the bag being used to carry diapers. She can tell from the pictures that it will work well for her, and she buys the tote. According to BazaarVoice, that journey lasted thirteen days.

The problem is not a lack of understanding touch-points, channels used, time between channels and so forth, but rather a lack of understanding as to why Jessica chose the product she did. Current tools can do an incredible job aggregating the bits and pieces of what a person did — but not why. If marketers don’t understand why people choose a certain product, the path to purchase will always be of secondary value because it is only the means to an end, and not the factors that motivated its use in the first place. Without this basic knowledge, marketers can never put a strategy in place that delivers on what customers truly want, but only how they “get there.”

How Do We Solve That?

Human thinking is complex, and trying to model behavior that often appears irrational is difficult. The reason Jessica is using so many channels to gather information is to find a product that can fulfill the utility she expects from her new tote.

This idea of expected utility or value is at the heart of behavioral economics.  It assumes that the value of each product or service is determined by a very specific set of psychological and economic elements, which play a role in the buyer’s expectation of its value.  The relative value of purchase options determines how much we pay for something and how we decide what to buy.

In Jessica’s case, we might assume she is looking for a tote bag that has certain attributes, such as a size which allows it to perform certain functions (economic), a price she can afford (economic), something that complements her look (psychological), and, perhaps, creates recognition of her shopping savvy among her friends (psychological). Not only does she put these elements in a certain order; each element is also surrounded by a set of decision heuristics (rules of thumb she has developed for herself) that Jessica uses to evaluate which bag maximizes her expectation of utility.
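As a rough, purely illustrative sketch of this expected-utility idea, the following Python snippet scores purchase options against a buyer's ordered and weighted decision elements. The weights, options, and scores are invented for illustration and are not data from the Behavioral Science Lab.

```python
# Illustrative sketch only: scoring purchase options against a buyer's
# weighted decision elements (economic and psychological).

# Jessica's decision elements, ordered by importance, with relative weights.
decision_elements = {
    "functionality": 0.40,        # economic: size, organization
    "price": 0.30,                # economic: affordability
    "style": 0.20,                # psychological: complements her look
    "social_recognition": 0.10,   # psychological: shopping savvy among friends
}

# How well each tote fulfills each element, on a 0..1 scale (invented values).
options = {
    "tote_a": {"functionality": 0.9, "price": 0.6, "style": 0.8, "social_recognition": 0.7},
    "tote_b": {"functionality": 0.5, "price": 0.9, "style": 0.6, "social_recognition": 0.4},
}

def expected_utility(scores, weights):
    """Weighted sum of how well an option meets each decision element."""
    return sum(weights[e] * scores.get(e, 0.0) for e in weights)

best = max(options, key=lambda o: expected_utility(options[o], decision_elements))
print(best)  # tote_a: it maximizes Jessica's expectation of utility
```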

How Do Marketers Create Utility?

To convince the Jessicas of the world that you have the perfect tote for them, marketers need to first learn what drives utility and then deliver on those elements. Here are some simple but important next steps to accomplish that:

Deeply understand what drives the expectation of your products’ utility — These are the psychological and economic decision elements used by the buyer to define utility.

Define buyers by how they make purchase decisions — Group buyers into segments with similar decision elements. This will allow marketers to be more effective and specific in their messaging and product offering, because customers’ specific needs will be addressed. Jessica’s decision type would have started with functionality, then price, style, size, and so on.

Create specific communications and channels to address buyers’ psychological and economic needs — Show the potential functionality buyer that one path is easier and more efficient than any other.  This adds value to the search process, rather than just adding to the “work” required to find the “right” product.  Fulfilling specific decision requirements through specific and individualized communication channels is the key.  We have found in our studies that if you are able to only address and fulfill the primary driver in a buyer’s decision system, the likelihood of purchase is very high.  Just imagine the likelihood of purchase if you could address the second and third drivers, as well.

Do not forget, the path to purchase starts in the buyer’s head; marketers should start there to understand how to sell more effectively.

Thanks to Dr. Tim Gohmann, Behavioral Science Lab Chief Science Officer and Ron Mundy, Chief Operations Officer for contributions to this publication.

Finally: Building up Trust as a Foundation for Sustainable Business Strategies

It seems almost ironic, but the constantly growing number of legal and regulatory requirements might be the most important (and first actually working) catalyst for changing the attitude of organizations towards privacy. The true rationale behind this change is most probably the substantial fines that come with several of these regulations, first and foremost the GDPR.

The value of customer data, from basic account data to detailed behavioural profiles, is undisputed. And whether information really is the new oil of the digitalized economy or such comparisons are misleading anyway: Customer Identity and Access Management (CIAM) is already a core discipline for almost any organization and will become even more so.

Changing the attitude towards consumer and employee privacy does not necessarily mean that all those promising new business models building upon data analytics are prevented by design. But it surely means that all this data can be used for these extended purposes if, and only if, the data subject (consumer, employee, social user, prospective customer, etc.) gives permission. This user consent is something that companies relying on user data will increasingly have to earn.
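As a rough illustration of what consent-bound data use can look like at the implementation level, here is a minimal, hypothetical Python sketch of purpose-specific consent records. The field names and structure are invented; real consent management lifecycles (including expiry, withdrawal, and evidence) are far richer.

```python
# Hypothetical sketch: data may only be processed for a purpose the data
# subject has explicitly consented to (and not revoked).
from datetime import datetime, timezone

consents = {}  # (subject_id, purpose) -> consent record

def record_consent(subject_id, purpose, granted=True):
    """Store a time-stamped consent decision for one specific purpose."""
    consents[(subject_id, purpose)] = {
        "granted": granted,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def may_process(subject_id, purpose):
    """Only process data for a purpose the subject has consented to."""
    record = consents.get((subject_id, purpose))
    return bool(record and record["granted"])

record_consent("user-42", "newsletter")
print(may_process("user-42", "newsletter"))             # True
print(may_process("user-42", "behavioural profiling"))  # False: no consent given
```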

The problem with trust is that it needs to be grown strategically over long periods of time, but as it is highly fragile, it can be destroyed within a very short period of time. This might happen through a data breach, which surely is a worst-case scenario. But the mere assumption, maybe just a gut feeling or even hearsay, that data might be reused or transferred inappropriately or inspected by (foreign) state authorities can immediately destroy trust and make users turn away and towards a competitor.

The real question is why many organizations have not yet started actively building this trusted relationship with their users/customers/consumers/employees. Awareness is rising, and security and privacy are moving increasingly into the focus not only of tech-savvy users, but also of everyday customers.

Building up trust truly must be the foundation of any business strategy. Designing businesses to be privacy-aware from the ground up is the first and only starting point. This involves both well-thought-out business processes and appropriate technologies. Trustworthy storage and processing of personal data needs to be well-designed and well-executed, and adequate evidence needs to be presented to many stakeholders, including the individual Data Protection Authorities and the users themselves.

Being more trusted and more trustworthy than your competitors will be a key differentiator for many customer decisions, today and in the future. And trusting users will be more readily willing to share relevant business information with an organization acting as a data steward, provided this is based on a well-executed consent management lifecycle, especially when it turns out to be to the benefit of all involved parties.

KuppingerCole will embark on the Customer Identity World Tour 2017 with 3 major events in Seattle, Paris and Singapore. Trusted and privacy-aware management of customer data will be a main topic for all events. If you want to see your organization as a leader in customer trust, you might want to benefit from the thought leadership and best practices presented there, so join the discussion.

IBM Moves Security to the Next Level – on the Mainframe

In a recent press release, IBM announced that they are moving security to a new level, with “pervasively encrypted data, all the time at any scale”. That sounded cool and, after talks with IBM, I must admit that it is cool. However, it is “only” on their IBM Z mainframe system, specifically the IBM Z14.

By massively increasing the encryption capabilities on the processor and through a system architecture that is designed from scratch to meet the highest security requirements, these systems can hold data encrypted at any time, with IBM claiming support of up to 12 billion encrypted transactions per day. Business data and applications running on IBM Z can be better protected than ever before – and better than on any other platform.

One could argue that this is happening in a system environment that is slowly dying. However, IBM in fact has won a significant number of new customers in the past year. Furthermore, while this is targeted as of now at mainframe customers, there is already one service that is accessible via the cloud: a blockchain service where secure containers for the blockchain are operated in the IBM Cloud in various datacenters across the globe.

It will be interesting to see whether and when IBM will make more of these pervasive encryption capabilities available as a cloud service or in other forms for organizations not running their own mainframes. The big challenge here obviously will be end-to-end security. If there is a highly secure mainframe-based backend environment, but applications access these services through secure APIs from less secure frontend environments, a security risk remains. Unfortunately, other platforms don’t come with the same level of built-in security and encryption power as the new IBM Z mainframe.

Such a gap between what is available (or will be available soon) on the mainframe and what we find on other platforms is not new. Virtualization was available on the mainframe way before the age of VMware and other PC virtualization software started. Systems for dynamically authorizing requests at runtime such as RACF are the norm in mainframe environments, while the same approach and standards such as XACML are still struggling in the non-mainframe world.

With its new announcement, IBM on one hand again shows that many interesting capabilities are introduced on mainframes first, while also demonstrating a potential path into the future of mainframes: as the system that manages the highest security environments and maybe in future acts as the secure backend environment, accessible via the cloud. I’d love to see the latter.

A Great Day for Information Security: Adobe Announces End-of-Life for Flash

Today, Adobe announced that Flash will go end-of-life. Without any doubt, this is great news from an Information Security perspective. Adobe Flash accounted for a significant portion of the most severe exploits, as F-Secure, among others, has analyzed. I also wrote about this topic back in 2012 in this blog.

From my perspective, and as stated in my post from 2012, the biggest challenge hasn’t been the number of vulnerabilities as such, but the combination of vulnerabilities with the inability to fix them quickly and the lack of a well-working patch management approach.

With the shift to standards such as HTML5, today’s announcement finally moves Adobe Flash into the state of a “living zombie” – and with vendors such as Apple and Microsoft either not supporting it or limiting its use, we are ready to switch to better alternatives. Notably, the effective end-of-life date is the end of 2020, and it will still be in use after that. But there will be an end.

Clearly, there are and will be other vulnerabilities in other operating systems, browsers, applications, and so on. They will not go away. But one of the worst tools ever from a security perspective is finally reaching its demise. That is good, and it makes today a great day for Information Security.

The Return of Authorization

Authorization is one of the key concepts and processes involved in security, both in the real world as well as the digital world.  Many formulations of the definition for authorization exist, and some are context dependent.  For IT security purposes, we’ll say authorization is the act of evaluating whether a person, process, or device is allowed to operate on or possess a specific resource, such as data, a program, a computing device, or a cyberphysical object (e.g., a door, a gate, etc.).

The concept of authorization has evolved considerably over the last two decades.  No longer must users be directly assigned entitlements to particular resources. Security administrators can provision groups of users or select attributes of users (e.g. employee, contractor of XYZ Corp, etc.) as determinants for access. 

For some of the most advanced authorization and access control needs, the OASIS eXtensible Access Control Markup Language (XACML) standard can be utilized. Created in the mid-2000s,  XACML is an example of an Attribute-Based Access Control (ABAC) methodology.  XACML is an XML policy language, reference architecture, and request/response protocol. ABAC systems allow administrators to combine specific subject, resource, environmental, and action attributes for access control evaluation.  XACML solutions facilitate run-time processing of dynamic and complex authorization scenarios.  XACML can be somewhat difficult to deploy, given the complexity of some architectural components and the policy language.  Within the last few years, JSON and REST profiles of XACML have been created to make it easier to integrate into modern line-of-business applications.
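To illustrate the ABAC idea behind XACML, here is a minimal, hypothetical Python sketch. It is not a XACML engine and uses none of the XACML policy language; the attribute names and the example policy are invented purely to show how subject, resource, action, and environment attributes can be combined into a run-time permit/deny decision.

```python
# Minimal ABAC sketch (not a XACML engine): a policy combines subject,
# resource, action, and environment attributes into a decision at request time.

def export_policy(subject, resource, action, environment):
    """Permit reading export-controlled documents only for EU-national employees during business hours."""
    return (
        subject.get("role") == "employee"
        and subject.get("nationality") in {"DE", "FR", "NL"}
        and resource.get("classification") == "export-controlled"
        and action.get("id") == "read"
        and 8 <= environment.get("hour", 0) < 18
    )

def evaluate(policies, subject, resource, action, environment):
    """Deny-unless-permit combining: any policy that permits wins, otherwise deny."""
    if any(p(subject, resource, action, environment) for p in policies):
        return "Permit"
    return "Deny"

decision = evaluate(
    [export_policy],
    subject={"role": "employee", "nationality": "DE"},
    resource={"classification": "export-controlled"},
    action={"id": "read"},
    environment={"hour": 10},
)
print(decision)  # Permit
```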

Just prior to the development of XACML, OASIS debuted the Security Assertion Markup Language (SAML). Numerous profiles of SAML exist, but the most common usage is for identity federation. SAML assertions serve as proof of authentication at the domain of origin, which can be trusted by other domains. SAML can also facilitate authorization, in that other attributes about the subject can be added to the signed assertion. SAML is widely used for federated authentication and for limited authorization purposes.

OAuth 2.0 is a lighter-weight IETF standard. It takes the access token approach, passing tokens on behalf of authenticated and authorized users, processes, and now even devices. OAuth 2.0 now serves as a framework upon which additional standards are defined, such as OpenID Connect (OIDC) and User Managed Access (UMA). OAuth has become a widely used standard across the web. For example, “social logins”, i.e. using a social network provider for authentication, generally pass OAuth tokens between authorization servers and relying party sites to authorize the subject user. OAuth is a simpler alternative to XACML and SAML, but it is also usually considered less secure.
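As a rough illustration of the token-based approach, here is a hypothetical Python sketch of a client obtaining an OAuth 2.0 access token (using the simple client credentials grant) and presenting it as a bearer token to a protected API. The endpoints, client ID, secret, and scope are placeholders, not a real provider's values.

```python
# Hypothetical sketch of the OAuth 2.0 bearer-token pattern: the client obtains
# an access token from an authorization server and presents it to a resource API.
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # placeholder
RESOURCE_API = "https://api.example.com/v1/profile"       # placeholder

# Client credentials grant: the simplest OAuth 2.0 flow, used here only to show
# that tokens, not user passwords, are passed between the parties.
token_response = requests.post(
    TOKEN_ENDPOINT,
    data={"grant_type": "client_credentials", "scope": "profile.read"},
    auth=("my-client-id", "my-client-secret"),  # placeholder credentials
    timeout=10,
)
access_token = token_response.json()["access_token"]

# The resource server authorizes the request based on the token and its scope.
profile = requests.get(
    RESOURCE_API,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(profile.status_code)
```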

From an identity management perspective, authentication has received the lion’s share of attention over the last several years.  The reasons for this are two-fold: 

  • the weakness of username/password authentication, which has led to many costly data breaches
  • proliferation of new authenticators, including 2-factor (2FA), multi-factor (MFA), risk-adaptive techniques, and mobile biometrics

However, in 2017 we have noticed an uptick in industry interest in dynamic authorization technologies that can help meet complicated business and regulatory requirements. As authentication technologies improve and become more commonplace, we predict that more organizations with fine-grained access control needs will begin to look at dedicated authorization solutions.  For an in-depth look at dynamic authorization, including guidelines and best practices for the different approaches, see the Advisory Note: Unifying RBAC and ABAC in a Dynamic Authorization Framework.

Organizations that operate in strictly regulated environments find that both MFA / risk-adaptive authentication and dynamic authorization are necessary to achieve compliance. Regulations often mandate 2FA / MFA, e.g. US HSPD-12, NIST 800-63-3, EU PSD2, etc. Regulations occasionally stipulate that certain access subject or business conditions, expressed as attributes, be met as a precursor to granting permission. For example, in export regulations these attributes are commonly access subject nationality or licensed company.

Authorization becomes extremely important at the API level.  Consider PSD2: it will require banks and other financial institutions to expose APIs for 3rd party financial processors to utilize.  These APIs will have tiered and firewalled access into core banking functions.  Banks will of course require authentication from trusted 3rd party financial processors.  Moreover, banks will no doubt enforce granular authorization on the use of each API call, per API consumer, and per account.  The stakes are high with PSD2, as banks will need to compete more efficiently and protect themselves from a much greater risk of fraud.
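To make the idea of granular, per-call authorization concrete, here is a hypothetical sketch of the kind of check a bank might enforce per API call, per TPP, and per account. The data model and names are invented for illustration and are not taken from PSD2 or any banking API standard.

```python
# Illustrative sketch only: fine-grained authorization per TPP, per account,
# per operation, as a bank-side API gateway might enforce it.

# What each registered third-party provider is allowed to do, per account.
tpp_grants = {
    ("tpp-123", "account-001"): {"read_balance", "read_transactions"},  # AISP consent
    ("tpp-456", "account-001"): {"initiate_payment"},                   # PISP consent
}

def authorize_api_call(tpp_id, account_id, operation):
    """Permit the call only if this TPP holds a grant for this operation on this account."""
    return operation in tpp_grants.get((tpp_id, account_id), set())

print(authorize_api_call("tpp-123", "account-001", "read_balance"))      # True
print(authorize_api_call("tpp-123", "account-001", "initiate_payment"))  # False
```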

For more information on authentication and authorization technologies, as well as guidance on preparing for PSD2, please visit the Focus Areas section of our website.

GDPR vs. PSD2: Why the European Commission Must Eliminate Screen Scraping

The General Data Protection Regulation (GDPR) and Revised Payment Service Directive (PSD2) are two of the most important and most talked about technical legislative actions to arise in recent years.  Both emanate from the European Commission, and both are aimed at consumer protection.

GDPR will bolster personal privacy for EU residents in a number of ways.  The GDPR definition of personally identifiable information (PII) includes attributes that were not previously construed as PII, such as account names and email addresses.  GDPR will require that data processors obtain clear, unambiguous consent from each user for each use of user data. In the case of PSD2, this means banks and Third-Party Providers (TPPs).  TPPs comprise Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs).  For more information, please see https://www.kuppingercole.com/report/lb72612

Screen scraping has been in practice for many years, though it is widely known that this method is inherently insecure. In this context, screen scraping is used by TPPs to get access to customer data. Some FinTechs harvest usernames, email addresses, passwords, and account numbers to act on behalf of the users when interacting with banks and other FinTechs. This technique exposes users to additional risks, in that their credentials are more likely to be misused and/or stored in more locations.

PSD2 will mandate the implementation of APIs by banks, providing a more standardized and safer way for TPPs to get account information and initiate payments. This is a significant step forward in scalability and security. However, the PSD2 Regulatory Technical Standards (RTS) published earlier this year left a screen scraping loophole for financial organizations that have not yet modernized their computing infrastructure to allow more secure access via APIs. The European Banking Authority (EBA) now rejects the presence of this insecure loophole: https://www.finextra.com/newsarticle/30772/eba-rejects-commission-amendments-on-screen-scraping-under-psd2.

KuppingerCole believes that the persistence of the screen scraping exception is bad for security, and therefore ultimately bad for business. The proliferation of TPPs expected after PSD2, along with the attention drawn to this glaring weakness, almost ensures that it will be exploited, perhaps frequently.

Furthermore, screen scraping implies that customer PII is being collected and used by TPPs. This insecure practice therefore by definition goes against the spirit of consumer protection embodied in GDPR and PSD2. In addition, GDPR calls for the principle of Security by Design, and a screen scraping exemption would contravene that. TPPs can obtain consent for the use of consumer PII, or have it covered contractually, but such a workaround is unnecessary if TPPs utilize PSD2 open banking APIs. An exemption in a directive should not lead to potential violations of a regulation.

PSD2 – the EBA’s Wise Decision to Reject Commission Amendments on Screen Scraping

In a response to the European Commission, the EBA (European Banking Authority) rejected amendments on screen scraping in the PSD2 regulation (Revised Payment Services Directive) that had been pushed by several FinTechs. While it is still the Commission’s place to make the final decision, the statement of the EBA is clear. I fully support the position of the EBA: screen scraping should be banned in the future.

In a “manifesto”, 72 FinTechs had responded to the PSD2 RTS (Regulatory Technical Standards), focusing on the ban of screen scraping or, as they named it, “direct access”. In other comments from that FinTech lobby, we can find statements such as “… sharing login details … is perfectly secure”. Nothing could be more wrong. Sharing login details with anyone is never perfectly secure.

Screen scraping involves sharing credentials and providing full access to financial services such as online banking to the FinTechs using these technologies. This concept is not new. It is widely used in such FinTech services because, until now, there has been a gap in APIs. PSD2 will change that, even if we might not end up with a standardized API as quickly as we should.

But what is the reasoning of the FinTechs in insisting on screen scraping? The main arguments are that screen scraping is well-established and works well – and that it is secure. The latter obviously is wrong – neither sharing credentials nor injecting credentials into websites can earnestly be considered a secure approach. The other argument, that screen scraping works well, is also fundamentally wrong. Screen scraping relies on the target website or application always having the same structure. Once it changes, the applications (in this case the FinTech applications) accessing these services and websites must be changed as well. Such changes to the target systems might happen without prior notice.

I see two other arguments that the FinTech lobby does not raise. One is about liability issues. If a customer gives his credentials to someone else, this is a fundamentally different situation regarding liability than with structured access via APIs. Just read the terms and conditions of your bank regarding online banking.

The other argument is about limitations. PSD2 requires providing APIs for AISPs (Account Information Service Providers) and PISPs (Payment Initiation Service Providers) – but only for these services. Thus, APIs might be more restrictive than screen scraping.

However, the EBA has very good arguments in favor of getting rid of screen scraping. One of the main goals of PSD2 is better protection of customers using online services. That is best achieved by a well-thought-out combination of SCA (Strong Customer Authentication) and defined, limited interfaces for TPPs (Third Party Providers) such as the FinTechs.

Clearly, this means a change both for the technical implementations of FinTech services that rely on screen scraping and, potentially, for the business models and the capabilities provided by these services. When looking at technical implementations, even though there is not yet an established standard API supported by all players, working with APIs is straightforward and far simpler than screen scraping ever can be. If there is no standard API, work with a layered approach that maps your own, FinTech-internal API layer to the various API variants of the banks out there, as sketched below. There will not be that many variants, because the AISP and PISP services are defined.
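As a rough illustration of that layered approach, here is a hypothetical Python sketch of a FinTech-internal AISP interface with one small adapter per bank API variant. All class and method names are invented; the point is only that the application codes against one stable internal contract, while the bank-specific mapping stays inside the adapters.

```python
# Hypothetical sketch: one FinTech-internal AISP interface, one adapter per bank.
from abc import ABC, abstractmethod

class AccountInformationAdapter(ABC):
    """FinTech-internal AISP contract the rest of the application codes against."""
    @abstractmethod
    def get_balance(self, account_id: str) -> float: ...

class BankAAdapter(AccountInformationAdapter):
    def get_balance(self, account_id: str) -> float:
        # Bank A's API variant would be called here; response mapping stays in the adapter.
        return 0.0

class BankBAdapter(AccountInformationAdapter):
    def get_balance(self, account_id: str) -> float:
        # Bank B exposes the same AISP service with a different API shape.
        return 0.0

def balance_for(bank: str, account_id: str) -> float:
    """Route the internal call to the right bank-specific adapter."""
    adapters = {"bank_a": BankAAdapter(), "bank_b": BankBAdapter()}
    return adapters[bank].get_balance(account_id)
```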

Authentication and authorization can be done far better and far more conveniently for the customer if they are implemented from a customer perspective – I just recently wrote about this.

Yes, that means changes and even restrictions for the FinTechs. But there are good reasons for doing so. The EBA is right in its position on screen scraping, and hopefully the European Commission will finally share the EBA's view.
