Blog posts by Mike Small

IBM Acquires Red Hat: The AI potential

On October 28th IBM announced its intention to acquire Red Hat. At $34 billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for an Open Source software company? I believe that this acquisition needs to be seen not just in terms of DevOps and Hybrid Cloud, but in the context of IBM’s view of where the business value from IT services will come from in the future. The acquisition provides near-term tactical benefits from Red Hat’s OpenShift platform and its participation in the Kubeflow project, and it strengthens IBM’s capabilities to deliver the foundation for digital business transformation. However, digital business is increasingly based on AI delivered through the cloud. IBM recently announced a $240M investment in a 10-year research collaboration on AI with MIT, and this illustrates the strategy. It adds to the significant investments that IBM has already made in Watson, including setting up a dedicated division in 2016, as well as in cloud services.

Red Hat was founded in 1993 and in 1994 released the Red Hat version of Linux. This evolved into a complete development stack (JBoss) and recently released Red Hat OpenShift - a container- and microservices-based (Kubernetes) DevOps platform. Red Hat operates on a business model based on open-source software development within a community, professional quality assurance, and subscription-based customer support.

The synergy between IBM and Red Hat is clear. IBM has worked with Red Hat on Linux for many years and both have a commitment to Open Source software development. Both companies have a business model in which services are the key element. Although these are two fairly different types of services – Red Hat’s being service fees for software, IBM’s being all types of services including consultancy and development – they both fit well into IBM’s overall business.

One critical factor is the need for tools to accelerate the development lifecycle for ML projects, which can be much less predictable than for conventional software projects. In the non-ML DevOps world, microservices and containers are the key technologies that have helped here. How can these technologies help with ML projects?

There are several differences between developing ML and coding applications. Specifically, ML uses training rather than coding and, in principle, this in itself should accelerate the development of much more sophisticated ways to use data. The ML development lifecycle can be summarized as follows (a minimal code sketch follows the list):

  • Obtain, prepare and label the data
  • Train the model
  • Test and refine the model
  • Deploy the model
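To make these four steps concrete, here is a minimal sketch in Python using scikit-learn. The data set, model choice and serialization step are illustrative assumptions of mine, not part of the original post.

```python
# Minimal sketch of the four ML lifecycle steps (illustrative only).
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Obtain, prepare and label the data (a bundled, already-labelled dataset here).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. Train the model.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# 3. Test and refine the model (in practice this loop repeats with new data and parameters).
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 4. Deploy the model - here simply serialised so a serving container can load it.
joblib.dump(model, "model.joblib")
```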

While the processes involved in ML development are different to conventional DevOps, a microservices-based approach is potentially very helpful. ML training involves multiple parties working together, and microservices provide a way to orchestrate the various types of functions so that data scientists, domain experts and business users can simply use the capabilities without needing to write code. A common platform based on microservices could also provide automated tracing of the data used and the training results, improving traceability and auditing. It is here that there is great potential for IBM/Red Hat to deliver better solutions.

Red Hat OpenShift provides a DevOps environment to orchestrate the development to deployment workflow for Kubernetes based software. OpenShift is, therefore, a potential solution to some of the complexities of ML development. Red Hat OpenShift with Kubernetes has the potential to enable a data scientist to train and query models as well as to deploy a containerized ML stack on-premises or in the cloud.

In addition, Red Hat is a participant in the Kubeflow project. This is an Open Source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. The project’s goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures.
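As a rough illustration of what a Kubeflow-style workflow looks like, the sketch below assumes the Kubeflow Pipelines SDK (kfp, v1 API); the training logic, pipeline name and artifact location are placeholders of mine.

```python
# Illustrative Kubeflow Pipelines (kfp v1) sketch; the training step is a stub.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

def train_model(learning_rate: float) -> str:
    """Placeholder training step; real code would load data and fit a model."""
    return "gs://example-bucket/model"  # hypothetical artifact location

train_op = create_component_from_func(train_model, base_image="python:3.9")

@dsl.pipeline(name="ml-lifecycle-demo", description="Illustrative training pipeline")
def pipeline(learning_rate: float = 0.01):
    train_op(learning_rate)

if __name__ == "__main__":
    # Compile to a workflow definition that a Kubernetes/OpenShift cluster running
    # Kubeflow Pipelines can execute.
    kfp.compiler.Compiler().compile(pipeline, "pipeline.yaml")
```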

In conclusion, the acquisition has strengthened IBM’s capabilities to deliver ML applications in the near term. These capabilities complement and extend IBM Watson, and improve and accelerate the ability of IBM and its customers to create, test and deploy ML-based applications. They should be seen as part of a strategy towards a future where more and more value is delivered through AI-based solutions.

Read as well: IBM & Red Hat – and now?

The Ethics of Artificial Intelligence

Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." The ethical questions around Artificial Intelligence were discussed at a meeting led by the BCS President Chris Rees in London on October 2nd. This is also an area covered by KuppingerCole under the heading of Cognitive Technologies and this blog provides a summary of some of the issues that need to be considered.

Firstly, AI is a generic term and it is important to understand precisely what it means. Currently the state of the art can be described as Narrow AI. This is where techniques such as ML (machine learning), combined with massive amounts of data, provide useful results in narrow fields, for example the diagnosis of certain diseases and predictive marketing. There are now many tools available to help organizations exploit and industrialise Narrow AI.

At the other extreme is what is called General AI where the systems are autonomous and can decide for themselves what actions to take. This is exemplified by the fictional Skynet that features in the Terminator games and movies. In these stories this system has spread to millions of computers and seeks to exterminate humanity in order to fulfil the mandates of its original coding. In reality, the widespread availability of General AI is still many years away.

In the short term, Narrow AI can be expected to evolve into Broad AI, where a system will be able to support or perform multiple tasks, applying what is learnt in one domain to another. Broad AI will evolve to use multiple approaches to solve problems, for example by linking neural networks with other forms of reasoning. It will be able to work with limited amounts of data, or at least data which is not well tagged or curated – for example, in the cyber-security space, to identify a threat pattern that has not been seen before.

What is ethics and why is it relevant to AI? The term is derived from the Greek word “ethos”, which can mean custom, habit, character or disposition. Ethics is a set of moral principles that govern behaviour or the conduct of an activity; it is also a branch of philosophy that studies these principles. Ethics is important to AI because of the potential for these systems to cause harm to individuals as well as to society in general. Ethical considerations can help to identify beneficial applications while avoiding harmful ones. In addition, new technologies are often viewed with suspicion and mistrust, which can unreasonably inhibit the development of technologies with significant beneficial potential. Ethics provides a framework that can be used to understand and overcome these concerns at an early stage.

Chris Rees identified 5 major ethical issues that need to be addressed in relation to AI:

  • Bias;
  • Explainability;
  • Harmlessness;
  • Economic Impact;
  • Responsibility.

Bias is a very current issue, with bias related to gender and race as top concerns. AI systems are likely to be biased because people are biased, and AI systems amplify human capabilities. Social media provides an example of this kind of amplification, where uncontrolled channels provide the means to share beliefs that may be popular but have no foundation in fact – “fake news”. The training of AI systems depends upon data which may include inherent bias, even though this may not be intentional. The training process and the trainers may pass on their own unconscious bias to the systems they train. Allowing systems to train themselves can lead to unexpected outcomes, since the systems do not have the common sense to recognize mischievous behaviour. There are also reported examples of bias in facial recognition systems.

Explainability – it is very important in many applications that AI systems can explain themselves. Explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify in a court of law why a decision was taken. While rule-based systems can provide a form of explanation based on the logical rules that were fired to arrive at a particular conclusion, neural networks are much more opaque. This poses a problem not only in explaining to the end user why a conclusion was reached, but also for the developer or trainer in understanding what needs to be changed to correct the behaviour of the system.

Harmlessness – the three laws of robotics devised by Isaac Asimov in the 1940s, and subsequently extended to include a zeroth law, apply equally to AI systems. However, the use or abuse of the systems could breach these laws, and special care is needed to ensure that this does not happen. For example, the hacking of an autonomous car could turn it into a weapon, which emphasizes the need for strong inbuilt security controls. AI can be applied to cyber security to accelerate the development of both defence and offence; it could be used by cyber adversaries as well as the good guys. It is therefore essential that this aspect is considered and that countermeasures are developed to cover the malicious use of this technology.

Economic impact – new technologies have both destructive and constructive impacts. In the short term the use of AI is likely to lead to the destruction of certain kinds of jobs. However, in the long term it may lead to the creation of new forms of employment as well as unforeseen social benefits. While the short-term losses are concrete, the longer-term benefits are harder to see and may take generations to materialize. This makes it essential to create protection for those affected by the expected downsides, both to improve acceptability and to avoid social unrest.

Responsibility – AI is just an artefact and so if something bad happens who is responsible morally and in law? The AI system itself cannot be prosecuted but the designer, the manufacturer or the user could be. The designer may claim that the system was not manufactured to the design specification. The manufacturer may claim that the system was not used or maintained correctly (for example patches not applied). This is an area where there will need to be debate and this should take place before these systems cause actual harm.

In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular fiction. However, the ethical aspects of this technology need to be considered, and this should be done sooner rather than later. In the same way that privacy by design is an important consideration, we should now be working to develop “Ethical by Design”. GDPR allows people to take back control over how their data is collected and used; we need controls over AI before the problems arise.

Artificial Intelligence and Cyber Security

As organizations go through digital transformation, the cyber challenges they face become more important. Their IT systems and applications become more critical and, at the same time, more open. The recent data breach suffered by British Airways illustrates the sophistication of the cyber adversaries and the difficulties organizations face in preventing, detecting, and responding to these challenges. One approach that is gaining ground is the application of AI technologies to cyber security and, at an event in London on September 24th, IBM described how IBM Watson is being integrated with other IBM security products to meet these challenges.

The current approaches to cyber defence involve multiple layers of protection, including firewalls and identity and access management as well as event monitoring (SIEM). While these remain necessary, they have not significantly reduced the time to detect breaches. For example, the IBM-sponsored 2018 Cost of a Data Breach Study by Ponemon showed that the mean time for organizations to identify a breach was 197 days. This length of time has hardly improved over many years. The reasons for this long delay are many and include the complexity of the IT infrastructure, the sophistication of the techniques used by cyber adversaries to hide their activities, and the sheer volume of data available.

So, what is AI and how can it help to mitigate this problem?

AI is a generic term that covers a range of technologies. In general, the term AI refers to systems that “simulate thought processes to assist in finding solutions to complex problems through augmentation and enhancement of human capabilities”. KuppingerCole has analysed in detail what this really means in practice, and this is summarized in the following slide from the EIC 2017 Opening Keynote by Martin Kuppinger.

At the lower layer, improved algorithms enable the transformation of Big Data into “Smart Information”. See KuppingerCole Advisory Note: Big Data Security, Governance, Stewardship - 72565. This is augmented by Machine Learning where human reinforcement is used to tune the algorithms to identify those patterns that are of interest and to ignore those that are not. Cognitive technologies add an important element to this mix through their capability to include speech, vision and unstructured data into the analysis. Today, this represents the state of the art for the practical application of AI to cyber security.

The challenges of AI at the state of the art are threefold:

  • The application of common sense – a human applies a very wide context to decision making whereas AI systems tend to be very narrowly focussed and so sometimes reach what the human would consider to be a stupid conclusion.
  • Explanation – of how the conclusions were reached by the AI system to demonstrate that they are valid and can be trusted.
  • Responsibility – for action based on the conclusions from the system.

Cyber security products collect vast amounts of data – the cyber security analyst is literally drowning in data. The challenge is to find the so-called IoCs (Indicators of Compromise) that show the existence of a real threat amongst this enormous amount of data. The problem is not just to find what is abnormal, but to filter out the many false positives that obscure the real threats.

There are several vendors that have incorporated Machine Learning (ML) systems into their products to tune the identification of important anomalies. This is useful to reduce false positives, but it is not enough. To be really useful to a security analyst, the abnormal pattern needs to be related to known or emerging threats. While there have been several attempts to standardize the way information on threats is described and shared, most of this information is still held in unstructured form in documents, blogs and twitter feeds. It is essential to take account of these sources.
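To illustrate the kind of anomaly detection described above, the following sketch trains an Isolation Forest on simple per-user event features. The features, values and threshold behaviour are invented for illustration and do not represent any vendor’s product.

```python
# Illustrative anomaly detection over assumed security-event features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed per-user features: [MB transferred, failed logins, distinct destinations].
normal_activity = rng.normal(loc=[50, 1, 5], scale=[10, 1, 2], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Score new observations; -1 marks candidates for analyst review, not confirmed threats.
new_events = np.array([[55, 0, 6], [900, 40, 120]])
print(detector.predict(new_events))        # e.g. [ 1 -1 ]
print(detector.score_samples(new_events))  # lower scores are more anomalous
```

This reduces the volume an analyst must triage, but as the post notes, the flagged anomalies still need to be related to known or emerging threats before they are actionable.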

This is where IBM QRadar Advisor with Watson is different. A Machine Learning system is only as good as its training – training is the key to its effectiveness. IBM says that it has been trained through the ingestion of over 10 billion pieces of structured data and 1.24 million unstructured documents to assist with the investigation of security incidents. This training involved IBM X-Force experts as well as IBM customers. Because of this training, it can now identify patterns that represent potential threats and provide links to the relevant sources that have been used to reach these conclusions. However, while this helps security analysts do their job more efficiently and more effectively, it does not yet replace the human.

Organizations now need to assume that cyber adversaries have access to their organizational systems and to constantly monitor for this activity in a way that will enable them to take action before damage is done. AI provides a great potential to help with this challenge and to evolve to help organizations to improve their cyber security posture through intelligent code analysis and configuration scanning as well as activity monitoring. For more information on the future of cyber security attend KuppingerCole’s Cybersecurity Leadership Summit 2018 Europe.

Managing the Hybrid Multi Cloud

The primary factor that most organizations consider when choosing a cloud service is how well the service meets their functional needs.  However, this must be balanced against the non-functional aspects such as compliance, security and manageability. These aspects are increasingly becoming a challenge in the hybrid multi-cloud IT environment found in most organizations. This point was emphasized by Virtustream during their briefing in London on September 6th, 2018. 

Virtustream was founded in 2009 with a focus on providing cloud services for mission-critical applications like SAP. In order to achieve this, Virtustream developed its xStream cloud management platform to meet the requirements of complex production applications in the private, public and hybrid cloud. This uses patented xStream cloud resource management technology (μVM) to deliver assured SLA levels for business-critical applications and services. Through a series of acquisitions Virtustream is now a Dell Technologies business.

The hybrid multi-cloud IT environment has made the challenges of governance, compliance and security even more complex. There is currently no single complete solution to this problem on the market.

Typically, organizations use multiple cloud services including office productivity tools from one CSP (Cloud Service Provider), a CRM system from another CSP, and a test and development service from yet another one. At the same time, legacy applications and business critical data may be retained on-premises or in managed hosting. This hybrid multi-cloud environment creates significant challenges relating to the governance, management, security and compliance of the whole system.


What is needed is a consistent approach with common processes supported by a single platform that provides all the necessary functions across all the various components involved in delivering all the services.   

Most CSPs offer their own proprietary management portal, which may in some cases extend to cover some on-premises components. This makes it important, when choosing a cloud service, to evaluate how the needs for management, security and compliance will be integrated with the existing processes and components that make up the enterprise IT architecture. The hybrid IT service model requires an overall IT governance approach as described in KuppingerCole Advisory Note: Security Organization Governance and the Cloud - 72564.

An added complexity is that the division of responsibility for the different layers of the service depends upon how the service is delivered. There are five layers (summarized in the sketch after the list):

  • The lowest layer is the physical service infrastructure, which includes the data center, the physical network, the physical servers and the storage devices. In the case of IaaS this is the responsibility of the CSP. 
  • Above this sits the Operating Systems, basic storage services and the logical network. For IaaS, the management of this layer is the responsibility of the customer. 
  • The next plane includes the tools and middleware needed to build and deploy business applications. For PaaS (Platform as a Service) these are the responsibility of the CSP. 
  • Above the middleware are the business applications and for SaaS (Software as a Service) these are the responsibility of the CSP. 
  • The highest plane is the governance of business data and control of access to the data and applications. This is always the responsibility of the customer.
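The division of responsibility can be captured as a simple lookup table. The sketch below is my own illustration: the assignments follow the list above, filled in with the usual assignments where the list is silent, and the names are not from the original post.

```python
# Illustrative shared-responsibility lookup for the five layers described above.
RESPONSIBILITY = {
    "physical infrastructure": {"IaaS": "CSP", "PaaS": "CSP", "SaaS": "CSP"},
    "OS, storage, logical network": {"IaaS": "customer", "PaaS": "CSP", "SaaS": "CSP"},
    "tools and middleware": {"IaaS": "customer", "PaaS": "CSP", "SaaS": "CSP"},
    "business applications": {"IaaS": "customer", "PaaS": "customer", "SaaS": "CSP"},
    "data governance and access": {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
}

def who_manages(layer: str, model: str) -> str:
    """Return 'CSP' or 'customer' for a given layer and delivery model."""
    return RESPONSIBILITY[layer][model]

print(who_manages("tools and middleware", "PaaS"))        # CSP
print(who_manages("data governance and access", "SaaS"))  # customer
```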

An ideal solution would be a common management platform that covers all the cloud and on-premises services and components. However, most cloud services only offer a proprietary management portal that covers the management of their service.    

So, does Virtustream provide a solution that completely meets these requirements? The answer is: Not yet.  However, there are two important points in its favour:

  • Firstly, Virtustream have highlighted that the problem exists. Acceptance is the first step on the road to providing a solution.
  • Secondly, Virtustream is a part of Dell, and Dell also owns VMware. VMware provides a solution to this problem, but only where VMware is used across the different IT service delivery models. VMware is used by Virtustream and is also supported by several other CSPs.

In conclusion, the hybrid multi-cloud environment presents a complex management challenge, particularly in the areas of security and compliance. There are five layers, each with six dimensions, that need to be managed, and the responsibilities are shared between the CSP and the customer. It is vital that organizations consider this when selecting cloud services and that they implement a governance-based approach. Look for the emergence of tools to help with this challenge. There was a workshop on this subject at KuppingerCole’s EIC earlier this year.

Blockchain, Identity, Trust and Governance

On June 15th, 2018 I attended an OIX Workshop in London on this subject. The workshop was led by Don Thibeau of the Open Identity Exchange and Distributed Ledger foundation and was held in the Chartered Accountants’ Hall, Moorgate Place, London.

Blockchain and Distributed Ledger Technology (DLT) is often associated with crypto-currencies like Bitcoin. However, it has a much wider applicability and holds the potential to solve a wide range of challenges. Whenever a technology is evolving, governance is often neglected until there are incidents requiring greater participation of the involved parties and regulators to define operating guidelines. Governance is a wide subject, covering markets, laws and regulations, and corporate activities as well as individual projects; the workshop covered many of these areas.

One question that often arises while evaluating or adopting a new technology is whether the existing legal framework is sufficient to protect your interests. According to the technology lawyer Hans Graux (time.lex), existing EU legislation on electronic signatures works well for blockchain. However, where blockchain is sold as technology there is no guarantee of governance to back it up. EU law allows the prohibition of electronic contracts for certain forms of transaction (e.g. real estate), so there are regional variations in the applicability of blockchain within the EU. Some countries have created laws but, in his opinion, these are intended to show that these countries are open for business rather than because they are needed. He recommended that organizations should take a risk-based approach, similar to that for GDPR, to gauge their readiness for blockchain and document the risks arising from an early adoption of blockchain as well as the controls required to manage these risks.

There was a panel on Smart Contracts and the legal framework surrounding them. A key takeaway from the panel was that Smart Contracts are not deemed legal contracts, which raises the question of how they can be made legally enforceable. Tony Lai (CodeX & Legal.io) outlined the Accord project from the CodeX Stanford Blockchain Group. The initial focus of this group is in the areas of:

1. Regulatory frameworks and ethical standards around token generation events (also known as ICOs or Initial Coin Offerings);

2. Legal issues and opportunities presented by blockchain technologies and their intersection with existing legal frameworks;

3. Smart contracts and governance design for token ecosystems; and

4. Legal empowerment and legal services use cases for blockchain technologies.

The panel then discussed the ‘Pillars of Trust’ – Governance, Identity, Security and Privacy in DLT. During this panel Geoff Goodell (UCL) provided an interesting set of perspectives, including the need for people to have multiple identities. He described how electronic funds transfer systems provide the best surveillance network in the world, and stated that it is only now becoming clear what risks are associated with linking people’s activities. To ensure privacy, only the minimum information needed should be required to be disclosed, and systems need to be accountable to their users. DLTs are not immutable – the people in control can decide to make changes (for example a code fork) in a way that is unaccountable. Peter Howes then discussed the evidentiary value of IoT data – he expressed the view that blockchain will not obviate disputes but will reduce the number of areas for dispute.

During the afternoon some Real-World Use-Cases for blockchain and DLT were discussed:

Laura Bailey (Qadre & British Blockchain Association) – described how Qadre has developed their own blockchain system “PL^G” and how this is being prototyped for pharmaceutical anticounterfeiting in support of the EU Falsified Medicines Directive.

Jason Blick (EQI Trade) described how they are aiming to launch the world’s first offshore bank that bridges fiat and cryptocurrencies using blockchain technologies. He announced that they will shortly launch a blockchain-based KYC system, EQI Check.

Brian Spector (Qredo) described a Distributed Ledger Payments Platform for the telecoms industry. This could not use proof of work because of the compute overhead; instead the network will use a “proof of speed” consensus algorithm.

KuppingerCole is actively researching blockchain and DLT, including its applications to identity, privacy and security. Recently, at EIC (European Identity & Cloud Conference) in Munich, there were several workshops and sessions devoted to practical implementations of blockchain. In the opening keynote at EIC, Martin Kuppinger described the areas where blockchain technology has the potential to help solve real-world identity challenges. There are already so many KYC (Know Your Customer) use cases based on blockchain with valid business models that this is now a reality, or at least close to becoming one. Blockchain also has the potential to simplify authentication by having various authenticators and IDs associated with a wallet. Its application to authorization, privacy and smart contracts also has obvious potential.

However, a practical realization of these potentials requires trustworthiness which takes us back to the question of governance. Good governance remains vital to avoid traditional challenges of DLT and to ensure that these inherent problems are not exacerbated in blockchain implementations due to a lack of governing principles.

Pseudo What and GDPR?

GDPR comes into force on May 25th this year; the obligations it imposes are stringent, the penalties for non-compliance are severe, and yet many organizations are not fully prepared. There has been much discussion in the press around the penalties under GDPR for data breaches. KuppingerCole’s advice is that preparation based on six key activities is the best way to avoid these penalties. The first two activities are to find the personal data and to control access to this data.

While most organizations will be aware of where personal data is used as part of their normal business operations, many use this data indirectly, for example as part of test and development activities. Because of the wide definition of processing given in GDPR, this use is also covered by the regulation. The Data Controller is responsible for demonstrating that this use of personal data is fair and lawful. If this can be shown, then the Data Controller will also need to be able to show that this processing complies with all the other data protection requirements.

While the costs and complexities of compliance with GDPR may be justified by the benefits of using personal data for normal business processes, this is unlikely to be the case for its non-production use. However, the GDPR provides a way to legitimately avoid the need for compliance. According to GDPR (Recital 26), the principles of data protection should not apply to anonymous information, that is, information which does not relate to an identified or identifiable natural person or personal data rendered anonymous in such a manner that the data subject is not identifiable.

One approach is known as pseudonymisation, and GDPR accepts the use of pseudonymisation as an approach to data protection by design and data protection by default. (Recital 78). Pseudonymisation is defined in Article 4 as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information...” with the additional proviso that the additional information is kept separate and well protected.

In addition, under Article 6 (4)(e), the Data Controller can take account of the existence of appropriate safeguards, which may include encryption or pseudonymisation, when considering whether processing for another purpose is compatible with the purpose for which the personal data were initially collected. However, the provisos introduce an element of risk for the Data Controller relating to the reversibility of the process and the protection of any additional information that could be used to identify individuals from the pseudonymized data.
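One common pseudonymisation technique that fits the Article 4 definition is keyed hashing, where the secret key is the “additional information” that must be kept separately and protected. The sketch below is a minimal illustration; the field names and key handling are assumptions, and a real deployment would also need proper key management and re-identification controls.

```python
# Illustrative keyed-hash pseudonymisation: the secret key is the "additional
# information" that must be stored separately from the pseudonymised data set.
import hashlib
import hmac
import os

SECRET_KEY = os.environ["PSEUDONYMISATION_KEY"].encode()  # kept outside the data set

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable token that cannot be reversed."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer_email": "jane@example.com", "purchase_total": 42.50}
test_record = {**record, "customer_email": pseudonymise(record["customer_email"])}
print(test_record)  # only the key holder can link the token back to the original person
```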

However, not all approaches to anonymization and pseudonymisation are equal. In 2014, the EU Article 29 Working Party produced a report giving its opinion on anonymisation techniques in the context of EU privacy law. Although it is written with reference to the previous directive 95/46/EC, it is still very relevant. It identifies three tests which should be used to judge an anonymization technique (the sketch after the list illustrates the first):

  1. is it still possible to single out an individual?
  2. is it still possible to link records relating to an individual?
  3. can information be inferred concerning an individual?
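The first test, whether an individual can still be singled out, can be roughly approximated by checking whether any combination of quasi-identifiers maps to exactly one record. The column names and data below are assumed for illustration.

```python
# Rough check for the "singling out" test: any quasi-identifier combination that
# occurs exactly once still isolates an individual (columns are assumed examples).
import pandas as pd

data = pd.DataFrame({
    "postcode": ["SW1A", "SW1A", "M1", "M1"],
    "birth_year": [1980, 1980, 1975, 1992],
    "diagnosis": ["A", "B", "A", "C"],
})

quasi_identifiers = ["postcode", "birth_year"]
group_sizes = data.groupby(quasi_identifiers).size()
singled_out = group_sizes[group_sizes == 1]
print(singled_out)  # combinations that still identify exactly one person
```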

It also provides examples of where anonymization techniques have failed. For example, in 2006, AOL publicly released a database containing twenty million search keywords for over 650,000 users over a 3-month period. The only privacy preserving measure consisted of replacing the AOL user ID by a numerical attribute. This led to the public identification and location of some of the users by the NY Times and other researchers.

Pseudonymization provides a useful control over the privacy of personal data and is recognized by GDPR as a component of privacy by design. However, it is vital that you choose and apply the appropriate pseudonymization techniques for your use case correctly. For more information on this subject attend KuppingerCole’s webinar “Acing the Upcoming GDPR Exam”. There will also be a stream of sessions on GDPR at KuppingerCole’s European Identity & Cloud Conference in Munich, May 15-18th, 2018.

GDPR and Financial Services – Imperatives and Conflicts

Over the past months two major financial services regulations have come into force: the Fourth Anti-Money Laundering Directive (4AMLD) and the Second Payment Services Directive (PSD II). In May this year the EU General Data Protection Regulation will be added. Organizations within the scope of these need to undertake a considerable amount of work to identify obligations, manage conflicts, implement controls and reduce overlap.

The EU GDPR (General Data Protection Regulation), which becomes effective on May 25th, 2018, will affect organizations worldwide that hold or process personal data relating to people resident in the European Union. The definition of both personal data and processing under GDPR are very broad, and processing is only considered to be lawful if it meets a set of strict criteria. GDPR also gives the data subjects extended rights to access, correct and erase their personal data, as well as to withdraw consent to its use. The sanctions for non-compliance are very severe with penalties of up to 4% of annual worldwide turnover. Critically, the organization that collects the personal data, called the Data Controller, is responsible for both implementing and demonstrating compliance.

GDPR emphasizes transparency and the rights of data subjects and this may lead to conflicts with the other directives.

4AMLD - EU Directive 2015/849 of 20 May 2015 is often referred to as the Fourth EU Anti-Money Laundering Directive (4AMLD). The purpose of the Directive is to remove any ambiguities in the previous legislation and to improve the consistency of anti-money laundering (AML) and counter terrorist financing (CTF) rules across all EU Member States. This directive applies to a wide range of organizations, not just to banks. These include credit institutions, financial institutions, auditors, external accountants and tax advisors, estate agents, anyone trading in cash over EUR 10,000, and providers of gambling services.

In the UK this directive has been implemented through the “Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017”, which came into force on 26th June 2017. In this, the 44 pages of the EU Directive have become 120 pages of regulation.

Clearly, to counter money laundering and terrorist financing involves understanding the identities of the individuals performing transactions and exactly who owns the assets being held and transferred. This makes it necessary to obtain, use and store personal data. So, is there any conflict with GDPR?

One area where there may be some concern is in relation to Politically Exposed Persons (PEPs) and their known close associates. Under regulation 35(15) of the UK regulatory instrument, to decide whether a person is a known close associate an organization need only have regard to information which is in its possession, or to credible information which is publicly available.

The UK Information Commissioner made several comments on this area in the drafts of the regulations.

  • Political party registers are a source of publicly available information on PEPs, but it is not clear that party members are informed or understand that their information in these registers could be used in this way.
  • A person could be denied access to financial products due to inaccurate publicly sourced data or misattributed publicly sourced data. Under GDPR a data subject has the right to know where information has been sourced from and to challenge its accuracy. A clearer definition of “credible information” is needed.

The regulation requires the creation and maintenance of various registers. Specifically, under regulation 45(6), a register of the beneficial owners of trusts must include personal data. The unauthorized exposure of this data could potentially be very damaging to the individuals concerned, and it is subject to GDPR.

PSD II - EU Directive 2015/2366 of 25 November 2015 is often referred to as the Payment Services Directive II (PSD II). This directive amends and consolidates several existing directives, and a key purpose is to open the market for electronic payment services. Member States, including the UK, were required to implement the Directive into national law by 13 January 2018, and this has been achieved through the Payment Services Regulations 2017. Some aspects have been delegated to the European Banking Authority (EBA) and will not be effective until Q3 2019.

PSD II introduces third parties into financial transactions and this can add to the privacy challenges as recognized by comments from the UK ICO on the UK Regulations mentioned above. Where an individual is scammed into making a transfer, or makes a payment using incorrect details for the payee, the banks often cite data privacy as a reason to refuse to provide the payer with the details of the actual recipient. Under Open Banking there is now an additional party involved in the transaction and this may make it even more difficult for the payer under these circumstances. However, in the UK Regulation 90:

  • obliges the payment service provider to make reasonable efforts to recover the funds involved in the payment transaction; and
  • if unable to recover the funds, it must, on receipt of a written request, provide to the payer all available relevant information for the payer to claim repayment of the funds.

This leaves an element of uncertainty: does “relevant information” include the personal details of the recipient? Clearly, if it does, under GDPR the payment service providers must make sure that they have obtained consent from their customers for the use of their data under these circumstances.

In conclusion – the EU directives and regulations usually state how they relate to each other. In the case of directives, their national implementation can add an extra degree of complexity. Furthermore, these regulations exist within legal frameworks and local case law. In principle there should be no conflicts; however, organizations have often been ready to cite “privacy” as a reason for providing poor service.

EBA Rules out Secure Open Banking?

On January 30th in London I attended a joint workshop between OpenID and the UK Open Banking community that was facilitated by Don Thibeau of OIX. This workshop included an update from Mike Jones on the work being done by OpenID, and from Chris Michael, Head of Technology at OBIE, on UK Open Banking.

Firstly, some background to set the context for this. On January 13th, 2018 a new set of rules for banking came into force that stem from the EU Directive 2015/2366 of 25 November 2015 commonly known as Payment Services Directive 2 (PSD2). While PSDII prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. As one might expect, providing authorized access to payment services requires identifying and properly authenticating users – see KuppingerCole’s Advisory Note: Consumer Identity and Access Management for “Know Your Customer”.

One of the key players in this area is the OpenID Foundation. This is a non-profit, international standards organization, founded in 2007, that is committed to enabling, promoting and protecting OpenID technologies. While OpenID is relevant to many industries one area of particular interest is financial services. OpenID has a Financial API Working Group (FAPI) led by Nat Sakimura that is working to define APIs that enable applications to utilize the data stored in financial accounts, interact with those accounts, and to enable users to control their security and privacy settings.

Previously it was common for financial services, such as account aggregation services, to use screen scraping and to store user passwords. Screen scraping is inherently insecure (see GDPR vs. PSD2: Why the European Commission Must Eliminate Screen Scraping). The current approach utilizes a token model such as OAuth [RFC6749, RFC6750], with the aim of developing a REST/JSON model protected by OAuth. However, OAuth needs to be profiled for the financial use cases.
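For readers unfamiliar with the token model, here is a minimal sketch of the OAuth 2.0 authorization-code (redirect) flow using the requests-oauthlib library. The endpoints, client credentials and scope are placeholders, not real Open Banking values or profiles.

```python
# Sketch of the OAuth 2.0 authorization-code flow (RFC 6749) with requests-oauthlib.
# All URLs, the client id/secret and the scope are placeholders for illustration.
from requests_oauthlib import OAuth2Session

CLIENT_ID = "example-tpp-client"
CLIENT_SECRET = "example-secret"
AUTHORIZE_URL = "https://bank.example.com/oauth2/authorize"
TOKEN_URL = "https://bank.example.com/oauth2/token"

session = OAuth2Session(
    CLIENT_ID,
    redirect_uri="https://tpp.example.com/callback",
    scope=["accounts"],
)

# 1. The user is redirected to their bank to authenticate - the third party never
#    sees the banking credentials.
authorization_url, state = session.authorization_url(AUTHORIZE_URL)
print("Send the user to:", authorization_url)

# 2. After consent the bank redirects back; the code in that response is exchanged
#    for an access token the third party can use against the account APIs.
redirect_response = input("Paste the full callback URL: ")
token = session.fetch_token(
    TOKEN_URL,
    client_secret=CLIENT_SECRET,
    authorization_response=redirect_response,
)
print("Access token scope:", token.get("scope"))
```

The point of the redirect is that only a short-lived token, never the customer’s credentials, crosses the boundary between the bank and the third party.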

In the UK, the APIs being specified by OBIE include an Open Banking OIDC Security Profile, which is based upon the work of OpenID. This has some differences from the FAPI R+W profile that are considered necessary to reduce delivery risk for ASPSPs.

In July 2017 it seemed that the EBA (European Banking Authority) had made a wise decision and rejected the Commission’s amendments on screen scraping. However, in November 2017 the draft supplement to the EU technical regulations was published. In this, Article 32 (3) sets out the obligations for a dedicated interface. In summary, these oblige account servicing payment service providers to ensure that the interface does not create obstacles. Obstacles specified in the RTS include:

  • Preventing the use by payment service providers of the credentials issued by account servicing payment service providers to their customers;
  • Imposing redirection to the account servicing payment service provider's authentication or other functions;
  • Requiring additional authorisations and registrations in addition or requiring additional checks of the consent given by payment service users to providers of payment initiation and account information services.

These obligations appear to fly in the face of what has become accepted security good practice: that one application should never directly share actual credentials with another application. Identity federation technologies such as OAuth and SAML have been reliably providing more secure means for cross-domain authentication for over a decade.

Ralph Bragg, Head of Architecture at OBIE, described 3 possible approaches that were being considered in the context of these obligations. These approaches can be summarized as:

  • Redirect – the OAuth model, where the end user is redirected to the ASPSP to authenticate and the PSP receives a token. This appears to be non-compliant.
  • Embedded – where the PISP obtains the first and second factors from the end user and transmits these to the bank. This appears to be insecure.
  • Decoupled – where the end user completes the authorization on a separate device or application. This introduces further complexities.

This was discussed in a panel session involving many of the leading thinkers in this area including: Mike Jones, Microsoft, John Bradley, Yubico, Dave Tonge, Momentum FT, and Joseph Heenan, Fintech Labs.

There was a wide-ranging discussion which resulted in a general agreement that:

  • The embedded model involves the third party (PISP) in holding and transmitting credentials. This is very poor security practice and increases the attack surface. Attacks on the PISP could result in theft of the credentials to access the bank (ASPSP).
  • The redirection model is overall the best from a security point of view. Customers are generally happiest with redirect because they feel confident in their own bank. However, the bank may be the competitor of the PISP and so could make the process unfriendly.
  • PSD2 should be viewed from an end-to-end perspective.

It seems perverse that technical regulations associated with the opening of electronic payment services appear to inhibit the use of the most up-to-date cybersecurity measures. The direct sharing of passwords or other forms of authentication credentials between services increases risks. It is generally better for regulations to oblige the use of widely accepted best practices rather than prohibiting them. OAuth is a well-understood and ubiquitously employed protocol that can help financial service providers achieve cross-domain authorization. It is my hope that the current wording of the regulations will not lead to a retrograde step in banking security.

UK Open Banking – Progress and Challenges

On January 13th, 2018 a new set of rules for banking came into force that open up the market by allowing new companies to offer electronic payment services. These rules follow from the EU Directive 2015/2366 of 25 November 2015 that is commonly referred to as Payment Services Directive II (PSDII). They promise innovation that some believed the large banks in the UK would otherwise fail to provide. However, as well as providing opportunities they also introduce new risks. Nevertheless, it is good to see the progress that has been made in the UK towards implementing this directive.

Under this new regime the banks, building societies, credit card issuers, e-money institutions, and others (known as Account Servicing Payment Service Providers, ASPSPs) must provide an electronic interface (APIs) that allows third parties (Payment Service Providers or PSPs) to operate an account on behalf of the owner. This opens up the banking system to organizations that are able to provide better ways of making payments, for example through new and better user interfaces (apps), as well as completely new services that could depend upon an analysis of how you spend your money. These new organizations do not need to run the complete banking service with all that that entails; they just need to provide additional services that are sufficiently attractive to pay their way.

This introduces security challenges by increasing the potential attack surface and, according to some, may introduce conflicts with GDPR privacy obligations. It is therefore essential that security is top of mind when designing, implementing and deploying these systems. In the worst case they present a whole new opportunity for cyber criminals. As regards the potential conflicts with GDPR there will be a session at KuppingerCole’s Digital Finance World in February on this subject. For example, one challenge concerns providing the details of a recipient of an erroneous transfer who refuses to return the money.

To meet the requirements of this directive, the banking industry is moving its IT systems towards platforms that allow them to exploit multiple channels to their customers. This can be achieved in various ways – the cheap and cheerful method being “screen scraping”, which needs no change to existing systems because new apps interact through the existing user interfaces. This creates not only security challenges but also a technical architecture that is very messy. A much better approach is to extend existing systems to add open APIs. This is the approach being adopted in the UK.

PSD II is a directive and therefore each EU state needs to implement this locally. However, the job of implementing some of the provisions, including regulatory technical standards (RTS) and guidelines, has been delegated to the European Banking Authority (EBA). In the UK, HM Treasury published the final Payment Services Regulations 2017. The UK Financial Conduct Authority (FCA) issued a joint communication with the Treasury on PSDII and open banking following the publication of these regulations.

While PSDII prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. 

As of now they have published APIs that include:

Open Data API specifications allow API providers (e.g. banks, building societies and ATM providers) to develop API endpoints which can then be accessed by API users (e.g. third-party developers) to build mobile and web applications for banking customers. These allow providers to supply up to date, standardised, information about the latest available products and services so that, for example, a comparison website can more easily and accurately gather information, and thereby develop better services for end customers.
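As an illustration, open-data endpoints of this kind can be called without any customer consent or access token. The sketch below uses the Python requests library against a placeholder base URL and path, since the actual endpoints and response structure vary by provider.

```python
# Illustrative call to an open-data endpoint (no customer consent or token needed).
# The base URL, path and response structure are placeholders, not a specific bank's API.
import requests

BASE_URL = "https://openbanking.bank.example.com/open-banking/v2.2"

response = requests.get(f"{BASE_URL}/products", timeout=10)
response.raise_for_status()

products = response.json()
print(products)  # e.g. a list of current-account or ATM records, depending on the endpoint
```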

Open Banking Read/Write APIs enable Account Servicing Payment Service Providers to develop API endpoints to an agreed standard so that Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs) can build web and mobile applications for Payment Service Users (PSUs, e.g. personal and business banking customers).

These specifications are now in the public domain which means that any developer can access them to build their end points and applications. However, use of these in a production environment is limited to approved/authorised ASPSPs, AISPs and PISPs.

Approved/authorised organisations will be enrolled in the Open Banking Directory. This will provide digital identities and certificates which enable organisations to securely connect and communicate via the Open Banking Security Profile in a standard manner and to best protect all parties.

Open Banking OIDC Security Profile - In many cases, Fintech services such as aggregation services use screen scraping and store user passwords. This is not adequately secure, and the approach being taken is to use a token model such as OAuth [RFC6749, RFC6750]. The aim is to develop a REST/JSON model protected by OAuth. However, OAuth needs to be profiled for the financial use cases. Therefore, the Open Banking Profile has some differences from the FAPI R+W profile that are considered necessary to reduce delivery risk for ASPSPs.

This all seemed straightforward until the publication of the draft supplement to the EU technical regulations. This appears to prohibit the use of many secure approaches, and I will cover this in a later blog.

In conclusion, the UK banking industry has taken great strides to define an open set of APIs that will allow banks to open their services as required by PSD II. It would appear that, in this respect, the UK is ahead of the rest of the EU. At the moment, these APIs only cover a limited set of use cases, principally making an immediate transfer of funds in UK pounds. In addition, the approach to strong authentication is still under discussion. One further concern is to ensure that all of the potential privacy issues are handled transparently. To hear more on these subjects, attend KuppingerCole Digital Finance World in Frankfurt in February 2018.

McAfee Acquires Skyhigh Networks

McAfee, from its foundation in 1987, has a long history in the world of cyber-security. Acquired by Intel in 2010, it was spun back out, becoming McAfee LLC, in April 2017. According to the announcement on April 23rd, 2017 by CEO Christopher D. Young, the new company will be “One that promises customers cybersecurity outcomes, not fragmented products.” So, it is interesting to consider what the acquisition of Skyhigh Networks, which was announced by McAfee on November 27th, will mean.

Currently, McAfee solutions cover areas that include: antimalware, endpoint protection, network security, cloud security, database security, endpoint detection and response, as well as data protection.   Skyhigh Networks are well known for their CASB (Cloud Access Security Broker) product.  So how does this acquisition fit into the McAfee portfolio?

Well, the nature of the cyber-risks that organizations are facing has changed.  Organizations are increasingly using cloud services because of the benefits that they can bring in terms of speed to deployment, flexibility and price.  However, the governance over the use of these services is not well integrated into the normal organizational IT processes and technologies; CASBs address these challenges. They provide security controls that are not available through existing security devices such as Enterprise Network Firewalls, Web Application Firewalls and other forms of web access gateways. They provide a point of control over access to cloud services by any user and from any device.  They help to demonstrate that the organizational use of cloud services meets with regulatory compliance needs.

In KuppingerCole’s opinion, the functionality to manage access to cloud services and to control the data that they hold should be integrated with the normal access governance and cyber security tools used by organizations. However, the vendors of these tools were slow to develop the required capabilities, and the market in CASBs evolved to plug this gap. The McAfee acquisition of Skyhigh Networks is the latest of several recent examples of acquisitions of CASBs by major security software and hardware vendors.

The functions that CASBs provide fit into the overall cloud governance process. These basic functionalities are (a discovery sketch follows the list):

  1. Discovery of what cloud services are being used, by whom and for what data.
  2. Control over who can use which services and what data can be transferred.
  3. Protection of data in the cloud against unauthorized access and leakage.
  4. Regulatory compliance and protection against cyber threats through the above controls.
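The first of these functions, discovery, often amounts to mining web-proxy or firewall logs for traffic to known cloud services. The sketch below is a minimal illustration under an assumed log format and a small hand-picked domain list; real CASBs use far larger service registries and risk ratings.

```python
# Illustrative cloud-service discovery from a web-proxy log (CSV with assumed columns:
# timestamp,user,destination_host,bytes_out). The domain list is a hand-picked sample.
import csv
from collections import defaultdict

KNOWN_CLOUD_SERVICES = {
    "sharepoint.com": "Office 365",
    "salesforce.com": "Salesforce",
    "dropbox.com": "Dropbox",
    "s3.amazonaws.com": "Amazon S3",
}

usage = defaultdict(lambda: {"users": set(), "bytes_out": 0})

with open("proxy.log", newline="") as f:
    for row in csv.DictReader(f):
        for domain, service in KNOWN_CLOUD_SERVICES.items():
            if row["destination_host"].endswith(domain):
                usage[service]["users"].add(row["user"])
                usage[service]["bytes_out"] += int(row["bytes_out"])

for service, stats in usage.items():
    print(f"{service}: {len(stats['users'])} users, {stats['bytes_out']} bytes uploaded")
```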

So, in this analysis CASBs are closer to Access Governance solutions than to traditional cyber-security tools.  They recognize that identity and access management are the new cyber-frontier, and that cyber-defense needs to operate at this level.  By providing these functions Skyhigh Networks provides a solution that is complementary to those already offered by McAfee and extends McAfee’s capabilities in the direction needed to meet the capabilities of the cloud enabled, agile enterprise.

The Skyhigh Networks CASB provides comprehensive functionality that strongly matches the requirements described above.  It is also featured in the leadership segment of KuppingerCole’s Leadership Compass: Cloud Access Security Brokers - 72534.  This acquisition is consistent with KuppingerCole’s view on how cyber-security vendors need to evolve to meet the challenges of cloud usage.  Going forward, organizations need a way to provide consistent access governance for both on premise and cloud based services.  This requires functions such as segregation of duties, attestation of access rights and other compliance related governance aspects.  Therefore, in the longer term CASBs need to evolve in this direction.  It will be interesting to watch how McAfee integrates the Skyhigh product and how the McAfee offering evolves towards this in the future.
