Blog posts by Mike Small

PSEUDO WHAT AND GDPR?

GDPR comes into force on May 25th this year. Its obligations are stringent, the penalties for non-compliance are severe, and yet many organizations are not fully prepared. There has been much discussion in the press around the penalties under GDPR for data breaches. KuppingerCole’s advice is that preparation based on six key activities is the best way to avoid these penalties. The first two of these activities are to find the personal data and to control access to it.

While most organizations will be aware of where personal data is used as part of their normal business operations, many use this data indirectly, for example as part of test and development activities. Because of the wide definition of processing given in GDPR, this use is also covered by the regulation. The Data Controller is responsible for demonstrating that this use of personal data is fair and lawful. If this can be shown, the Data Controller will also need to be able to show that this processing complies with all the other data protection requirements.

While the costs and complexities of compliance with GDPR may be justified by the benefits of using personal data for normal business processes, this is unlikely to be the case for its non-production use. However, the GDPR provides a way to legitimately avoid the need for compliance. According to GDPR (Recital 26), the principles of data protection should not apply to anonymous information, that is information which does not relate to an identified or identifiable natural person, or to personal data rendered anonymous in such a manner that the data subject is not identifiable.

One approach is known as pseudonymisation, and GDPR accepts the use of pseudonymisation as an approach to data protection by design and data protection by default (Recital 78). Pseudonymisation is defined in Article 4 as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information...”, with the additional proviso that the additional information is kept separate and well protected.
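As a minimal illustration of the idea (my own sketch, not a technique mandated by the regulation), the snippet below pseudonymises a direct identifier with a keyed hash; the secret key plays the role of the “additional information” that must be kept separate and well protected.

```python
import hmac
import hashlib

# The secret key is the "additional information" referred to in Article 4:
# it must be stored separately from the pseudonymised data and well protected
# (for example in a dedicated key vault, not alongside the test data).
PSEUDONYMISATION_KEY = b"replace-with-a-key-held-outside-the-test-environment"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a personal identifier.

    The same input always yields the same pseudonym, so records can still be
    linked for testing, but the original value cannot be recovered without
    the separately held key.
    """
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Example: replace direct identifiers before copying data into a test system.
record = {"name": "Jane Doe", "email": "jane.doe@example.com", "balance": 1234.56}
safe_record = {
    "customer_ref": pseudonymise(record["email"]),  # pseudonym instead of e-mail
    "balance": record["balance"],                   # non-identifying data kept as-is
}
print(safe_record)
```

Whether the result counts as pseudonymised or effectively anonymous depends on who can access the key and on whether the remaining attributes still allow individuals to be identified.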

In addition, under Article 6(4)(e), the Data Controller can take account of the existence of appropriate safeguards, which may include encryption or pseudonymisation, when considering whether processing for another purpose is compatible with the purpose for which the personal data were initially collected. However, the provisos introduce an element of risk for the Data Controller, relating to the reversibility of the process and the protection of any additional information that could be used to identify individuals from the pseudonymised data.

However, not all approaches to anonymization and pseudonymisation are equal. In 2014, the EU Article 29 Working Party produced a report giving its opinion on anonymization techniques in the context of EU privacy law. Although it was written with reference to the previous Directive 95/46/EC, it is still very relevant. It identifies three tests which should be used to judge an anonymization technique (the first of these is illustrated by the sketch after the list):

  1. is it still possible to single out an individual?
  2. is it still possible to link records relating to an individual?
  3. can information be inferred concerning an individual?
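As a minimal, hypothetical illustration of the first test, the sketch below counts how many records remain unique on a chosen set of quasi-identifiers; unique combinations are exactly the records that can still be singled out.

```python
from collections import Counter

# Pseudonymised test records: the direct identifier is gone, but combinations
# of remaining attributes (quasi-identifiers) may still single out a person.
records = [
    {"postcode": "SW1A 1AA", "birth_year": 1970, "gender": "F"},
    {"postcode": "SW1A 1AA", "birth_year": 1970, "gender": "F"},
    {"postcode": "M1 1AE",   "birth_year": 1985, "gender": "M"},
]

quasi_identifiers = ("postcode", "birth_year", "gender")

def singled_out(rows, keys):
    """Return the rows whose quasi-identifier combination appears only once."""
    counts = Counter(tuple(r[k] for k in keys) for r in rows)
    return [r for r in rows if counts[tuple(r[k] for k in keys)] == 1]

at_risk = singled_out(records, quasi_identifiers)
print(f"{len(at_risk)} of {len(records)} records can be singled out")
```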

It also provides examples of where anonymization techniques have failed. For example, in 2006, AOL publicly released a database containing twenty million search keywords for over 650,000 users over a 3-month period. The only privacy-preserving measure consisted of replacing the AOL user ID with a numerical attribute. This led to the public identification and location of some of the users by the New York Times and other researchers.

Pseudonymization provides a useful control over the privacy of personal data and is recognized by GDPR as a component of privacy by design. However, it is vital that you choose the appropriate pseudonymization techniques for your use case and apply them correctly. For more information on this subject attend KuppingerCole’s webinar “Acing the Upcoming GDPR Exam”. There will also be a stream of sessions on GDPR at KuppingerCole’s European Identity & Cloud Conference in Munich, May 15-18th, 2018.

GDPR and Financial Services – Imperatives and Conflicts

Over the past months, two major pieces of EU financial services legislation have come into force: the Fourth Anti-Money Laundering Directive (4AMLD) and the Second Payment Services Directive (PSD II). In May this year the EU General Data Protection Regulation will be added. Organizations within the scope of these need to undertake a considerable amount of work to identify obligations, manage conflicts, implement controls and reduce overlap.

The EU GDPR (General Data Protection Regulation), which becomes effective on May 25th, 2018, will affect organizations worldwide that hold or process personal data relating to people resident in the European Union. The definition of both personal data and processing under GDPR are very broad, and processing is only considered to be lawful if it meets a set of strict criteria. GDPR also gives the data subjects extended rights to access, correct and erase their personal data, as well as to withdraw consent to its use. The sanctions for non-compliance are very severe with penalties of up to 4% of annual worldwide turnover. Critically, the organization that collects the personal data, called the Data Controller, is responsible for both implementing and demonstrating compliance.

GDPR emphasizes transparency and the rights of data subjects and this may lead to conflicts with the other directives.

4AMLD - EU Directive 2015/849 of 20 May 2015 is often referred to as the Fourth EU Anti-Money Laundering Directive (4AMLD). The purpose of the Directive is to remove any ambiguities in the previous legislation and to improve the consistency of anti-money laundering (AML) and counter-terrorist financing (CTF) rules across all EU Member States. This directive applies to a wide range of organizations, not just to banks. These include: credit institutions, financial institutions, auditors, external accountants and tax advisors, estate agents, anyone trading in cash over EUR 10,000, and providers of gambling services.

In the UK this directive has been implemented through the “Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017”, which came into force on 26th June 2017. In this, the 44 pages of the EU Directive have become 120 pages of regulation.

Clearly, countering money laundering and terrorist financing involves understanding the identities of the individuals performing transactions and exactly who owns the assets being held and transferred. This makes it necessary to obtain, use and store personal data. So, is there any conflict with GDPR?

One area where there may be some concern is in relation to Politically Exposed Persons (PEPs) and their known close associates. Under regulation 35(15) of the UK regulatory instrument, to decide whether a person is a known close associate, an organization need only have regard to information which is in its possession or to credible information which is publicly available.

The UK Information Commissioner made several comments on this area in the drafts of the regulations.

  • Political party registers are a source of publicly available information on PEPs, but it is not clear that party members are informed of, or understand, how their information in these registers could be used in this way.
  • A person could be denied access to financial products due to inaccurate or misattributed publicly sourced data. Under GDPR a data subject has the right to know where information has been sourced from and to challenge its accuracy. A clearer definition of “credible information” is needed.

The regulation requires the creation and maintenance of various registers. Specifically, under regulation 45(6), a register of the beneficial owners of trusts must include personal data. The unauthorized exposure of this data could potentially be very damaging to the individuals concerned, and it is subject to GDPR.

PSD II - EU Directive 2015/2366 of 25 November 2015 is often referred to as Payment Services Directive II (PSD II). This directive amends and consolidates several existing directives and has as a key purpose the opening of the market for electronic payment services. Member States, including the UK, had to implement the Directive into national law by 13 January 2018; in the UK this has been achieved through the Payment Services Regulations 2017. Some aspects have been delegated to the European Banking Authority (EBA) and will not be effective until Q3 2019.

PSD II introduces third parties into financial transactions and this can add to the privacy challenges as recognized by comments from the UK ICO on the UK Regulations mentioned above. Where an individual is scammed into making a transfer, or makes a payment using incorrect details for the payee, the banks often cite data privacy as a reason to refuse to provide the payer with the details of the actual recipient. Under Open Banking there is now an additional party involved in the transaction and this may make it even more difficult for the payer under these circumstances. However, in the UK Regulation 90:

  • obliges the payment service provider to make reasonable efforts to recover the funds involved in the payment transaction; and
  • if it is unable to recover the funds, it must, on receipt of a written request, provide to the payer all available relevant information for the payer to claim repayment of the funds.

This leaves an element of uncertainty: does “relevant information” include the personal details of the recipient? Clearly, if it does, then under GDPR the payment service providers must make sure that they have obtained consent from their customers for the use of their data under these circumstances.

In conclusion – the EU directives and regulations usually state how they relate to each other. In the case of directives, their national implementation can add an extra degree of complexity. Furthermore, these regulations exist within legal frameworks and local case law. In principle there should be no conflicts; however, organizations have often been ready to cite “privacy” as a reason for providing poor service.

EBA Rules out Secure Open Banking?

On January 30th in London I attended a joint workshop between OpenID and the UK Open Banking community that was facilitated by Don Thibeau of OIX. This workshop included an update from Mike Jones on the work being done by OpenID and from Chris Michael, Head of Technology at OBIE, on UK Open Banking.

Firstly, some background to set the context. On January 13th, 2018 a new set of rules for banking came into force that stem from the EU Directive 2015/2366 of 25 November 2015, commonly known as Payment Services Directive 2 (PSD2). While PSD2 prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. As one might expect, providing authorized access to payment services requires identifying and properly authenticating users – see KuppingerCole’s Advisory Note: Consumer Identity and Access Management for “Know Your Customer”.

One of the key players in this area is the OpenID Foundation. This is a non-profit, international standards organization, founded in 2007, that is committed to enabling, promoting and protecting OpenID technologies. While OpenID is relevant to many industries one area of particular interest is financial services. OpenID has a Financial API Working Group (FAPI) led by Nat Sakimura that is working to define APIs that enable applications to utilize the data stored in financial accounts, interact with those accounts, and to enable users to control their security and privacy settings.

Previously it was common for financial services, such as account aggregation services, to use screen scraping and to store user passwords. Screen scraping is inherently insecure (see GDPR vs. PSD2: Why the European Commission Must Eliminate Screen Scraping). The current approach utilizes a token model such as OAuth [RFC6749, RFC6750], with the aim of developing a REST/JSON model protected by OAuth. However, OAuth needs to be profiled for the financial use cases.

In the UK, the APIs being specified by OBIE include an Open Banking OIDC Security Profile, which is based upon the work of OpenID. This has some differences from the FAPI Read/Write (R+W) profile, considered necessary to reduce delivery risk for ASPSPs.

In July 2017 it seemed that the EBA (European Banking Authority) had made a wise decision and rejected the Commission’s amendments on screen scraping. However, in November 2017 the draft supplement to the EU technical regulations was published. In this, Article 32(3) sets out the obligations for a dedicated interface; in summary, these oblige account servicing payment service providers to ensure that the interface does not create obstacles. Obstacles specified in the RTS include:

  • Preventing the use by payment service providers of the credentials issued by account servicing payment service providers to their customers;
  • Imposing redirection to the account servicing payment service provider's authentication or other functions;
  • Requiring additional authorisations and registrations, or requiring additional checks of the consent given by payment service users to providers of payment initiation and account information services.

These obligations appear to fly in the face of what has become accepted security good practice: that one application should never directly share actual credentials with another application. Identity federation technologies such as OAuth and SAML have been reliably providing more secure means for cross-domain authentication for over a decade.

Ralph Bragg, Head of Architecture at OBIE, described three possible approaches that were being considered in the context of these obligations. These approaches can be summarized as follows:

  • Redirect – the OAuth model, where the end user is redirected to the ASPSP to authenticate and the PSP receives a token (a minimal sketch of this flow appears after the list). This appears to be non-compliant.
  • Embedded – where the PISP obtains the first and second factors from the end user and transmits these to the bank. This appears to be insecure.
  • Decoupled – where the end user completes the authorization on a separate device or application. This introduces further complexities.
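To make the redirect model concrete, here is a minimal sketch of the OAuth 2.0 authorization code flow; the endpoints, client identifiers and scope are hypothetical placeholders rather than values from the Open Banking specifications. The point is that the customer authenticates only at the bank, and the third party only ever handles a short-lived code and a token, never the banking credentials.

```python
import secrets
from urllib.parse import urlencode

import requests  # third-party HTTP client, used here only for the token exchange

# Illustrative values: in practice these come from enrolment with the bank (ASPSP).
AUTHORIZE_URL = "https://bank.example.com/oauth2/authorize"   # hypothetical
TOKEN_URL     = "https://bank.example.com/oauth2/token"       # hypothetical
CLIENT_ID     = "example-pisp-client-id"
CLIENT_SECRET = "example-pisp-client-secret"
REDIRECT_URI  = "https://pisp.example.com/callback"

# Step 1: the PISP redirects the customer's browser to the bank.
# The PISP never sees the customer's banking credentials.
state = secrets.token_urlsafe(16)  # protects against CSRF on the callback
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "payments",  # illustrative scope
    "state": state,
}
print("Redirect the user to:", f"{AUTHORIZE_URL}?{urlencode(params)}")

# Step 2: after the customer authenticates at the bank, the bank redirects back
# to REDIRECT_URI with ?code=...&state=... . The PISP exchanges the short-lived
# code for an access token over a back channel and uses that token thereafter.
def exchange_code_for_token(code: str) -> dict:
    response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    response.raise_for_status()
    return response.json()  # contains the access token used instead of credentials
```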

This was discussed in a panel session involving many of the leading thinkers in this area, including Mike Jones (Microsoft), John Bradley (Yubico), Dave Tonge (Momentum FT), and Joseph Heenan (Fintech Labs).

There was a wide-ranging discussion which resulted in a general agreement that:

  • The embedded model involves the third party (PISP) in holding and transmitting credentials. This is very poor security practice and increases the attack surface. Attacks on the PISP could result in theft of the credentials to access the bank (ASPSP).
  • The redirection model is overall the best from a security point of view. Customers are generally happiest with redirect because they feel confident in their own bank. However, the bank may be the competitor of the PISP and so could make the process unfriendly.
  • PSD2 should be considered from an end-to-end perspective.

It seems perverse that technical regulations associated with the opening of electronic payment services appear to inhibit the use of the most up-to-date cybersecurity measures. The direct sharing of passwords or other forms of authentication credentials between services increases risks. It is generally better for regulations to oblige the use of widely accepted best practices rather than prohibiting them. OAuth is a well-understood and ubiquitously employed protocol that can help financial service providers achieve cross-domain authorization. It is my hope that the current wording of the regulations will not lead to a retrograde step in banking security.

UK Open Banking – Progress and Challenges

On January 13th, 2018 a new set of rules for banking came into force that open up the market by allowing new companies to offer electronic payment services. These rules follow from the EU Directive 2015/2366 of 25 November 2015 that is commonly referred to as Payment Services Directive II (PSDII). They promise innovation that some believed the large banks in the UK would otherwise fail to provide. However, as well as providing opportunities they also introduce new risks. Nevertheless, it is good to see the progress that has been made in the UK towards implementing this directive.

Under this new regime the banks, building societies, credit card issuers, e-money institutions, and others (known as Account Servicing Payment Service Providers, ASPSPs) must provide an electronic interface (APIs) that allows third parties (Payment Service Providers or PSPs) to operate an account on behalf of the owner. This opens up the banking system to organizations that are able to provide better ways of making payments, for example through new and better user interfaces (apps), as well as completely new services that could depend upon an analysis of how you spend your money. These new organizations do not need to run the complete banking service with all that that entails; they just need to provide additional services that are sufficiently attractive to pay their way.

This introduces security challenges by increasing the potential attack surface and, according to some, may introduce conflicts with GDPR privacy obligations. It is therefore essential that security is top of mind when designing, implementing and deploying these systems. In the worst case they present a whole new opportunity for cyber criminals. As regards the potential conflicts with GDPR there will be a session at KuppingerCole’s Digital Finance World in February on this subject. For example, one challenge concerns providing the details of a recipient of an erroneous transfer who refuses to return the money.

To meet the requirements of this directive, the banking industry is moving its IT systems towards platforms that allow them to exploit multiple channels to their customers. This can be achieved in various ways – the cheap and cheerful method being to use “screen scraping”, which needs no change to existing systems; new apps simply use the existing user interfaces to interact. This creates not only security challenges but also a very messy technical architecture. A much better approach is to extend existing systems by adding open APIs. This is the approach being adopted in the UK.

PSD II is a directive and therefore each EU state needs to implement this locally. However, the job of implementing some of the provisions, including regulatory technical standards (RTS) and guidelines, has been delegated to the European Banking Authority (EBA). In the UK, HM Treasury published the final Payment Services Regulations 2017. The UK Financial Conduct Authority (FCA) issued a joint communication with the Treasury on PSDII and open banking following the publication of these regulations.

While PSDII prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. 

As of now they have published APIs that include:

Open Data API specifications allow API providers (e.g. banks, building societies and ATM providers) to develop API endpoints which can then be accessed by API users (e.g. third-party developers) to build mobile and web applications for banking customers. These allow providers to supply up to date, standardised, information about the latest available products and services so that, for example, a comparison website can more easily and accurately gather information, and thereby develop better services for end customers.

Open Banking Read/Write APIs enable Account Servicing Payment Service Providers to develop API endpoints to an agreed standard so that Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs) can build web and mobile applications for Payment Service Users (PSUs, e.g. personal and business banking customers).
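For illustration only (the host and resource path below are hypothetical placeholders, not a restatement of the published specification), a call by an AISP against such an API is essentially an HTTPS request that carries the OAuth access token obtained with the customer’s consent:

```python
import requests  # third-party HTTP client

# Hypothetical values: real AISPs obtain the base URL from the ASPSP and the
# access token via the OAuth/OIDC flow defined in the security profile.
API_BASE     = "https://api.bank.example.com/open-banking"
ACCESS_TOKEN = "access-token-obtained-with-customer-consent"

def get_accounts() -> dict:
    """Fetch the account information the customer has consented to share."""
    response = requests.get(
        f"{API_BASE}/accounts",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_accounts())
```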

These specifications are now in the public domain which means that any developer can access them to build their end points and applications. However, use of these in a production environment is limited to approved/authorised ASPSPs, AISPs and PISPs.

Approved/authorised organisations will be enrolled in the Open Banking Directory. This will provide digital identities and certificates which enable organisations to securely connect and communicate via the Open Banking Security Profile in a standard manner and to best protect all parties.

Open Banking OIDC Security Profile - In many cases, Fintech services such as aggregation services use screen scraping and store user passwords. This is not adequately secure, so the approach being taken is to use a token model such as OAuth [RFC6749, RFC6750]. The aim is to develop a REST/JSON model protected by OAuth. However, OAuth needs to be profiled to be used in the financial use cases. Therefore, the Open Banking Profile has some differences from the FAPI R+W profile, considered necessary to reduce delivery risk for ASPSPs.

This all seemed straightforward until the publication of the draft supplement to the EU technical regulations. This appears to prohibit the use of many secure approaches, and I will cover it in a later blog.

In conclusion, the UK banking industry has taken great strides to define an open set of APIs that will allow banks to open their services as required by PSD II. It would appear that, in this respect, the UK is ahead of the rest of the EU. At the moment, these APIs cover only a limited set of use cases, principally making an immediate transfer of funds in UK pounds. In addition, the approach to strong authentication is still under discussion. One further concern is to ensure that all of the potential privacy issues are handled transparently. To hear more on these subjects, attend KuppingerCole Digital Finance World in Frankfurt in February 2018.

McAfee Acquires Skyhigh Networks

McAfee, from its foundation in 1987, has a long history in the world of cyber-security.  Acquired by Intel in 2010, it was spun back out, becoming McAfee LLC, in April 2017. According to the announcement on April 23rd, 2017 by Christopher D. Young, CEO – the new company will be “One that promises customers cybersecurity outcomes, not fragmented products.” So, it is interesting to consider what the acquisition of Skyhigh Networks, which was announced by McAfee on November 27th, will mean.

Currently, McAfee solutions cover areas that include: antimalware, endpoint protection, network security, cloud security, database security, endpoint detection and response, as well as data protection.   Skyhigh Networks are well known for their CASB (Cloud Access Security Broker) product.  So how does this acquisition fit into the McAfee portfolio?

Well, the nature of the cyber-risks that organizations are facing has changed.  Organizations are increasingly using cloud services because of the benefits that they can bring in terms of speed to deployment, flexibility and price.  However, the governance over the use of these services is not well integrated into the normal organizational IT processes and technologies; CASBs address these challenges. They provide security controls that are not available through existing security devices such as Enterprise Network Firewalls, Web Application Firewalls and other forms of web access gateways. They provide a point of control over access to cloud services by any user and from any device.  They help to demonstrate that the organizational use of cloud services meets with regulatory compliance needs.

In KuppingerCole’s opinion, the functionality to manage access to cloud services and to control the data that they hold should be integrated with the normal access governance and cyber security tools used by organizations.  However, the vendors of these tools were slow to develop the required capabilities, and the market in CASBs evolved to plug this gap.  The McAfee acquisition of Skyhigh Networks is the latest of several recent examples of acquisitions of CASBs by major security and hardware software vendors.

The functions that CASBs provide fit into the overall cloud governance process through four basic capabilities (a rough sketch of the discovery step follows the list):

  1. Discovery of what cloud services are being used, by whom and for what data.
  2. Control over who can use which services and what data can be transferred.
  3. Protection of data in the cloud against unauthorized access and leakage.
  4. Regulatory compliance and protection against cyber threats through the above controls.
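As a rough sketch of the discovery step, assuming a simplified, made-up proxy log format, outbound web traffic can be mined for cloud services that are in use without the knowledge of IT:

```python
import csv
from collections import Counter

# A simplified, invented proxy log format with columns: timestamp, user, host, bytes.
# Real CASB discovery works from firewall or secure web gateway logs in a similar way.
KNOWN_CLOUD_SERVICES = {
    "dropbox.com": "Dropbox",
    "drive.google.com": "Google Drive",
    "s3.amazonaws.com": "Amazon S3",
}

def discover_cloud_usage(log_path: str) -> Counter:
    """Count requests per known cloud service, matched by destination host suffix."""
    usage = Counter()
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            host = row["host"].lower()
            for suffix, service in KNOWN_CLOUD_SERVICES.items():
                if host == suffix or host.endswith("." + suffix):
                    usage[service] += 1
    return usage

# Example: discover_cloud_usage("proxy_log.csv") might reveal heavy, unsanctioned
# use of a consumer file-sharing service that then needs a control decision.
```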

So, in this analysis CASBs are closer to Access Governance solutions than to traditional cyber-security tools.  They recognize that identity and access management are the new cyber-frontier, and that cyber-defense needs to operate at this level.  By providing these functions Skyhigh Networks provides a solution that is complementary to those already offered by McAfee and extends McAfee’s capabilities in the direction needed to meet the capabilities of the cloud enabled, agile enterprise.

The Skyhigh Networks CASB provides comprehensive functionality that strongly matches the requirements described above.  It is also featured in the leadership segment of KuppingerCole’s Leadership Compass: Cloud Access Security Brokers - 72534.  This acquisition is consistent with KuppingerCole’s view on how cyber-security vendors need to evolve to meet the challenges of cloud usage.  Going forward, organizations need a way to provide consistent access governance for both on premise and cloud based services.  This requires functions such as segregation of duties, attestation of access rights and other compliance related governance aspects.  Therefore, in the longer term CASBs need to evolve in this direction.  It will be interesting to watch how McAfee integrates the Skyhigh product and how the McAfee offering evolves towards this in the future.

Grizzly Steppe – What Every Organization Needs to Do

On December 29th, the FBI together with the DHS (US-CERT) finally released a Joint Analysis Report on the cyber-attacks on the US Democratic Party during the US presidential election.  Every organization, whether based in the US or not, would do well to read this report and to ensure that it takes account of the recommendations.  Once released into the wild, the tactics, techniques and procedures (TTPs) used by state actors are quickly taken up and become widely used by other adversaries.

This report is not a formal indictment of a crime as was the case with the alleged hacking of US companies by the Chinese filed in 2014.  It is however important cyber threat intelligence.

Threat intelligence is a vital part of cyber-defence and cyber-incident response, providing information about the threats, TTPs, and devices that cyber-adversaries employ; the systems and information that they target; and other threat-related information that provides greater situational awareness.  This intelligence needs to be timely, relevant, accurate, specific and actionable.  This report provides such intelligence.

The approaches described in the report are not new.  They involve several phases; one observed approach uses targeted spear-phishing campaigns leveraging web links to a malicious website that installs code.  Once executed, the code delivers Remote Access Tools (RATs) and evades detection using a range of techniques.  The malware connects back to the attackers, who then use the RATs to escalate privileges, enumerate Active Directory accounts, and exfiltrate email through encrypted connections.

Another attack process uses internet domains with names that closely resemble those of targeted organizations to trick potential victims into entering legitimate credentials.  A fake webmail site that collects user credentials when they log in is a favourite.  In this case, a spear-phishing email tricked recipients into changing their passwords through a fake webmail domain. Using the harvested credentials, the attackers were able to gain access and steal content.

Sharing Threat Intelligence is a vital part of cyber defence and OASIS recently made available three foundational specifications for the sharing of threat intelligence.  These are described in Executive View: Emerging Threat Intelligence Standards - 72528.  Indicators of Compromise (IOCs) associated with the cyber-actors are provided using these standards (STIX) as files accompanying the report.
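As a simple, hypothetical example of putting such indicators to work (the file names and log columns below are placeholders; the report supplies the actual indicators in STIX and CSV form), the published IP addresses can be matched against outbound connection logs:

```python
import csv

def load_indicator_ips(csv_path: str) -> set:
    """Load suspect IP addresses from an indicator file (an 'ip' column is assumed)."""
    with open(csv_path, newline="") as handle:
        return {row["ip"].strip() for row in csv.DictReader(handle) if row.get("ip")}

def find_matches(indicators: set, log_path: str) -> list:
    """Return log rows whose destination IP appears in the indicator set."""
    with open(log_path, newline="") as handle:
        return [row for row in csv.DictReader(handle)
                if row.get("dest_ip", "").strip() in indicators]

# Hypothetical usage: any hit warrants investigation rather than automatic blocking,
# since published indicators can include shared infrastructure such as public proxies.
# hits = find_matches(load_indicator_ips("grizzly_steppe_iocs.csv"), "firewall_log.csv")
```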

There are several well-known areas of vulnerability that are consistently exploited by cyber-attackers.  These are easy to fix but are, unfortunately, still commonly found in many organizations’ IT systems, and organizations should take immediate steps to detect and remove them.

The majority of these attacks exploit human weaknesses in the first stage.  While technical measures can and should be improved, it is also imperative to provide employees, associates and partners training on how to recognize and respond to these threats.

The report describes a set of recommended mitigations and best practices.  Organizations should consider these recommendations and take steps to implement them without delay.  KuppingerCole provides extensive research on securing IT systems and on privilege management in particular.

What Value Certification?

In the past weeks, there have been several press releases from CSPs (Cloud Service Providers) announcing new certifications for their services.  In November, BSI announced that Microsoft Azure had achieved Cloud Security Alliance (CSA) STAR Certification. On December 15th, Amazon Web Services (AWS) announced that it had successfully completed the assessment against the compliance standard of the Bundesamt für Sicherheit in der Informationstechnik (BSI), the Cloud Computing Compliance Controls Catalogue (C5).

What value do these certifications bring to the customer of these services?

The first value is compliance. A failure by the cloud customer to comply with laws and industry regulations in relation to the way data is stored or processed in the cloud could be very expensive.  Certification that the cloud service complies with a relevant standard provides assurance that data will be processed in a way that is compliant.

The second value is assurance.  The security, compliance and management of the cloud service is shared between the CSP and the customer.  Independent certification provides reassurance that the CSP is operating the service according to the best practices set out in the standard.  This does not mean that there is no risk that something could go wrong – it simply demonstrates that the CSP is implementing the best practices to reduce the likelihood of problems and to mitigate their effects should they occur.

There are different levels of assurance that a CSP can provide – these include:

CSP Assertion – the CSP describes the steps it takes.  The value of this level of assurance depends upon the customer’s trust in the CSP.

Contractual assurance – the contract for the service provides specific commitments concerning the details of the service provided.  The value of this commitment is determined by the level of liability specified in the contract under circumstances where the CSP is in default as well as the cost and difficulties in its enforcement.

Independent validation – the cloud service has been evaluated by an independent third party that provides a certificate or attestation.  Examples of this include some forms of Service Organization Control (SOC) reports using the standards SSAE 16 or ISAE 3402.  The value of this depends upon the match between the scope of the evaluation and the customer’s requirements, as well as how frequently the validation is performed.

Independent testing – the service provided has been independently tested to demonstrate that it conforms to the claims made by the CSP.  This extends the assessment to include measuring the effectiveness of the controls.  Examples include SOC 2 Type II reports as well as some levels of certification against the Payment Card Industry Data Security Standard (PCI-DSS).  The value of this depends upon the match between the scope of the evaluation and the customer’s requirements, as well as how frequently the testing is performed.

The last of these – independent testing – is what customers should be looking for.  However, it is important that the customer asks the following questions:

  1. What is the scope of the certification?  Does it cover the whole service delivered or just parts of it – like the data centre?
  2. How does the standard compare with the customer’s own internal controls?  Are the controls in the standard stronger or weaker?
  3. Is the standard relevant to the specific use of the cloud service by the customer?  Many CSPs now offer an “alphabet soup” of certifications.  Many of these certifications only apply to certain geographies or certain industries.
  4. How well is your side of cloud use governed?  Security and compliance of the use of cloud services is a shared responsibility.  Make sure that you understand what your organization is responsible for and that you meet these responsibilities.

For more information on this subject see: Executive View: Using Certification for Cloud Provider Selection - 71308 - KuppingerCole

AWS re:Invent 2016 Blog

In the last week of November I attended the AWS re:Invent conference in Las Vegas – an impressive event with around 32,000 attendees. There were a significant number of announcements at this event; many were essentially more of the same but bigger and better, based on what customers were asking for. It is clear that AWS is going from strength to strength.

AWS announced many faster compute instances with larger amounts of memory, optimized for various specific tasks. This may seem boring, but these announcements were received with rapturous applause from the audience. This is the AWS bread and butter and just what many customers are looking for. The value of these improvements is that a customer can switch their workload onto one of these new instances without the need to specify, order, pay for, and await delivery of new hardware, as they would have had to do for on-premise equipment.

Continuing on that theme, James Hamilton, VP & Distinguished Engineer, described the work that AWS does behind the scenes to deliver its services. The majority of AWS traffic runs on a private network (except in China); this provides better latency, lower packet loss and higher overall quality, avoids capacity conflicts, and gives AWS greater operational control. AWS designs and manages its own network routers, its own custom compute nodes to optimize power versus space, and even its own custom power utility controls to cater for rare power events.

You may think: so what? The reason this matters is that an AWS customer gets all of this included in the service that they receive. These are time-consuming processes that the customer would otherwise have to manage for their on-premise IT facilities. Furthermore, these processes need specialized skills that are in short supply. In the opening keynote at the conference, AWS CEO Andy Jassy compared AWS with the “legacy software vendors”. He positioned these vendors as locking their customers into long-term, expensive contracts. In comparison, he described how AWS allows flexibility and works to save customers money through price reductions and customer reviews.

However, to get the best out of AWS services, just like most IT technology, you need to exploit proprietary functionality. Once you use proprietary features it becomes more difficult to migrate from that technology. Mr. Jassy also gave several examples of how customers had been able to migrate around 13,000 proprietary database workloads to the AWS database services. While this shows the care that AWS has put into its database services it also slightly contradicts the claim that customers are being locked-in to proprietary software.

Continuing on the theme of migration – while AWS is still strong among the “born on the cloud” startups and for creating new applications, organizations are increasingly looking to migrate existing workloads. This has not always been straightforward, since any differences between the on-premise IT and the AWS environment can make changes necessary. The announcement previously made at VMworld, that a VMware service will be offered on AWS, will be welcomed by many organizations. It will allow the many customers using VMware and the associated vSphere management tools to migrate their workloads to AWS while continuing to manage the hybrid cloud / on-premise IT using the tools they are already using.

Another problem related to migration is that of transferring data. Organizations wishing to move their workloads to the cloud need to move their data and, for some, this can be a significant problem. The practical bandwidth of communications networks can be the limiting factor, and the use of physical media introduces security problems. In response to these problems, AWS has created storage devices that can be used to physically transfer terabytes of data securely. The first of these devices, the “AWS Snowball”, was announced at last year’s re:Invent and has now been improved and upgraded to the “AWS Snowball Edge”. However, the highlight of the conference was the announcement of the “AWS Snowmobile”. This is a system mounted in a shipping container, carried on a transport truck, that can be used to securely transfer exabytes of data. Here is a photo that I took of one of these being driven into the conference hall.

So, is this just an eye-catching gimmick? Not so according to the first beta customer.  The customer’s on premise datacenter was bursting at the seams and could no longer support their expanding data based business.  They wanted to move to the AWS cloud but it was not practical to transfer the amount of data they had over a network and they needed an alternative secure and reliable method.  AWS Snowmobile provided exactly the answer to this need.

Last but not least, security: at the event AWS announced AWS Shield.  This is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS.  The value of this was illustrated in an interesting talk, SAC327 – “No More Ransomware: How Europol, the Dutch Police, and AWS Are Helping Millions Deal with Cybercrime”.  This talk described a website set up to help victims of ransomware attacks recover their data.  Not surprisingly, this site has come under sustained attack from cyber-criminals. The fact that it has withstood these attacks confirms that AWS can be used to create and securely host applications, and that AWS Shield can add an extra layer of protection.

In conclusion, this event demonstrates that AWS is going from strength to strength.  Its basic value proposition of providing cost effective, flexible and secure IT infrastructure remains strong and continues to be attractive.  AWS is developing services to become more Hybrid Cloud and enterprise friendly while extending its services upwards to include middleware and intelligence in response to customer demand.  

For KuppingerCole’s opinion on cloud services see our research reports Cloud Reports - KuppingerCole

Democratized Security

At the AWS Enterprise Security Summit in London on November 8th, Stephen Schmidt, CISO at AWS gave a keynote entitled “Democratized Security”.  What is Democratized Security and does it really exist? 

Well, to quote Humpty Dumpty from Lewis Carroll’s Through the Looking-Glass, “When I use a word it means just what I choose it to mean—neither more nor less.”  So, what Mr. Schmidt meant by this phrase may or may not be what other people would understand it to mean.  This is my interpretation.

The word democracy originates in ancient Greece, where it meant rule by the common people.  It described the opposite of rule by an elite.  More recently, the “democratization of technology” has come to mean the process whereby sophisticated technology becomes accessible to more and more people.  In the 1990s, Andrew Feenberg described a theory for democratizing technological design. He argued for what he calls “democratic rationalization”, where participants intervene in the technological design process to shape it toward their own ends.

How does this relate to cloud services?  Cloud services are easily accessible to a wide range of customers, from individual consumers to large organizations.  These services survive and prosper by providing the functionality that their customers value at a price that is driven down by their scale.  Intense competition means that they need to be very responsive to their customers’ demands.  Cloud computing has made extremely powerful IT services available at an incredibly low cost in comparison with the traditional model, where the user had to invest in the infrastructure, the software and the knowledge before they could even start.

What about security? There have been many reports of cyber-attacks, data breaches and legal government data intercepts impacting on some consumer cloud services (not AWS).  The fact that many of these services still survive seems to indicate that individual consumers are not overly concerned.   Organizations however have a different perspective – they do care about security and compliance.  They are subject to a wide range of laws and regulations that define how and where data can be processed with significant penalties for failure.  Providers of cloud services that are aimed at organizations have a very strong incentive to provide the security and compliance that this market demands.

Has the security elite been eliminated?  The global nature of the internet and cyber-crime has made it extremely difficult for the normal guardians – the government and the law – to provide protection.  Even worse, the attempts by governments to use data interception to meet the challenges of global crime and terrorism have made them suspects.  The complexity of the technical challenges around cyber-threats make it impractical for all but the largest organizations to build and operate their own cyber-defences.  However, the cloud service provider has the necessary scale to afford this.  So, the cloud service providers can be thought of as representing a new security elite – albeit one that is subject to the market demands for the security of their services.

With democracy comes responsibility.  In relation to security this means that the cloud customer must take care of the aspects under their control.  Many, but not all, of the previously mentioned consumer data breaches involved factors under the customers’ control, like weak passwords.  For organizations using cloud services the customer must understand the sensitivity of their data and ensure that it is appropriately processed and protected.  This means taking a good governance approach to assure that the cloud services used meet these requirements.

Cloud services now provide a wide range of individuals and organizations with access to IT technology and services that were previously beyond their reach.  While the main driving force behind cloud services has been their functionality; security and compliance are now top of the agenda for organizational customers.  The cloud can be said to be democratizing security because organizations will only choose those services that meet their requirements in this area.  In this world, the cloud service providers have become the security elite through their scale, knowledge and control.  The cloud customer can choose which provider to use based on their trust in this provider to deliver what they need.

For more information see KuppingerCole’s research in this area: Reports - Cloud Security.

Be careful not to DROWN

On March 1st OpenSSL published a security advisory CVE-2016-0800, known as “DROWN”. This is described as a cross-protocol attack on TLS using SSLv2 and is classified with a High Severity. The advice given by OpenSSL is:

“We strongly advise against the use of SSLv2 due not only to the issues described below, but to the other known deficiencies in the protocol as described at https://tools.ietf.org/html/rfc6176”.

This vulnerability illustrates how vigilant organizations need to be over the specific versions of software that they use. However, this is easier said than done. Many organizations have a website or application that was built by a third party. The development may have been done some time ago and used what were the then current versions of readily available Open Source components. The developers may or may not have a contract to keep the package they developed up to date.

The application or website may be hosted on premise or externally; wherever it is hosted, the infrastructure upon which it runs also needs to be properly managed and kept up to date. OpenSSL is part of the infrastructure upon which the website runs. While there may be some reasons for continuing to use SSLv2 for compatibility, there is no possible excuse for reusing SSL Private Keys between websites. It just goes against all possible security best practices.
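One quick, illustrative check that can be folded into routine operations (a sketch, not a substitute for proper vulnerability scanning) is to confirm which protocol version a server actually negotiates with a modern TLS client:

```python
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443) -> str:
    """Connect with a modern TLS client and report the negotiated protocol version."""
    context = ssl.create_default_context()  # SSLv2/SSLv3 are disabled by default here
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example: print(negotiated_protocol("example.com")) might report "TLSv1.3".
# Note this only shows what a good client gets; a dedicated scanner is still
# needed to confirm that legacy protocols such as SSLv2 are not offered at all.
```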

It may be difficult to believe but I have heard auditors report that when they ask “what does that server do?” they get the response “I don’t know – it’s always been here and we never touch it”. The same can be true of VMs in the cloud which get created, used and then forgotten (except by the cloud provider who keeps on charging for them).

So, as vulnerabilities are discovered, there may be no process to remediate the operational package. The cyber criminals just love this. They can set up automated external scans to find systems where known vulnerabilities remain unpatched, and exploit the results at their leisure.

There are two basic lessons from this:

  1. Most code contains exploitable errors, and its evolution generally leads to a deterioration in its quality over time unless there are very stringent controls over change. It is attractive to add functionality, but the increase in size and complexity leads to more vulnerabilities. Sometimes it is useful to go back to first principles and recode using a stringent approach.
    I provided an example of this in my blog AWS Security and Compliance Update. AWS has created a replacement for OpenSSL TLS – S2N, an Open Source implementation of TLS. S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code. This code has been contributed to Open Source and is available from the S2N GitHub Repository.

  2. Organizations need to demand maintenance as part of the development of code by third parties. This is to avoid the need to maintain out of date infrastructure components for compatibility.
    The infrastructure, whether on premise or hosted, should be kept up to date. This will require change management processes to ensure that changes do not impact on operation. This should be supported by regular vulnerability scanning of operational IT systems using one of the many tools available together with remediation of the vulnerabilities detected.

IT systems need to have a managed lifecycle. It is not good enough to develop, deploy and forget.
