KuppingerCole Blog

The importance of consent management: CIAM vs. GDPR

Consumer identity and access management solutions are bringing value to the organizations which implement them, in terms of higher numbers of successful registrations, customer profiling, authentication variety, identity analytics, and marketing insights.  Many companies with deployed CIAM solutions are increasing revenue and brand loyalty.  Consumers themselves have better experiences interacting with companies that have mature CIAM technologies.  CIAM is a rapidly growing market segment.

CIAM systems typically collect (or at least attempt to collect) the following attributes about consumers:  Name, email address, association with one or more social network accounts, age, gender, and location.  Depending on the service provider, CIAM products may also pick up data such as search queries, items purchased, items browsed, and likes and preferences from social networks.  Wearable technology vendors may collect locations, physical activities, and health-related statistics, and this data may be linked to consumers’ online identities in multiple CIAM implementations.  To reduce fraud and unobtrusively increase the users’ authentication assurance levels, some companies may also acquire users’ IP addresses, device information, and location history. 

Once the EU’s General Data Protection Regulation (GDPR) takes effect in May 2018, all of this data collection will violate the regulation unless the EU user has given explicit consent.  Penalties for violations can reach €20M or 4% of annual global revenue, whichever is higher.

Consider a few definitions from the GDPR:

(1) ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

(2) ‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction;

(4) ‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;

(10) ‘third party’ means a natural or legal person, public authority, agency or body other than the data subject, controller, processor and persons who, under the direct authority of the controller or processor, are authorised to process personal data;

(11) ‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;

This means that companies that are currently deriving benefit from CIAM must:

  1. Perform a privacy data assessment
  2. Create new privacy policies as needed
  3. Plan to clean and minimize user data already resident in systems
  4. Implement the consent gathering mechanisms within their CIAM solutions (a minimal data-model sketch follows below)
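
To illustrate item 4, here is a minimal sketch of how per-purpose consent records could be modelled and checked inside a CIAM solution. All class, field, and purpose names are hypothetical; a real implementation would also need durable storage, proof of the consent text shown, and integration with the registration flows mentioned above.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """One consent decision, captured per purpose as the GDPR requires."""
    subject_id: str          # pseudonymous reference to the data subject
    purpose: str             # e.g. "marketing-email", "profiling", "geolocation"
    granted: bool
    timestamp: datetime      # when the decision was made
    collection_point: str    # e.g. "registration-form-v3", for auditability
    withdrawn_at: Optional[datetime] = None

class ConsentStore:
    """Keeps all decisions as an audit trail; the latest one per (subject, purpose) counts."""
    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def record(self, consent: ConsentRecord) -> None:
        self._records.append(consent)

    def withdraw(self, subject_id: str, purpose: str) -> None:
        for rec in self._records:
            if rec.subject_id == subject_id and rec.purpose == purpose and rec.granted:
                rec.withdrawn_at = datetime.utcnow()

    def has_valid_consent(self, subject_id: str, purpose: str) -> bool:
        matching = [r for r in self._records
                    if r.subject_id == subject_id and r.purpose == purpose]
        if not matching:
            return False
        latest = max(matching, key=lambda r: r.timestamp)
        return latest.granted and latest.withdrawn_at is None

# Usage: check consent for the specific purpose before every processing step.
store = ConsentStore()
store.record(ConsentRecord("user-123", "marketing-email", True, datetime.utcnow(),
                           "registration-form-v3"))
assert store.has_valid_consent("user-123", "marketing-email")
assert not store.has_valid_consent("user-123", "profiling")
```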

If your deployed CIAM solution is not yet fully GDPR compliant, talk with your vendor about their product roadmaps.  Find out when they will release a GDPR compliant version, and determine how to work that into your own release schedule. 

If your organization is considering deploying CIAM in the near future, make sure that GDPR compliant consent mechanisms and storage schemes are on your RFP requirements list.

This article is not intended to provide detailed technical or legal advice.  For more information, see the full text of the GDPR, and visit www.kuppingercole.com. Over the next few months, we will examine other aspects of GDPR and what it entails for business, IAM, and IT infrastructure.

The Role of Artificial Intelligence in Cyber Security

Over the last few weeks I’ve read a lot about the role AI, or Artificial Intelligence (or should I rather write “Artificial” Intelligence?), will play in Cyber Security. There is no doubt that advanced analytical technologies (frequently subsumed under the AI term), such as pattern matching, machine learning, and many others, are already affecting Cyber Security. However, the emphasis here is on “already”. It would be wrong to say there is nothing new under the sun, given that there is a lot of progress in this space. But it is just as wrong to ignore the evolution of the past couple of years.

At KuppingerCole, we started looking at what we call Real Time Security Intelligence (RTSI) a couple of years back. We published our first report on this topic back in May 2014, covered it in our predictions for 2014 and in a session at EIC 2014, and published a series of blog posts on it during that year.

There is no doubt that advanced analytical technologies will help organizations in their fight against cyber-attacks, because they help detect potential attacks at an earlier stage and enable the identification of complex attack patterns that span various systems. AI also might help – as in IBM Watson for Cyber Security – to provide a better understanding of cyber risks by collecting and analyzing both structured and unstructured information. Cognitive security solutions of this kind are part of the AI evolution in the field of cyber security. But again: the journey started a couple of years ago, and we are still in the very early stages.

So why this hype now? Maybe it is because a critical mass of solutions has been reached; more and more companies have entered the field in recent years. Maybe it is because some big players have actively entered that market: at the beginning, most of the players were startups (many of them rooted in Israel), while now large companies such as IBM have started pushing the topic, gaining far more public awareness. Maybe it is because AI in Cyber Security is seen as the last hope for helping the good guys win their fight against cyber criminals and nation-state attackers (it is hard to say where the one ends and the other starts).

Anyway: in 2017 and beyond we will see not only more solutions in the market and advancements in the technology, but also a strong increase in awareness of “AI in Cyber Security” and of Real Time Security Intelligence. Regardless of all skepticism about terminology and hype, this is a positive evolution.

Grizzly Steppe – what every organization needs to do

On December 29th, the FBI, together with the Department of Homeland Security (US-CERT), finally released a Joint Analysis Report on the cyber-attacks on the US Democratic Party during the US presidential election.  Every organization, whether based in the US or not, would do well to read this report and to ensure that it takes account of the recommendations.  Once released into the wild, the tactics, techniques, and procedures (TTPs) used by state actors are quickly taken up and become widely used by other adversaries.

This report is not a formal indictment of a crime, as was the case with the charges filed in 2014 over the alleged hacking of US companies by Chinese actors.  It is, however, important cyber threat intelligence.

Threat intelligence is a vital part of cyber-defence and cyber-incident response, providing information about the threats, TTPs, and devices that cyber-adversaries employ; the systems and information that they target; and other threat-related information that provides greater situational awareness.  This intelligence needs to be timely, relevant, accurate, specific and actionable.  This report provides such intelligence.

The approaches described in the report are not new.  They involve several phases; one observed approach uses targeted spear-phishing campaigns leveraging web links to a malicious website that installs code.  Once executed, the code delivers Remote Access Tools (RATs) and evades detection using a range of techniques.  The malware connects back to the attackers, who then use the RATs to escalate privileges, search Active Directory accounts, and exfiltrate email through encrypted connections.

Another attack process uses internet domains with names that closely resemble those of targeted organizations to trick potential victims into entering legitimate credentials.  A fake webmail site that collects user credentials when they log in is a favourite.  In this case, a spear-phishing email tricked recipients into changing their passwords through a fake webmail domain.  Using the harvested credentials, the attacker was able to gain access and steal content.
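
As a rough illustration of one defensive idea (not taken from the report), lookalike domains can be flagged by measuring string similarity against an organization’s known-good domains. The domain names and the 0.8 similarity threshold below are invented for the example.

```python
import difflib

LEGITIMATE_DOMAINS = {"example-party.org", "mail.example-party.org"}  # hypothetical

def looks_like_spoof(observed: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not match, a legitimate domain.

    The 0.8 threshold is an arbitrary illustration, not a value from the report.
    """
    if observed in LEGITIMATE_DOMAINS:
        return False
    return any(
        difflib.SequenceMatcher(None, observed, legit).ratio() >= threshold
        for legit in LEGITIMATE_DOMAINS
    )

# A typosquatted webmail domain is flagged, an unrelated one is not.
print(looks_like_spoof("mail.examp1e-party.org"))  # True
print(looks_like_spoof("kuppingercole.com"))       # False
```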

Sharing Threat Intelligence is a vital part of cyber defence and OASIS recently made available three foundational specifications for the sharing of threat intelligence.  These are described in Executive View: Emerging Threat Intelligence Standards - 72528 - KuppingerCole.  Indicators of Compromise (IOCs) associated with the cyber-actors are provided using these standards (STIX) as files accompanying the report.
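
As a minimal sketch of how such indicators might be put to work, assume the IOC IP addresses from the files accompanying the report have been exported into a simple one-column CSV (the file name and format here are assumptions); logs can then be screened for matches:

```python
import csv
import ipaddress

def load_ioc_ips(path: str) -> set[str]:
    """Load indicator IP addresses from a one-column CSV export of the IOC files."""
    with open(path, newline="") as handle:
        return {row[0].strip() for row in csv.reader(handle) if row}

def matches(log_lines: list[str], ioc_ips: set[str]) -> list[str]:
    """Return log lines that mention any IOC IP address."""
    hits = []
    for line in log_lines:
        for token in line.split():
            try:
                ip = str(ipaddress.ip_address(token))
            except ValueError:
                continue  # token is not an IP address
            if ip in ioc_ips:
                hits.append(line)
                break
    return hits

# Usage sketch (file names are hypothetical):
# ioc_ips = load_ioc_ips("grizzly_steppe_iocs.csv")
# print(matches(open("/var/log/firewall.log").read().splitlines(), ioc_ips))
```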

There are several well-known areas of vulnerability that are consistently exploited by cyber-attackers.  These are easy to fix but are, unfortunately, still commonly found in many organizations’ IT systems.  Organizations should take immediate steps to detect and remove these vulnerabilities from their IT systems.

The majority of these attacks exploit human weaknesses in the first stage.  While technical measures can and should be improved, it is also imperative to provide employees, associates, and partners with training on how to recognize and respond to these threats.

The report describes a set of recommended mitigations and best practices.  Organizations should consider these recommendations and take steps to implement them without delay.  KuppingerCole provides extensive research on securing IT systems and on privilege management in particular.

PSD II, Adaptive Authentication, and Multi-Factor Authentication

The upcoming updated Payment Services Directive (PSD II) will, among other changes, require Multi-Factor Authentication (MFA) for virtually all electronic payments above 10€. This is only one major change PSD II brings (another is the mandatory open APIs), but it is one that is heavily discussed and criticized, e.g. by software vendors, by credit card companies such as VISA, and by others.

It is interesting to look at the published material. The major point is that it only talks about MFA, without going into specifics. The regulators also point out clearly that an authentication based on one factor in combination with Risk-Based Authentication (RBA) is not sufficient. RBA analyzes the transactions, identifies risk based on, e.g., the amount, the geolocation of the IP address, and other factors, and requests a second means or factor if the risk rating is above a threshold.
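
To make that mechanism concrete, here is a minimal, hedged sketch of an RBA decision: a transaction is rated on a few illustrative attributes and a second factor is requested once the rating crosses a threshold. The attributes, weights, and threshold are invented for the example, not taken from any regulation or product.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_eur: float
    ip_country: str       # derived from IP geolocation
    home_country: str
    known_device: bool

def risk_score(tx: Transaction) -> float:
    """Toy risk rating; factors and weights are illustrative only."""
    score = 0.0
    if tx.amount_eur > 500:
        score += 0.4
    elif tx.amount_eur > 50:
        score += 0.2
    if tx.ip_country != tx.home_country:
        score += 0.3
    if not tx.known_device:
        score += 0.3
    return score

RISK_THRESHOLD = 0.5  # arbitrary; real systems tune this continuously

def needs_step_up(tx: Transaction) -> bool:
    """True if RBA would request an additional authentication means or factor."""
    return risk_score(tx) >= RISK_THRESHOLD

print(needs_step_up(Transaction(19.99, "DE", "DE", known_device=True)))   # False
print(needs_step_up(Transaction(750.0, "RU", "DE", known_device=False)))  # True
```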

That leads to several questions. One question is what level of MFA is required. Another is what this means for Adaptive Authentication (AA) and RBA in general. The third question is whether and how this will affect credit card payments or services such as PayPal, that commonly still rely on one factor for authentication.

First, let me clarify some terms. MFA stands for Multi Factor Authentication, i.e. all approaches involving more than one factor. The most common variant is Two Factor Authentication (2FA), i.e. the use of two factors. There are three factors: Knowledge, Possession, Biometrics – or “what you know”, “what you have”, “what you are”. For each factor, there might be various “means”, e.g. username and password for knowledge, a hard token or a phone for possession, fingerprint and iris for biometrics.

RBA defines authentication that, as described beforehand, analyzes the risk involved in authentication and subsequent interaction and transactions and might request additional authentication steps depending on the risk rating.

Adaptive Authentication, on the other hand, is a combination of what sometimes is called “versatile” authentication with RBA. It combines the ability to use various means (and factors) for authentication in a flexible way. In that sense, it is adaptive to the authenticator that someone has. The other aspect of adaptiveness is RBA, i.e. adapting the required level of authentication to the risk. AA can be MFA, but it also – with low risk – can be One Factor Authentication (1FA).
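
A short sketch may help to keep means and factors apart: the check that any MFA-based approach, adaptive or not, ultimately has to pass is that the means used cover at least two distinct factors. The mapping of means to factors below is illustrative, not exhaustive.

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "what you know"
    POSSESSION = "what you have"
    BIOMETRICS = "what you are"

# Illustrative mapping of authentication means to factors.
MEANS_TO_FACTOR = {
    "password": Factor.KNOWLEDGE,
    "pin": Factor.KNOWLEDGE,
    "hard_token": Factor.POSSESSION,
    "registered_phone": Factor.POSSESSION,
    "fingerprint": Factor.BIOMETRICS,
    "iris": Factor.BIOMETRICS,
}

def distinct_factors(means: list[str]) -> set[Factor]:
    return {MEANS_TO_FACTOR[m] for m in means}

def is_mfa(means: list[str]) -> bool:
    """MFA requires means from at least two *different* factors."""
    return len(distinct_factors(means)) >= 2

print(is_mfa(["password", "pin"]))            # False: both are knowledge
print(is_mfa(["pin", "registered_phone"]))    # True: knowledge + possession
print(is_mfa(["fingerprint", "hard_token"]))  # True: biometrics + possession
```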

Based on these definitions, it becomes clear that the statement “PSD II does not allow AA” is wrong. It is equally wrong to state, without qualification, that “PSD II permits RBA”. The point simply is: using AA (i.e. flexible authenticators plus RBA) or RBA without versatility only complies with the PSD II requirements if at least two factors of authentication (2FA) are used.

And to put it more clearly: AA, i.e. versatility plus RBA, absolutely makes sense in the context of PSD II – to fulfill the regulatory requirements of MFA in a way that adapts to the customer and to mitigate risks beyond the baseline MFA requirement of PSD II.

MFA by itself is not necessarily secure. You can use a four-digit PIN together with the device ID of a smartphone and end up with 2FA – there is knowledge (PIN) and possession (a device assigned to you). Obviously, this is not very secure, but it is MFA. Thus, there should be (and most likely will be) additional requirements that lead to a certain minimum level of MFA for PSD II.

For providers, following a consistent AA path makes sense. Flexible use of authenticators to support what customers prefer and already have helps increase convenience and reduce the cost of deploying authenticators and the subsequent logistics – and it will help keep retention rates high. RBA as part of AA also helps to further mitigate risks, beyond 2FA, whatever that authentication might look like.

The art in the context of PSD II will be to balance customer convenience, authentication cost, and risk. There is a lot of room for doing so, particularly with the uptake of biometrics and standards such as those from the FIDO Alliance, which will help payment providers find that balance. Anyway, payment providers must rethink their authentication strategies now, to meet the changing requirements imposed by PSD II.

While this might be simple and straightforward for some, others will struggle. Credit card companies are more challenged, particularly in countries such as Germany where the PIN of credit cards is rarely used. However, the combination of a PIN with a credit card works for payments if the possession of the credit card is proven, e.g. at a POS (Point of Sale) terminal. For online transactions, things become more complicated due to the lack of proof of possession of the card. Even common approaches such as entering the credit card number, the security number from the back of the card (CVV, Card Verification Value), and the PIN will not help, because all of these can be means of knowledge – I know my credit card number, my CVV, and my PIN, and even the bank account number that is sometimes used in RBA by credit card processors. Moving to MFA here is a challenge that isn’t easy to solve.

The time is fast approaching for all payment providers to define an authentication strategy that complies with the PSD II requirements of MFA, as fuzzy as these still are. Better definitions will help, but it is obvious that there will be changes. One element that is a must is moving towards Adaptive Authentication, to support various means and factors in a way that is secure, compliant, and convenient for the customer.

Do you need a better IAM system to meet the GDPR requirements?

GDPR, the EU General Data Protection Regulation, is increasingly becoming a hot topic. That does not come as a surprise, given that the EU GDPR has a very broad scope, affecting every data controller (the one who “controls” the PII) and data processor (the one who “processes” the PII) dealing with data subjects (the persons) residing in the EU – even when the data processors and data controllers are outside of the EU.

Among the requirements of the EU GDPR are aspects such as the right to be forgotten, the right to edit the PII stored about oneself, and the “consent per purpose” principle, which requires informed consent for each purpose of use of PII, in contrast to today’s typical “this site uses cookies and we will do whatever we want with the data collected” style of consent.

Notably, the definition of PII is very broad in the EU. It is not only about data that is directly mapped to the name and other identifiers. If a bit of data can be used to identify the individual, it is PII.

There are obvious effects on social networks, on websites where users are registered, and on many other areas of business. The EU GDPR will also massively affect the emerging field of CIAM (Consumer/Customer Identity and Access Management), where full support for EU GDPR-related features, such as flexible consent handling, becomes mandatory.

However, will the EU GDPR also affect the traditional, on-premise IAM systems with their focus on employees and contractors? Honestly, I don’t see that impact. I see it, as mentioned beforehand, for CIAM. I clearly see it in the field of Enterprise Information Protection, i.e. protecting PII-related information from leaking and managing access to such information. That also affects IAM, which might need to become more granular in managing access – but there are no new requirements arising from the EU GDPR.

The need for granular management of access to PII might finally lead to a renaissance (or naissance?) of Dynamic Authorization Management (think ABAC). It is far easier to handle complex rules for accessing such data based on flexible, granular, attribute-based policies. We will need better auditing procedures. However, with today’s Access Governance and Data Governance, a lot can be done – and what can’t be done well needs other technologies, such as Access Governance in combination with Dynamic Authorization Management, or Data Governance that works well for Big Data. Likewise, Privilege Management for better protecting the systems that hold PII is mandatory as well.
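
To show why attribute-based policies handle such rules more gracefully than static entitlements, here is a minimal, invented sketch of a Dynamic Authorization (ABAC-style) decision that takes the purpose of access into account. In a real deployment, such policies would live in an external policy decision point rather than in application code.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_department: str
    user_role: str
    purpose: str          # the declared purpose of access
    data_category: str    # e.g. "pii" or "non-pii"

def pii_access_permitted(req: Request) -> bool:
    """Illustrative attribute-based policy: PII access only for an appropriate
    department/role combination and a declared, legitimate purpose."""
    if req.data_category != "pii":
        return True  # non-PII data is outside this policy's scope
    allowed = {
        ("hr", "hr_specialist", "payroll"),
        ("hr", "hr_specialist", "benefits"),
        ("support", "case_worker", "customer_request"),
    }
    return (req.user_department, req.user_role, req.purpose) in allowed

print(pii_access_permitted(Request("hr", "hr_specialist", "payroll", "pii")))    # True
print(pii_access_permitted(Request("marketing", "analyst", "campaign", "pii")))  # False
```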

But for managing access to PII of employees and contractors, common IAM tools provide sufficient capabilities. Consent is handled as part of work contracts and other generic rules. Self-service interfaces for managing the data stored about an employee are a common feature.

The EU GDPR is important. It will change a lot. But for the core areas of today’s IAM, i.e. Identity Provisioning and Access Governance, there is little change.

Accenture to acquire French IAM System Integrator Arismore

Just before Christmas, Accenture Security announced the acquisition of French IAM system integrator Arismore, a company with about 270 employees and an estimated turnover of €40M. That size makes Arismore a leading IAM system integrator in France; the company is also involved in IT transformation initiatives.

The acquisition follows other deals such as the acquisition of Everett by PwC earlier in 2016.

Arismore is of specific interest because it also owns a subsidiary, Memority, which launched an IDaaS offering back in 2014. Memority is one of the various IDaaS offerings that are largely based on COTS software, but offered as a service. In contrast to some others, it was not built as a cloud service from scratch.

Anyway, such a service fits into the strategy of companies such as Accenture, which are moving from consultancy offerings towards service offerings, such as the Accenture Velocity platform.

The acquisition is thus another indicator of the change in the consulting and system integration market, where former SIs and consultancies are moving towards service offerings – when more and more software is used as a cloud-based service, the traditional system integration business obviously will shrink over time.

However, Memority is still only a small part of the deal. Being strong in security is another requirement of the large consultancies, with security being one of the fastest growing business areas. Thus, the acquisition of Arismore by Accenture delivers value in two areas: More services and more security.

Is your software GDPR compliant? Is that the right question?

I hear this question being asked more and more, of vendors and of us analysts: is a vendor’s software GDPR compliant? However, it is the wrong question. The correct question is: “Does the software allow my organization to fulfill the regulatory requirements of EU GDPR?” Even for cloud services, this (as “Does the service allow…”) is the main question, unless PII is processed by the cloud service itself.

If an enterprise implements a software package, it still has the obligation to comply with EU GDPR; it is the data controller. If it uses a cloud service, much of this remains the tenant’s responsibility. However, the role of the data processor – the one processing the data on behalf of the data controller – is broader than ever before. Even someone who provides “only” storage that is used for storing PII is a data processor in the context of EU GDPR.

An interesting facet of this discussion is the “Privacy by Design” requirement of EU GDPR. Software (and services) used for handling PII must follow the principle of privacy by design. Thus, a data controller must choose software (or services) that follow these principles. One might argue that one could also choose an underlying software or service without support for privacy by design (whatever that means specifically) and configure or customize it so that it meets these requirements. The open question is whether a software or service must support privacy by design out of the box – and thus, in consequence, all EU GDPR requirements that apply to what the software does – or whether it is sufficient that the software can be configured or customized to do so. But as my colleague Dave Kearns states: “The whole point of ‘privacy by design’ is that it is in the product from the beginning, not added on later.”

That is interesting when looking again at the initial question. One answer might be that all features required to fulfill the regulatory requirements of EU GDPR must be built into software and services that are used for handling PII data in the scope of EU GDPR. The other might be that it is sufficient if the software or service can be configured or customized to do so.

In essence, the question – when choosing software and services – is whether they support the EU GDPR requirements, starting from the abstract privacy-by-design principles to the concrete requirements of handling consent per purpose and many of the other requirements. It is not about software being compliant with EU GDPR, but about providing the support required for an organization to fulfill the requirements of EU GDPR. Looking at these requirements, there is a lot to do in many areas of software and services.

What Value Certification?

In the past weeks, there have been several press releases from CSPs (Cloud Service Providers) announcing new certifications for their services.  In November, BSI (the British Standards Institution) announced that Microsoft Azure had achieved Cloud Security Alliance (CSA) STAR Certification. On December 15th, Amazon Web Services (AWS) announced that it had successfully completed the assessment against the compliance standard of the German Bundesamt für Sicherheit in der Informationstechnik (BSI), the Cloud Computing Compliance Controls Catalogue (C5).

What value do these certifications bring to the customer of these services?

The first value is compliance. A failure by the cloud customer to comply with laws and industry regulations in relation to the way data is stored or processed in the cloud could be very expensive.  Certification that the cloud service complies with a relevant standard provides assurance that data will be processed in a way that is compliant.

The second value is assurance.  The security, compliance and management of the cloud service is shared between the CSP and the customer.  Independent certification provides reassurance that the CSP is operating the service according to the best practices set out in the standard.  This does not mean that there is no risk that something could go wrong – it simply demonstrates that the CSP is implementing the best practices to reduce the likelihood of problems and to mitigate their effects should they occur.

There are different levels of assurance that a CSP can provide – these include:

CSP Assertion – the CSP describes the steps they take.  The value of this level of assurance depends upon the customer’s trust in the CSP.

Contractual assurance – the contract for the service provides specific commitments concerning the details of the service provided.  The value of this commitment is determined by the level of liability specified in the contract under circumstances where the CSP is in default as well as the cost and difficulties in its enforcement.

Independent validation – the cloud service has been evaluated by an independent third party that provides a certificate or attestation.  Examples of this include some forms of Service Organization Control (SOC) reports using the standards SSAE 16 or ISAE 3402.  The value of this depends upon the match between the scope of the evaluation and the customer’s requirements, as well as how frequently the validation is performed.

Independent testing – the service provided has been independently tested to demonstrate that it conforms to the claims made by the CSP.  This extends the assessment to include measuring the effectiveness of the controls.  Examples include SOC 2 Type II reports as well as some levels of certification with the Payment Card Industry Data Security Standard (PCI DSS).  The value of this depends upon the match between the scope of the evaluation and the customer’s requirements, as well as how frequently the testing is performed.

The last of these – Independent testing – is what customers should be looking for.  However, it is important that the customer asks the following questions:

  1. What is the scope of the certification?  Does it cover the whole service delivered or just parts of it – like the data centre?
  2. How does the standard compare with the customer’s own internal controls?  Are the controls in the standard stronger or weaker?
  3. Is the standard relevant to the specific use of the cloud service by the customer?  Many CSPs now offer an “alphabet soup” of certifications.  Many of these certifications only apply to certain geographies or certain industries.
  4. How well is your side of cloud use governed?  Security and compliance of the use of cloud services is a shared responsibility.  Make sure that you understand what your organization is responsible for and that you meet these responsibilities.

For more information on this subject see: Executive View: Using Certification for Cloud Provider Selection - 71308 - KuppingerCole

Revision of the Payment Services Directive (PSD2) - A significant set of new requirements for financial institutions

The European Commission’s revision of the Payment Services Directive (PSD2) comes with a significant set of new requirements for financial institutions with and without a banking licence – and therefore doesn’t only have friends.

It all started with the first release of the PSD back in 2007, which aimed at simplifying payments and their processing throughout the whole EU, i.e. providing the legal platform for the Single Euro Payments Area (SEPA). In 2013, the European Commission proposed a revised version of the PSD, which aims at opening the financial services market to innovation and new players, making it more transparent, standards-based and more efficient, as well as raising the level of security for customers using these services.

These are the PSD2 requirements:

  1. Banks have to open their infrastructure to 3rd parties through APIs and give them access to data and payments following the XS2A rule (access to account)
  2. Strong Customer Authentication (SCA) through the use of Multi-Factor Authentication (MFA)
  3. Secure communication through encryption

The European Banking Authority (EBA) is currently working on a set of Regulatory Technical Standards (RTS), which will be binding once the final version is published. The current RTS draft takes a principles-based approach, not a risk-based one. It requires a minimum of Two-Factor Authentication (2FA) out of three possible factors (a password, a card or something else you own, biometrics) for any transaction exceeding the value of 10€. During a hearing last September, the EBA made it very clear that every single user has to be protected from fraudulent activity by all means; it explicitly rejects risk-based approaches, in which authentication is kept as simple for the user as the value of a transaction, relative to a calculated risk of fraud, allows. As a consequence, this could mean the end of the credit card and 1-click payments we have been using and enjoying for years.
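
The principles-based logic described above can be sketched in a few lines: the only inputs are the transaction value and the number of distinct factors presented, and no fraud-risk score is consulted. The function and parameter names are illustrative.

```python
SCA_EXEMPTION_LIMIT_EUR = 10.0  # transaction value threshold discussed in the draft RTS

def step_up_needed(amount_eur: float, distinct_factors_presented: int) -> bool:
    """Principles-based check: no fraud-risk score is consulted, only the
    transaction value and the number of distinct factors already presented."""
    if amount_eur <= SCA_EXEMPTION_LIMIT_EUR:
        return False                       # low-value transaction, exempted
    return distinct_factors_presented < 2  # otherwise two factors are mandatory

print(step_up_needed(9.50, distinct_factors_presented=1))    # False: exempted
print(step_up_needed(49.99, distinct_factors_presented=1))   # True: a second factor is required
print(step_up_needed(49.99, distinct_factors_presented=2))   # False: already two factors
```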

An impressive number of industry interest groups are now trying to convince the European Commission that the EBA is going beyond common sense and should be guided towards a more reasonable position. Risk-based approaches have been in place for years and work well. According to a recent meeting of the EBA and the ECB with the Commission’s Economic and Monetary Affairs Committee (ECON), a record number of more than 200 comments with concerns and requests for clarification has been received, 147 of which have been published in the meantime.

The three main items of criticism expressed in these comments are:

  1. Giving direct access to bank accounts for 3rd parties
  2. The 10 € limit for transactions without Strong Customer Authentication
  3. Exceptions from MFA are too tight

The Commission has made clear in the meantime that the unconditional strong authentication requirement, without any loopholes, is a prerequisite for opening the payment value chain to 3rd parties. We therefore do not expect profound changes in the final RTS compared to the current draft.

What does this mean for current practices, and how do authentication methods need to change so that they comply with PSD2 while remaining as frictionless as possible for users and open to innovation? This will probably be one of the most urgent questions to be discussed (and solved) in 2017.

Join the discussion at Digital Finance World, March 1-2, 2017 in Frankfurt, Germany!

AWS re:Invent 2016 Blog

In the last week of November I attended the AWS re:Invent conference in Las Vegas – an impressive event with around 32,000 attendees. There were a significant number of announcements; many were essentially more of the same but bigger and better, based on what customers were asking for. It is clear that AWS is going from strength to strength. AWS announced many faster compute instances with larger amounts of memory, optimized for various specific tasks. This may seem boring – but these announcements were received with rapturous applause from the audience. This is the AWS bread and butter and just what many customers are looking for. The value of these improvements is that a customer can switch their workload onto one of these new instances without the need to specify, order, pay for, and await delivery of new hardware, as they would have had to do for on-premise equipment.

Continuing on that theme, James Hamilton, VP & Distinguished Engineer, described the work that AWS does behind the scenes to deliver its services. The majority of AWS traffic runs on a private network (except in China); this improves latency, packet loss, and overall quality, avoids capacity conflicts, and gives AWS greater operational control. AWS designs and manages its own network routers, its own custom compute nodes to optimize power versus space, and even its own custom power utility controls to cater for rare power events.

You may think – well, so what? The reason why this matters is that an AWS customer gets all of this included in the service that they receive. These are time-consuming processes that the customer would otherwise have to manage for their on-premise IT facilities. Furthermore, these processes need specialized skills that are in short supply. In the opening keynote at the conference, AWS CEO Andy Jassy compared AWS with the “legacy software vendors”. He positioned these vendors as locking their customers into long-term, expensive contracts. In comparison, he described how AWS allows flexibility and works to save customers money through price reductions and customer reviews.

However, to get the best out of AWS services, just like most IT technology, you need to exploit proprietary functionality. Once you use proprietary features it becomes more difficult to migrate from that technology. Mr. Jassy also gave several examples of how customers had been able to migrate around 13,000 proprietary database workloads to the AWS database services. While this shows the care that AWS has put into its database services it also slightly contradicts the claim that customers are being locked-in to proprietary software.

Continuing on the theme of migration – while AWS is still strong among the “born on the cloud” startups and for creating new applications, organizations are increasingly looking to migrate existing workloads. This has not always been straightforward, since any differences between the on-premise IT and the AWS environment can make changes necessary. The announcement previously made at VMworld that a VMware service will be offered on AWS will be welcomed by many organizations. This will allow the many customers using VMware and the associated vSphere management tools to migrate their workloads to AWS while continuing to manage the hybrid cloud / on-premise IT using the tools they are already using.

Another problem related to migration is that of transferring data. Organizations wishing to move their workloads to the cloud need to move their data and, for some, this can be a significant problem. The practical bandwidth of communications networks can be the limiting factor, and the use of physical media introduces security problems. In response to these problems, AWS has created storage devices that can be used to physically transfer terabytes of data securely. The first of these devices, the “AWS Snowball”, was announced at re:Invent last year and has now been improved and upgraded to the “AWS Snowball Edge”. However, the highlight of the conference was the announcement of the “AWS Snowmobile”. This is a system mounted in a shipping container carried on a transport truck that can be used to securely transfer exabytes of data. Here is a photo that I took of one of these being driven into the conference hall.
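
A back-of-the-envelope calculation shows why physical transfer can beat the network; the 80% link-utilization figure is an assumption made for the example.

```python
def transfer_days(data_terabytes: float, link_gbps: float, utilization: float = 0.8) -> float:
    """Days needed to move a data set over a network link.

    Assumes 8 bits per byte and that only `utilization` of the nominal
    bandwidth is usable in practice (an illustrative assumption).
    """
    bits = data_terabytes * 1e12 * 8
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86_400

# Even a single petabyte, far less than the exabytes a Snowmobile targets,
# ties up a dedicated 1 Gbit/s link for months:
print(round(transfer_days(1_000, 1.0)))   # ~116 days
print(round(transfer_days(1_000, 10.0)))  # ~12 days on a 10 Gbit/s link
```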

So, is this just an eye-catching gimmick? Not so, according to the first beta customer.  The customer’s on-premise datacenter was bursting at the seams and could no longer support their expanding data-based business.  They wanted to move to the AWS cloud, but it was not practical to transfer the amount of data they had over a network, and they needed an alternative secure and reliable method.  AWS Snowmobile provided exactly the answer to this need.

Last but not least, security -  at the event AWS announced AWS Shield.  This is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS.   The value of this was illustrated in an interesting talk SAC327 – “No More Ransomware: How Europol, the Dutch Police, and AWS Are Helping Millions Deal with Cybercrime”.  This talk described a website set up to help victims of Ransomware attacks recover their data.  Not surprisingly, this site has come under sustained attacks from cyber-criminals. The fact that this site has withstood these attacks is a confirmation that AWS can be used to create and securely host applications, and that AWS Shield can add an extra layer of protection.

In conclusion, this event demonstrates that AWS is going from strength to strength.  Its basic value proposition of providing cost effective, flexible and secure IT infrastructure remains strong and continues to be attractive.  AWS is developing services to become more Hybrid Cloud and enterprise friendly while extending its services upwards to include middleware and intelligence in response to customer demand.  

For KuppingerCole’s opinion on cloud services see our research reports Cloud Reports - KuppingerCole

