KuppingerCole Blog

EBA Rules out Secure Open Banking?

On January 30th in London I attended a joint workshop between OpenID and the UK Open Banking community that was facilitated by Don Thibeau of OIX. This workshop included an update from Mike Jones on the work being done by OpenID and from Chris Michael, Head of Technology at OBIE, on UK Open Banking.

Firstly, some background to set the context for this. On January 13th, 2018 a new set of rules for banking came into force that stem from the EU Directive 2015/2366 of 25 November 2015, commonly known as the Payment Services Directive 2 (PSD2). While PSD2 prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. As one might expect, providing authorized access to payment services requires identifying and properly authenticating users – see KuppingerCole’s Advisory Note: Consumer Identity and Access Management for “Know Your Customer”.

One of the key players in this area is the OpenID Foundation. This is a non-profit, international standards organization, founded in 2007, that is committed to enabling, promoting and protecting OpenID technologies. While OpenID is relevant to many industries, one area of particular interest is financial services. OpenID has a Financial API Working Group (FAPI), led by Nat Sakimura, that is working to define APIs that enable applications to utilize the data stored in financial accounts and interact with those accounts, and that enable users to control their security and privacy settings.

Previously it was common for financial services, such as account aggregation providers, to use screen scraping and to store user passwords. Screen scraping is inherently insecure (see GDPR vs. PSD2: Why the European Commission Must Eliminate Screen Scraping). The current approach instead utilizes a token model such as OAuth [RFC6749, RFC6750], with the aim of developing a REST/JSON model protected by OAuth. However, OAuth needs to be profiled for the financial use cases.
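To make the token model more concrete, here is a minimal sketch (in Python, using the requests library) of the authorization code exchange defined in RFC 6749. The token endpoint, client credentials and redirect URI are hypothetical placeholders, not values from any Open Banking specification, and a real FAPI deployment would mandate stronger client authentication than the shared secret shown here.

```python
# Minimal sketch of an OAuth 2.0 authorization code exchange (RFC 6749).
# All URLs and credentials below are hypothetical placeholders.
import requests

TOKEN_ENDPOINT = "https://bank.example.com/oauth2/token"   # hypothetical ASPSP token endpoint
CLIENT_ID = "tpp-client-id"                                # issued to the third-party provider
CLIENT_SECRET = "tpp-client-secret"                        # FAPI profiles require stronger client auth
REDIRECT_URI = "https://tpp.example.com/callback"

def exchange_code_for_token(authorization_code: str) -> dict:
    """Swap the short-lived authorization code for a scoped access token.

    The third party never sees the user's banking credentials; it only
    receives a revocable token issued by the bank.
    """
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": authorization_code,
            "redirect_uri": REDIRECT_URI,
        },
        auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"access_token": "...", "token_type": "Bearer", ...}
```

The profiling work mentioned above is essentially about tightening exactly this kind of exchange for financial-grade use.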

In the UK, the APIs being specified by OBIE include an Open Banking OIDC Security Profile, which is based upon the work of OpenID. This has some differences from the FAPI Read/Write (R+W) profile, deemed necessary to reduce delivery risk for ASPSPs.

In July 2017 it seemed that the EBA (European Banking Authority) had made a wise decision and rejected the Commission’s amendments on screen scraping. However, in November 2017 the draft supplement to the EU technical regulations was published. In this, Article 32 (3) sets out the obligations for a dedicated interface. In summary, these oblige account servicing payment service providers to ensure that the dedicated interface does not create obstacles. Obstacles specified in the RTS include:

  • Preventing the use by payment service providers of the credentials issued by account servicing payment service providers to their customers;
  • Imposing redirection to the account servicing payment service provider's authentication or other functions;
  • Requiring additional authorisations and registrations, or requiring additional checks of the consent given by payment service users to providers of payment initiation and account information services.

These obligations appear to fly in the face of what has become accepted security good practice: that one application should never directly share actual credentials with another application. Identity federation technologies such as OAuth and SAML have been reliably providing more secure means for cross-domain authentication for over a decade.

Ralph Bragg, Head of Architecture at OBIE, described 3 possible approaches that were being considered in the context of these obligations. These approaches can be summarized as:

  • Redirect – the OAuth model, where the end user is redirected to the ASPSP to authenticate and the PSP receives a token (a minimal sketch of this flow follows the list below). This appears to be non-compliant.
  • Embedded – where the PISP obtains the first and second factors from the end user and transmits these to the bank. This appears to be insecure.
  • Decoupled – where the end user completes the authorization on a separate device or application. This introduces further complexities.
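As a simple illustration of the redirect approach referenced in the first bullet, the following Python sketch shows how a PSP might construct the authorization request that sends the user’s browser to the ASPSP. The endpoint, client ID and scope values are hypothetical and for illustration only; the actual OBIE/FAPI profiles add further requirements on top of plain OAuth.

```python
# Minimal sketch of the "redirect" approach: the PSP builds an OAuth 2.0 /
# OpenID Connect authorization request and redirects the end user to the
# ASPSP, which performs the authentication itself. All URLs, identifiers and
# scopes are hypothetical placeholders, not taken from the OBIE specifications.
from urllib.parse import urlencode

AUTHORIZATION_ENDPOINT = "https://bank.example.com/oauth2/authorize"  # hypothetical

def build_authorization_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Return the URL the end user's browser is sent to at their own bank."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid payments",   # illustrative scope only
        "state": state,               # CSRF protection, echoed back on the callback
    }
    return f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}"

# The user authenticates at the ASPSP; the PSP never handles the credentials
# and only receives an authorization code on its redirect_uri, which it then
# exchanges for a token (see the earlier token-exchange sketch).
print(build_authorization_url("tpp-client-id", "https://tpp.example.com/callback", "xyz123"))
```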

This was discussed in a panel session involving many of the leading thinkers in this area, including Mike Jones (Microsoft), John Bradley (Yubico), Dave Tonge (Momentum FT), and Joseph Heenan (Fintech Labs).

There was a wide-ranging discussion which resulted in a general agreement that:

  • The embedded model involves the third party (PISP) in holding and transmitting credentials. This is very poor security practice and increases the attack surface. Attacks on the PISP could result in theft of the credentials to access the bank (ASPSP).
  • The redirection model is overall the best from a security point of view. Customers are generally happiest with redirect because they feel confident in their own bank. However, the bank may be the competitor of the PISP and so could make the process unfriendly.
  • PSD2 should be considered from an end-to-end perspective.

It seems perverse that technical regulations associated with the opening of electronic payment services appear to inhibit the use of the most up-to-date cybersecurity measures. The direct sharing of passwords or other forms of authentication credentials between services increases risks. It is generally better for regulations to oblige the use of widely accepted best practices rather than prohibiting them. OAuth is a well-understood and ubiquitously employed protocol that can help financial service providers achieve cross-domain authorization. It is my hope that the current wording of the regulations will not lead to a retrograde step in banking security.

Successful IAM Projects Are Not Rocket Science – if You Do It Right

While we still regularly see and hear about IAM (Identity & Access Management) projects that don’t deliver on expectations or are in trouble, we also see and hear about many projects that run well. There are some reasons for IAM projects being more complex than many other IT projects, first and foremost the fact that they are cross-system and cross-organization. IAM integrates a variety of source systems such as HR and target systems, from the mainframe to ERP applications, cloud services, directory services, and many others. IAM projects also must connect business and IT, with the business people requesting access, defining business roles, and running recertifications.

In a new whitepaper by One Identity, we compiled both the experience of a number of experts from companies out of different regions and industries, and our own knowledge and experience, to provide concrete, focused recommendations on how to make your IAM project a success. Amongst the top recommendations, we find the need for setting the expectations of stakeholders and sponsors right. Don’t promise what you can’t deliver. Another major recommendation is splitting the IAM initiative/program into smaller chunks, which can be run successfully as targeted projects. Also, it is essential not to run IAM as a technology project only. IAM needs technology, but it needs more – the interaction with the business, well-defined processes, and well-thought-out models for roles and entitlements.

Don’t miss this new whitepaper if you are already working on your IAM program – or if you will have to do so in the future.

Free Tools That Can Save Millions? We Need More of These

When IT visionaries give presentations about the Digital Transformation, they usually talk about large enterprises with teams of experts working on exciting stuff like heterogeneous multi-cloud application architectures with blockchain-based identity assurance and real-time behavior analytics powered by deep learning (and many other marketing buzzwords). Of course, these companies can also afford to invest substantial money in building in-depth security infrastructures to protect their sensitive data.

Unfortunately, for every such company there are probably thousands of smaller ones, which have neither the budgets nor the expertise of their larger counterparts. This means that these companies not only cannot afford “enterprise-grade” security products, they are often not even aware that such products exist or, for that matter, what problems they are facing without them. And yet, from the compliance perspective, these companies are just as responsible for protecting their customers’ personal information (or other kinds of regulated digital data) as the big ones, and they face the same harsh punishments for GDPR violations.

One area where this is especially evident is database security. Databases are still the most widespread technology for storing business information across companies of all sizes. Modern enterprise relational databases are extremely sophisticated and complex products, requiring trained specialists for their setup and daily maintenance. The number of security risks a business-critical database is open to is surprisingly large, ranging from the sensitivity of the data stored in it all the way down to the application stack, storage, network and hardware. This is especially true for popular database vendors like Oracle, whose products can be found in every market vertical.

Of course, Oracle itself can readily provide a full range of database security solutions for their databases, but needless to say, not every customer can afford to spend that much, not to mention having the necessary expertise to deploy and operate these tools. The recently announced Autonomous Database can solve many of those problems by completely taking management tasks away from DBAs, but it should be obvious that, at least in the short term, this service isn’t a solution for every possible use case, so on-premises Oracle databases are not going anywhere anytime soon.

And exactly for these, the company has recently (and without much publicity) released their Database Security Assessment Tool (DBSAT) – a freeware tool for assessing the security configuration of Oracle databases and for identifying sensitive data in them. The tool is a completely standalone command-line program that does not have any external dependencies and can be installed and run on any DB server in minutes to generate two types of reports.

The Database Security Assessment report provides a comprehensive overview of configuration parameters, identifying weaknesses, missing updates, improperly configured security technologies, excessive privileges and so on. For each discovered problem, the tool provides a short summary and risk score, as well as remediation suggestions and links to the appropriate documentation. I had a chance to see a sample report, and even with my quite limited DBA skills I was able to quickly identify the biggest risks and understand which concrete actions I’d need to perform to mitigate them.

The Sensitive Data Assessment report provides a different view on the database instance, showing the schemas, tables and columns that contain various types of sensitive information. The tool supports over 50 types of such data out of the box (including PII, financial and healthcare data for several languages), but users can define their own search patterns using regular expressions. Personally, I find this report somewhat less informative, although it does its job as expected. If only for executive reporting, it would be useful not just to show how many occurrences of sensitive data were found, but also to provide an overview of the overall posture – a few meaningful numbers the CEO could use as KPIs.

Of course, being a standalone tool, DBSAT does not support any integrations with other security assessment tools from Oracle, nor does it provide any means for mass deployment across hundreds of databases. What it does provide is the option to export the reports into formats like CSV or JSON, which can then be imported into third-party tools for further processing. Still, even in this rather simple form, the program helps a DBA to quickly identify and mitigate the biggest security risks in their databases, potentially saving the company from a breach or a major compliance violation. And as we all know, these are going to become very expensive soon.
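As a simple illustration of that export path, here is a minimal Python sketch that reads a DBSAT report exported as JSON and counts findings per risk level. The field names used (“findings”, “risk”, “title”) are assumptions made for illustration; the actual DBSAT JSON schema may differ.

```python
# Minimal sketch: summarize a DBSAT report exported to JSON.
# The field names ("findings", "risk", "title") are assumptions for
# illustration only; check the real DBSAT output for its actual schema.
import json
from collections import Counter

def summarize_report(path: str) -> None:
    with open(path, "r", encoding="utf-8") as handle:
        report = json.load(handle)

    findings = report.get("findings", [])
    by_risk = Counter(item.get("risk", "Unknown") for item in findings)

    print("Findings by risk level:")
    for risk, count in by_risk.most_common():
        print(f"  {risk:10s} {count}")

    # List the high-risk items so a DBA can prioritize remediation.
    for item in findings:
        if item.get("risk") == "High":
            print("HIGH:", item.get("title", "<untitled>"))

if __name__ == "__main__":
    summarize_report("dbsat_report.json")  # hypothetical export file name
```

A few lines like these are often all it takes to feed the results into an existing reporting or ticketing workflow.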

Perhaps my biggest disappointment with the tool, however, has nothing to do with its functionality. Just like other companies before it, Oracle seems not very keen on letting the world know about tools like this. And what use is even the best security tool or feature if people do not know of its existence? Have a look at AWS, for example, where misconfigured permissions for S3 buckets have been the reason behind a large number of embarrassing data leaks. And even though AWS now offers a number of measures to prevent them, we still keep reading about new personal data leaks every week.

Spreading the word and raising awareness about the security risks and free tools to mitigate them is, in my opinion, just as important as releasing those tools. So, I’m doing my part!

UK Open Banking – Progress and Challenges

On January 13th, 2018 a new set of rules for banking came into force that open up the market by allowing new companies to offer electronic payment services. These rules follow from the EU Directive 2015/2366 of 25 November 2015 that is commonly referred to as Payment Services Directive II (PSDII). They promise innovation that some believed the large banks in the UK would otherwise fail to provide. However, as well as providing opportunities they also introduce new risks. Nevertheless, it is good to see the progress that has been made in the UK towards implementing this directive.

Under this new regime the banks, building societies, credit card issuers, e-money institutions, and others (known as Account Servicing Payment Service Providers, or ASPSPs) must provide an electronic interface (APIs) that allows third parties (Payment Service Providers or PSPs) to operate an account on behalf of the owner. This opens up the banking system to organizations that are able to provide better ways of making payments, for example through new and better user interfaces (apps), as well as completely new services that could depend upon an analysis of how you spend your money. These new organizations do not need to run the complete banking service with all that that entails; they just need to provide additional services that are sufficiently attractive to pay their way.

This introduces security challenges by increasing the potential attack surface and, according to some, may introduce conflicts with GDPR privacy obligations. It is therefore essential that security is top of mind when designing, implementing and deploying these systems. In the worst case, they present a whole new opportunity for cyber criminals. As regards the potential conflicts with GDPR, there will be a session on this subject at KuppingerCole’s Digital Finance World in February. For example, one challenge concerns providing the details of a recipient of an erroneous transfer who refuses to return the money.

To meet the requirements of this directive, the banking industry is moving its IT systems towards platforms that allow them to exploit multiple channels to their customers. This can be achieved in various ways – the cheap and cheerful method being to use “screen scraping”, which needs no change to existing systems because new apps interact through the existing user interfaces. This creates not only security challenges but also a very messy technical architecture. A much better approach is to extend existing systems to add open APIs. This is the approach being adopted in the UK.

PSD II is a directive and therefore each EU state needs to implement this locally. However, the job of implementing some of the provisions, including regulatory technical standards (RTS) and guidelines, has been delegated to the European Banking Authority (EBA). In the UK, HM Treasury published the final Payment Services Regulations 2017. The UK Financial Conduct Authority (FCA) issued a joint communication with the Treasury on PSDII and open banking following the publication of these regulations.

While PSDII prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. 

As of now they have published APIs that include:

Open Data API specifications allow API providers (e.g. banks, building societies and ATM providers) to develop API endpoints which can then be accessed by API users (e.g. third-party developers) to build mobile and web applications for banking customers. These allow providers to supply up-to-date, standardised information about the latest available products and services so that, for example, a comparison website can more easily and accurately gather information, and thereby develop better services for end customers.

Open Banking Read/Write APIs enable Account Servicing Payment Service Providers to develop API endpoints to an agreed standard so that Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs) can build web and mobile applications for Payment Service Users (PSUs, e.g. personal and business banking customers).

These specifications are now in the public domain which means that any developer can access them to build their end points and applications. However, use of these in a production environment is limited to approved/authorised ASPSPs, AISPs and PISPs.

Approved/authorised organisations will be enrolled in the Open Banking Directory. This will provide digital identities and certificates which enable organisations to securely connect and communicate via the Open Banking Security Profile in a standard manner and to best protect all parties.

Open Banking OIDC Security Profile - In many cases, Fintech services such as aggregation services use screen scraping and store user passwords. This is not adequately secure, and the approach being taken is to use a token model such as OAuth [RFC6749, RFC6750]. The aim is to develop a REST/JSON model protected by OAuth. However, OAuth needs to be profiled to be used in the financial use cases. Therefore, the Open Banking Profile has some differences from the FAPI R+W profile, deemed necessary to reduce delivery risk for ASPSPs.

This all seemed straightforward until the publication of the draft supplement to the EU technical regulations. This appears to prohibit the use of many secure approaches, and I will cover it in a later blog.

In conclusion, the UK banking industry has taken great strides to define an open set of APIs that will allow banks to open their services as required by PSD II. It would appear that, in this respect, the UK is ahead of the rest of the EU. At the moment, these APIs only cover a limited set of use cases, principally making an immediate transfer of funds in UK pounds. In addition, the approach to strong authentication is still under discussion. One further concern is to ensure that all of the potential privacy issues are handled transparently. To hear more on these subjects, attend KuppingerCole Digital Finance World in Frankfurt in February 2018.

Consolidation in Privilege Management Market Continues: Bomgar Acquires Lieberman Software

Just two weeks after One Identity acquired Balabit, the news spread about the next acquisition in this market segment: Bomgar acquires Lieberman Software. Both vendors have been active in this market. While Bomgar entered the market a couple of years ago, having a long history in Remote Control solutions, Lieberman Software is one of the Privilege Management veterans.

Looking at their portfolios, there is some functional overlap. However, while the strength of Bomgar comes from Session Management related to their Remote Control features, Lieberman Software is stronger in Shared Account Password Management and related capabilities. The two companies will be able to deliver strong capabilities in most areas of Privilege Management by joining forces.

With this second merger in a row, the Privilege Management market dynamics are changing. Aside from the established leaders in the market, there are now two vendors about to bring strong combined offerings to the market. This will foster competition among the leaders, but also increase pressure on smaller vendors that need to rethink their positioning and strategy to find their sweet spots in the market. From a customer perspective, however, more competition and more choice is always a good thing.

One Identity Acquires Balabit

Yesterday, One Identity announced that they have acquired Balabit, a company specializing in Privilege Management, headquartered in Luxembourg but with their main team located in Hungary. One Identity, a Quest Software business, counts amongst the leading vendors in the Identity Management market. Aside from their flagship product One Identity Manager, they deliver a number of other products, including Safeguard as their Privilege Management offering. Balabit, on the other hand, is a pure-play Privilege Management vendor, offering several products with particular strengths around Session Management and Privileged Behavior Analytics.

One Identity already has a technical integration with Balabit’s Session Management product as a part of their Safeguard offering. With the acquisition, One Identity gets direct access not only to one of the leading Session Management technologies, but also to the Privileged Behavior Analytics capabilities of Balabit. Combined with the One Identity Safeguard capabilities, this results in a comprehensive Privilege Management offering, from Shared Account Password Management to Session Management and Privileged Behavior Analytics. Given that there is already some integration, we expect One Identity to progress fast on creating a fully integrated solution. Another advantage might arise from the fact that a significant portion of the One Identity Manager team is still based in Germany, geographically relatively close to Hungary.

The acquisition strengthens the position of One Identity in both the Privilege Management market and the overall Identity Management market. For Privilege Management, the combined portfolio and the expected close integration moves One Identity into the group of the market leaders, with respect to both the number of customers and technical capabilities. One Identity becomes a clear pick for every shortlist, when evaluating vendors in this market segment.

When looking at the overall Identity Management market, One Identity improves its position as one of the vendors that cover all major areas of that market, with particular strengths in IGA (Identity Governance and Administration, i.e. Identity Provisioning and Access Governance) and Privilege Management, but also in Identity Federation and Cloud SSO, plus other capabilities such as cloud-based MFA (Multi-Factor Authentication). For companies that focus on single sourcing for Identity Management or at least one core supplier, One Identity becomes an even more interesting choice now.

The acquisition underpins the strategy that One Identity had announced after the split of Quest Software from Dell and the creation of One Identity as a separate business of Quest Software: playing a leading role in the overall Identity Management market as a vendor that covers all major areas of this market segment.

Spectre and Meltdown: A Great Start Into the New Year!

Looks like we the IT people have gotten more New Year presents than expected for 2018! The year has barely started, but we already have two massive security problems on our hands, vulnerabilities that dwarf anything discovered previously, even the notorious Heartbleed bug or the KRACK weakness in WiFi protocols. Discovered back in early 2017 by several independent groups of researchers, these vulnerabilities were understandably kept from the general public to give hardware and operating system vendors time to analyze the effects and develop countermeasures for them and to prevent hackers from creating zero-day exploits.

Unfortunately, the number of patches recently made to the Linux kernel alone was enough to raise the suspicions of many security experts. This has led to a wave of speculation about the possible reasons behind them: does it have something to do with the NSA? Will it make all computers in the world run 30% slower? Why is Intel’s CEO selling his stock? In the end, the researchers were forced to release their findings a week early just to put an end to the wild rumors. So, what is this all about after all?

Technically speaking, neither Meltdown nor Spectre is caused by an ordinary software bug or vulnerability. Rather, both exploit the unforeseen side effects of speculative execution, a core feature present in most modern processors that is used to significantly improve calculation performance. The idea behind speculative execution is actually quite simple: every time a processor must check a condition in order to decide which part of the code to run, instead of waiting until some data is loaded from memory (which may take hundreds of CPU cycles to complete), it makes an educated guess and starts executing the next instructions immediately. If the guess later proves to be wrong, the processor simply discards those instructions and reverts its state to a previously saved checkpoint; but if it was correct, the resulting performance gain can be significant. Processors have been designed this way for over 20 years, and the potential security implications of incorrect speculative execution were never considered important.

Well, not any more. Researchers have discovered multiple methods of exploiting the side effects of speculative execution that allow malicious programs to steal sensitive data they normally should not have access to. And since the root cause of the problem lies in the fundamental design of a wide range of modern Intel, AMD and ARM processors, nearly every system using those chips is affected, including desktops, laptops, servers, virtual machines and cloud services. There is also no way to detect or block attacks using these exploits with an antivirus or any other software.

In fact, the very name “Spectre” was chosen to indicate that the problem is going to haunt us for a long time, since there is no common fix for all possible exploit methods. Any program designed according to the best security practices is still vulnerable to a carefully crafted piece of malicious code that could extract sensitive data from it by manipulating the processor state and measuring the side effects of speculative execution. There is even a proof-of-concept implementation in JavaScript, meaning that even visiting a website in a browser may trigger an attack, although popular browsers like Chrome and Firefox have already been patched to prevent it.
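To make the attack pattern more concrete, the snippet below shows the classic bounds-check pattern that Spectre variant 1 abuses, written in Python purely to illustrate the control flow. A real exploit requires native code and cache-timing measurements; this sketch only shows why a speculatively executed branch can touch memory the program never intended to read.

```python
# Illustration only: the bounds-check pattern behind Spectre variant 1.
# In native code the CPU may execute the body speculatively *before* the
# bounds check resolves; the data-dependent access to probe_array then leaves
# a cache footprint an attacker can measure. Python is used here purely to
# show the shape of the vulnerable code, not as a working exploit.

array1 = bytes(16)                    # small array guarded by a bounds check
probe_array = bytearray(256 * 512)    # attacker-observable "side channel" array

def victim_function(x: int) -> None:
    if x < len(array1):                       # architecturally, out-of-range x is rejected...
        secret_byte = array1[x]               # ...but a CPU may speculatively read out of bounds
        _ = probe_array[secret_byte * 512]    # ...and leave a secret-dependent cache trace
```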

The only way to fully mitigate all variants of the Spectre exploit is to modify every program explicitly to disable speculative execution in sensitive places. There is some consolation in the fact that exploiting this vulnerability is quite complicated and there is no way to affect the operating system kernel this way. This cannot be said about the Meltdown vulnerability, however.

Apparently, Intel processors take so many liberties when applying performance optimizations to the executed code that the same root cause gives hackers access to arbitrary system memory locations, rendering (“melting”) all memory isolation features in modern operating systems completely useless. When running on an Intel processor, malicious code can leak sensitive data from any process or the OS kernel. In a virtualized environment, a guest process can leak data from the host operating system. Needless to say, this scenario is especially catastrophic for cloud service providers, where data sovereignty is not just a technical requirement, but a key legal and compliance foundation for their business model.

Luckily, there is a method of mitigating the Meltdown vulnerability completely on the operating system level, and that is exactly what Microsoft, Apple and the Linux kernel developers have been working on in recent months. Unfortunately, enforcing separation between kernel and user space memory also means undoing performance optimizations that processors and OS kernels rely on to make switching between different execution modes quicker. According to independent tests, for different applications these losses may be anywhere between 5 and 30%. Again, this may be unnoticeable to average office users, but it can be dramatic for cloud environments, where computing resources are billed by execution time. How would you like to have your monthly bill suddenly increased by 30% for… nothing, really.

Unfortunately, there is no other way to deal with this problem. The first and most important recommendation is as usual: keep your systems up-to-date with the latest patches. Update your browsers. Update your development tools. Check the advisories published by your cloud service provider. Plan your mitigation measures strategically.

And keep a cool head – conspiracy theories are fun, but not productive in any way. And by the way: Intel officially states that their CEO selling stocks in October has nothing to do with this vulnerability.

Obsession? Hype? Revolution? It Might Be a Bit of Everything: Moving Into the Age of Blockchain in Cybersecurity, Identity and Privacy

Looking at the current investor craze around blockchain’s primary use case, Bitcoin, it sometimes gets a bit difficult to think beyond the bubble and track those blockchain projects which are indeed on their way to becoming useful – changing the way we sell or buy stuff, digitally move value, immutably store any kind of documents and data, consume information, create and manage digital IDs, or otherwise influencing and changing most aspects of our social, political and economic interactions. What we see happening in the crypto world is an explosion of creativity and innovation, well funded through initial coin offerings (ICOs). Most of the blockchain projects we are observing show a high potential for disrupting whole industries.

Blockchain in Cybersecurity

Based on decades of research in cryptography and resilience, cybersecurity and blockchain technology have the same roots and look like natural allies. By offering a totally new way of securing information integrity, performing transactions and creating trust relationships between parties that don’t know each other, blockchains are secure by design and are well suited to use cases with high security requirements. It is therefore easily understandable that DARPA (the US Defense Advanced Research Projects Agency) has been funding a number of interesting blockchain startups experimenting with secure, private and failsafe communication infrastructures. DARPA’s program manager behind the blockchain effort, Timothy Booher, describes the paradigm shift blockchain implies for cybersecurity well in an analogy: “Instead of trying to make the walls of a castle as tall as possible to prevent an intruder from getting in, it’s more important to know if anyone has been inside the castle, and what they’re doing there.”

Blockchain Identity & Privacy: It all Depends on the Governance Model

Managing digital identities, as well as linking them to real humans (identification), is becoming a primary playground for blockchain technology, as it is fundamental for any blockchain use case and as it seems not only to reduce the vulnerabilities of traditional infrastructures, but finally to offer a solution that gives control over personal information back to the user it belongs to (“Self-Sovereign Identity” – SSI). However, the assumption that blockchain is the only way to repair the missing internet identity layer would be as wrong as the opposite assumption. There is no doubt that blockchain will change the way we deal with identity and privacy, but there are some vital challenges to be solved first – with Blockchain Governance being the one that matters most, as all other problems that are being discussed depend on selecting the right governance model:

  • How do we deal with change? We have been in the IT space long enough to know that the only constant is permanent change. Who would decide on “updating” the blockchain? How much of the pure-play blockchain do we need to give up to avoid messing with hard forks?
  • Scalability: The proof-of-work based Bitcoin blockchain has its limits.  Is proof-of-stake the only viable alternative or will we soon see massive parallel blockchain infrastructures?
  • Private vs. Public, "permissioned vs. unpermissioned": Are we facing a future of walled blockchain gardens?
  • Off-Chain vs. On-Chain Governance: What are the risks of on-chain Governance? Will self-amending ledgers be the ones that rule the identity field?
  • Future Governance Models based on prediction markets

Shaping the Future of Blockchain ID, Privacy & Security: Be part of it!

The blockchain discussion will continue to be a core element of KuppingerCole’s Upcoming Events.

For the first time ever, we’ll offer a “Blockchain ID Innovation Night” at #EIC18, where you will meet developers, evangelists and experts from most or all of the blockchain ID projects out there.

McAfee Acquires Skyhigh Networks

McAfee, from its foundation in 1987, has a long history in the world of cyber-security. Acquired by Intel in 2010, it was spun back out, becoming McAfee LLC, in April 2017. According to the announcement on April 23rd, 2017 by Christopher D. Young, CEO, the new company will be “One that promises customers cybersecurity outcomes, not fragmented products.” So, it is interesting to consider what the acquisition of Skyhigh Networks, which was announced by McAfee on November 27th, will mean.

Currently, McAfee solutions cover areas that include antimalware, endpoint protection, network security, cloud security, database security, endpoint detection and response, as well as data protection. Skyhigh Networks is well known for its CASB (Cloud Access Security Broker) product. So how does this acquisition fit into the McAfee portfolio?

Well, the nature of the cyber-risks that organizations are facing has changed. Organizations are increasingly using cloud services because of the benefits that they can bring in terms of speed to deployment, flexibility and price. However, the governance over the use of these services is not well integrated into the normal organizational IT processes and technologies. CASBs address these challenges. They provide security controls that are not available through existing security devices such as Enterprise Network Firewalls, Web Application Firewalls and other forms of web access gateways. They provide a point of control over access to cloud services by any user and from any device. They help to demonstrate that the organizational use of cloud services meets regulatory compliance needs.

In KuppingerCole’s opinion, the functionality to manage access to cloud services and to control the data that they hold should be integrated with the normal access governance and cyber security tools used by organizations. However, the vendors of these tools were slow to develop the required capabilities, and the market in CASBs evolved to plug this gap. The McAfee acquisition of Skyhigh Networks is the latest of several recent examples of acquisitions of CASBs by major security software and hardware vendors.

The diagram illustrates how the functions that CASBs provide fit into the overall cloud governance process. These basic functionalities are:

  1. Discovery of what cloud services are being used, by whom and for what data.
  2. Control over who can use which services and what data can be transferred.
  3. Protection of data in the cloud against unauthorized access and leakage.
  4. Regulatory compliance and protection against cyber threats through the above controls.

So, in this analysis CASBs are closer to Access Governance solutions than to traditional cyber-security tools. They recognize that identity and access management is the new cyber-frontier, and that cyber-defense needs to operate at this level. By providing these functions, Skyhigh Networks offers a solution that is complementary to those already offered by McAfee and extends McAfee’s capabilities in the direction needed to meet the needs of the cloud-enabled, agile enterprise.

The Skyhigh Networks CASB provides comprehensive functionality that strongly matches the requirements described above. It is also featured in the leadership segment of KuppingerCole’s Leadership Compass: Cloud Access Security Brokers - 72534. This acquisition is consistent with KuppingerCole’s view on how cyber-security vendors need to evolve to meet the challenges of cloud usage. Going forward, organizations need a way to provide consistent access governance for both on-premises and cloud-based services. This requires functions such as segregation of duties, attestation of access rights and other compliance-related governance aspects. Therefore, in the longer term CASBs need to evolve in this direction. It will be interesting to watch how McAfee integrates the Skyhigh product and how the McAfee offering evolves towards this in the future.

Please! No More GDPR Related Blog Posts!

You have heard it all before: May 25th, 2018, enormous fines, “you have to act now”, the “right to be forgotten”, DPOs and breach notification. Every manufacturer whose marketing database contains your data will send you whitepapers, webinars, product information and reminders about GDPR. And of course they can help you on your way towards compliance. So you have set up a filter in your mail client that sorts GDPR messages directly into spam, and #gdpr is muted in your Twitter client.

Because you started your journey towards GDPR compliance early? Compliance activities have long been established and your employees are informed? Consent management is not just theory? Data leakage is prevented, monitored, detected and, if it does occur, communicated appropriately?

But there might still be a good reason to read on: unlike other regulations, there is no regular inspection of compliance with the requirements. Rather, individuals (including customers, employees or other relevant data subjects) and the competent supervisory authorities are able to make enquiries if alleged or actual omissions or offences are to be investigated. However, as yet there is no proof of GDPR compliance in the form of a regular and permanent seal of quality.

It is difficult to identify sufficient indicators of good preparation. Yes, vendors and integrators provide some basic questionnaires… But you still might be in need of a neutral set of criteria for determining the maturity level of your organization's readiness in the areas of compliance with regulatory or industry-specific regulations or frameworks. To support such reviews, KuppingerCole provides Maturity Level Matrixes that are specifically targeted at distinct areas of the IT market – in this case, GDPR readiness.

Assessing the quality and maturity of the controls, systems and processes implemented by your organization is essential. Given the level of agility required by business and market demands, this assessment needs to be executed on a regular basis. Continuous improvements are essential to achieve an adequate level of compliance in all key areas of the GDPR.

To achieve the highest level 5 of GDPR maturity, it is essential to continuously measure GDPR readiness, enabling an organization to understand its status quo, document it and, if possible, realize the potential benefits of investing in improving data protection. Then you might happily ignore further GDPR-related blog posts.

The KuppingerCole Maturity Level Matrix for GDPR readiness provides neutral criteria exactly for that purpose. Find it here for download. 

And get in touch with us if you feel that an independent assessment (along the lines of exactly the same maturity levels) might be even more meaningful. 
