KuppingerCole Blog

The C5:2020 - A Valuable Resource in Securing the Provider-Customer Relationship for Cloud Services

KuppingerCole has accompanied the unprecedented rise of the cloud as a new infrastructure and alternative platform for a multitude of previously unimaginable services – constructively and with the necessary critical distance right from the beginning (see this blog post from 2008). Cybersecurity, governance and compliance have always been indispensable aspects of this.

When moving to the use of cloud services, it is most important to take a risk-based approach. There is no such thing as “just the cloud”: it is not a single model but a wide and constantly growing spectrum of applications, services and virtualized infrastructure.

The “wild west” phase of early cloud deployments – quick decisions and individual, departmental “credit card” cloud subscriptions without corporate oversight – should lie behind us. An organization adopting a cloud service needs to ensure that it remains in compliance with laws and industry regulations. There are many aspects to consider, including but not limited to compliance, service location, data security, availability, identity and access management, insider abuse of privilege, virtualization, isolation, cybersecurity threats, and monitoring and logging.

Moving to the cloud done right

“The cloud” covers a wide and constantly growing spectrum of applications, services and virtualized infrastructure, delivered by an equally wide spectrum of cloud service providers. While many people think mainly of the large platform providers like AWS or Microsoft Azure, there is a growing number of companies providing services in and from the cloud. To ensure the security of their customers’ data, providers of cloud services should comply with best practices for the provision of the services they offer.

Moving services into the cloud, or creating new services within it, substantially changes the traditional picture of responsibilities for an application or infrastructure and introduces the Cloud Service Provider (CSP) as a new stakeholder in the established network of functional roles. Depending on which parts of the service the CSP provides on behalf of the customer and which parts the tenant implements on top of the provided service layers, each responsibility is assigned to either the CSP or the tenant.

Shared responsibility between the provider and the tenant is a key characteristic of every cloud service deployment scenario. For every real-life cloud service model, each identified responsibility has to be clearly assigned to the appropriate stakeholder. The split can differ drastically between scenarios where only infrastructure is provided, for example plain storage or computing services, and scenarios where complete "Software as a Service" (SaaS, e.g. Office 365) is provided. The prerequisite for an appropriate service contract between provider and tenant is therefore a comprehensive identification of all responsibilities and an agreement on which contract partner each of them is assigned to.
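To illustrate the point, responsibility assignments can be captured in a simple matrix per service model. The layers and assignments below are a simplified, illustrative sketch, not an authoritative mapping:

```python
# Illustrative only: a simplified shared-responsibility matrix for three
# common cloud service models. Layer names and assignments are assumptions
# for demonstration purposes.

RESPONSIBILITY_MATRIX = {
    #  layer               IaaS       PaaS      SaaS
    "physical security": ("CSP",    "CSP",    "CSP"),
    "virtualization":    ("CSP",    "CSP",    "CSP"),
    "operating system":  ("tenant", "CSP",    "CSP"),
    "application":       ("tenant", "tenant", "CSP"),
    "data & access":     ("tenant", "tenant", "tenant"),
}

MODELS = ("IaaS", "PaaS", "SaaS")

def responsibilities(model: str, party: str) -> list[str]:
    """Return the layers a given party is responsible for under a model."""
    idx = MODELS.index(model)
    return [layer for layer, owners in RESPONSIBILITY_MATRIX.items()
            if owners[idx] == party]
```

Even in this toy form, the matrix makes the contractual point visible: in a SaaS scenario almost everything shifts to the CSP, while the tenant always keeps responsibility for its data and access decisions.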

However, the process involved is often manual and time consuming, and there is a multitude of aspects to consider. From the start it was important to us to support organizations in understanding the risks that come with the adoption of cloud services and in assessing the risks around their use of cloud services in a rapid and repeatable manner.

Best practices as a baseline

There are several definitions of best practice, including ITIL, COBIT and ISO/IEC 270xx, but also industry-specific guidance from the Cloud Security Alliance (CSA). For a primarily German audience (but de facto far beyond that), the BSI (the German Federal Office for Information Security) created the Cloud Computing Compliance Criteria Catalogue (BSI C5 for short) several years ago as a guideline for everyone involved in evaluating cloud services – users, vendors, auditors, security providers, service providers and many more.

It is available free of charge to anyone interested – and many should be interested: readers benefit from a well-curated, proofread and current catalogue of criteria that is updated regularly and openly available for anyone to learn from and use.

Cloud service users can apply these criteria to evaluate the services on offer. Conversely, service providers can integrate the criteria as early as the conceptual phase of their services and thus ensure "compliance by design" in technology and processes.

C5 reloaded – the 2020 version

BSI just published a major update of the C5 entitled C5:2020. Many areas have been thoroughly revised to cover current trends and developments like DevOps. Two further areas have been added:

  • “Product security” focuses on the security of the cloud service itself so that the requirements of the EU Cybersecurity Act are included in the questionnaire.
  • Especially with regard to US authorities, dealing with “investigation requests from government agencies” regularly raises questions for European customers. The second new block of questions was therefore designed to ensure that such requests are handled appropriately, including legal review.

The C5:2020 is clearly an up-to-date and valuable resource for securing the shared responsibility between cloud customer and cloud service provider.

Applying best practices to real-life scenarios

The process of implementing and securing the resulting technical concepts and necessary mitigating measures requires individual consideration of a customer company's specific requirements. This includes a risk-oriented approach to identify the criticality of data, services and processes, and to develop a deep understanding of the effectiveness and impact of the implemented measures.

KuppingerCole Research provides essential information as a valuable foundation for technologies and strategies. KuppingerCole Advisory Services support our clients strategically in defining and implementing the necessary conceptual and actionable measures. This is particularly true when it comes to efficiently closing gaps once they have been identified, including mitigating measures, accompanying organizational and technical activities, and the selection of an appropriate, optimal portfolio of tools. Finally, the KuppingerCole Academy, with its upcoming master classes for Incident Response Management and Privileged Access Management, supports companies and employees in building knowledge and awareness.

The Next Best Thing After "Secure by Design"

There is an old saying: “you can lead a horse to water, but you can’t make it drink”. Nothing personal against anyone in particular, but it seems to me that this perfectly represents the current state of cybersecurity across almost every industry. Although cybersecurity tools are arguably becoming better and more sophisticated, and cloud service providers, for example, are constantly rolling out new security and compliance features in their platforms, the number of data breaches and hacks continues to grow. But why?

Well, the most obvious answer is that security tools, even the best ones, are still just tools. When a security feature is implemented as an optional add-on to a business-relevant product or service, someone still has to know that it exists, deploy and configure it properly, and then operate and monitor it continuously, taking care of security alerts as well as bug fixes, new features and the latest best practices.

The skills gap is real

Perhaps the most notorious example of this problem is the Simple Storage Service (better known as S3) from AWS. For over a decade, this cloud storage platform has been one of the most popular places to keep any kind of data, including the most sensitive kinds like financial or healthcare records. And even though AWS has introduced multiple additional security controls for S3 over the years, the number of high-profile breaches caused by improper access configuration leaving sensitive data open to the public is still staggering. A similar reputation stain – database installations exposed to the whole Internet without any authentication – still haunts MongoDB, even though the issue was fixed years ago.
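The misconfigurations behind those S3 breaches are often a single over-permissive policy statement. The sketch below shows the kind of check a security posture tool performs on a bucket policy document; it is illustrative only – in practice you should rely on AWS features such as S3 Block Public Access rather than hand-rolled checks, and the sample policies are invented:

```python
import json

def policy_allows_public_read(policy_json: str) -> bool:
    """Return True if any statement allows s3:GetObject for principal '*'."""
    for stmt in json.loads(policy_json).get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if (stmt.get("Effect") == "Allow" and is_public
                and any(a in ("s3:*", "s3:GetObject") for a in actions)):
            return True
    return False

# A policy that leaves the bucket readable by the whole Internet ...
public_policy = json.dumps({"Statement": [
    {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}]})
# ... versus one restricted to a single AWS account.
restricted_policy = json.dumps({"Statement": [
    {"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
     "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"}]})
```

The difference between the two policies is one field – which is exactly why these breaches keep happening: the control exists, but someone has to notice the misconfiguration.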

Of course, every IT expert is supposed to know better and never make such disastrous mistakes. Unfortunately, to err is human, and the even bigger problem is that not every company can afford a team of such experts. The notorious skills gap is real: only the largest enterprises can afford to hire the real pros, and for smaller companies, managed security services are perhaps the only viable alternative. For many companies, cybersecurity is still a kind of cargo cult, where a purchased security tool isn’t even properly deployed or monitored for alerts.

“Secure by design” is too often not an option

Wouldn’t it be awesome if software were just secure on its own, without any effort from its users? This idea is the foundation of the “secure by design” principles established years ago, which define various approaches to creating software that is inherently free from vulnerabilities and resilient against hacking attacks. Alas, writing properly secured software is a tedious and costly process that in most cases does not provide any immediate ROI (with a few exceptions like space flight or highly regulated financial applications). These principles also do not apply well to existing legacy applications: it is very difficult to refactor old code for security without breaking a lot of stuff.

So, if making software truly secure is so complicated, what are more viable alternatives? Well, the most trivial, yet arguably still the most popular one is offering software as a managed service, with a team of experts behind it to take care of all operational maintenance and security issues. The only major problem with this approach is that it does not scale well for the same reason – the number of experts in the world is finite.

Current AI technologies lack flexibility for different challenges

The next big breakthrough that will supposedly solve this challenge is replacing human experts with AI. Unfortunately, most people tend to massively overestimate the sophistication of existing AI technologies. While they are undoubtedly much more efficient than us at automating tedious number-crunching tasks, the road towards fully autonomous universal AI capable of replacing us in mission-critical decision making is still very long. While some very interesting narrow, security-related AI-powered solutions already exist (like Oracle’s Autonomous Database or automated network security solutions from vendors like Darktrace), they are nowhere near flexible enough to be adapted to different challenges.

And this is where we finally get back to the statement in this post’s title. If “secure by design” and “secure by AI” are the long-term goals for software vendors, what is the next best thing in the shorter term? My strong belief has always been that the primary reason for not doing security properly (which in the worst cases degenerates into the cargo cult mentioned above) is insufficient guidance and a lack of widely accepted best practices in every area of cybersecurity. The best security controls do not work if they are not enabled and their existence is not communicated to users.

“Secure by default” should be your short-term goal

Thus, the next best thing after “secure by design” is “secure by default”. If a software vendor or service provider cannot guarantee that their product is free of security vulnerabilities, they should at least make an effort to ensure that every user knows the full potential of existing security controls, has them enabled according to the latest best practices and, ideally, that their security posture cannot be easily compromised through misconfiguration.

What prompted this blog post was the announcement of security defaults introduced by Microsoft for their Azure Active Directory service. These are a collection of settings that can be applied to any Azure tenant with a single mouse click, ensuring that all users are required to use multi-factor authentication, that legacy, insecure authentication protocols are no longer used, and that highly privileged administrative activities are protected by additional security checks.

There isn’t really anything fancy behind this new feature – it’s just a combination of existing security controls applied according to current best practices. It won’t protect Azure users against 100% of cyberattacks. It’s not even suitable for all users: if applied, it will conflict with more advanced capabilities like Conditional Access. However, protecting 95% of users against 95% of attacks is miles better than not protecting anyone. Most important, however, is that these settings will be applied to all new tenants as well as to existing ones whose owners have no idea about advanced security controls.
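The idea itself can be sketched in a few lines: a vendor ships one operation that overlays a vetted security baseline on whatever the tenant has (or has not) configured. The setting names below are hypothetical stand-ins for illustration, not Microsoft's actual API:

```python
# A conceptual sketch of "secure by default" (NOT the actual Azure AD
# implementation): one operation that overrides weak or missing settings
# with a vetted baseline. All setting names are invented for illustration.

SECURE_BASELINE = {
    "require_mfa_for_all_users": True,
    "block_legacy_authentication": True,
    "protect_privileged_actions": True,
}

def apply_security_defaults(tenant_config: dict) -> dict:
    """Return a copy of the tenant config with the baseline enforced."""
    hardened = dict(tenant_config)
    hardened.update(SECURE_BASELINE)   # baseline wins over weaker choices
    return hardened

# A freshly created tenant with nothing configured still ends up hardened.
new_tenant = apply_security_defaults({})
```

The design point is that the hardened state requires no knowledge or action from the user – which is precisely what distinguishes "secure by default" from an optional security feature.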

Time to vaccinate your IT now

In a way, this approach can be compared to vaccination against a few known dangerous diseases. There will always be a few exceptions and an occasional ill effect, but the notion of population immunity applies to cybersecurity as well. Ask your software vendor or service provider for security defaults! This is the vaccination for IT.

Quantum Computing and Data Security - Pandora's Box or a Good Opportunity?

Not many people had heard of Schroedinger's cat before the CBS series "The Big Bang Theory" came out. Dr. Sheldon Cooper used this thought experiment to explain to Penny the state of her relationship with Leonard: it could be good and bad at the same time, but you cannot be sure until you have "opened the box" – that is, started the relationship.

Admittedly, this is a somewhat simplified version of Schroedinger's thought experiment by the authors of the series, but his original idea is still relevant nearly a century later. Schroedinger's scenario was roughly this: put a cat into a sealed box together with a poison that takes effect at a random time. As an outside observer, you cannot tell whether the cat is alive or not; until someone opens the box and checks, it is effectively both.

Superposition states lead to parallel calculations

This is a metaphor for superposition as it applies to quantum mechanics. One quantum bit (the cat) can hold several states at the same time and is therefore fundamentally different from the classical on/off or 0/1 representation in today's computing. Because of superposition, computing operations can be performed in parallel according to the laws of quantum mechanics, which drastically accelerates complex calculations. Google announced a few months ago that they had built a quantum computer with 53 qubits, capable of handling certain computations much faster than current supercomputers can; it can solve one selected problem in about 3 minutes instead of an estimated 10,000 years, for example.
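Superposition can be illustrated with a few lines of ordinary code. The sketch below is a deliberately simplified classical simulation, not real quantum hardware: it tracks the two amplitudes of a single qubit and applies a Hadamard gate to put the |0> state into an equal superposition.

```python
import math

# A minimal single-qubit simulation with plain Python numbers. A qubit
# state is a pair of amplitudes (a, b); |a|^2 is the probability of
# measuring 0 and |b|^2 the probability of measuring 1.

def hadamard(state):
    """Apply the Hadamard gate, which turns |0> into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

ket0 = (1.0, 0.0)             # the classical-like state |0>
cat = hadamard(ket0)          # "both at once": amplitudes (0.707..., 0.707...)
p0, p1 = probabilities(cat)   # each measurement outcome has probability 0.5
```

Note that a classical simulation like this needs to track 2^n amplitudes for n qubits, which is exactly why real quantum hardware outpaces classical machines on such workloads.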

The way we encrypt data is actually in danger

This is precisely where the danger for our current IT lies. Almost all encryption of data at rest and in transit is based on computations that can only be efficiently reversed with the right "key". If quantum computers become able to perform these computations efficiently without the key, our current security concept for data collapses entirely.
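To make the threat concrete, here is a deliberately tiny RSA example: anyone who can factor the public modulus can reconstruct the private key. With real 2048-bit keys this factoring step is classically infeasible, which is exactly what Shor's algorithm on a sufficiently large quantum computer would change. The numbers below are toy values for illustration only.

```python
# Toy RSA with absurdly small primes, showing that whoever can factor the
# public modulus n recovers the private key. A quantum computer running
# Shor's algorithm would make this factoring step efficient at real key sizes.

p, q = 61, 53
n = p * q                       # public modulus
phi = (p - 1) * (q - 1)
e = 17                          # public exponent
d = pow(e, -1, phi)             # private exponent (kept secret)

msg = 42
cipher = pow(msg, e, n)         # encrypt with the public key only

# An attacker factors n (trivial here, infeasible classically at scale) ...
for candidate in range(2, n):
    if n % candidate == 0:
        p_found = candidate
        break
q_found = n // p_found

# ... recomputes the private exponent, and decrypts.
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
recovered = pow(cipher, d_cracked, n)
```

The entire security of the scheme rests on the single assumption that the factoring loop in the middle takes longer than the universe's lifetime for large n; quantum computing attacks exactly that assumption.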

Moreover, it would have a massive impact on cryptocurrencies. Their value is based on complex calculations in the blockchain, which require a certain amount of computing power. If these could suddenly be done in milliseconds, this market would become obsolete as well.

Quantum based calculations offer a lot of potential

Of course, quantum computing also has advantages, because its biggest disadvantage (as it stands today) is also its biggest advantage: complex calculations can be completed in a very short time. Everything that depends on many variables and diverse parameters could be calculated efficiently and forecast realistically. Good examples are environmental events and weather forecasts, which are based on an extremely large number of variables and are currently predicted using approximation algorithms rather than exact calculation. The same applies to traffic data: cities or navigation systems could use all available parameters to calculate the best route for each participant and adjust signal timing if necessary. This is a dream come true from an environmental and traffic-planning perspective: fewer traffic jams, fewer emissions and faster progress.

We need to change the way we encrypt

But what remains, despite all the enthusiasm for the potential, is a bland aftertaste regarding the security of data. Scientists are already working on this as well, developing new, quantum-resistant algorithms based on different mathematical paradigms. For example, instead of today's private keys, encryption can be based on the value system itself: if an algorithm does not know what value the number 4 really represents, it cannot easily decipher it – the key is then the underlying coordinate system. Future algorithms assisted by artificial intelligence will emerge, and of course there are also efforts to use quantum computing itself for encryption.


In the end, quantum computing is just one more step towards more efficient computers, which might be replaced by artificial brains in another 100 years, bringing mankind another step forward in technology.


Applying the Information Protection Life Cycle and Framework to CCPA

The California Consumer Privacy Act (CCPA) became effective on January 1, 2020. Enforcement is slated to start by July 1, 2020. CCPA is a complex regulation that bears some similarities to the EU GDPR. For more information on how CCPA and GDPR compare, see our webinar. Both regulations deal with how organizations handle PII (Personally Identifiable Information). CCPA intends to empower consumers by giving them the choice to disallow onward sales of their PII by the organizations that hold it. A full discussion of what CCPA entails is out of scope here. In this article, I want to focus on how our Information Protection Life Cycle (IPLC) and Framework can help organizations prepare for CCPA.

What is considered PII under CCPA?

Essentially, anything that can be used to identify individuals or households of California residents. A summarized list (drawn from the text of the law) includes:

  • Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, IP address, email address, account name, SSN, driver’s license number, passport number, or other similar identifiers.
  • Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.
  • Biometric information.
  • Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet Web site, application, or advertisement.
  • Geolocation data.
  • Professional or employment-related information.
  • Education information, defined as information that is not publicly available.
  • Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.

The list of data types that are designated as PII by CCPA is quite extensive.

How does a company or organization that is subject to CCPA go about protecting this information from unauthorized disclosure?

The IPLC offers a place to start. Discovery/classification is the first phase in the IPLC. You have to understand what kinds of information you have in order to know whether you're subject to CCPA (or any other pertinent regulations). As with GDPR, a Data Protection Impact Assessment (DPIA) type exercise is a good first step. Organizations that have, sell, or process California resident PII need to conduct data inventories to discover what kinds of PII they may have. There are automated tools that can greatly improve your chances of finding all such data across disparate systems, from on-premises applications and databases to cloud-hosted repositories and apps. Many of these tools can be quite effective, thanks to the well-known formats of PII. For example, Data Leakage Prevention (DLP) and Data Classification tools have been finding and categorizing data objects such as SSNs, credit card numbers, email addresses, driver's license numbers, etc. for years.
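As a simplified illustration of how such discovery works, a few regular expressions already catch the well-known PII formats mentioned above. Real DLP products use far more robust detection (checksums, context analysis, machine learning); the patterns here are deliberately minimal:

```python
import re

# Minimal pattern-based PII discovery, the core technique DLP and data
# classification tools have used for years. Illustrative patterns only.

PII_PATTERNS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN format
    "ip_addr": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return every match per PII category found in the text."""
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items() if pat.findall(text)}

sample = "Contact jane.doe@example.com, SSN 123-45-6789, from 10.0.0.12."
hits = find_pii(sample)
```

A scanner like this run across file shares and exports of databases gives a first, rough inventory; the next step in the IPLC is attaching classifications to what was found.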

DLP and classification tools generally provide two ways of applying those classifications to data objects:

  • Metadata tagging – adding data about the data to the object itself to signify what type it is and how it should be handled by applications and access control / encryption systems. This method works well for unstructured data objects such as XML, Office documents, PDFs, media files, etc. In some cases, the metadata tags can be digitally signed and encrypted too for additional security and non-repudiation.
  • Database registration – adding database elements (additional tables, or columns and rows) to databases to indicate which rows, columns, or cells constitute certain data types. This is usually needed for applications that have SQL or NoSQL back-ends that contain PII, since metadata tagging will not work. This approach is more cumbersome and may require database access proxies (or API gateways) to mediate access and integrate with centralized attribute-based access control (ABAC) systems.
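A minimal sketch of the first approach, metadata tagging, might look like the following; the tag fields are invented for illustration, and real tools follow their own labeling schemes:

```python
import json

# A sketch of metadata tagging for unstructured objects: classification
# information stored alongside the content so that downstream access
# control or encryption systems can act on it. Field names are illustrative.

def tag_object(content: bytes, classification: str, handling: str) -> dict:
    """Wrap an object with classification metadata."""
    return {
        "metadata": {
            "classification": classification,   # e.g. "PII-CCPA"
            "handling": handling,               # e.g. "encrypt-at-rest"
        },
        "content": content.decode("utf-8"),
    }

tagged = tag_object(b"account statement ...", "PII-CCPA", "encrypt-at-rest")
serialized = json.dumps(tagged)   # ready to store, and optionally to sign
```

In real deployments the tag usually lives inside the file format itself (Office document properties, XMP, etc.) rather than in a wrapper, but the principle is the same: the handling rules travel with the object.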

Thus, we see that the first phase in IPLC and the tool types related to that phase (Discovery/Classification) are the way to begin preparing for CCPA enforcement. For additional information on these kinds of tools and more guidance on CCPA and GDPR, see https://plus.kuppingercole.com/. Also, watch our blogs in the days ahead as we will be publishing more about CCPA and how to prepare.

RPA and AI: Don’t Isolate Your Systems, Synchronize Them

We already hear a lot about artificial intelligence (AI) systems being able to automate repetitive tasks. But AI is a broad term encompassing many very different technologies. What types of solutions are really able to do this?

Robotic Process Automation (RPA) configures software to mimic human actions on a graphic user interface (GUI) to carry out a business process.  For example, an RPA system could open a relevant email, extract information from an attached invoice, and input it in an internal billing system. Although modern RPA solutions are already relying on various AI-powered technologies like image recognition to perform their functions, positioning RPA within the spectrum of AI-powered tools is still somewhat premature: on its own, RPA is basically just an alternative to scripting for non-technical users.
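As a rough illustration of the invoice example above, the toy script below parses an email body and appends the extracted fields to a CSV standing in for the billing system. Real RPA tools drive the GUIs of existing applications rather than parsing text like this; all names and formats here are invented:

```python
import csv
import io
import re

# Toy stand-in for the RPA scenario: extract an invoice number and amount
# from an email body and "post" them to a billing system (here, a CSV).

def extract_invoice(email_body: str) -> dict:
    """Pull the invoice number and total from a known email format."""
    number = re.search(r"Invoice\s+#(\d+)", email_body).group(1)
    amount = re.search(r"Total:\s+\$([\d.]+)", email_body).group(1)
    return {"invoice": number, "amount": amount}

def post_to_billing(record: dict, billing_file) -> None:
    """Append one record to the (illustrative) billing system."""
    writer = csv.DictWriter(billing_file, fieldnames=["invoice", "amount"])
    writer.writerow(record)

email = "Dear customer,\nInvoice #10042 is attached.\nTotal: $199.99\n"
record = extract_invoice(email)
billing = io.StringIO()          # stands in for the real billing back-end
post_to_billing(record, billing)
```

The brittleness is visible even in this sketch: any email that deviates from the expected format breaks the process, which is exactly the RPA limitation discussed below.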

Enterprises that are just beginning to automate prescribed tasks hope to adopt more advanced capabilities like data-based analytics and machine learning, ultimately reaching cognitive decision making; they should, however, realize that existing RPA solutions might not yet be intelligent enough for such aspirations.

Filling in the Gaps

If RPA sounds limited, you are correct; it is not a one-stop shop for intelligent automation. RPA only automates the button clicks of a multi-step process across multiple programs. If you’re under the impression that RPA can deliver end-to-end process automation, pause and reassess. RPA can do a limited, explicitly defined set of tasks well, but faces serious limitations when flexibility is required.

As soon as any deviation from the defined process is needed, RPA simply stops functioning. However, it can be part of a larger business process orchestration that operates from an understanding of what must be done instead of how. RPA delivers some value in isolation, but much more is possible when it is coordinated with other AI systems.

The weaknesses of RPA systems overlap nicely with the potential that machine learning (ML)-based AI can offer. ML is capable of adding flexibility to a process based on data inputs. Solutions are becoming available that learn from each situation – unlike RPA – and produce interchangeable steps, so that the system can assess the type of issue to be solved and build the correct process to handle it from a repository of already-learned steps. This widens the spectrum of actions that an RPA system can take.

Synchronization Adds Value

AI has strengths that overlap with RPA’s weaknesses, such as handling unstructured data. An AI-enabled RPA system can process unstructured data from multiple channels (email, documents, web) and feed the extracted information into later steps of the RPA process. The analytics functionality of ML can also add value to an RPA process, for example by identifying images of a defective product in a customer complaint email and downloading them to the appropriate file. There are aspects that the pairing of RPA and AI does not solve, such as end-to-end process automation or understanding context (at least not yet).

Overall, RPA’s value to a process increases when used in combination with other relevant AI tools.

Proper Patch Management Is Risk-Oriented

With regard to cybersecurity, the year 2020 has kicked off with considerable upheavals. A few days ago, my colleague Warwick wrote about the security problems arising with some of Citrix's products, which can potentially affect any company, from start-ups and SMEs to large corporations and critical infrastructure operators.

Just a few hours later, the NSA and many others reported a vulnerability in the current Windows 10 and Windows Server 2016 and 2019 operating systems that causes them to fail to properly validate certificates that use Elliptic Curve Cryptography (ECC). As a result, an attacker can spoof the authenticity of certificate chains. The effects that can be concealed behind fabricated, supposedly valid signatures are many and varied: for example, they can make unwanted code appear valid, or corrupt trusted communication based on ECC-based X.509 certificates. More information is now available from Microsoft.

Immediate Patching as the default recommendation

What both of these news items have in common is the typical default recommendation: patch immediately once a patch is available, and implement mitigating measures until then. And you can't really argue with that, either. However, it must be executed properly.

If you take a step back from the current, specific events, the patching process becomes evident as a pivotal challenge for cybersecurity management. First and foremost, a comprehensive approach to patch management must exist at all, ideally integrated into comprehensive release management. The high number of systems that remain unpatched long-term, as seen with the 'Heartbleed' vulnerability, shows that this is far from a comprehensively solved problem.

Criticality and number of affected systems as the key parameters

Security patches have high criticality and therefore usually have to be implemented on all affected systems as quickly as possible. This inevitably leads to a conflict of objectives between the speed of reaction (and thus the elimination of a vulnerability) and the necessary validation of the patch for actual problem resolution and possible side effects. A patch that changes mission-critical systems from the status "vulnerable" to the status "unusable" is the worst-case scenario for business continuity and resilience.

The greater the number of affected systems, the greater the risk of installing patches automatically. If patching has to be carried out manually (e.g. on servers) and within maintenance windows, questions about the sequence and criticality of affected systems arise as their number increases. Patches can deeply affect existing functionality and processes, so criticalities and dependencies must be taken into account.
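Such considerations can be made explicit in a simple prioritization sketch. The formula and weights below are deliberately naive assumptions for illustration, not an established standard; real programs would combine CVSS with asset inventories and threat intelligence:

```python
# Risk-oriented patch prioritization sketch: score each pending patch by
# vulnerability severity, system criticality, and exposure, then sort.
# The weighting is an illustrative assumption, not a standard formula.

def patch_priority(cvss: float, criticality: int, affected: int,
                   internet_facing: bool) -> float:
    """Higher score = patch sooner. criticality: 1 (low) to 5 (mission-critical)."""
    exposure = 2.0 if internet_facing else 1.0
    # Dampen the system count so one huge fleet does not dominate everything.
    return cvss * criticality * exposure * min(affected, 1000) ** 0.5

queue = [
    ("CVE-A web gateway", patch_priority(9.8, 5, 40, True)),
    ("CVE-B intranet app", patch_priority(7.5, 2, 300, False)),
    ("CVE-C dev laptop",   patch_priority(5.0, 1, 3, False)),
]
queue.sort(key=lambda item: item[1], reverse=True)   # most urgent first
```

Even this crude model captures the key insight of the section: an Internet-facing, mission-critical system with a severe vulnerability outranks a larger but internal population.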

Modern DevOps scenarios require the patching of systems also in repositories and tool chains, so that newly generated systems meet the current security requirements and existing ones can be patched or replaced appropriately.

Automated patches are indispensable

It is essential that software vendors provide automated (and well-tested, actually working) patches. There are huge differences in speed, timeliness and the problems encountered, no matter how big the vendor. Automated patching is certainly a blessing in many situations in today's security landscape.

The risk assessment weighing the risk of an automated patch against the security risk of an unpatched system in an increasingly hostile Internet has been shifting from 2010 to today (2020). In many cases, the break-even point reached somewhere in that period can be used with a clear conscience as justification for automated patching, together with some basic confidence in the quality of the patches provided.

But simply patching everything automatically and unmonitored can be a fatal default policy. This is especially true for OT systems (operational technology), e.g. on the factory floor: the risk of an automated patch going wrong in such a mission-critical environment may be considered much higher, increasing the desire to control the patching process manually. And even a scheduled update can be a challenge, as maintenance windows require downtime, which must be coordinated within complex production processes.

Individual risk assessments and smart policies within patch management

It's obvious there's no one-size-fits-all approach here. But it is also clear that every company and every organization must develop and implement a comprehensive and thorough strategy for the timely and risk-oriented handling of vulnerabilities through patch management as part of cybersecurity and business continuity.

This includes policies for the immediate risk assessment of vulnerabilities and their subsequent remediation. It also includes the definition and implementation of mitigating measures as long as no patch is available, up to and including the temporary shutdown of a system. Decisions as to whether patches should be installed in the field automatically and largely immediately, which systems require special (i.e. manual) attention, and which patches require special quality assurance depend to a large extent on operational, well-defined risk management. In any case, processes with minimal time delays (hours or a few days, certainly not months) and accompanying organizational or technical “compensating controls” are required.

Once the dust has settled around the current security challenges, some organizations might do well to put a comprehensive review of their patch management policies on their cybersecurity agenda. And it should be kept in mind that a risk assessment is far from being a mere IT exercise, because IT risks are always business risks.

Assessing and managing IT risks as business risks, integrated into an overall risk management exercise, is a challenging task and requires changes in operations and often in the organization itself. This is even more true when it comes to using risk assessments as the foundation for actionable decisions in the daily patching process. The benefits of a reduced overall risk posture and potentially less downtime, however, make this approach worthwhile.

KuppingerCole Analysts provide research and advisory in this and many other areas of cybersecurity and operational resilience. Check out, for example, our “Leadership Brief: Responding to Cyber Incidents – 80209” or, for the bigger picture, the “Advisory Note: GRC Reference Architecture – 72582”. Find out where we can support you in maturing your processes, and don’t hesitate to get in touch with us.

And yes: you should patch your affected systems *very* soon, as Microsoft rates the exploitability of the vulnerability described above as “1 - Exploitation More Likely”. How to apply this patch effectively? Well, assess your specific risks...

Mitigate Citrix Vulnerability in Face of PoC Exploits

Despite a Citrix warning in mid-December of a serious vulnerability in Citrix Application Delivery Controller (ADC) and Citrix Gateway (formerly NetScaler and NetScaler Gateway), thousands of companies have yet to put in place the recommended mitigations.

In the meantime, several proof of concept (PoC) exploits have been published on GitHub, making it extremely easy for attackers to gain access to networks and impersonate authorized users.

Thousands of Citrix systems still vulnerable

Initial estimates put the number of vulnerable systems at 80,000 in 158 countries. Researchers reported on 8 January that scans showed the number was probably around 60,000 and that 67% did not have mitigations enabled, including high value targets in finance, government and healthcare.

Any company that uses either of the affected Citrix products should therefore implement the mitigation measures as soon as possible to reduce the risk of unauthenticated attackers using the PoC exploits to carry out arbitrary code execution on their systems.
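As an illustration of how an administrator might verify on a system they are authorized to test whether the mitigation is in place, here is a minimal sketch. The probe path follows publicly documented checks for this vulnerability (CVE-2019-19781), but the status-code rubric below is a simplifying assumption; always verify against Citrix's own guidance.

```python
import urllib.error
import urllib.request

# Path-traversal probe documented publicly for CVE-2019-19781:
# an unmitigated appliance serves this config file, a mitigated one blocks it.
PROBE_PATH = "/vpn/../vpns/cfg/smb.conf"

def classify(status_code: int) -> str:
    """Interpret the HTTP status of the probe request (hypothetical rubric)."""
    if status_code == 200:
        return "likely vulnerable: mitigation not in place"
    if status_code in (403, 404):
        return "probe blocked: mitigation appears to be active"
    return "inconclusive: inspect the appliance manually"

def check_appliance(base_url: str) -> str:
    """Send the probe to a Citrix ADC/Gateway you are authorized to test."""
    req = urllib.request.Request(base_url.rstrip("/") + PROBE_PATH)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
```

Note that Python's `urllib` sends the request path verbatim, which matters here: many HTTP clients normalize away the `../` segment before sending, which would defeat the check.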

Citrix “strongly urges” affected customers to apply the provided mitigation and recommends that customers upgrade all vulnerable appliances to a fixed version of the appliance firmware when released.

Window of opportunity for attackers

The first security updates are expected to be available on 20 January 2020 for versions 11.1 and 12.0. A fix for versions 12.1 and 13.0 is expected on 27 January, while a fix for version 10.5 is expected only on 31 January.

Given that PoCs have been published and various security research teams have reported evidence of attackers scanning the internet for vulnerable appliances and attempting exploits, IT admins using affected Citrix products should not wait to implement mitigations to reduce the risk of compromise.

Mitigate and patch as soon as possible

When easily exploitable vulnerabilities are announced by suppliers, it is always a good idea to apply recommended mitigations and security updates as soon as they are available. The importance of this is underlined by the impact of attacks like WannaCry and NotPetya due to the failure of affected organizations to apply patches as soon as they were available.

Patching reduces the attack surface by ensuring that vulnerabilities are mitigated as quickly as possible. Many forms of ransomware exploit known vulnerabilities for which patches are available, for example. For more detail see KuppingerCole’s leadership brief: Defending Against Ransomware and advisory note: Understanding and Countering Ransomware.

Other related research includes:
Leadership Brief: Optimizing your Cybersecurity Spending
Leadership Brief: Penetration Testing Done Right
Leadership Brief: Responding to Cyber Incidents

Related blog posts include:
Akamai to Block Magecart-Style Attacks
Microsoft Partnership Enables Security at Firmware Level
API Security in Microservices Architectures

PAM Can Reduce Risk of Compliance Failure but Is Part of a Bigger Picture

The importance of privileged accounts to digital organizations and their appeal to cyber attackers have made Privileged Access Management (PAM) an essential component of an identity and access management portfolio. Quite often, customers will see this purely as a security investment, protecting the company’s crown jewels against theft by organized crime and against fraudulent use by insiders. A growing number of successful cyber-attacks are enabled by attackers gaining access to privileged accounts.

However, that is only part of the story. Organizations also must worry about meeting governance and compliance demands from governments and industry bodies. Central to these are penalties, often quite stringent, that punish organizations that lose data or fail to meet data usage rules. These rules are multiplying; the most recent is the California Consumer Privacy Act (CCPA), which joins GDPR (personal data), PCI-DSS (payment data) and HIPAA (medical data) in affecting organizations across the globe.

Along came the GDPR

In the run-up to GDPR in 2018, alert security and governance managers realised that better control of identity and access management in an organization went some way towards achieving compliance. Further, they realised that PAM could give more granular control of the highly privileged accounts that criminals were now actively targeting, the very accounts that put organizations in danger of falling foul of compliance laws.

Digital transformation ushered in an era of growth in cloud, big data, IoT and containers as organizations sought competitive advantage. This led to an increase in data and access points, and privileged accounts multiplied. Accounts with access to personal and customer data were at particular risk.

Digital transformation brings new challenges

The PAM market in 2020 is set for change as vendors realise that customers need to protect privileged accounts in new environments, such as DevOps or cloud infrastructures, that are part of digital transformation. Increasingly, organizations will grant privileged access on a Just-in-Time (JIT) or One-Time-Only (OTO) basis to reduce reliance on vaults for storing credentials, to simplify session management and to achieve their primary goal: speeding up business processes. However, this acceleration of the privilege process introduces new compliance risks if PAM solutions are not able to secure the new processes.

The good news is that vendors are responding to these demands, with established players introducing new modules for DevOps and JIT deployment in their PAM suites, while smaller start-ups are spotting niches in the market and responding with boutique PAM solutions for more digital environments.

PAM reduces risk but does not guarantee compliance

None of this means that an organization will be fully compliant just because it beefs up its PAM solutions across the board. Done well, this will reduce the risk of data loss through infiltration of privileged accounts by some percentage points, and along the way tick some boxes in every CISO’s favourite security standard, ISO 27001. An organization also needs to harden data centres, improve web security and improve auditing, among other tasks.

More SEs + TEEs in Products = Improved Security

Global Platform announced in 4Q2019 that more than 1 billion TEE (Trusted Execution Environment) compliant devices shipped in 2018, and that is a 50% increase from the previous year. Moreover, 6.2 billion SEs (Secure Elements) were shipped in 2018, bringing the total number of SEs manufactured to over 35 billion since 2010.

This is good news for cybersecurity and identity management. TEEs are commonly found in most Android-based smartphones and tablets. A TEE is the secure area in the processor architecture and OS that isolates programs from the Rich Execution Environment (REE) where most applications execute. Some of the most important TEE characteristics include:

  • All code executing in the TEE has been authenticated
  • The integrity of the TEE and the confidentiality of data within it are assured by isolation, cryptography, and other security mechanisms
  • The TEE is designed to resist known remote and software attacks, as well as some hardware attacks.

See Introduction to Trusted Execution Environments for more information.

A Secure Element (SE) is a tamper-resistant component which is used in a device to provide the security, confidentiality, and multiple application environments required to support various business models. Such a Secure Element may exist in any form factor such as UICC, embedded SE, smartSD, smart microSD, etc. See Introduction to Secure Elements for more information.

Global Platform has functional and security certification programs, administered by independent labs, to ensure that vendor products conform to their standards.

These features make TEEs the ideal place to run critical apps and apps that need high security, such as mobile banking apps, authentication apps, biometric processing apps, mobile anti-malware apps, etc. SEs are the components where PKI keys and certificates, FIDO keys, or biometrics templates that are used for strong or multi-factor authentication apps should be securely stored.

The FIDO Alliance™ has partnered with Global Platform on security specifications. FIDO has three levels of authenticator certification, and using a TEE is required for Level 2 and above. For example:

  • FIDO L2: UAF implemented as a Trusted App running in an uncertified TEE
  • FIDO L2+: FIDO2 using a keystore running in a certified TEE
  • FIDO L3: UAF implemented as a Trusted App running in a certified TEE using SE

See FIDO Authenticator Security Requirements for more details.

KuppingerCole recommends as a best practice that all such apps be built to run in a TEE and store credentials in the SE. This architecture provides the highest security levels, ensuring that unauthorized apps can neither access the stored credentials nor interfere with the operation of the trusted app. The combination also presents a Trusted User Interface (TUI), which prevents other apps from recording or tampering with user input, for example where PIN authentication is involved.

In recent Leadership Compasses, we have asked whether vendor products for mobile and IoT can utilize the TEE, and if key and certificate storage is required, whether vendor products can store those data assets in the SE. To see which vendors use SEs and TEEs, see the following Leadership Compasses:

In addition to mobile devices, Global Platform specifications pertain to IoT devices. IoT device adoption is growing, and there have been a myriad of security concerns due to the generally insecure nature of many types of IoT devices. Global Platform’s IoTopia initiative directly addresses these security concerns as they work to build a comprehensive framework for designing, certifying, deploying and managing IoT devices in a secure way.

KuppingerCole will continue to follow developments by Global Platform and provide insights on how these important standards can help organizations improve their security posture.

The 20-Year Anniversary of Y2K

The great non-event of Y2K happened twenty years ago. Those of us in IT at that time weren’t partying like it was 1999, we were standing by making sure the systems we were responsible for could handle the date change. Fortunately, the hard work of many paid off and the entry into the 21st century was smooth. Many things have changed in IT over the last 20 years, but many things are pretty similar.

What has changed?

  • Pagers disappeared (that’s a good thing)
  • Cell phones became smartphones
  • IoT devices began to proliferate
  • The cloud appeared and became a dominant computing architecture
  • CPU power and storage have vastly increased
  • Big data and data analytics
  • More computing power has led to the rise of Machine Learning in certain areas
  • Cybersecurity, identity management, and privacy grew into discrete disciplines to meet the exponentially growing threats
  • Many new domain- and geographic-specific regulations
  • Attacker TTPs have changed and there are many new kinds of security tools to manage
  • Businesses and governments are on the path to full digital transformation

What stayed (relatively) the same?

  • Patching is still important, now for security rather than Y2K functionality
  • Identity as an attack and fraud vector
  • Malware has evolved dramatically into many forms, and is a persistent and growing threat
  • IT is still a growing and exciting field, especially in the areas of cybersecurity and identity management
  • There aren’t enough people to do all the work

What will we be working on in the years ahead?

  • Securing operational tech and IoT
  • Using and securing AI & ML
  • Blockchain
  • Cybersecurity, Identity, and Privacy

What are the two constants we have to live with in IT?

  • Change
  • Complexity

Though we may not have big industry-wide dates like Y2K to work toward, cybersecurity, identity, and privacy challenges will always need to be addressed. And with methodologies like Agile, DevOps, and SecDevOps accelerating the pace of change, these challenges will keep coming faster.

Check KC Plus for regular updates on our research into these ever-changing technologies, and please join us for EIC (The European Identity and Cloud Conference) in Munich in May 2020.
