KuppingerCole Blog

RPA and AI: Don’t Isolate Your Systems, Synchronize Them

We already hear a lot about artificial intelligence (AI) systems being able to automate repetitive tasks. But AI is such a broad term, encompassing many very different technologies. Which types of solutions can actually deliver this automation?

Robotic Process Automation (RPA) configures software to mimic human actions on a graphical user interface (GUI) to carry out a business process. For example, an RPA system could open a relevant email, extract information from an attached invoice, and input it into an internal billing system. Although modern RPA solutions already rely on various AI-powered technologies like image recognition to perform their functions, positioning RPA within the spectrum of AI-powered tools is still somewhat premature: on its own, RPA is basically just an alternative to scripting for non-technical users.
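As an illustration, the extract-and-enter step of such a bot can be sketched in a few lines of Python. The field names, regex patterns, and billing function below are hypothetical; a real RPA tool would drive the GUI of the billing application rather than call a function:

```python
import re

def extract_invoice_fields(email_body):
    """Pull the invoice number and total out of unstructured email text."""
    number = re.search(r"Invoice\s*#?\s*(\d+)", email_body)
    amount = re.search(r"Total:\s*\$?(\d+(?:\.\d+)?)", email_body)
    return {
        "invoice_number": number.group(1) if number else None,
        "amount": float(amount.group(1)) if amount else None,
    }

def enter_into_billing_system(record):
    """Stand-in for the GUI-driving step an RPA bot would actually perform."""
    return f"Created billing entry {record['invoice_number']} for ${record['amount']:.2f}"

email = "Please find attached Invoice #4711. Total: $129.50, due in 30 days."
print(enter_into_billing_system(extract_invoice_fields(email)))
# Created billing entry 4711 for $129.50
```

Note how brittle this is: any invoice that phrases its total differently breaks the bot, which is exactly the inflexibility discussed below.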

Enterprises that are just beginning to automate prescribed tasks often hope to adopt more advanced capabilities – data-driven analytics and machine learning, ending with cognitive decision making. They should realize, however, that existing RPA solutions might not yet be intelligent enough for such aspirations.

Filling in the Gaps

If RPA sounds limited, you are correct: it is not a one-stop shop for intelligent automation. RPA only automates the button clicks of a multi-step process across multiple programs. If you’re under the impression that RPA can deliver end-to-end process automation, pause and reassess. RPA can perform a limited and explicitly defined set of tasks well, but it faces serious limitations when flexibility is required.

As soon as any deviation from the defined process is needed, RPA ceases to function. However, it can be part of a larger business process orchestration that operates from an understanding of what must be done instead of how. RPA delivers some value in isolation, but much more is possible when it is coordinated with other AI systems.

The weaknesses of RPA systems overlap nicely with the potential that machine learning (ML)-based AI can offer. ML is capable of adding flexibility to a process based on data inputs. Solutions are becoming available that – unlike RPA – learn from each situation and produce interchangeable steps, so that the system can assess the type of issue to be solved and assemble the correct process from a repository of already learned steps. This widens the spectrum of actions that an RPA system can take.
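The repository-of-steps idea can be sketched as a toy orchestrator. All names here are illustrative, and the "classifier" is a simple keyword rule standing in for a trained ML model:

```python
# A toy orchestrator: a classifier picks the issue type, then the process
# is assembled from a repository of previously learned, interchangeable steps.
STEP_REPOSITORY = {
    "read_email":      lambda ctx: {**ctx, "text": ctx["raw"].lower()},
    "extract_refund":  lambda ctx: {**ctx, "action": "refund"},
    "extract_address": lambda ctx: {**ctx, "action": "update_address"},
}

PROCESS_TEMPLATES = {
    "refund_request": ["read_email", "extract_refund"],
    "address_change": ["read_email", "extract_address"],
}

def classify(raw):
    """Stand-in for an ML classifier; here a trivial keyword rule."""
    return "refund_request" if "refund" in raw.lower() else "address_change"

def run(raw):
    """Assess the issue, then build and run the matching process."""
    ctx = {"raw": raw}
    for step in PROCESS_TEMPLATES[classify(raw)]:
        ctx = STEP_REPOSITORY[step](ctx)
    return ctx["action"]

print(run("I would like a refund for order 1234"))  # refund
```

The point is the separation: the classifier decides *what* must be done, while the step repository supplies *how*, which is what a pure RPA script hard-codes.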

Synchronization Adds Value

AI has strengths that compensate for RPA’s weaknesses, such as handling unstructured data. An AI-enabled RPA system can process unstructured data from multiple channels (email, documents, the web) and feed the extracted information into later stages of the RPA process. The analytics functionality of ML can also add value to an RPA process – for example, identifying images of a defective product in a customer complaint email and downloading them to the appropriate file. There are still aspects that the pairing of RPA and AI does not solve, such as end-to-end process automation or understanding context (at least not yet).

Overall, RPA’s value to a process increases when used in combination with other relevant AI tools.

Proper Patch Management Is Risk-Oriented

With regard to cybersecurity, the year 2020 kicks off with considerable upheavals. A few days ago, my colleague Warwick wrote about the security problems that arise with some of Citrix's products, which can potentially affect any company, from start-ups and SMEs to large corporations and critical infrastructure operators.

Just a few hours later, the NSA and many others reported a vulnerability in the current Windows 10, Windows Server 2016, and Windows Server 2019 operating systems that causes them to fail to properly validate certificates that use Elliptic Curve Cryptography (ECC). As a result, an attacker can spoof the authenticity of certificate chains. The effects that can be concealed behind the fabrication of supposedly valid signatures are many and varied: for example, attackers can make unwanted code appear validly signed, or corrupt trustworthy communication based on ECC-based X.509 certificates. More information is now available from Microsoft.

Immediate patching as the default recommendation

What both of these news items have in common is the typical default recommendation: patch immediately once a patch is available, and implement mitigating measures until then. And you can't really argue with that. However, it must be executed properly.

If you take a step back from the current, specific events, the patching process becomes evident as a pivotal challenge for cybersecurity management. First and foremost, a comprehensive approach to patch management must exist at all, ideally integrated into a comprehensive release management system. The high number of systems left unpatched for long periods, as during the Heartbleed vulnerability, shows that this is far from a comprehensively solved problem.

Criticality and number of affected systems as the key parameters

Security patches have a high criticality. Therefore, they usually have to be implemented on all affected systems as quickly as possible. This inevitably leads to a conflict of objectives between the speed of reaction (and thus the elimination of a vulnerability) and the necessary validation of the patch for actual problem resolution and possible side effects. A patch that changes mission-critical systems from the status "vulnerable" to the status "unusable" is the "worst case scenario" for business continuity and resilience.

The greater the number of affected systems, the greater the risk of automatically installing patches. If patching has to be carried out manually (e.g. on servers) and within maintenance windows, questions about a strategy for the sequence and criticality of those systems arise as their number increases. Patches can deeply affect existing functionality and processes, so criticalities and dependencies must be taken into account.
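To make the sequencing idea concrete, a patch rollout queue might be ordered by a simple risk score. The criticality-times-exposure product below is a hypothetical heuristic; a real strategy would also weigh dependencies and maintenance windows:

```python
def prioritize(systems):
    """Order systems for manual patching: highest combined business
    criticality and exposure first; ties broken alphabetically."""
    return sorted(systems, key=lambda s: (-s["criticality"] * s["exposure"], s["name"]))

# Illustrative fleet with 1-5 scores for criticality and internet exposure.
fleet = [
    {"name": "hr-portal", "criticality": 2, "exposure": 3},
    {"name": "erp-core",  "criticality": 5, "exposure": 2},
    {"name": "edge-gw",   "criticality": 4, "exposure": 5},
]
for s in prioritize(fleet):
    print(s["name"])
# edge-gw, then erp-core, then hr-portal
```

Even such a crude score makes the trade-off explicit: the internet-facing gateway is patched before the more business-critical but less exposed ERP system.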

Modern DevOps scenarios require that systems in repositories and tool chains be patched as well, so that newly generated systems meet current security requirements and existing ones can be patched or replaced appropriately.

Automated patches are indispensable

It is essential that software vendors provide automated (and well-tested and actually working) patches. There are huge differences when it comes to speed, timeliness and potential problems encountered, no matter how big the vendor. Automated patching is certainly a blessing in many situations in today's security landscape.

The risk assessment weighing automated patch risk against the security risk of an unpatched system in an increasingly hostile Internet has been shifting from 2010 to today (2020). In many cases, the break-even point reached somewhere in this period can be used with a clear conscience as justification for automated patching, together with some basic confidence in the quality of the patches provided.
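That break-even can be made tangible with a back-of-the-envelope expected-loss comparison. All figures below are invented for illustration; real numbers would come from your own incident history and asset valuation:

```python
def expected_loss(p_event, cost):
    """Annualized expected loss: probability of the event times its cost."""
    return p_event * cost

# Hypothetical annual figures for one class of server:
risk_unpatched = expected_loss(p_event=0.30, cost=500_000)  # breach via a known, unpatched vuln
risk_autopatch = expected_loss(p_event=0.02, cost=200_000)  # a bad automated patch causes an outage

print(risk_unpatched, risk_autopatch)
# When the unpatched expected loss exceeds the auto-patch expected loss,
# automated patching is the rational default for that system class.
```

For OT or mission-critical systems, the outage cost term can dominate so strongly that the comparison flips, which is exactly the caveat below.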

But simply patching everything automatically and unmonitored can be a fatal default policy. This is especially true for OT systems (operational technology), e.g. on the factory floor: the risk of automated patches going wrong in such a mission-critical environment might be considered much higher, increasing the desire to control the patching process manually. And even a scheduled update can be a challenge, as maintenance windows require downtime, which must be coordinated with complex production processes.

Individual risk assessments and smart policies within patch management

It's obvious there's no one-size-fits-all approach here. But it is also clear that every company and every organization must develop and implement a comprehensive and thorough strategy for the timely and risk-oriented handling of vulnerabilities through patch management as part of cybersecurity and business continuity.

This includes policies for the immediate risk assessment of vulnerabilities and their subsequent remediation. It also includes the definition and implementation of mitigating measures for as long as no patch is available, up to and including the temporary shutdown of a system. Decision processes as to which patches should be installed in the field automatically and more or less immediately, which systems require special (i.e. manual) attention, and which patches require special quality assurance depend to a large extent on operational, well-defined risk management. Even then, processes with minimal time delays (hours or a few days, certainly not months) and accompanying "compensatory controls" of an organizational or technical nature are required.

Once the dust has settled around the current security challenges, some organizations might do well to put a comprehensive review of their patch management policies on their cybersecurity agenda. And it should be kept in mind that a risk assessment is far from being a mere IT exercise, because IT risks are always business risks.

Assessing and managing IT risks as business risks, integrated into overall risk management, is a challenging task and requires changes in operations and often in the organization itself. This is even more true when risk assessments serve as the foundation for actionable decisions in the daily patching process. The benefits of a reduced overall risk posture, and potentially less downtime, make this approach worthwhile.

KuppingerCole Analysts provide research and advisory in this and many other areas of cybersecurity and operational resilience. Check out e.g. our “Leadership Brief: Responding to Cyber Incidents – 80209” or, for the bigger picture, the “Advisory Note: GRC Reference Architecture – 72582”. Find out where we can support you in maturing your processes. Don’t hesitate to get in touch with us.

And yes: you should patch your affected systems *very* soon, as Microsoft rates the exploitability of the vulnerability described above as “1 - Exploitation More Likely”. How to apply this patch effectively? Well, assess your specific risks...

Mitigate Citrix Vulnerability in Face of PoC Exploits

Despite a Citrix warning in mid-December of a serious vulnerability in Citrix Application Delivery Controller (ADC) and Citrix Gateway (formerly NetScaler and NetScaler Gateway), thousands of companies have yet to put in place the recommended mitigations.

In the meantime, several proof of concept (PoC) exploits have been published on GitHub, making it extremely easy for attackers to gain access to networks and impersonate authorized users.

Thousands of Citrix systems still vulnerable

Initial estimates put the number of vulnerable systems at 80,000 in 158 countries. Researchers reported on 8 January that scans showed the number was probably around 60,000 and that 67% did not have mitigations enabled, including high value targets in finance, government and healthcare.

Any company that uses either of the affected Citrix products should therefore implement the mitigation measures as soon as possible to reduce the risk of unauthenticated attackers using the PoC exploits to carry out arbitrary code execution on their systems.

Citrix “strongly urges” affected customers to apply the provided mitigation and recommends that customers upgrade all vulnerable appliances to a fixed version of the appliance firmware when released.

Window of opportunity for attackers

The first security updates are expected to be available on 20 January 2020 for versions 11.1 and 12.0. A fix for versions 12.1 and 13.0 is expected on 27 January, while a fix for version 10.5 is expected only on 31 January.

In the light of the fact that PoCs have been published and various security research teams have reported evidence that attackers are scanning the internet for vulnerable appliances and attempting exploits, IT admins using affected Citrix products should not wait to implement mitigations to reduce the risk of compromise.

Mitigate and patch as soon as possible

When easily exploitable vulnerabilities are announced by suppliers, it is always a good idea to apply recommended mitigations and security updates as soon as they are available. The importance of this is underlined by the impact of attacks like WannaCry and NotPetya due to the failure of affected organizations to apply patches as soon as they were available.

Patching reduces the attack surface by ensuring that vulnerabilities are mitigated as quickly as possible. Many forms of ransomware exploit known vulnerabilities for which patches are available, for example. For more detail see KuppingerCole’s leadership brief: Defending Against Ransomware and advisory note: Understanding and Countering Ransomware.

Other related research includes:
Leadership Brief: Optimizing your Cybersecurity Spending
Leadership Brief: Penetration Testing Done Right
Leadership Brief: Responding to Cyber Incidents

Related blog posts include:
Akamai to Block Magecart-Style Attacks
Microsoft Partnership Enables Security at Firmware Level
API Security in Microservices Architectures

PAM Can Reduce Risk of Compliance Failure but Is Part of a Bigger Picture

The importance of privileged accounts to digital organizations and their appeal to cyber attackers have made Privileged Access Management (PAM) an essential component of an identity and access management portfolio. Quite often, customers will see this purely as a security investment, protecting the company’s crown jewels against theft by organized crime and against fraudulent use by insiders. More and more successful cyber-attacks are now enabled by attackers gaining access to privileged accounts.

However, that is only part of the story. Organizations also must worry about meeting governance and compliance demands from governments and industry bodies. Central to these are penalties, often quite stringent, that punish organizations that lose data or fail to meet data usage rules. These rules are multiplying; the most recent is the California Consumer Privacy Act (CCPA), which joins GDPR (personal data), PCI-DSS (payment data) and HIPAA (medical data) in affecting organizations across the globe.

Along came the GDPR

In the run-up to GDPR in 2018, alert security and governance managers realised that better control of identity and access management went some way toward achieving compliance. Further, there was a realisation that PAM could give more granular control of the highly privileged accounts that criminals were now actively targeting – accounts that put organizations in danger of falling foul of compliance laws.

Digital transformation ushered in an era of growth in cloud, big data, IoT, and containers as organizations sought competitive advantage. This led to an increase in data and access points, and privileged accounts multiplied. Accounts with access to personal and customer data were at particular risk.

Digital transformation brings new challenges

The PAM market in 2020 is set for change as vendors realise that customers need to protect privileged accounts in the new environments that are part of digital transformation, such as DevOps and cloud infrastructures. Increasingly, organizations will grant privileged access on a Just in Time (JIT) or One Time Only (OTO) basis to reduce reliance on vaults for storing credentials, simplify session management and – their primary goal – speed up business processes. However, this acceleration of the privilege process introduces new compliance risks if PAM solutions are not able to secure the new processes.
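A minimal sketch of the JIT idea: privileged credentials exist only for a bounded window instead of sitting permanently in a vault. Class, user, and role names below are hypothetical:

```python
import time

class JITGrant:
    """A time-boxed privileged access grant. Once the window closes,
    the grant is worthless - there is no standing credential to steal."""
    def __init__(self, user, role, ttl_seconds, now=None):
        self.user, self.role = user, role
        self.expires = (now if now is not None else time.time()) + ttl_seconds

    def is_valid(self, now=None):
        return (now if now is not None else time.time()) < self.expires

# Grant "alice" the db-admin role for 15 minutes (timestamps fixed for clarity).
grant = JITGrant("alice", "db-admin", ttl_seconds=900, now=0)
print(grant.is_valid(now=600))   # True: within the 15-minute window
print(grant.is_valid(now=1000))  # False: the grant has expired
```

A production PAM solution would of course add approval workflows, session recording, and audit logging around this core expiry check.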

The good news is that vendors are responding to these demands: established players are introducing new modules for DevOps and JIT deployment in their PAM suites, while smaller start-ups are spotting niches in the market and responding with boutique PAM solutions for more digital environments.

PAM reduces risk but does not guarantee compliance

None of this means that an organization will be fully compliant just because it beefs up its PAM solutions across the board. Done well, PAM will reduce the risk of data loss through infiltration of privileged accounts by some percentage points, and along the way tick some boxes in every CISO’s favourite security standard, ISO 27001. But an organization also needs to harden data centres, improve web security and improve auditing – among other tasks.

More SEs + TEEs in Products = Improved Security

Global Platform announced in Q4 2019 that more than 1 billion TEE (Trusted Execution Environment) compliant devices shipped in 2018, a 50% increase over the previous year. Moreover, 6.2 billion SEs (Secure Elements) were shipped in 2018, bringing the total number of SEs manufactured since 2010 to over 35 billion.

This is good news for cybersecurity and identity management. TEEs are found in most Android-based smartphones and tablets. A TEE is a secure area in the processor architecture and OS that isolates programs from the Rich Execution Environment (REE), where most applications execute. Some of the most important TEE characteristics include:

  • All code executing in the TEE has been authenticated
  • Integrity of the TEE and confidentiality of the data therein are assured by isolation, cryptography, and other security mechanisms
  • The TEE is designed to resist known remote and software attacks, as well as some hardware attacks

See Introduction to Trusted Execution Environments for more information.

A Secure Element (SE) is a tamper-resistant component which is used in a device to provide the security, confidentiality, and multiple application environments required to support various business models. Such a Secure Element may exist in any form factor such as UICC, embedded SE, smartSD, smart microSD, etc. See Introduction to Secure Elements for more information.

Global Platform has functional and security certification programs, administered by independent labs, to ensure that vendor products conform to their standards.

These features make TEEs the ideal place to run critical apps and apps that need high security, such as mobile banking apps, authentication apps, biometric processing apps, mobile anti-malware apps, etc. SEs are the components where PKI keys and certificates, FIDO keys, or biometrics templates that are used for strong or multi-factor authentication apps should be securely stored.

The FIDO Alliance™ has partnered with Global Platform on security specifications. FIDO has three levels of authenticator certification, and using a TEE is required for Level 2 and above. For example:

  • FIDO L2: UAF implemented as a Trusted App running in an uncertified TEE
  • FIDO L2+: FIDO2 using a keystore running in a certified TEE
  • FIDO L3: UAF implemented as a Trusted App running in a certified TEE using SE

See FIDO Authenticator Security Requirements for more details.

KuppingerCole recommends as a best practice that all such apps be built to run in a TEE and store credentials in the SE. This architecture provides the highest security levels, ensuring that unauthorized apps can neither access the stored credentials nor interfere with the operation of the trusted app. The combination also presents a Trusted User Interface (TUI), which prevents other apps from recording or tampering with user input, for example where PIN authentication is included.

In recent Leadership Compasses, we have asked whether vendor products for mobile and IoT can utilize the TEE, and if key and certificate storage is required, whether vendor products can store those data assets in the SE. To see which vendors use SEs and TEEs, see the following Leadership Compasses:

In addition to mobile devices, Global Platform specifications pertain to IoT devices. IoT device adoption is growing, and there have been myriad security concerns due to the generally insecure nature of many types of IoT devices. Global Platform’s IoTopia initiative directly addresses these concerns, working to build a comprehensive framework for designing, certifying, deploying and managing IoT devices in a secure way.

KuppingerCole will continue to follow developments by Global Platform and provide insights on how these important standards can help organizations improve their security posture.

The 20-Year Anniversary of Y2K

The great non-event of Y2K happened twenty years ago. Those of us in IT at the time weren’t partying like it was 1999; we were standing by, making sure the systems we were responsible for could handle the date change. Fortunately, the hard work of many paid off, and the entry into the 21st century was smooth. Many things have changed in IT over the last 20 years, but many things remain pretty similar.

What has changed?

  • Pagers disappeared (that’s a good thing)
  • Cell phones became smartphones
  • IoT devices began to proliferate
  • The cloud appeared and became a dominant computing architecture
  • CPU power and storage have vastly increased
  • Big data and data analytics became mainstream
  • More computing power has led to the rise of Machine Learning in certain areas
  • Cybersecurity, identity management, and privacy grew into discrete disciplines to meet the exponentially growing threats
  • Many new domain- and geographic-specific regulations
  • Attacker TTPs have changed and there are many new kinds of security tools to manage
  • Businesses and governments are on the path to full digital transformation

What stayed (relatively) the same?

  • Patching is important; for security rather than Y2K functionality
  • Identity as an attack and fraud vector
  • Malware remains a persistent and growing threat, even as it has evolved dramatically into many forms
  • IT is still a growing and exciting field, especially in the areas of cybersecurity and identity management
  • There aren’t enough people to do all the work

What will we be working on in the years ahead?

  • Securing operational tech and IoT
  • Using and securing AI & ML
  • Blockchain
  • Cybersecurity, Identity, and Privacy

What are the two constants we have to live with in IT?

  • Change
  • Complexity

Though we may not have big, industry-wide deadlines like Y2K to work toward, cybersecurity, identity, and privacy challenges will always need to be addressed. And with methodologies like Agile, DevOps, and SecDevOps accelerating the pace of change, those challenges will keep arriving faster.

Check KC Plus for regular updates on our research into these ever-changing technologies, and please join us for EIC (The European Identity and Cloud Conference) in Munich in May 2020.

Cisco Promises Future of the Internet – But Can They Deliver?

On December 11th I attended an analyst webcast from Cisco entitled “The Future of the Internet”, at which Cisco unveiled its plans for its next generation of networking products. While this was interesting, it did not meet my expectations for a deeper vision of the future of the internet.

The timing is interesting because 50 years ago, in 1969, several events took place that were seminal to the internet. Many people will remember Apollo 11 and the moon landing; while this was an enormous achievement in its own right, it was the space race that led to the miniaturization and commercialization of the silicon-based technology upon which the internet is built.

The birth of the Internet

1969 also marked the birth of Unix. Two Bell Labs computer scientists, Ken Thompson and Dennis Ritchie, had been working on an experimental time-sharing operating system called Multics as part of a joint research group with General Electric and MIT. They decided to take the best ideas from Multics and implement them on a smaller scale – on a PDP-7 minicomputer at Bell Labs. That marked the birth of Unix, the ancestor of Linux – the most widely deployed computing platform in the cloud – as well as macOS, iOS and Android.

October 29th, 1969 was another important date: it marked the first computer-to-computer communication over a packet-switched network, ARPANET. In 1958, US President Dwight D. Eisenhower had formed the Advanced Research Projects Agency (ARPA), bringing together some of the best scientific minds in the US with the aim of helping American military technology stay ahead of its adversaries. Among ARPA’s projects was a remit to test the feasibility of a large-scale computer network.

In July 1961, Leonard Kleinrock at MIT published the first paper on packet switching theory, followed by the first book on the subject in 1964. Lawrence Roberts, the first person to connect two computers, was responsible for developing computer networks at ARPA, working with Kleinrock. When the first packet-switching network was deployed in 1969, Kleinrock successfully used it to send messages to another site, and the ARPA Network, or ARPANET – the forerunner of the internet – was born.

These were the foundational ideas and technologies that led to the largest man-made artifact – the internet.

Cisco announces new products based on single multi-purpose ASIC

The internet today depends upon the technology provided by a range of vendors including Cisco. During the webcast, Chuck Robbins, CEO of Cisco, made the comment that he believed that 90% of the traffic on the internet flows through silicon created by Eyal Dagan, SVP silicon engineering at Cisco. Obviously, this technology is an important part of the internet infrastructure. So, what did Cisco announce that is new?

The first and most significant announcement was that Cisco has created the first single multi-purpose ASIC (computer chip) that can handle both routing and switching efficiently, with high performance and with lower power consumption. According to Cisco, this is a first, and in the past, it was generally thought that the requirements of routing and switching were so different that it was not possible for a single chip to meet all the requirements for both these tasks. Why does this matter?

The current generation of network appliances is built with different chips for different jobs, which means that multiple software stacks are needed to manage the different hardware combinations. A single chip means a single software stack that is smaller, more efficient and easier to maintain. Cisco's new network operating system, IOS XR7, implements this and is claimed to be simpler, to support modern APIs, and to be more trustworthy, with trustworthiness ensured through a single hardware root of trust.

One of the problems network operators face when deploying new hardware is testing to ensure that service is maintained after the change. Cisco also announced a new cloud service that helps with this problem: the end user uploads their current configuration, and the service generates tests for the new platform, greatly speeding up testing and providing more assurance that the service can be cut over without problems.

Cisco delivers infrastructure but is that enough?

It is easy to forget, when you click on your phone, just how many technical problems had to be overcome to provide the seamless connectivity that we all now take for granted. The vision, creativity, and investment by Cisco over the five years of this development are to be applauded. It is excellent news that the next generation of infrastructure components that underlie the internet will provide more bandwidth, better connectivity and use less energy. However, this infrastructure does not define the future of the internet.

The internet has created enormous opportunities but has also brought significant challenges. It has provided opportunities for new kinds of business but also enabled the scourge of cybercrime by providing an unpoliced hiding place for cybercriminals. It has opened windows across the world to allow people from all cultures to connect with each other but has also provided a platform for cyberbullying, fake news and political interference. It is enabling new technologies such as artificial intelligence which have the potential to do great good but also raise many ethical concerns.

Mankind has created the internet, with the help of technology from companies like Cisco. The internet is available to mankind, but can mankind master its creation?

Regulatory Compliance a Potential Driver of Cloud Migration

The newly announced AWS offerings of Access Analyzer, Amazon Detective and AWS Nitro Enclaves, discussed in my last blog post, further round out AWS’s security services and tools. These include Amazon GuardDuty, which continuously monitors for threats to accounts and workloads; Amazon Inspector, which assesses application hosts for vulnerabilities and deviations from best practices; Amazon Macie, which uses machine learning to discover, classify, and protect sensitive data; and AWS Security Hub, a unified security and compliance center.

These new security capabilities come hard on the heels of other security-related innovations announced ahead of re:Invent. These include a feature added to AWS IAM that helps organizations identify and remove unused roles in AWS accounts by reporting the timestamp when role credentials were last used to make an AWS request; a native feature called Amazon S3 Block Public Access that helps customers use core services more securely; and the ability to connect Azure Active Directory to AWS Single Sign-On (SSO) once, manage permissions to AWS centrally in AWS SSO, and enable users to sign in with Azure AD to access their assigned AWS accounts and applications.
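The unused-role check reduces to a simple filter over last-used timestamps. This sketch assumes data shaped like the last-used information IAM reports for roles; the role names and 90-day threshold are illustrative:

```python
from datetime import datetime, timedelta

def stale_roles(roles, now, max_idle_days=90):
    """Return names of roles unused for longer than max_idle_days.
    A missing 'last_used' means the role has never been used."""
    cutoff = now - timedelta(days=max_idle_days)
    return [r["name"] for r in roles
            if r.get("last_used") is None or r["last_used"] < cutoff]

now = datetime(2020, 1, 15)
roles = [
    {"name": "ci-deploy",  "last_used": datetime(2020, 1, 10)},
    {"name": "old-backup", "last_used": datetime(2019, 6, 1)},
    {"name": "never-used"},
]
print(stale_roles(roles, now))  # ['old-backup', 'never-used']
```

In practice the role list and timestamps would be fetched via the AWS APIs; the value of the feature is precisely that AWS now surfaces this data so such filtering becomes trivial.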

Increasing focus on supporting regulatory frameworks

Further underlining the focus by AWS on security and compliance, AWS recently announced 12 new partner integrations for its Security Hub service, available in Europe since June 2019, and plans to announce a set of new features in early 2020 focusing on support for all major regulatory frameworks.

By making it easier for organizations using web services to comply with regulations, AWS once again appears to be shoring up the security reputation of cloud-based services as well as working to make security and compliance prime drivers of cloud migration.

While Security Hub integrates with three third-party Managed Security Services Providers (MSSPs) – Alert Logic, Armor and Rackspace – and has more than 25 security partner integrations that enable the sharing of threat intelligence, most of the tools announced at re:Invent are designed to work with other AWS services to protect AWS workloads.

Reality check: IT environments are typically hybrid and multi-cloud

The reality is that most organizations using cloud services have a hybrid environment and are working with multiple cloud providers, which is something AWS should consider supporting with future security-related services.

In the meantime, organizations with a hybrid, multi-cloud IT environment may want to consider other solutions. At the very least, they should evaluate which set of solutions helps them across their complete IT environment, on premises and across various clouds. Having strong security tools for AWS, for Microsoft Azure, for other clouds, and for on-premises environments helps on each of those platforms, but it lacks support for comprehensive security and integrated incident management spanning the whole IT environment.

KuppingerCole Advisory Services can help in streamlining the security tools portfolio with our “Portfolio Compass” methodology, but also in defining adequate security architectures.

If you want more information about hybrid cloud security, check the Architecture Blueprint "Hybrid Cloud Security" and make sure you visit our 14th European Identity & Cloud Conference. Prime Discount expires by the end of the year, so get your ticket now.

Breaches and Regulations Drive Better Security, AWS re:Invent Shows

The high proportion of cyber attacks enabled by poor security practices has long raised questions about what it will take to bring about any significant change. Finally, however, there are indications that the threat of substantial fines for contravening the growing number of data protection regulations and negative media exposure associated with breaches are having the desired effect.

High profile data breaches driving industry improvements

The positive effect of high-profile breaches was evident at the Amazon Web Services (AWS) re:Invent conference in Las Vegas, where the cloud services firm made several security-related announcements that were undoubtedly expedited, if not inspired, by the March 2019 Capital One customer data breach. That incident was a textbook example of a breach enabled by a cloud services customer not meeting its obligations under the shared responsibility model, which states that organizations are responsible for anything they run in the cloud.

While AWS was not compromised and the breach was traced to a misconfiguration of a Web Application Firewall (WAF) rather than to the underlying cloud infrastructure, AWS has an interest in helping its customers avoid breaches that inevitably lead to concerns about cloud security.

It is therefore unsurprising that AWS has introduced Access Analyzer, an Identity and Access Management (IAM) capability for Amazon S3 (Simple Storage Service), to make it easy for customer organizations to review access policies and audit them for unintended access. Users of these services are less likely to suffer data breaches that reflect badly on all companies involved and on the cloud services industry in general, something AWS is obviously keen to avoid.

Guarding against another Capital One type data breach

Access Analyzer complements preventative controls, such as Amazon S3 Block Public Access, which help protect against risks that stem from policy misconfiguration, widely viewed as the single biggest security risk in the context of cloud services. Access Analyzer provides a single view across all access policies to determine whether any have been misconfigured to allow unintended public or cross-account access, which would have helped prevent the Capital One breach.
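To illustrate the kind of misconfiguration at issue, consider this minimal sketch (a hypothetical illustration, not the actual Access Analyzer implementation, which uses automated reasoning over all access paths rather than pattern matching): auditing an S3 bucket policy document for statements that grant access to everyone.

```python
import json

def find_public_statements(bucket_policy: str):
    """Return policy statements that grant access to any principal.

    A crude stand-in for the kind of misconfiguration check that
    Access Analyzer automates at scale.
    """
    policy = json.loads(bucket_policy)
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            flagged.append(stmt)
    return flagged

# Example: a bucket policy that unintentionally allows public reads.
policy_doc = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})
print(len(find_public_statements(policy_doc)))  # flags the public statement
```

The point of the real service is that it finds such unintended access continuously and across every policy, not just in one document handed to a script.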

Technically speaking, Access Analyzer uses a form of mathematical analysis called automated reasoning, which applies logic and mathematical inference to determine all possible access paths allowed by a resource policy to identify any violations of security and governance best practice, including unintended access.

Importantly, Access Analyzer continuously monitors policies for changes, meaning AWS customers no longer need to rely on intermittent manual checks to identify issues as policies are added or updated. It is also interesting to note that Access Analyzer has been provided to S3 customers at no additional cost, unlike most of the other security innovations, which represent new revenue streams for AWS.

On the security front, AWS also announced the Amazon Detective security service, currently available in preview, which is designed to make it easy for customers to conduct faster and more efficient investigations into security issues across their workloads.

In effect, Amazon Detective helps security teams conduct faster and more effective investigations by automatically analyzing and organizing data from AWS CloudTrail and Amazon Virtual Private Cloud (VPC) Flow Logs into a graph model that summarizes resource behaviors and interactions across a customer’s AWS environment.
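The idea of such a behavior graph can be illustrated with a toy example (a hypothetical sketch, not Amazon Detective's actual implementation): aggregating VPC Flow Log records, whose default version-2 format carries the source and destination addresses in the fourth and fifth fields, into a simple interaction graph.

```python
from collections import defaultdict

def build_interaction_graph(flow_log_lines):
    """Aggregate flow log records into a source -> destination graph.

    Each edge counts how many records were observed between a pair of
    addresses -- a much-simplified version of the behavior graph that
    Amazon Detective builds from CloudTrail and VPC Flow Logs.
    """
    graph = defaultdict(lambda: defaultdict(int))
    for line in flow_log_lines:
        fields = line.split()
        # Default (version 2) flow log format: srcaddr and dstaddr
        # are the 4th and 5th space-separated fields.
        src, dst = fields[3], fields[4]
        graph[src][dst] += 1
    return graph

records = [
    "2 123456789012 eni-abc 10.0.0.5 10.0.1.9 443 49152 6 10 840 1 60 ACCEPT OK",
    "2 123456789012 eni-abc 10.0.0.5 10.0.1.9 443 49153 6 12 990 1 60 ACCEPT OK",
    "2 123456789012 eni-def 10.0.2.7 10.0.0.5 22 50000 6 3 300 1 60 REJECT OK",
]
graph = build_interaction_graph(records)
print(graph["10.0.0.5"]["10.0.1.9"])  # 2
```

Where this sketch just counts edges, the real service enriches the graph with context from multiple log sources and surfaces it through visualizations for investigators.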

Amazon Detective’s visualizations are designed to provide the details, context, and guidance to help analysts quickly determine the nature and extent of issues identified by AWS security services like Amazon GuardDuty, Amazon Inspector, Amazon Macie, and AWS Security Hub, to enable security teams to begin remediation quickly. It is essentially an add-on that enables customers (and AWS) to get more value out of existing security services.

Hardware-based data isolation to address data protection regulatory compliance

Another capability due to be available in preview in early 2020 is AWS Nitro Enclaves, which is aimed at making it easy for AWS customers to process highly sensitive data by partitioning compute and memory resources within an instance to create an isolated compute environment.

This is an example of how data protection regulations are driving suppliers to support better practices by customer organizations by creating demand for such services. Although personal data can be protected using encryption, this does not address the risk of insider access to sensitive data as it is being processed by an application.

AWS Nitro Enclaves avoid the complexity and restrictions of either removing most of the functionality that an instance provides for general-purpose computing or creating a separate cluster of instances for processing sensitive data, protected by complicated permissions, highly restrictive networking, and other isolations. Instead, AWS customers can use AWS Nitro Enclaves to create a completely isolated compute environment to process highly sensitive data.

Each enclave is an isolated virtual machine with its own kernel, memory, and processor; organizations need only select an instance type and decide how much CPU and memory to designate to the enclave. There is also no persistent storage, no ability to log in to the enclave, and no networking connectivity beyond a secure local channel.

European online fashion platform Zalando is an early adopter of AWS Nitro Enclaves, using the capability to make it easier for the Berlin-based firm to achieve application and data isolation and to protect customer data in transit, at rest, and while it is being processed.

AWS shoring up security in cloud services while adding revenue streams

The common theme across these security announcements is that they reduce the amount of custom engineering required to meet security and compliance needs, allow security teams to be more efficient and confident when responding to issues, and make it easier to manage access to AWS resources, which also harkens back to the Capital One breach.

In effect, AWS is continually making it easier for customers to meet their security obligations, protecting its own reputation as well as the reputation of the industry as a whole, to the point that organizations will not only trust and have confidence in cloud environments, but will increasingly see improved security as one of the main drivers for cloud migration.

AWS is also focusing on regulatory compliance as a driver rather than inhibitor of cloud migration. We will cover this in a blogpost tomorrow.


API Platforms as the Secure Front Door to Your Identity Fabric

Identity and Access Management (IAM) is on the cusp of a new era: that of the Identity Fabric. An Identity Fabric is a new logical infrastructure that acts as a platform to provide and orchestrate separate IAM services in a cohesive way. Identity Fabrics help the enterprise meet the expanded needs of modern IAM, such as integrating many different identities quickly and securely, allowing BYOID, enabling access regardless of geographic location or device, linking identity to relationship, and more.

The unique aspect of Identity Fabrics is the many interlinking connections between IAM services and front- and back-end systems. Application Programming Interfaces (APIs) are the access points to the Identity Fabric, and can make or break its security. APIs are defined interfaces that can be used to call a service and get a defined result, and they have become far more than a convenience for developers.

Because APIs are now the main form of communication and delivery of services in an Identity Fabric, they become the security gatekeeper by default. At the same time, each API that facilitates an interface between parts of the fabric is a potential weakness.

API security should be comprehensive, serving the key areas of an Identity Fabric. These include:

  • Directory Services, one or more authoritative sources managing data on identities of humans, devices, things, etc. at large scale
  • Identity Management, i.e. the Identity Lifecycle Management capabilities required for setting up user accounts in target systems, including SaaS applications; this also covers Identity Relationship Management, which is essential for digital services where the relationship of humans, devices, and things must be managed
  • Identity Governance, supporting access requests, approvals, and reviews
  • Access Management, covering the key element of an Identity Fabric, which is authenticating the users and providing them access to target applications; this includes authentication and authorization, and builds specifically on support for standards around authentication and Identity Federation
  • Analytics, i.e. understanding the user behavior and inputs from a variety of sources to control access and mitigate risks
  • IoT Support, with the ability of managing and accessing IoT devices, specifically for Consumer IoT – from health trackers in health insurance business cases to connected vehicles or traffic control systems for smart traffic and smart cities
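The gatekeeper role described above can be reduced to its simplest terms in a sketch (entirely hypothetical; `issue_token`, `call_fabric_service`, and the HMAC scheme are illustrative inventions, not any product's API): every call into a fabric service is refused unless the caller presents a verifiable token.

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # in practice, a key held in a secrets manager

def issue_token(subject: str) -> str:
    """Issue a signed token binding a subject to an HMAC signature."""
    sig = hmac.new(SECRET, subject.encode(), hashlib.sha256).hexdigest()
    return f"{subject}.{sig}"

def call_fabric_service(token: str, service: str) -> str:
    """Gate access to a fabric service behind token verification."""
    subject, _, sig = token.partition(".")
    expected = hmac.new(SECRET, subject.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid token")
    return f"{service} response for {subject}"

token = issue_token("alice")
print(call_fabric_service(token, "directory-service"))
```

A real Identity Fabric would of course rely on standards such as OAuth 2.0 and OpenID Connect rather than a home-grown scheme; the sketch only shows why the API layer, as the mandatory entry point, is where authentication and authorization must be enforced.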

API security is developing as a market space in its own right, and it is recommended that enterprises that are moving towards the Identity Fabric model of IAM be up to date on API security management. The recent Leadership Compass on API Management and Security has the most up-to-date information on the API market, critical to addressing the new era of identity.

Dive deep into API Management and Security with Alexei Balaganski's Leadership Compass.
