KuppingerCole Blog

Coming soon: The KuppingerCole Leadership Compass IDaaS AM

We are about to release the first of two updated KuppingerCole Leadership Compass documents on IDaaS (Identity as a Service). We have segmented this market into two categories:

  • Access Management (AM)
  • Identity Governance and Administration (IGA)

A fast-growing market, IDaaS AM is largely characterized by cloud-based delivery of access management capabilities for businesses, irrespective of the application and service delivery models. Its improved time-to-value proposition is driving adoption of IDaaS for B2B, B2E and B2C access management use-cases, helping IDaaS AM to dominate new IAM purchases globally. This Leadership Compass discusses the market direction and provides a detailed evaluation of market players to give IAM and security leaders the guidance they need to make informed decisions.

The IDaaS market has registered significant growth over the last few years primarily driven by the need of organizations to:

  • Achieve better time-to-value proposition over on-premises IAM deployments
  • Extend IAM capabilities to meet the security requirements of a growing SaaS portfolio
  • Adopt global IAM standards and practices with access to industry expertise
  • Reduce internal IAM costs and efforts to keep up with the market trends
  • Limit internal IAM failures in project delivery and ongoing operations

IDaaS vendors originate from distinct backgrounds, and therefore their abilities to support the primary IDaaS use-cases vary significantly. These backgrounds include:

  • Access Management vendors that offered broader IAM capabilities required for large IAM implementations but could not easily extend these functions to support rapidly emerging cloud and consumer access use-cases.
  • IGA (Identity Governance and Administration) vendors that traditionally offered support for identity administration and access governance on-premises, but could neither extend these capabilities to applications in the cloud nor support access management beyond basic authentication and authorization.
  • Traditional SSO vendors that have evolved over time to support web and cloud access use-cases but are deficient in the common Identity Governance and Administration (IGA) functions required by most organizations for a basic IAM implementation.

The IDaaS market consolidates access management functions with a few IGA and Access Governance capabilities thrown in – all delivered and managed as a service. Today, all IDaaS vendors predominantly deliver a cloud-based service, hosted in a multitenant or dedicated fashion, to serve the common IAM requirements of an organization’s hybrid IT environment. The common IAM capabilities served by most IDaaS vendors can be grouped largely into three categories:

Figure 1: IDaaS Capability Matrix

Identity Administration: This represents the group of capabilities required by organizations to administer identity lifecycle events: maintaining the identity repository, managing access entitlements and synchronizing user attributes across the heterogeneous IT environment. A self-service user interface allows for requesting access, profile management, password reset, and synchronization. Other common identity administration capabilities include an administrative web interface, a batch import interface, delegated administration, and SPML and SCIM support.
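As an illustration of the SCIM support mentioned above, here is a minimal sketch of creating a user through a SCIM 2.0 REST endpoint (RFC 7644). The endpoint URL and bearer token are hypothetical placeholders, not taken from any particular vendor:

```typescript
// Minimal sketch: provisioning a user via the SCIM 2.0 REST API (RFC 7644).
// The endpoint and the bearer token are hypothetical placeholders.
const SCIM_ENDPOINT = "https://idaas.example.com/scim/v2/Users";

async function provisionUser(): Promise<void> {
  const response = await fetch(SCIM_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/scim+json",
      "Authorization": "Bearer <access-token>", // hypothetical credential
    },
    body: JSON.stringify({
      schemas: ["urn:ietf:params:scim:schemas:core:2.0:User"],
      userName: "jdoe@example.com",
      name: { givenName: "Jane", familyName: "Doe" },
      active: true,
    }),
  });
  if (!response.ok) throw new Error(`SCIM create failed: ${response.status}`);
  const user = await response.json();
  console.log("Provisioned user with server-assigned id:", user.id);
}
```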

Access Management: This refers to the group of capabilities targeted at supporting the access management requirements of organizations, ranging from authentication, authorization and single sign-on to identity federation, for both on-premises and SaaS applications, delivered as a cloud service. The underlying support for industry standards such as SAML, OAuth and OpenID Connect can vary but is largely present in most IDaaS offerings. API security and web access management gateways are fast becoming a differentiator for IDaaS vendors looking to offer competitive access management capabilities, as is social identity integration – which now represents a basic qualifier for consumer access use-cases.
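To make that standards support tangible, the sketch below constructs an OpenID Connect authorization-code request; the IdP host, client_id and redirect_uri are invented for the example:

```typescript
// Sketch: constructing an OpenID Connect authorization request (code flow).
// The IdP host, client_id and redirect_uri are hypothetical examples.
const authorizeUrl = new URL("https://idp.example.com/authorize");
authorizeUrl.searchParams.set("response_type", "code");         // authorization-code flow
authorizeUrl.searchParams.set("client_id", "my-web-app");
authorizeUrl.searchParams.set("redirect_uri", "https://app.example.com/callback");
authorizeUrl.searchParams.set("scope", "openid profile email"); // "openid" makes it OIDC
authorizeUrl.searchParams.set("state", crypto.randomUUID());    // CSRF protection

// The user's browser is redirected here; the IdP later returns an
// authorization code that the app exchanges for ID and access tokens.
console.log(authorizeUrl.toString());
```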

Access Governance: Access governance represents the group of capabilities that are least mature and largely absent from the portfolio of most IDaaS vendors, partly due to architectural limitations and partly due to ownership issues. While many organizations still prefer to keep access governance on-premises for better control and auditing purposes, several others are moving it to the cloud for ease of integration and better time to value as their SaaS portfolio continues to grow. IDaaS vendors may have serious limitations in how they can integrate with legacy on-prem systems for common access governance capabilities such as auditing and reporting. It is therefore important for IAM leaders to assess their access governance requirements, aligned with their IAM vision, before starting to evaluate IDaaS vendors for their access governance capabilities.

Depending on their key focus, architectural type and product origin, which affect their overall ability to support IDaaS functions, most IDaaS vendors can be classified into two major categories - either as Access Management focussed or as IGA focussed IDaaS vendors:

IDaaS Access Management (IDaaS AM)

There are primarily two types of AM focussed IDaaS vendors:

The first type is the traditional SSO vendors that progressed over time into WAM (Web Access Management) vendors, mostly addressing web-centric use-cases along with identity federation, but originally lacking the ability to address IAM requirements for cloud-based infrastructure and applications. Over the last few years, these vendors have made significant changes to their product architecture to make it cloud-ready; however, certain limitations remain in addressing cloud AM requirements.

The second category of IDaaS AM vendors were born in the cloud, primarily to manage the access management requirements of SaaS and IaaS applications, but have architectural limitations in how easily they can be extended to address access management for on-prem applications.

IDaaS Identity Governance and Administration (IDaaS IGA)

The IGA focussed IDaaS vendors are the ones that have traditionally been offering identity administration capabilities, including identity provisioning, lifecycle management and access governance, across on-premises IT applications and systems. Their focus on managing user identities in an increasingly complex IT environment, combined with the demand and adoption trends for identity-centric solutions in the market, has led these vendors to concentrate less and less on building access management capabilities. The move to the cloud, however, required them to support basic access management functions in addition to delivering all IGA capabilities, in order to compete with the new IDaaS entrants. The depth of IGA functions these vendors can deliver in a cloud-based model to support a hybrid IT environment remains questionable, not only due to technological limitations but also due to the consumption patterns of on-premises IT applications and systems.

In the first of the two upcoming KuppingerCole Leadership Compass documents, we will focus on IDaaS AM and provide our insights and ratings on the leading vendors in this market segment. Soon after, we will publish the LC IDaaS IGA for the offerings that specifically target IGA.

Technology Trend: The Road to Integrated, Hybrid and Heterogeneous IAM Architectures

Requirements for - and context of - the future Identity Fabric. 

We call it Digital Transformation for lack of a better term, but it consists of much more than this buzzword is able to convey. Digital technologies are influencing and changing all areas of a company, fundamentally reshaping the way communication takes place, how people work together and how value is delivered to customers.

IT architectures, in turn, are undergoing profound structural transformations to enable and accelerate this creeping paradigm shift. This evolution reflects the changing challenges facing companies, government agencies and educational institutions. These challenges, which virtually every organization worldwide has faced for a long time, change processes and systems just as they affect the underlying architectures.

In order to survive in this highly competitive environment, companies are striving to be as agile as possible by adapting and modifying business models and, last but not least, opening up new communication channels with their partners and customers. Thanks to the rapidly growing spread of cloud and mobile computing, companies are becoming increasingly networked with each other. The very idea of an outer boundary of a company, the concept of a security perimeter, has practically ceased to exist.

And with it, the idea that different identities can be treated in fundamental isolation within one enterprise has come to an end.

Figure: The Road to Integrated, Hybrid and Heterogeneous IAM Architectures

Managing identities and access in digital transformation is the key to security, governance and audit, but also to usability and user satisfaction. The challenges for a future-proof IAM are complex, diverse and sometimes even contradictory:

  • We need to integrate consumers into the system, but they often want to retain control over their identity by bringing their own identity (BYOID).
  • We want our employees (internal and external) to be able to use the end-user devices they prefer to use to gain secure access to their work environment, wherever they are.
  • We need to link identities and model real-life dependencies within teams, companies, families or our partner organizations.
  • Maybe we even want to trust identities that are maintained in other organizations and reliably integrate them and authorize them in our IAM.
  • We need to integrate identity, payment and trade.
  • We need to comply with laws and regulations and yet eliminate the annoying KYC processes that make site visitors leave without completing registration.
  • We want to use existing data to enable artificial intelligence for ongoing business transformation, while ensuring compliance, consent and customer security.
  • We need to extend identities beyond people and integrate devices, services and networks into our next-generation IAM infrastructure.

Workforce mobility, rapidly changing business models and business partnerships are all contributing to a trend where companies need to be able to seamlessly provide access for everyone and to any digital service. These services can be in a public cloud, they can be web applications with or without support of federation standards, they can solely be backend services accessed through APIs, or even legacy applications accessible only through some kind of middleware. However, the agility of the digital journey requires IT to provide seamless access to all these services while maintaining control and enforcing security.

In a nutshell: We need to reconsider IAM as a whole and step by step transform it into a set of services that allow us to connect everything via an overarching architecture, making our services available to everyone, everywhere, without losing control.

KuppingerCole Analysts strongly support the concept of Identity Fabrics as a logical infrastructure that enables access for all, from anywhere, to any service, while integrating advanced approaches such as support for adaptive authentication, auditing capabilities, comprehensive federation services, and dynamic authorization capabilities. In this context, it does not matter where or in which deployment model IT services are provided. These can be legacy services encapsulated in APIs, current standard services, either “as a service” or in your own data center, and future digital offerings. Identity Fabrics are designed to integrate these services regardless of where they are provided, i.e. anywhere between on-premises, hybrid, public or private clouds, managed by MSPs or in outsourcing data centers, or completely serverless.

We expect Identity Fabrics to be an integral part of current and future architectures for many organizations and their IT. Future issues of KuppingerCole Analysts' View on IAM will look at this topic from multiple perspectives, with particular emphasis on architectural, technical and process-related aspects. KuppingerCole Analysts research will explore this concept of "One IAM for the Digital Age" in detail, and KuppingerCole Advisory clients will be among the first to benefit from sophisticated Identity Fabric architectures. Watch this space, especially our blogs and our research, for more to come on all things “Identity Fabric”. And remember: You’ve heard it here at KC first.

The changing role of Azure AD in Enterprise IAM Architectures

For many companies, Microsoft Azure Active Directory (Azure AD) was the basis for a coordinated step into the cloud, by extending the reach of their existing on-premises Active Directory to the cloud. For others, Azure AD was at the beginning just something that came with Microsoft Office 365 – just another target system when it comes to IAM (Identity and Access Management). However, we are talking to more and more corporate executives who are considering whether Azure AD's role should become a more strategic element within their IAM infrastructure. 

There is no simple answer to this question as it depends on a variety of aspects. This starts with your overall strategy for IAM and the way you intend to deploy IAM in the future. It depends on the breadth of applications and services you have in place and on how to best integrate these. It depends on your existing IAM tooling and other factors. For the future IAM strategy, it is worth re-thinking your approach – the concept of an Identity Fabric might be a great starting point for re-visiting your strategy. 

Azure AD is, without any doubt, one of the leading offerings in the IDaaS (Identity as a Service) market, serving far more than just the Microsoft environment. There is a huge range of pre-integrated SaaS applications available, thus allowing Azure AD to become a central element in the Access Management strategy to SaaS services. 

Notably, discussing a strategic role is not the same as asking “should Azure AD become the one and only tool for IAM?”. The requirements in IAM are broad, ranging from Identity Provisioning and Access Governance to Web Access Management, Identity Federation, Privileged Access Management and several other topics. For some scenarios, the answer to that question could even become a “yes”, but for many it won’t, due to the breadth and depth of requirements, the existing infrastructure, and other factors.

Obviously, cloud-based IAM solutions (IDaaS) are gaining momentum and will continue to do so, with an ever-increasing share of the critical workloads moving to as-a-service models. When these workloads are run from the cloud, running IAM from the cloud as well is just logical. 

While there is not a simple answer, I want to give four recommendations:

  1. Carefully follow what Microsoft is doing around Azure AD to understand which of your requirements are met and which aren’t, and which might be met soon. 

  2. Review your IAM landscape and revisit your IAM “architectural blueprint” so that you have a clear strategy on how to evolve (or even modernize) your IAM. 

  3. Reconsider ownership of Azure Active Directory – regardless of how you use it, it is part of the IAM landscape. Ownership should not be with the Office 365 team or an infrastructure/client management team, but with IAM. 

  4. If you already integrate the on-premises Active Directory and Azure AD, consider reversing the order of integration – with Azure AD being the lead.

If you are in need of a concrete assessment of your IAM infrastructure, don’t hesitate to ask us. 

Account Takeovers on the Rise

Account Takeover (ATO) attacks are on the rise. The 2019 Forter Fraud Attack Index shows a 45% increase in this type of attack on consumer identities in 2018. ATOs are just what they sound like: cybercriminals gain access to accounts through various illegal means and use these taken-over accounts to perpetrate fraud. How do they get access to accounts? There are many technical methods that bad actors can use, such as tricking consumers with phishing emails; harvesting credentials through fake websites; collecting credentials from keyloggers, rootkits, or botnets; harvesting cookie data using spyware; credential stuffing; brute-force password guessing; or perusing compromised credential lists on the dark web. However, they don’t even have to use sophisticated means. Sometimes account information can be found on paper, so variations of “dumpster diving” still work.

Once cybercriminals have account information, depending on the type of account, they can use it for many different kinds of fraud. Of course, financial fraud is a top concern. A banking overlay is a type of mobile malware that looks like a legitimate banking site but is designed to capture bank customer credentials. Banking overlays usually pass on user interaction to the underlying banking app, but also pass on the captured credentials to the malicious actors. Some are sophisticated enough to grab SMS OTPs, thereby defeating that form of 2FA. This problem is more acute on Android than iOS. Using mobile anti-malware and ensuring that users get apps from trusted app stores can help prevent this kind of attack.

Consumer banking is not the only kind of financial industry targeted by cybercriminals. B2B banks, mortgage brokers, investment banks, pension fund managers, payment clearing houses, and cryptocurrency exchanges are also under attack. From the cybercriminals’ point of view, it is easier to attack the end user and the often-less-secured apps they use than to attack financial industry infrastructure.

Just about any online site that exchanges anything of value has become a target for fraudsters. Airline frequent flyer programs and other kinds of travel/hospitality loyalty programs made up 13% of all accounts for sale on the dark web as of the end of 2018. Other consumer rewards programs that can be monetized are also being stolen and brokered. Digital goods, such as in-game purchases, can be highly sought-after, so there are black markets for gamer credentials.

ATO fraud has hit the insurance sector in a big way in recent years. Fraudsters use ATO methods to get insurance customer credentials to submit claims and redirect payouts. Some malicious actors go after insurance agent credentials to facilitate claims processing and get even bigger gains.

Though these stories have been circulating for years, real estate and escrow agents are still occasionally getting ATO’d, such that the home buyers are deceived into transferring large sums to fraudsters during real estate closing deals.

Consumer-facing businesses need to take two major steps to help reduce ATO fraud.

  1. Implement MFA, and not just SMS OTP. This is the biggest bang for the buck here. Passwords are ineffective. SMS OTP can be compromised. Use securely designed mobile apps. Use mobile security SDKs to build apps. Push notifications in-app and native biometrics are a better choice than passwords and texted passcodes. The FIDO Alliance has standardized 2FA and mobile-based MFA. FIDO 2.0, released this year, greatly improves interoperability with web applications. Use FIDO authentication mechanisms for not only better security, but also enhanced privacy, and a more pleasant consumer experience. For comprehensive reviews of MFA products, see our Leadership Compasses on Cloud-based MFA and Adaptive Authentication (on-premises products). 

  2. Use fraud reduction intelligence services for real-time analysis of many pertinent behavioral and environmental factors to reduce fraud risk. Examples of factors that fraud reduction platforms evaluate include user behavior, behavioral biometrics, device hygiene, device reputation, geo-location, geo-velocity, bot intelligence, and cyber threat intelligence. These solutions employ machine learning (ML) techniques to more efficiently identify potentially malicious behavior. A simplified sketch of this kind of risk scoring follows below.
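The sketch below is purely illustrative: real fraud reduction platforms use trained ML models over far richer signal sets, but it shows how several of the factors named above can be combined into a single risk score that drives a step-up or block decision. The factor names, weights and thresholds are invented:

```typescript
// Illustrative only: combining fraud-risk signals into a single score.
// Real platforms use trained ML models; these weights are invented.
interface RiskSignals {
  newDevice: boolean;      // device reputation / hygiene
  geoVelocityKmh: number;  // implied travel speed since last login
  botLikelihood: number;   // 0..1 from bot intelligence
  behaviorAnomaly: number; // 0..1 from behavioral biometrics
}

function riskScore(s: RiskSignals): number {
  let score = 0;
  if (s.newDevice) score += 0.2;
  if (s.geoVelocityKmh > 900) score += 0.3; // faster than a commercial flight
  score += 0.3 * s.botLikelihood;
  score += 0.2 * s.behaviorAnomaly;
  return Math.min(score, 1);
}

// Example policy: step up to MFA above one threshold, block above a higher one.
const score = riskScore({
  newDevice: true, geoVelocityKmh: 1200, botLikelihood: 0.1, behaviorAnomaly: 0.4,
});
if (score > 0.8) console.log("block");
else if (score > 0.4) console.log("require MFA step-up"); // this example: 0.61
else console.log("allow");
```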

ATOs and how to mitigate them will be one of the main topics discussed at our upcoming Consumer Identity World event in Seattle from September 25-27, 2019. For more information, please see the event page at https://www.kuppingercole.com/events/ciwusa2019. KuppingerCole will be publishing additional research on Fraud Reduction Intelligence Technologies in the near future. Stay tuned.

How to Train Your AI to Mis-Identify Dragons

This week Skylight Cyber disclosed that they were able to fool a popular “AI”-based Endpoint Protection (EPP) solution into incorrectly marking malware as safe. While trying to reverse-engineer the details of the solution's Machine Learning (ML) engine, the researchers found that it contained a secondary ML model added specifically to whitelist certain types of software like popular games. Supposedly, it was added to reduce the number of false positives their "main engine" was producing. By dumping all strings contained in such a whitelisted application and simply appending them to the end of a known piece of malware, the researchers were able to avoid its detection completely, as shown in their demo video.
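A toy model shows why this works. If a classifier scores a file by summing the weights of the strings it contains, and strings typical of whitelisted goodware carry strongly negative (benign) weights, an attacker can append them without changing the malware’s behavior. The weights below are invented; the real engine was far more complex, but the principle is the same:

```typescript
// Toy illustration of string-appending evasion against a naive
// string-feature classifier. All weights are invented for illustration.
const weights: Record<string, number> = {
  "CreateRemoteThread": +2.0,      // suspicious API string
  "WriteProcessMemory": +1.5,
  "GameEngine.dll": -3.0,          // token typical of a whitelisted game
  "Copyright Example Games": -2.5,
};

function score(strings: string[]): number {
  return strings.reduce((s, t) => s + (weights[t] ?? 0), 0);
}

const malware = ["CreateRemoteThread", "WriteProcessMemory"];
console.log(score(malware)); // 3.5 -> flagged as malicious

// Appending strings dumped from a whitelisted game flips the verdict
// without changing the malware's actual behavior:
const padded = [...malware, "GameEngine.dll", "Copyright Example Games"];
console.log(score(padded)); // -2.0 -> classified as benign
```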

This finding is just another confirmation of the inherent challenges of designing ML-based cybersecurity products. Here are some of the issues:

  1. The advantages that ML-enhanced cybersecurity tools provide can be easily defeated if overrides are used to eliminate false positives rather than proper training of ML algorithms. ML works best when fed as much data as possible, and when products are implemented using the right combination of supervised and unsupervised ML methods. It’s possible that whitelisting would not have been necessary if sufficient algorithmic training had been performed.
  2. ML can be gamed. Constraining data sets or simply not having enough data piped through the appropriate mix of ML algorithms can lead to bias, which can lead to missed detections in the cybersecurity realm. This can be either intentional or unintentional. In cases of intentional gaming, malicious actors select subsets of data with which to train the discriminator, while purposely omitting others. In the unintentional case, software developers may not have access to a full sample set or may simply choose to not use a full sample set during the construction of the model.  
  3. Single-engine EPP products are at a disadvantage compared to multi-engine products. Using “AI” techniques in cybersecurity, especially in EPP products, is an absolute necessity. With millions of new malware variants appearing monthly, human analysts can’t analyze and build signatures fast enough. It is infeasible to rely on signature-based AV alone, and this has been true for years. However, just because signature-based engines are not completely effective doesn’t mean that products should abandon that method in favor of a different single method. The best endpoint protection strategy is to use a mixture of techniques, including signatures, ML-enhanced heuristics, behavioral analysis, sandboxing, exploit prevention, memory analysis, and micro-virtualization. Even with an assortment of malware detection/prevention engines, EPP products will occasionally miss a piece of malicious code. For those rare cases, most endpoint security suite vendors have Endpoint Detection & Response (EDR) products to look for signs of compromise.
  4. Marketing ML-enhanced tools as an “AI” panacea has drawbacks.  ML tools are a commodity now. ML techniques are used in many cybersecurity tools, not just EPP. ML is in most data analytics programs as well. It’s a necessary component to deal with enormous volumes of data in most applications. The use of the term “AI” in marketing today suggests infallibility and internal self-sufficiency. But such tools can make mistakes, and they don’t eliminate the need for human analysts.   

KuppingerCole is hosting an AImpact Summit in Munich in November where we’ll tackle these very issues. The Call for speakers is open.

For an in-depth comparison of EPP vendors, see our Leadership Compass on Endpoint Security: Anti-Malware.

Passwordless for the Masses

What an interesting coincidence: I’m writing this just after finishing a webinar where we talked about the latest trends in strong authentication and the ways to eliminate passwords within an enterprise. Well, this could not have been a better time for the latest announcement from Microsoft, introducing Azure Active Directory support for passwordless sign-in using FIDO2 authentication devices.

Although most people agree that passwords are no longer an even remotely adequate authentication method for the modern digital and connected world, somehow the adoption of more secure alternatives is still quite underwhelming. For years, security experts have warned about compromised credentials being the most common reason for data breaches, pointing out that just by enabling multi-factor authentication, companies may prevent 99% of all identity-related attacks. Major online service providers like Google or Microsoft have been offering this option for years already. The number of vendors offering various strong authentication products, ranging from hardware-based one-time password tokens to various biometric methods to simple smartphone apps, is staggering – surely there is a solution for any use case on the market today…

Why then are so few individuals and companies using MFA? What are the biggest reasons preventing its universal adoption? Arguably, it all boils down to three major perceived problems: high implementation costs, poor user experience, and lack of interoperability between all those existing products. Alas, having too many options does not encourage wider adoption – if anything, it has the opposite effect. If an organization wants to provide consistently strong authentication experiences to users of different hardware platforms, application stacks, and cloud services, they are forced to implement multiple incompatible solutions in parallel, driving costs and administration efforts up, not down.

The FIDO Alliance was founded back in 2013, promising to establish certified interoperability among various strong authentication products. KuppingerCole has been following its developments closely ever since, even awarding the Alliance twice with our EIC Awards for the Best Innovation and Best Standard project. Unfortunately, the adoption of FIDO-enabled devices was far from universal and mostly limited to individuals, although large-scale consumer-oriented projects supported by vendors like Samsung, Google or PayPal succeeded. The lack of consistent support for the standard in browsers restricted its popularity even further.

Fast forward to early 2019, however, and the second version of the FIDO specification has been adopted as a W3C standard, ensuring its consistent support in all major web browsers, as well as in Windows 10 and Android platforms. The number of online services that support FIDO2-based strong authentication is now growing much faster than in previous years and yet, many experts would still argue that the standard is too focused on consumers and not a good fit for enterprise deployments.

Well, this week, Microsoft has announced that FIDO2 security devices are now supported in Azure Active Directory, meaning that any Azure AD-connected application or service can immediately benefit from this secure, standards-based and convenient experience. Users can now authenticate themselves using a Yubikey or any other compatible security device, the Microsoft Authenticator mobile app, or the native Windows Hello framework.
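For developers, the browser side of such a FIDO2 sign-up is the standard WebAuthn API. Here is a minimal registration sketch; the relying-party details are illustrative, and in practice the challenge comes from the server rather than being generated locally:

```typescript
// Minimal sketch of a FIDO2/WebAuthn credential registration in the browser.
// The relying-party and user details are illustrative placeholders.
async function registerSecurityKey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      // In a real flow the server issues the challenge and verifies the response.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { id: "example.com", name: "Example App" }, // must match the site's origin
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "jdoe@example.com",
        displayName: "Jane Doe",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: { userVerification: "preferred" },
    },
  });
  // The attestation response is sent to the server for verification and storage.
  console.log("New credential id:", (credential as PublicKeyCredential).id);
}
```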

With Azure Active Directory being the identity platform behind Microsoft’s own cloud services like Office 365 and Azure Cloud, as well as one of the most popular cloud-based IAM services for numerous 3rd party applications, can this be any more “enterprise-y”?

We realize that the service is still in the preview stage, so there are still a few kinks to iron out, but in the end, this announcement may be the final push for many companies that were considering adopting some form of modern strong authentication but were wary of the challenges mentioned earlier. Going fully passwordless is not something that can be achieved in a single step, but Microsoft has made it even easier now, with more traditional MFA options and even support for legacy apps still available when needed.

And, of course, this could be a major boost for FIDO2 adoption in the enterprise world, which we can only wholeheartedly welcome.

Device Authentication and Identity of Things (IDoT) for the Internet of Things (IoT)

Security has seldom been the focus of device manufacturers, who have historically taken their own approach to securing devices in the IoT. Most devices in enterprise, consumer or industrial IoT continue to be developed and designed to perform specific functions, and security is often a neglected theme in the majority of product development lifecycles. The proprietary protocols these devices operate on are primarily characterized by the purpose they are built to serve and offer very limited or no interoperability. With the increasing convergence of IT and OT towards IoT, the lack of a common operating framework and security principles poses serious challenges for device manufacturers and consumers alike.

In an increasingly connected world where we see an explosion of networked and cloud-enabled devices, ranging from home appliances to medical devices to consumer electronics, creating and maintaining device and user identities, managing the relationships between the various entities and ensuring the integrity of devices remain constant challenges for consumers as well as for security leaders. The industry has seen the emergence of several standards from governing bodies and consortiums, but we still lack appropriate mechanisms for defining, standardizing and deploying the identity of things (IDoT) across operating networks and entities. Besides the need for verifying identity and establishing trust levels of various entities such as devices, people, applications, cloud services and/or gateways operating in an IoT environment, there is a need to manage the ‘Identity of Things’, or IDoT, throughout the lifecycle of things.

An effective authentication and identity framework for IoT devices should be able to provide appropriate protection against cyber threats throughout the distinct operational life-cycle stages of a device depicted in the figure.

Authentication for IoT devices is different from, and considerably lighter-weight than, the user authentication methods prevalent today, due to the potential resource constraints of devices, the bandwidth of the networks they operate within, and the nature of interaction with the devices.

A lack of established industry standards for IoT authentication has led vendors to develop proprietary authentication methods. Since many IoT devices can be resource-constrained, with low computing power and storage capacity, existing authentication methods are not good candidates due to their significant bandwidth and computational requirements. There is a growing need to evaluate and streamline the methods adopted for device and service authentication over constrained IoT networks. It is important to analyze and use the factors essential for verifying the identity of ‘things’ to establish the desired level of trust in the device identity without overburdening the fit-for-purpose computing abilities of the IoT device. While there is increased adoption of adaptations/deviations of standard authentication methods such as PKI, OAuth and OIDC to serve the required scope and scale of the IoT use-cases, we also see the use of standards from the OMA DM, LWM2M and TR-069 specifications for establishing secure communication between the constrained nodes in IoT networks.

The resource-constrained nature of many IoT endpoints severely limits their ability to sustain prevalent authentication methods, which has further led to the adoption of proprietary authentication methods in the market that do not conform to trust requirements and offer very limited or no interoperability. This is, however, changing rapidly as the industry moves from treating security as an afterthought to including it as part of the system design process. The majority of embedded systems today implement device authentication methods that rely predominantly on a software-based approach. Popular examples include the use of a MAC (Message Authentication Code) to enable secure key exchange for device authentication over constrained IoT protocols, as well as light-weight adaptations of PKI and OAuth2 protocols to match the scope and scale of IoT use-cases.
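As a concrete example of the MAC-based approach, here is a minimal challenge-response sketch using an HMAC over a pre-shared key. Key provisioning and transport security are out of scope here and assumed to be handled elsewhere:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

// Sketch of lightweight challenge-response device authentication using an
// HMAC (a message authentication code), as is common on constrained devices.

// Server side: issue a random, single-use challenge.
const challenge = randomBytes(16);

// Device side: prove possession of the shared key without revealing it.
function deviceResponse(sharedKey: Buffer, chal: Buffer): Buffer {
  return createHmac("sha256", sharedKey).update(chal).digest();
}

// Server side: recompute the expected value and compare in constant time.
function verifyDevice(sharedKey: Buffer, chal: Buffer, response: Buffer): boolean {
  const expected = createHmac("sha256", sharedKey).update(chal).digest();
  return timingSafeEqual(expected, response);
}

const key = randomBytes(32); // provisioned to the device at manufacture time
console.log(verifyDevice(key, challenge, deviceResponse(key, challenge))); // true
```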

The software implementation of IoT authentication has notable cost and maintenance advantages, but the protection offered by these methods is severely restricted by the security of the embedded OS and the coding practices of the embedded system developer. Largely ineffective against common software-based attacks and physical device tampering, software implementations of IoT authentication methods are known to offer limited or no protection against the next-gen IoT threats that exploit specific IoT device functions such as remote administration, device provisioning and the boot sequence.

Hardware-based security approaches such as Hardware Root of Trust (HRoT) and Trusted Execution Environment (TEE) are fast becoming an industry-wide standard for securing desktops, tablets and mobile phones, and they have found increased relevance in securing IoT devices. A hardware-based ‘root of trust’ offers on-chip security functions, including key generation, integrity checks and attestation, which are executed in an isolated hardware environment and therefore offer effective protection against physical theft. The Trusted Platform Module (TPM), a prime example of an HRoT implementation, delivers high-trust authentication for IoT devices when combined with software-based PKI. Beyond securing digital credentials, TPM trust measurements provide secure control over the boot sequence of IoT devices, thereby validating the authenticity of each device as it loads up in the IoT network.

The choice of an appropriate authentication method for a given set of IoT devices is largely driven by the identity assurance requirements that vary from device to device, depending primarily on the operating environment, device lifecycle stages (as depicted in the figure above) and the impact of potential compromise through unauthorized access. IoT security designers and architects should make use of defined metrics that correlate the trust level(s) offered by available device authentication methods to the trust requirements of IoT devices. Hardware-based authentication methods such as TPM are most suited for IoT use-cases where the requirement of establishing higher levels of trust (equivalent to FIPS 140-2, level 3 to 4) in the device identity is paramount.

For many IoT use-cases, however, concerns over additional component cost and increased device size restrict the adoption of TPM or TEE methods, so a considerable loss in the strength of the authentication method, and hence in the associated level of assurance, must be expected. IoT security architects should find the right balance and realize an appropriate trade-off in the level of trust offered by authentication methods for each category of IoT devices in operation.

Assuming High Criticality: Resilience, Continuity and Security for Organizations and Infrastructures

Acronyms are an ever-growing species. Technologies, standards and concepts come with their share of new acronyms to know and to consider. In recent years we had to learn and understand what GDPR or PSD2 stand for. And we have learned that IT security, compliance and data protection are key requirements for virtually any enterprise. The following acronyms and more importantly the concepts behind them can teach us about what forward-looking organizations and their leaders should be thinking of. 

MTPD stands for "Maximum Tolerable Period of Disruption". Its value determines the longest possible amount of time an organization can endure until the impact of an incident leading to a partial or complete disruption of service becomes unacceptable or a recovery becomes more or less useless. Determining this period is an exercise every reader of this text might want to do just now. It might be surprisingly low.

MBCO, closely related to the MTPD, is short for "Minimum Business Continuity Objective". It describes the baseline of services that are necessary for an organization to survive during a disruption. Another important aspect for all of us to think about. MTDL describes the “Maximum Tolerable Data Loss”. It is usually defined as the largest amount of data in IT systems (or analog media, like files and binders) an organization can afford to lose and still be able to recover successful operations afterwards. These terms (and many more related and relevant concepts) originally stem from the area of Business Continuity Planning, but they are becoming increasingly important to the management and staff of IT security departments as well.

One reason for that is yet another acronym, namely “KRITIS”, which is an abbreviation of “KRITische InfraStrukturen” (“critical infrastructure”). Critical infrastructure is defined as organizations or institutions of major importance to the state community whose failure or degradation would result in sustained supply shortages, significant public safety disruptions or other dramatic consequences.

Originating from an EU Directive in 2008, the term is closely linked to the Federal Republic of Germany, its legislation and its efforts to reduce potential vulnerabilities of critical infrastructure. The concept aims at improving protection and resilience, given the increasing extent to which almost all areas of life depend on and are pervaded by critical infrastructure. A German law (“IT-SiG”) and a regulation, the “BSI-Kritisverordnung” (“Kritis regulation”), issued in 2015/2016, are the foundation for the specification and enforcement of this significant set of requirements.

Many countries are already looking at regulating and securing critical infrastructure as well, including the US (Department of Homeland Security), so this is far from being just another German or European issue. But taking Germany as an example, the overall picture of critical infrastructure includes Energy, Information Technology and Telecommunications, Nutrition and Water, Healthcare, Finance and Insurance, and Transport and Traffic. The actual scope of organizations affected can be looked up online. The core legislation is the same for each critical infrastructure; the challenge for individual industries is that sector-specific requirements need to be identified individually. The definition of industry-specific requirements is the responsibility of the individual industries, their industry associations and key corporations as exemplary representatives of their sector. However, these documents need to be government-approved.

Implementing these requirements requires organizations to think in more than just IT security terms. While the industry-specific requirement documents often have some IT security bias (usually starting with implementing an ISO 27xxx ISMS), organizations also need to consider the acronyms at the beginning of this text. This “paradigm shift” that critical infrastructure has to deal with now (and obviously had to deal with before already) is an important step for any organization. Extending security towards resilience and business continuity will be essential for almost any organization within a world of increasing challenges, including but not limited to cyber threats.

To make systems, processes and organizations future-proof, it is highly recommended to consider security, safety and business continuity more holistically. Why not use related KRITIS-requirements as a benchmark that could help you to increase your organizational maturity? Just because you are not obliged to comply does not mean that going beyond your individual, mandatory requirements cannot improve your overall security posture and business continuity approach.  

The definitions and requirements concerning critical infrastructure as they exist at a European and, in particular, German level can be regarded as exemplary in many respects. Even if they have direct relevance primarily for operators of critical infrastructure in Germany, they can serve as a basis for the design, operation and documentation of resilient architectures in Europe and beyond, due to their degree of detail and their comprehensive coverage of a multitude of sectors and industries.

And as a heads up for German readers, the update of the IT-SiG (“IT-Sicherheitsgesetz 2.0”) could be yet another game changer, so they should be prepared for more major changes in systems, processes and organization.

 

Benefits of IAM in Healthcare: Compliance, Security, Profits and More

Healthcare organizations must use IAM as an integral part of their IT infrastructure in order to cope with challenges in various fields, such as compliance, regulations, cybersecurity, and Digital Transformation. In this respect, IAM not only serves as a security technology but also as a mechanism that helps respond to new business challenges and drivers.

While every industry currently has to deal with the disruptive nature of the Digital Transformation and ever-increasing cyberattacks, some of the developments are endemic to healthcare organizations: for instance, complying with new regulations such as Electronic Prescribing for Controlled Substances (EPCS) or the well-known Health Insurance Portability and Accountability Act (HIPAA).

Due to their sensitivity, patient data are a highly valuable target for hackers, which is why the healthcare industry is among the most regulated ones.

In order to protect sensitive patient data, it is essential for any healthcare organization to implement strong Identity and Access Management as a central pillar of its IT infrastructure. Control and restriction of access with the help of sophisticated IAM delivers a level of protection for information that network firewalls and single sign-on (SSO) cannot.

Altogether, there are five areas a strong IAM infrastructure will bolster:

  • Compliance & Regulations
  • Security
  • Organizational & Financial Challenges
  • M&A Activity
  • Digital Transformation

Compliance & Regulations: In recent years, the regulatory bar that healthcare organizations have to comply with has been raised. HIPAA, EPCS and state-level regulations like the California Consumer Privacy Act (CCPA) are just a few examples. These regulations have strong authentication and refined access control at their core, complemented by detailed recording of user activities. In stark contrast to other industries, healthcare providers may find themselves in emergency situations where data must be accessed quickly. IAM delivers the infrastructure for such regulation-compliant emergency access processes; a sketch of what such a “break-glass” flow can look like follows below.
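The sketch below illustrates a break-glass flow against a hypothetical IAM API: access is granted immediately but time-boxed, and every grant is written to an audit log for later review. All names and the one-hour expiry are invented for illustration:

```typescript
// Hypothetical sketch of a "break-glass" emergency access flow.
interface EmergencyGrant {
  user: string;
  resource: string;
  reason: string;
  expiresAt: Date;
}

const auditLog: EmergencyGrant[] = [];

function breakGlassAccess(user: string, resource: string, reason: string): EmergencyGrant {
  const grant: EmergencyGrant = {
    user, resource, reason,
    expiresAt: new Date(Date.now() + 60 * 60 * 1000), // auto-expires after one hour
  };
  auditLog.push(grant); // every emergency access is recorded for later review
  return grant;
}

const g = breakGlassAccess("dr.smith", "EMR/patient-4711", "ER admission, patient unconscious");
console.log(`Emergency access granted until ${g.expiresAt.toISOString()}`);
```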

Security: As cyberattacks rise in number and intensity, organizations have to take more precautions than ever. IT security teams in healthcare organizations must prioritize this kind of risk, especially given the incidence of ransomware attacks against hospitals.

Most healthcare organizations still focus too little on detection while being caught up in prevention. While external prevention is necessary, it does not protect against internal attacks. It is therefore essential to restrict access to sensitive data and critical infrastructure, particularly when the attacker is already in the system. IAM does not replace firewalls and other preventive measures but should always go hand in hand with them.

Organizational & Financial Challenges: Apart from having to care for their patients’ wellbeing, healthcare organizations, ultimately, are also businesses and should keep an eye on profits. Here, IAM helps increase efficiency and improve the user experience, for instance with an SSO portal.

The number of user types accessing systems and data within healthcare organizations is all but incalculable: doctors, nurses, students and patients are just a few, and the number is growing. The careful distribution of user rights has its litmus test when emergency situations arise and EMR data must be accessed quickly.

Like any other kind of business, healthcare organizations often lack the resources and clear ownership for IAM. Ownership must be clearly defined, and projects need support from sponsors within the organization.

M&A Activity: Healthcare is not exempt from M&A activity and IAM infrastructures should be designed to make a merger go as smoothly as possible. The merging organizations must federate identities and give employees, patients, and contractors adequate levels of access.

Digital Transformation: Telemedicine, EMR, and Patient Access Management result in an increase in identities as well as in the complexity of access entitlements, which must be controlled. The role of IAM is to support these processes in working together seamlessly. Where SSO is not enough, IAM takes center stage: provisioning and deprovisioning of accounts, management of access entitlements, audit and governance, and granular access control.

As healthcare services become more digitalized and “consumerized”, it is IAM that must support these hybrid environments and multi-cloud infrastructures. Different types of users have different types of devices, all of which have to be considered when setting up an IAM solution, especially one that goes beyond SSO. This is the only way to address the challenges of digitization and provide secure access at the same time. Ultimately, IAM is the foundation for supporting the consumerization of healthcare businesses.

Will the Stars Align for Libra?

This week, Facebook announced details about its cryptocurrency project, Libra. They expect it to go live for Facebook and other social media platform users sometime in 2020. The list of initial backers, the Founding Members of the Libra Association, is quite long and filled with industry heavyweights such as Coinbase, eBay, Mastercard, PayPal, and Visa. Other tech companies including Lyft, Spotify, and Uber are Founding Members, as well as Andreessen Horowitz and Thrive Capital.  

Designed to be a peer-to-peer payment system, Libra will be backed by a sizable reserve and pegged to physical currencies to dampen wild currency swings and speculation. The Libra Association will manage the reserve, and it will not be accessible to users. The Libra Association will mint and destroy Libra Coins in response to demand from authorized resellers. Founding Members will have validator voting rights. As we can see from the short list above, Libra Founding Members are large organizations, and they will have to buy in with Libra Investment Tokens. This investment is intended to incentivize Founding Members to adequately protect their validators. Libra eventually plans to transition to a proof-of-stake system where Founding Members will receive voting rights proportional (capped at 1%) to their Investment Tokens. They expect this to facilitate the move to a permissionless blockchain at some point in the future. The Libra blockchain will therefore start off as permissioned and closed. The Libra roadmap can be found here.

Let’s look at some of the interesting technical details related to security that have been published at https://libra.org. The Libra protocol takes advantage of lessons learned over the last few years of blockchain technologies. For example, unlike Bitcoin, which depends on the accumulation of transactions into blocks before they are committed, in Libra, individual transactions compose the ledger history. The consensus protocol handles the aggregation of transactions into blocks. Thus, sequential transactions and events can be contained in different blocks.  

Authentication to accounts will use private key cryptography, and the ability to rotate keys is planned. Multiple Libra accounts can be created per user. User accounts will not necessarily be linked to other identities. This follows the Bitcoin and Ethereum model of pseudonymity. Libra accounts will be collections of resources and modules. Libra “serialize(s) an account as a list of access paths and values sorted by access path. The authenticator of an account is the hash of this serialized representation. Note that this representation requires recomputing the authenticator over the full account after any modification to the account... Furthermore, reads from clients require the full account information to authenticate any specific value within it.”  
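A toy version of the quoted scheme makes the trade-off visible: the authenticator covers the whole sorted serialization, so any change to any value forces a full recomputation. This sketch is illustrative only and does not reflect Libra’s actual serialization format:

```typescript
import { createHash } from "node:crypto";

// Toy illustration: sort entries by access path, serialize, and hash the
// full serialization. Changing any single value changes the authenticator,
// which is why it must be recomputed over the whole account.
function accountAuthenticator(account: Record<string, string>): string {
  const serialized = Object.entries(account)
    .sort(([a], [b]) => a.localeCompare(b))  // sorted by access path
    .map(([path, value]) => `${path}=${value}`)
    .join("|");
  return createHash("sha256").update(serialized).digest("hex");
}

console.log(accountAuthenticator({ "coin/balance": "100", "module/wallet": "0xabc" }));
```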

Transaction fees in Libra will adhere to an Ethereum-like “gas” model: senders name a price they are willing to pay, and if the cost to the validators exceeds the number of units at that price, the transaction aborts. The ledger won’t be changed, but the sender will still be charged the fee. This is designed to keep fees low during times of high transaction volumes. Libra foresees that this may help mitigate against DDoS attacks. It also will prevent senders from overdrawing their accounts, because the Libra protocol will check to make sure there is enough Libra coin to cover the cost of the transaction prior to committing it.
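The following sketch illustrates the fee model as described: the sender names a gas price and a unit budget, the balance check happens before commitment, and an over-budget transaction aborts but still pays. This is an illustration of the published description, not Libra code:

```typescript
// Illustrative sketch of the described "gas" fee model.
interface Tx {
  gasPrice: number;     // price the sender is willing to pay per unit
  maxGasUnits: number;  // unit budget named by the sender
  senderBalance: number;
}

function executeTx(tx: Tx, actualGasUsed: number): string {
  const maxFee = tx.gasPrice * tx.maxGasUnits;
  // Pre-commit check: the sender must be able to cover the worst-case fee.
  if (tx.senderBalance < maxFee) return "rejected: cannot cover fee";
  if (actualGasUsed > tx.maxGasUnits) {
    // Ledger state is unchanged, but the fee is still charged.
    return "aborted: budget exceeded, fee still charged";
  }
  return `committed, fee = ${tx.gasPrice * actualGasUsed}`;
}

console.log(executeTx({ gasPrice: 2, maxGasUnits: 100, senderBalance: 1000 }, 150)); // aborted
console.log(executeTx({ gasPrice: 2, maxGasUnits: 100, senderBalance: 1000 }, 80));  // committed
```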

The Libra Protocol will use a new programming language, called Move, which will be designed to be extensible to allow user-defined data-types and smart contracts. There will be no copy commands in Move, only create/destroy, to avoid double-spend. Programmers will be able to write in higher level source and intermediate representation languages, which will be output to a fairly simple and constrained bytecode which can be type- and input-verified for security. Transactions are expected to be atomic, in that each should contain a single operation. In Libra, modules will contain code and resources will have data, which is in contrast to Ethereum, where a smart contract contains both code and data.  

Another interesting concept in Libra is the event. An event is defined as a change of state resulting from a transaction. Each transaction can cause multiple events. For example, payments result in corresponding increases and decreases in account balances. Libra will use a variant of the HotStuff consensus protocol called LibraBFT (Byzantine Fault Tolerance), which is architected to withstand multiple malicious actors attempting to hack or sabotage validators. The HotStuff consensus protocol is not based on proof-of-work, thereby avoiding performance and environmental concerns. Libra intends to launch with 100 validators, and eventually increase to 500-1,000 validator nodes.

Libra Core code will be written in Rust and open sourced. Facebook and Libra acknowledge that security of the cryptocurrency exchange depends on the correct implementation of validator node code, Move apps, and the Move VM itself. Security must be a high priority, since cryptocurrency exchanges are increasingly under attack.  

Facebook’s new subsidiary Calibra will build the wallet app. Given that Coinbase and others in the business are on the board, it’s reasonable to expect that other cryptocurrency wallets will accept Libra too. Facebook and other cryptocurrency wallet makers must design security and privacy into these apps as well as the protocol and exchange. Wallets should take advantage of features such as TPMs on traditional hardware and Secure Elements / Trusted Execution Environment and Secure Enclave on mobile devices. Wallets should support strong and biometric authentication options.

Users will have no guarantees of anonymity due to international requirements for AML and KYC. Facebook claims social media profile information and Libra account information will be kept separate, and only shared with user consent. Just how Facebook will accomplish this separation remains to be seen. The global public has legitimate trust issues with Facebook. Nevertheless, Facebook, WhatsApp, and Instagram have 2.3B, 1.6B, and 1.0B user accounts respectively. Despite some overlap, that user base is larger than a couple of the largest countries combined.

The moral argument in favor of cryptocurrencies has heretofore been that blockchain technologies will benefit the “unbanked”, the roughly 2B people who do not have bank accounts. If Libra takes off, there is a possibility that more of the unbanked will have access to affordable payment services, provided there is a sizable intersection between those who have social media accounts and those who are unbanked.  

Much work, both technical and political, remains to be done if Libra is to come to fruition. Government officials have already spoken out against it in some areas. Libra will have to be regulated in many jurisdictions. An open/permissionless blockchain model would help with transparency and independent audit, but it could be years before Libra moves in that direction. As long as Libra runs closed/permissioned, it will face more resistance from regulators around the world.  

Facebook and the Libra Association will have to handle not only a mix of financial regulations such as AML and KYC, but also privacy regulations like GDPR, PIPEDA, CCPA, and others. There was no mention in the original Libra announcement about support for EU PSD2, which will soon govern payments in Europe. PSD2 mandates Strong Customer Authentication and transactional risk analysis for payment transactions. Besides the technical and legal challenges ahead, Facebook and Libra will then have to convince users to actually use the service.

Initiating payments from a social media app has been done already: WeChat, for example. So it’s entirely possible that Libra will succeed in some fashion. If Libra does take off in the next couple of years, expect massive disruption in the payments services market. It is too early to accurately predict the probability of success or the long-term impact if it is successful. KuppingerCole will follow and report on relevant developments. This is sure to be a topic of discussion at our Digital Finance World and Blockchain Enterprise Days events, coming up in September in Frankfurt, Germany.  

 
