KuppingerCole Blog

Ambient Intelligence Can’t Mature Without an Identity Protocol

Every day we are experiencing the intersection of IoT and AI. The interactions of users, sensors, robots, vehicles, smart buildings, and much more are creating a new status quo for digital experiences. This growing range of smart devices – smart both in the IoT sense and in the intelligent AI sense – means we are moving beyond a singular focus on the smartphone. This heightened immersion in increasingly distributed, decentralized digital networks is what KuppingerCole has termed “Ambient Intelligence”.

The synergy of AI and IoT that Ambient Intelligence enables will be a key driver for the machine-to-machine (M2M) economy, with businesses and consumers already demanding it in daily tasks. However, advancing the M2M economy is held back by the lack of a reliable, secure identity protocol for objects and devices. Without such a protocol, companies resort to awkward workarounds to meet the demands of users – using a smartphone as a proxy for a vehicle, for example.

When a phone isn’t a phone

Artist Simon Weckert publicized photo and video evidence of his performance art piece on 1 February 2020, showing himself pulling a wagon filled with 99 smartphones through Berlin. His message of how the presence of a smartphone impacts the way traffic is represented in Google Maps received over three million views. With so many smartphones, he easily turned a “green”, traffic-free section of road to “red” and congested, rerouting actual vehicles to side streets. Weckert’s purpose was to illuminate the influence that digital tools such as Google Maps have over our choices in the physical world, but this performance invites other takeaways, such as the misrepresentation of devices.

IOTA’s role in delivering an identity protocol

IOTA makes very promising contributions to digital identity, for humans as well as for devices and objects. It is an open-source, blockchain-like protocol that, among other things, can host the emerging Unified Identity Protocol enabling a secure M2M exchange of data – including between ambient devices. Established in Germany as a non-profit foundation to provide “the next generation of protocols for the connected world”, IOTA is both the name of the foundation and of the DAG-based technology layer, which is free for public use. It stands out as a digital (and decentralized) identity solution for a number of reasons:

  • The use of Directed Acyclic Graphs (DAGs) addresses the chronic scalability weakness of typical blockchains: a participating node’s transaction is only confirmed once that node has approved two previous transactions, so the more transactions flow through IOTA, the faster each one can be approved (see the sketch after this list).
  • Transactions are fee-less, again because of the blockchain-like DAG structure. Unlike the typical proof-of-work consensus mechanism which incentivizes honest participation by awarding cryptocurrencies and charging transaction fees, a participating node’s incentive in IOTA is to have its own transaction approved.
  • Microtransactions (such as sharing the battery status of an electric vehicle every few minutes) are possible because transactions are fee-less. The potential for objects and devices to share data on a constant basis using IOTA is much more feasible if there are not prohibitive costs associated with it.
  • IOTA brings Self-Sovereign Identity to humans as well as devices and objects. It arranges an identity ecosystem made up of Holder, Issuer, and Verifier roles, following the emerging standards for Decentralized Identifiers (DIDs) and Verifiable Credentials. By employing cryptographic techniques such as zero-knowledge proofs, users – and objects – can prove that an identity attribute is true without over-revealing information.
  • The partner ecosystem includes key players that are highly invested in bringing IoT and Industry 4.0 to maturity. Siemens alone has been granted 13 patents for IOTA-based identification and authentication technologies.
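
To make the approval rule from the first bullet tangible, here is a deliberately simplified Python sketch of a Tangle-like DAG; real IOTA node software implements tip selection, proof of work and consensus very differently, so treat this as a toy model of the idea, not the protocol:

    import random

    class Transaction:
        def __init__(self, tx_id, approves):
            self.tx_id = tx_id
            self.approves = approves   # IDs of up to two earlier transactions this one validates

    class Tangle:
        """Toy DAG in the spirit of the rule above: fee-less, and every new
        transaction must validate earlier ones before it can be attached."""
        def __init__(self):
            self.transactions = {"genesis": Transaction("genesis", [])}

        def attach(self, tx_id):
            # Crude stand-in for tip selection: pick up to two earlier transactions.
            earlier = list(self.transactions)
            approves = random.sample(earlier, k=min(2, len(earlier)))
            self.transactions[tx_id] = Transaction(tx_id, approves)
            return approves

    tangle = Tangle()
    for i in range(6):
        tx_id = "tx%d" % i
        print(tx_id, "approves", tangle.attach(tx_id))

The more transactions arrive, the more candidates there are to validate in parallel, which is the intuition behind IOTA’s claim that throughput improves with load.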

Multiple decentralized identity solutions are under development, but these are typically blockchain-based. While they have their merits, IOTA is a non-blockchain option for decentralized identity that may outpace them all. To learn more about the future of digital and decentralized identity, join KuppingerCole at the European Identity & Cloud Conference in May 2020 for over 200 sessions and insights from a wide array of global experts.

Top 5 Recommendations for Reducing Cyber Risks in 2020

The turn of the year has prompted many cybersecurity news outlets to talk about trends and challenges in cybersecurity. Knowing the trends and challenges is important, but we want to give you some hands-on recommendations to increase security for your company. Of course, the following recommendations are just a selection from many possible measures. We are happy to discuss the implications for your concrete business model with you in more detail.

1. Beyond detect, prevent, respond: recovery & Incident Response Management

While AI helps increase cyberattack resilience, there is one more thing to look at: recovery. Every organization is under attack, and there is a risk of being hit by a major attack at some time. The most important things then are, in that order: First, recover your IT – at least its core functions – to become operational again. The time window for a business to survive a severe attack can be very short, sometimes in the range of very few days. Be able to recover, and integrate your cybersecurity efforts with Business Continuity Management. The second thing is to prepare for communication and resolution: Incident Response Management. This must be prepared in advance – thinking about it once the disaster has occurred is too late. Join the brand-new KC Master Class Incident Response Management starting on February 18 to learn how to define an incident response strategy that protects your company.

2. Define your Identity & Security Fabric for serving both agility & security

Beyond API security, you need to ensure that your IT can serve the needs of the teams creating the new digital services. This is all about agility and time-to-value. You need to provide consistent, easy-to-use identity and security services via APIs. It is time to build your Identity & Security Fabric – one that serves both the new digital services and the need for managing and protecting your legacy IT.

3. Go Adaptive Authentication

Put Adaptive Authentication and Passwordless Authentication at the top of your to-do list. Everything you change and add around authentication must fit these paradigms. Build a central authentication platform, ensure that you can also work seamlessly with other Identity Providers (IdPs), and understand the authentication assurance level they provide.
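
As a purely illustrative sketch of what “adaptive” means in practice, here is a toy risk-scoring decision in Python; the signals, weights and thresholds are invented for this example, and every real product uses its own, far richer model:

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool
        usual_country: bool
        impossible_travel: bool
        target_sensitivity: str   # "low", "medium" or "high"

    def risk_score(ctx: LoginContext) -> int:
        # Toy scoring: each risky signal adds weight; real engines use many more signals.
        score = 0
        if not ctx.known_device:
            score += 30
        if not ctx.usual_country:
            score += 20
        if ctx.impossible_travel:
            score += 50
        if ctx.target_sensitivity == "high":
            score += 20
        return score

    def required_authentication(ctx: LoginContext) -> str:
        score = risk_score(ctx)
        if score >= 70:
            return "deny, or step up to a phishing-resistant factor"
        if score >= 30:
            return "passwordless plus an additional factor (e.g. FIDO2 key)"
        return "passwordless single factor is sufficient"

    print(required_authentication(LoginContext(False, True, False, "high")))

The point is that the required assurance level is derived from context at login time rather than being fixed per application – which is also why you need to understand the assurance levels external IdPs can deliver.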

4. Build on Managed SOC & SOC as a Service

Running your own SOC is hard. Look for managed services or SOC as a Service – there are many providers out there already. While it is hard to build and run your own SOC independently, despite all the technology improvements, it is not that hard to find a strong partner to support you.

5. Define your IIoT and OT security approach - together

The biggest challenge in IIoT and OT security is understanding and accepting each other. IT security and OT security face different challenges, starting with the difference between security and safety. Thus, to make progress, it is critically important to find a common understanding of targets, terminology, and requirements, and to recognize what both sides can contribute to better solutions. It is about people and organization first, then technology.

There would be many more recommendations to give, beyond the five key challenges, the top technology trends, and the related recommendations. Let me look at just three more:

1. PAM: Implement a strong PAM for the future

PAM (Privileged Access Management) remains a central technology, not only for identity but also for cybersecurity – it sits in between the two. You need a strong PAM, and PAM is evolving beyond its traditional scope into areas such as PAM for DevOps and cloud-integrated PAM. Understand what you need and ensure that you have a strong PAM in place for the future. For a deeper understanding, join the KC Master Class PAM for the 2020s.

2. Portfolio Management. The right tools, not many tools

As indicated at the beginning: tools don’t help if they are not backed by people, organization, policies, and processes. And many tools don’t help more than a well-chosen selection of the right ones. Given that budgets are limited, picking the right portfolio is essential. Understand which tools really help in mitigating which risks, and redefine your portfolio, focusing on the tools that really help you mitigate risk. KuppingerCole’s Portfolio Compass provides a proven methodology for optimizing your cybersecurity tools portfolio.

3. C-SCRM: Understand and manage the risks of your Cybersecurity Supply Chain

Finally, there is a new theme to look at closely: C-SCRM, or Cybersecurity Supply Chain Risk Management. This covers both the hardware and software (including cloud services) you procure and the suppliers that might affect your security posture. Pick up this topic with well-thought-out supplier (cyber) risk management at all levels. For a start, check out this blog post, which looks at why C-SCRM is becoming essential for your digital business.

There would be far more information to provide. The good news is: While challenges are increasing, there are ways to keep a grip on the risk. Focus on the major risks, focus your investments, and work with the experts as well as your peers. A good place to meet your peers will be EIC 2020, May 12th to 15th, in Munich.

KuppingerCole specializes in advisory services for cybersecurity, artificial intelligence, and identity and access management. If your company needs assistance finding the right toolset or architecture, or deciding what to focus on, KuppingerCole Analysts is happy to support you.

Why C-SCRM Is Becoming so Essential for Your Digital Business

The current discussion around Huawei and whether or not it should be endorsed as a supplier for 5G mobile network hardware and software has reminded us of how dependent we are on the integrity and reliability of such manufacturers, and of how difficult it is to trust their products if they are closed source and proprietary, or otherwise hard or impossible to examine. Due to its undisputed proximity to the Chinese government, Huawei has come under suspicion, primarily from US authorities, of providing undocumented access capabilities to Chinese intelligence agencies, enabling them to wiretap mobile communications in 5G networks globally.

Lessons learned from the Crypto AG scandal

Such allegations weigh heavily, and if they are more than just politically inspired trade war rhetoric, we would have to profoundly change the way we look at the risks deriving from supply chains for cybersecurity-related equipment. Is it paranoid to think that governments and their secret services would make suppliers deliver manipulated hardware and software, or is it real? The most recent story about the German intelligence service BND and its US counterpart, the CIA, covertly co-owning the former Swiss manufacturer of cryptographic hardware and software, Crypto AG, shows that some governments go even further.

Germany and the USA secretly took over a leading crypto manufacturer that supplied the diplomatic, military and secret services of more than 120 countries worldwide, and weakened its algorithms so that they were able to decrypt messages without the proper key. Needless to say, neither the Soviet Union nor China was careless enough to purchase from Crypto AG, so most of the affected countries were, in fact, those that considered themselves the USA’s allies.

Paranoia vs. Adequate risk assessment in supplier choice

“Rubikon”, the German codename of this operation, helps us better understand why some governments are more paranoid than others with regard to Huawei: they simply know from their own experience that these threats are real. The fact that Crypto AG was situated in Switzerland and looked like a privately owned, comparably small company with very high expertise in cryptography should make us think even harder about the way we choose our suppliers.

The weak spots of the supply chain  

The risk of purchasing security hardware and software with deliberately or accidentally built-in weaknesses is higher than we expected – but it is not the only element of supply chain risk. A supply chain can only be as strong as its weakest spot. In a world where enterprises focus on what they do best and add everything else through supply chains, it is more critical than ever to know these weak spots and to limit the risks arising from them. Some of the most important challenges are:

  1. Selecting suppliers with a low risk profile: It is complex, expensive and inefficient to collect all the information needed to evaluate and quantify the risks deriving from internal processes and vulnerabilities within a supplier’s organization.
  2. In a networked economy, the number of suppliers is increasing: Even if we manage to assess a relatively small number of suppliers that are not too big and complex, the time and resources consumed by properly risk-assessing an ever-increasing number of cyber suppliers simply become too high.
  3. Most organizations underestimate cyber supply chain risks: Cyber incidents happen every day, anywhere in a supply chain. Suppliers are threatened in the same way as your own company, and your suppliers’ threats add to your company’s risk profile. Therefore, suppliers and their risks have to be monitored continuously, not just once (see the sketch after this list).
  4. Cyber supply chain risks are multidimensional, with many different stakeholders involved and interfaces to privacy & data protection, risk management, compliance, controlling, and audit. Reliably building continuous assessment strategies and processes on top of such a multidimensional topic is a challenge and remains widely unsolved in many organizations.
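
To illustrate the third point, here is a minimal sketch of continuous (rather than one-off) supplier risk scoring; the suppliers, findings and weights are invented for this example, and a real C-SCRM process would draw on far richer signal feeds:

    import datetime

    # Illustrative signal feed: (supplier, date, finding, severity 1-5)
    findings = [
        ("CloudHoster A", datetime.date(2020, 1, 10), "expired TLS certificate", 2),
        ("Component Maker B", datetime.date(2020, 1, 28), "breach disclosed by supplier", 5),
        ("CloudHoster A", datetime.date(2020, 2, 3), "critical CVE in exposed service", 4),
    ]

    def current_risk(supplier, as_of, window_days=90):
        """Re-computed on every run, so the score follows the supplier over time
        instead of freezing the result of a one-off assessment."""
        cutoff = as_of - datetime.timedelta(days=window_days)
        recent = [sev for s, day, _, sev in findings if s == supplier and day >= cutoff]
        return sum(recent)

    as_of = datetime.date(2020, 2, 15)
    for supplier in ("CloudHoster A", "Component Maker B"):
        print(supplier, "current risk score:", current_risk(supplier, as_of))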

Looking at these complex supply chain risk management challenges and adding the increasing maturity and sophistication of cyberattacks to the equation, it is the right time now to add C-SCRM to our core cybersecurity strategy.

Good practices and standards provide guidance

It doesn’t really matter whether a cyberattack or data theft targets the infrastructure of your company directly or whether a supplier’s weakness is exploited to gain unauthorized access. As a first step, good practices and standards provide sufficient guidance. ISO/IEC 27036:2013, as part of the ISO 27000 series, describes the foundations of information security in supply chain relationships.

Furthermore, NIST has updated its Cyber Security Framework and added a chapter on “Supply Chain Risk Management”. Aside from general cyber supply chain risks, version 1.1 of the NIST Cyber Security Framework specifically addresses IoT/IIoT-related challenges. For the first time, NIST has added a whole category focused on supply chain risk evaluation and assessments involving all actors: hardware manufacturers, software vendors, cloud service providers, and other service suppliers and consumers.

Where KuppingerCole can help you to make your supply chain more secure

Communication and verification of mandatory commitments to cybersecurity requirements between all involved parties is a core aspect of C-SCRM, with regular security assessments and vulnerability scans to make sure that supply chain security standards remain high.

With the Cloud Risks and Controls Matrix (CRCM) KuppingerCole offers both a toolkit and a compendium for assisting cloud customers in assessing the overall security risk resulting from the deployment of services in the cloud.

Cyber Supply Chain Risk Management will be discussed at EIC 2020 on May 13 at 12 pm in the Digital Enterprise Security track. An hour-long session dedicated to C-SCRM will kick off with the KuppingerCole analyst talk “Necessary Components of an Effective C-SCRM”, followed by the panel discussion “Managing Cyber Supply Chain Risks and Achieving Digital Business Resilience”, with representatives of Huawei and various international cybersecurity organizations participating.

Will 2020 Be the Year of Oracle Cloud?

Recently I had an opportunity to attend the Next Generation Cloud Summit, an event organized by Oracle in Seattle, WA, for industry analysts to learn about the latest developments in Oracle’s cloud strategy. This was Oracle’s first analyst summit in Seattle and, coincidentally, my first time in the Cloud City as well… Apparently, that has been a legitimate nickname for Seattle for a few years now, since all notable cloud service providers are located there, with Google and Oracle joining AWS and Microsoft at their historical home grounds by opening cloud offices in the city.

Alas, when it comes to weather, Seattle in winter lives up to its nickname as well – it rained non-stop for the whole three days I spent at the event. Oh well, at least nothing distracted me from learning about and discussing the latest developments in Oracle’s cloud infrastructure, database, analytics, security and application development portfolios. Unfortunately, some of the things I learned will remain under NDA for some time, but I think that even the things we can already talk about clearly show that Oracle has finally found the right way to reinvent itself.

A veteran database technology vendor, the company has been working hard to establish itself as a prominent cloud service provider in recent years, and the struggle to bridge the cultural gap between the old-school “sealed ecosystem” approach Oracle has been so notorious for and the open, heterogeneous nature of the cloud has been very real.

A latecomer to the cloud market, the company had a unique opportunity to avoid repeating the mistakes of its older competitors and to implement its cloud infrastructure with a much higher level of security by design (at least in what Oracle refers to as the “second generation cloud”). Combined with a rich suite of business applications and the industry-leading database to power them, Oracle had all the components of a successful public cloud, but unfortunately, it took the company quite some time to figure out how to market it properly.

It was only last year that the company finally stopped trying to fight competing cloud providers on their terms, with tactics like claiming that Oracle Cloud is cheaper than AWS (while that might technically be the case for some scenarios, independent tests by industry analysts usually measure cloud costs with completely different methods). Instead, it finally became clear that the company should focus on its unique differentiators and their added value for Oracle Cloud customers – such as the performance and compliance benefits of the Autonomous Database, the intelligent capabilities of the Oracle Analytics services and, of course, the cutting-edge networking technology of Oracle Cloud Infrastructure.

However, it is the year 2020 that is going to bring the decisive push for Oracle’s new cloud strategy, and the company demonstrates its commitment with some impressive developments. First of all, by the end of this year, Oracle Cloud will expand from the current 17 regions to 36, including countries such as Israel, Saudi Arabia and Chile, to bring its services to all major markets. In addition, Oracle is expanding its interconnect program with Microsoft, increasing the number of data centers with high-speed direct connections to the Azure cloud to six. This strategic partnership with Microsoft finally makes true multi-cloud scenarios possible, where developers could, for example, deploy their frontend applications using Azure services while keeping their data in Autonomous Databases on managed Exadata servers in the Oracle Cloud.

Speaking of “autonomous”, the company continues to expand this brand and ultimately aims to deliver a comprehensive, highly integrated and, of course, intelligent suite of services under the Autonomous Data Platform moniker. This will include not only the various flavors of the “self-driving” Oracle Database, but also a range of data management services for all kinds of stakeholders: from developers and data scientists to business analysts and everyone else. Together with the Oracle Analytics Cloud, the company aims to provide a complete solution for all your corporate data in one place, with seamless integrations with Oracle’s own public cloud services, hybrid “at Customer” deployments, and even competitors (now rather partners) like Microsoft.

My personal favorite, however, was Oracle APEX, the company’s low-code development platform that gives mere mortals without programming skills the opportunity to quickly develop simple but useful and scalable business applications. To be honest, APEX has been an integral part of every Oracle database for over 15 years, but for a long time it has remained a kind of hidden gem used primarily by Oracle database customers (I was surprised to learn that Germany has one of the largest APEX communities, with hundreds of developers in my hometown alone). Well, now anyone can start with APEX for free, without any prerequisites – you don’t even need an Oracle account for that! Alas, I wish Oracle had invested a bit more in promoting tools like this outside of their existing community. I had to travel all the way to Seattle to learn about this, but at least now you don’t have to!

Of course, Oracle still has to learn quite a lot from the likes of Microsoft (how to reinvent its public image for the new generation of IT specialists) and perhaps even Apple (how to charge a premium and still make customers feel happy). But I’m pretty sure they are already on the right track to becoming a proper cloud service provider with a truly open ecosystem and a passionate community. 

Moving Towards AI and IoT Solutions Beyond Machine Learning

Microsoft is currently running ads extolling the virtues of AI and IoT sensors in helping farmers produce more and better crops, with less waste and higher yields. Elsewhere, in manufacturing, supply chain management is being transformed with digital maps of goods and services that reduce waste and logistical delays.

In Finland, a combination of AI and IoT is making life safer for pedestrians. The City of Tampere and Tieto built a pilot system that automatically detects when a pedestrian is planning to cross the street at an intersection. Cameras at intersections feed algorithms trained to detect the shape of pedestrians with 99% accuracy, which then activate the traffic lights to stop traffic.

Low Latency, High Expectations

There is a common thread in all these examples: sensors at the edge send data to algorithms in the cloud that trigger a response, while the data is also collected to improve the algorithms, extrapolate trends and improve future systems. These examples show that IoT and AI already work well when responding to pre-scripted events such as a pedestrian appearing near a crossing or soil drying out. The machines have already learnt how to deal with situations that are expected in their environment. They are not so much replacing the human decision-making process as removing the chore of having to make the right decision. All good.
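
A highly simplified sketch of that common thread – an edge device sends sensor data to a pre-trained model, which triggers a pre-scripted response; the detection model and the action names below are placeholders, not any vendor’s API:

    # Placeholder for a model trained offline in the cloud, e.g. a pedestrian detector.
    def cloud_inference(frame_features: dict) -> str:
        if frame_features.get("pedestrian_probability", 0.0) > 0.9:
            return "SWITCH_LIGHTS_TO_RED"
        return "NO_ACTION"

    def edge_device_loop(sensor_readings):
        for reading in sensor_readings:
            # 1. The edge sensor sends its data to the cloud ...
            action = cloud_inference(reading)
            # 2. ... and executes the pre-scripted response, while the raw data is
            #    kept to retrain and improve the model later.
            if action == "SWITCH_LIGHTS_TO_RED":
                print("Stopping traffic for pedestrian")

    edge_device_loop([{"pedestrian_probability": 0.97}, {"pedestrian_probability": 0.12}])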

Low latency is essential in any AI and IoT application for industry or agriculture if the right response is to be sent promptly to the edge from an existing library of algorithms. But what if the edge devices had to learn very quickly how to deal with a situation they had not experienced before, such as an out-of-control wildfire or unprecedented flooding on agricultural plains? Here, latency is only part of the equation. The other part is the availability at the edge of the massive amounts of data needed to decide what to do – but edge devices by their nature cannot typically store or process such levels of data.

IBM has written a research paper on how edge devices – in this case drones sent to monitor a wildfire – could perform a complex learning operation, simultaneously modelling, testing and ranking many algorithms before deciding on the appropriate analytics to deploy to the edge so that firefighters can respond. This is much closer to a truly intelligent model of IoT deployment than our earlier examples.

In the IBM example, Cognitive Processing Elements (CPEs) are used in sequence to assist in making the right decisions to help stop the fire spreading, and to understand how wildfires behave in extremis – in itself a not well understood phenomenon. So, can we create a hybrid IoT/AI/cloud architecture that intelligently processes data at the appropriate points in the system depending on circumstances? It is not just in natural disasters that this may help, but also in another great hope for AI and IoT: the fully autonomous vehicle.

Who Goes First, Who Goes Second?

Currently, driverless cars are totally reliant on pre-existing algorithms and learnings – such as a red light or the shape of a pedestrian in the headlights – to make decisions. We remain a long way from fully autonomous vehicles; in fact, some researchers are now sceptical of whether we will ever reach that point. The reason is that human drivers already act like the intelligent drones featured in IBM’s research paper – but über versions of them. They not only have access to massive levels of intelligence but can process it at the edge, in real time, to make decisions based on their experience, intelligence and, crucially, learnt social norms.

Consider the following example, which occurs millions of times every day on Europe’s narrow, crowded suburban streets, to see how this works. Cars will invariably be parked on both sides, with only a gap for one vehicle to pass in the middle. What happens when two cars approach? One or the other must give way – but which one? And how many cars are let through once one driver takes the passive role? Somehow, in 99.9% of incidents, it just works. One day we may be able to say the same when two autonomous vehicles meet each other on a European street!

Three Critical Elements Required to Close the Cybersecurity Skills Gap

The status of cybersecurity is fairly clear: 82% of employers report that their cybersecurity skills are not sufficient to handle the rising number of cyber incidents (Center for Strategic & International Studies, 2019: The Cybersecurity Workforce Gap). There is a gap – a gap between the skills needed for strong cybersecurity and the skills you actually have. It is an individual problem, but also an enterprise problem and a global problem. The vast majority of the world simply does not have the skills to keep up with the cyber risks that we know exist.

Three Critical Elements to Closing the Skills Gap

KuppingerCole research shows that there are three critical elements required to close the cybersecurity skills gap: education, tools, and collaboration. Skills require having adequate knowledge: what are the typical attack vectors of a cyber incident? What are the best processes to have in place? Skills also require using the correct tools: a skilled carpenter would never use a welder in his woodwork. So why do many still cut corners by jerry-rigging inadequate tools to fit security purposes? Lastly, these skills require collaboration. Some aspects of cybersecurity should come from in-house; others would be far more efficient coming from a Managed Security Service Provider (MSSP). Deciding what the appropriate balance is requires insight into your own team’s capabilities.

The Role of Organizational Change Management

Closing the cybersecurity skills gap is also an organizational change problem. Very often, incident response management programs do not have the full support of senior management, or face implementation challenges when employees do not fully understand new processes. Experience plays a dominant role here; the misconception is that only a few people are relevant to cybersecurity programs when in fact, every person in an organization should play an active role. Taking the time to build allies in an organization, communicate with and train coworkers, and assess progress is fundamental to building cybersecurity skills in an organization.

This skills shortage paradigm is shifting. Having identified the critical elements of building cybersecurity capacity, KuppingerCole Analysts have drawn on years of experience working alongside companies to implement top-of-the-line cybersecurity programs, creating a master class that brings pivotal knowledge to the right people. Every individual is a critical actor in a cybersecurity program. The global economy does lack trained cybersecurity professionals, but training for these skills is no longer inaccessible.

A Solution to the Skills Gap

The first steps to building up cybersecurity begin with knowing the organization in question. An analysis of the capabilities already covered in the organization should be made, and careful consideration should be given to where it should supplement with MSSPs. KuppingerCole can support this process. The KC Master Class facilitates a tight relationship with the trainer, a senior KC analyst. Individualized workshops, 1:1 problem-solving sessions, and decision support are built into the master class. A modern learning style combines a digital/analog instructional environment with real-world, bootcamp-style meetings and eLearning materials. The process is conducted in close contact with the trainer and expert community, using standard collaboration software such as MS Teams.

Lead Analyst Alexei Balaganski writes: “the primary reason for not doing security properly is insufficient guidance and a lack of widely accepted best practices in every area of cybersecurity.” Each individual has the capacity to change this reality. KuppingerCole can help do this.

Taking One Step Back: The Road to Real IDaaS and What IAM Really Is About

Shifting IAM to Modern Architecture and Deployment Models

There is a lot of talk about IDaaS (Identity as a Service) these days, as the way to do IAM (Identity and Access Management). There are also fundamental changes in technology, such as the shift to containers (or even serverless) and microservice architectures, which also impact the technology solutions in the IAM market.
However, we should start at a different point: What is it that business needs from IAM? If we step back and take a broader perspective, it all ends up with a simple picture (figure 1): The job of IAM is to provide access for everyone (and everything) to every service and system, in a controlled manner. That is what we must focus on, and that is where we should start (or restart) our IAM initiatives.

Identity Fabric Lifecycle by Martin Kuppinger

Focus on the Business Need, Not on Technology: Deliver Identity Services

Even while this graphic looks simple, there is a lot in it:

  1. It is about all types of identities – employees, partners, customers, consumers, and non-human identities, e.g. in RPA (Robotic Process Automation), services, or things
  2. It is about an integrated perspective on Access Management (e.g. Identity Federation) and Identity Management (e.g. Lifecycle Management/Provisioning), but also beyond, covering aspects such as Consent and Privacy Management; however, Access Management is at the core
  3. It is about supporting a variety of Identity Providers, beyond internal directories
  4. It is about collaboration along the value chain and supply chain, with others, well beyond Employee IAM
  5. It is about delivering these services in an agile manner, supporting the demand for creating “identity-enabled” digital services in the digital transformation of businesses
  6. It is about a common set of services – what we call an Identity Fabric

You could argue that IDaaS takes on a different notion in the model of the Identity Fabric, which is true: it is about providing Identity Services.

Taking a Deeper Look at the Identity Fabric: Identity Services and IDaaS

When we take a deeper look at the Identity Fabric (figure 2), it becomes apparent that aspects of IDaaS are integrated into this concept – and even more so when looking at the architecture and microservices:

  1. IAM must support flexible operating models, from on-premises to the cloud. Many businesses will run some sort of hybrid mode for their IAM, given that the Identity Fabric will commonly be a mix of existing and new components. But supporting IDaaS in its common understanding – IAM delivered in an “as a Service” operating model – is essential.
  2. IAM must provide services, beyond just managing applications. Currently, IAM is targeted at the latter, by creating user accounts, setting entitlements, or acting as a shell for Access Management in front of the applications. Digital services, however, require a set of identity services (APIs) to consume (see the sketch after this list). This is a fundamentally different concept, and this form of Identity Services must be supported as well.
  3. Finally, and related to #1, the architecture must be based on microservices. Only this allows for flexible deployment, agile roll-out, and extensions/customizations. Done right, customization, integration and orchestration across multiple services take place in separate microservices. That way, they are easy to maintain, and product/service updates will not affect customizations (as long as the APIs remain stable).
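
To illustrate the second point, here is a minimal sketch of a digital service consuming identity services via APIs instead of being provisioned into by IAM; the endpoint names are invented for this example (token introspection follows the OAuth2 standard, the consent API is hypothetical):

    import requests  # any HTTP client will do

    IDENTITY_FABRIC = "https://identity.example.com"  # hypothetical Identity Fabric endpoint

    def handle_order_request(access_token: str, order: dict) -> dict:
        # The digital service does not manage accounts or entitlements itself;
        # it consumes identity services (token introspection, consent check) via APIs.
        token_info = requests.post(
            f"{IDENTITY_FABRIC}/oauth2/introspect",
            data={"token": access_token},
        ).json()
        if not token_info.get("active"):
            return {"status": 401, "reason": "invalid or expired token"}

        consent = requests.get(
            f"{IDENTITY_FABRIC}/consent/{token_info['sub']}",
            params={"purpose": "order-processing"},
        ).json()
        if not consent.get("granted"):
            return {"status": 403, "reason": "consent not granted"}

        return {"status": 200, "order_id": order.get("id")}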

Identity Fabrics are, from our perspective, the foundation for a future-proof IAM that serves the business demand. They provide the capabilities required for supporting the business use cases, based on a set of services that are built in a modern architecture.

Identity Fabric by Martin Kuppinger

The Road to IDaaS

Moving to an Identity Fabric is a journey that allows you to build on what you already have in IAM and to transform it gradually, while adding new and modern services that rapidly provide the capabilities required to serve the identity needs of digital services as well as the integration of new SaaS services.

Take a look at our Advisory Services for further decision support for the digital strategy of your business or simply browse our research library KC PLUS to get more insights on digital identity topics.

The C5:2020 - A Valuable Resource in Securing the Provider-Customer Relationship for Cloud Services

KuppingerCole has accompanied the unprecedented rise of the cloud as a new infrastructure and alternative platform for a multitude of previously unimaginable services – and has done so constructively and with the necessary critical distance right from the early beginnings (blog post from 2008). Cybersecurity, governance and compliance have always been indispensable aspects of this.

The “wild west phase” of early cloud deployments, based on quick decisions and individual, departmental “credit card”-based cloud subscriptions without corporate oversight should lie behind us. An organization adopting a cloud service needs to ensure that it remains in compliance with laws and industry regulations. There are many aspects to look at, including but not limited to compliance, service location, data security, availability, identity and access management, insider abuse of privilege, virtualization, isolation, cybersecurity threats, monitoring and logging.

Moving to the cloud done right

When moving to the use of cloud services, it is most important to take a risk-based approach. There is no such thing as one single version of “the cloud”. It is not a single model, but covers a wide and constantly growing spectrum of applications, services and virtualized infrastructure, delivered by a wide range of cloud service providers. While many people think mainly of the large platform providers like AWS or Microsoft Azure, there is a growing number of companies providing services in and from the cloud. To ensure the security of their customers’ data, providers of cloud services should comply with best practices for the provision of the services they offer.

Moving services into the cloud or creating new services within the cloud substantially changes the traditional picture of typical responsibilities for an application or infrastructure, and introduces the Cloud Service Provider (CSP) as a new stakeholder in the network of established functional roles. Depending on which parts of the services are provided by the CSP on behalf of the customer and which parts are implemented by the tenant on top of the provided service layers, responsibilities are assigned to either the CSP or the tenant.

Shared responsibility between the provider and the tenant is a key characteristic of every cloud service deployment scenario. For every real-life cloud service scenario, each identified responsibility has to be clearly assigned to the appropriate stakeholder. This assignment can differ drastically between scenarios where only infrastructure is provided – for example plain storage or computing services – and scenarios where complete Software as a Service (SaaS, e.g. Office 365) is provided. The prerequisite for an appropriate service contract between provider and tenant is therefore a comprehensive identification of all responsibilities and an agreement on which contract partner each responsibility is assigned to.
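
As a simplified illustration of that assignment exercise, here are a few example responsibilities mapped per service model; the entries are invented for illustration and are no substitute for the complete, contract-specific matrix:

    # Illustrative only -- each real contract needs its own complete matrix.
    responsibility_matrix = {
        "physical data center security":  {"IaaS": "CSP",    "SaaS": "CSP"},
        "operating system patching":      {"IaaS": "tenant", "SaaS": "CSP"},
        "application configuration":      {"IaaS": "tenant", "SaaS": "shared"},
        "identity and access management": {"IaaS": "tenant", "SaaS": "shared"},
        "classification of stored data":  {"IaaS": "tenant", "SaaS": "tenant"},
    }

    def unassigned(matrix, model):
        """Contract review helper: every responsibility must have exactly one owner."""
        return [task for task, owners in matrix.items() if model not in owners]

    print(unassigned(responsibility_matrix, "SaaS"))  # [] means nothing is left unassigned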

However, the process involved is often manual and time consuming, and there is a multitude of aspects to consider. From the start it was important to us to support organizations in understanding the risks that come with the adoption of cloud services and in assessing the risks around their use of cloud services in a rapid and repeatable manner.

Best practices as a baseline

There are several definitions of best practice, including ITIL, COBIT and ISO/IEC 270xx, but also industry-specific specifications from the Cloud Security Alliance (CSA). For a primarily German audience (but de facto far beyond that), the BSI (the German Federal Office for Information Security) created the Cloud Computing Compliance Criteria Catalogue (BSI C5 for short) several years ago as a guideline for all those involved in the process of evaluating cloud services – users, vendors, auditors, security providers, service providers and many more.

It is available free of charge to anyone interested – and many should be interested: readers benefit from a well-curated and proofread, up-to-date catalogue of criteria that is updated regularly and openly available for anyone to learn from and use.

These criteria can be used by cloud service users to evaluate the services offered. Conversely, service providers can integrate these criteria as early as the conceptual phase of their services and thus ensure “compliance by design” in technology and processes.

C5 reloaded – the 2020 version

BSI just published a major update of the C5 entitled C5:2020. Many areas have been thoroughly revised to cover current trends and developments like DevOps. Two further areas have been added:

  • “Product security” focuses on the security of the cloud service itself, so that the requirements of the EU Cybersecurity Act are reflected in the questionnaire.
  • Especially with regard to US authorities, the handling of “investigation requests from government agencies” regularly raises questions for European customers. This second block of questions was therefore designed to ensure appropriate handling of such requests, including legal review.

The C5:2020 is clearly an up-to-date and valuable resource for securing the shared responsibility between cloud customer and cloud service provider.

Applying best practices to real-life scenarios

The process of implementing and securing the resulting technical concepts and necessary mitigating measures requires individual consideration of the specific requirements of each customer company. This includes a risk-oriented approach to identify the criticality of data, services and processes, and to develop a deep understanding of the effectiveness and impact of the implemented measures.

KuppingerCole Research can provide essential information as a valuable foundation for technologies and strategies. KuppingerCole Advisory Services support our clients strategically in the definition and implementation of necessary conceptual and actionable measures. This is particularly true when it comes to finding out how to efficiently close gaps once they have been identified. This includes mitigating measures, accompanying organizational and technical activities, and the efficient selection of the appropriate and optimal portfolio of tools. Finally, the KuppingerCole Academy with its upcoming master classes for Incident Response Management and Privileged Access Management supports companies and employees in creating knowledge and awareness.

The Next Best Thing After "Secure by Design"

There is an old saying: “you can lead a horse to water, but you can’t make it drink”. Nothing personal against anyone in particular, but it seems to me that this perfectly represents the current state of cybersecurity across almost any industry. Although cybersecurity tools are arguably becoming better and more sophisticated – cloud service providers, for example, are constantly rolling out new security and compliance features in their platforms – the number of data breaches and hacks continues to grow. But why?

Well, the most obvious answer is that security tools, even the best ones, are still just tools. When a security feature is implemented as an optional add-on to a business-relevant product or service, someone still has to know that it exists, deploy and configure it properly, and then operate and monitor it continuously, taking care of security alerts as well as bug fixes, new features and the latest best practices.

The skills gap is real

Perhaps the most notorious example of this problem is the Simple Storage Service (better known as S3) from AWS. For over a decade, this cloud storage platform has been one of the most popular places to keep any kind of data, including the most sensitive kinds like financial or healthcare records. And even though over the years AWS has introduced multiple additional security controls for S3, the number of high-profile breaches caused by improper access configuration leaving sensitive data open to the public is still staggering. A similar reputation stain – database installations exposed to the whole Internet without any authentication – still haunts MongoDB, even though the issue was fixed years ago.
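
To make the S3 example concrete, here is a short boto3 sketch of how such a misconfiguration can be detected and closed with S3 Block Public Access; the bucket name is invented, and the exact calls and response fields are worth double-checking against the current AWS documentation:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "example-sensitive-data-bucket"  # hypothetical bucket name

    # Check whether the bucket-level "block public access" settings are in place.
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        config = {}  # no public-access-block configuration set at all

    flags = ("BlockPublicAcls", "IgnorePublicAcls", "BlockPublicPolicy", "RestrictPublicBuckets")
    if not all(config.get(flag) for flag in flags):
        # Close the gap: block every form of public access for this bucket.
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={flag: True for flag in flags},
        )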

Of course, every IT expert is supposed to know better and never make such disastrous mistakes. Unfortunately, to err is human, and the even bigger problem is that not every company can afford a team of such experts. The notorious skills gap is real – only the largest enterprises can afford to hire the real pros, and for smaller companies, managed security services are perhaps the only viable alternative. For many companies, cybersecurity is still a kind of cargo cult, where a purchased security tool isn’t even properly deployed or monitored for alerts.

“Secure by design” is too often not an option

Wouldn’t it be awesome if software were just secure on its own, without any effort from its users? This idea is the foundation of the “secure by design” principles established years ago, which define various approaches to creating software that is inherently free from vulnerabilities and resilient against hacking attacks. Alas, writing properly secured software is a tedious and costly process, which in most cases does not provide any immediate ROI (with a few exceptions like space flight or highly regulated financial applications). Also, these principles do not apply well to existing legacy applications – it is very difficult to refactor old code for security without breaking a lot of stuff.

So, if making software truly secure is so complicated, what are more viable alternatives? Well, the most trivial, yet arguably still the most popular one is offering software as a managed service, with a team of experts behind it to take care of all operational maintenance and security issues. The only major problem with this approach is that it does not scale well for the same reason – the number of experts in the world is finite.

Current AI technologies lack flexibility for different challenges

The next big breakthrough that will supposedly solve this challenge is replacing human experts with AI. Unfortunately, most people tend to massively overestimate the sophistication of existing AI technologies. While they are undoubtedly much more efficient than us at automating tedious number-crunching tasks, the road towards fully autonomous, universal AI capable of replacing us in mission-critical decision making is still very long. While some very interesting narrow, security-focused AI-powered solutions already exist (like Oracle’s Autonomous Database or automated network security solutions from vendors like Darktrace), they are nowhere near flexible enough to be adapted for different challenges.

And this is where we finally get back to the statement made in this post’s title. If “secure by design” and “secure by AI” are the undisputed long-term goals for software vendors, what is the next best thing possible in the shorter term? My strong belief has always been that the primary reason for not doing security properly (which in the worst cases degenerates into the cargo cult mentioned above) is insufficient guidance and a lack of widely accepted best practices in every area of cybersecurity. The best security controls do not work if they are not enabled and their existence is not communicated to users.

“Secure by default” should be your short-term goal

Thus, the next best thing after “secure by design” is “secure by default”. If a software vendor or service provider cannot guarantee that their product is free of security vulnerabilities, they should at least make an effort to ensure that every user knows the full potential of existing security controls, has them enabled according to the latest best practices and, ideally, that their security posture cannot be easily compromised through misconfiguration.

The reason for me to write this blog post was the article about the security defaults introduced by Microsoft for their Azure Active Directory service. They are a collection of settings that can be applied to any Azure AD tenant with a single mouse click and that ensure that all users are required to use multi-factor authentication, that legacy, insecure authentication protocols are no longer used, and that highly privileged administrative activities are protected by additional security checks.
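
For the technically inclined, the same switch can also be read and flipped programmatically. The sketch below uses the Microsoft Graph endpoint and property names as documented at the time of writing, so verify them against the current Graph reference, and supply your own access token with the appropriate Policy permissions:

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "<access token with the appropriate Policy.* permission>"
    headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

    # Read the current state of security defaults for the tenant.
    policy = requests.get(
        f"{GRAPH}/policies/identitySecurityDefaultsEnforcementPolicy", headers=headers
    ).json()
    print("Security defaults enabled:", policy.get("isEnabled"))

    # Enable them if they are not on yet (note: conflicts with Conditional Access policies).
    if not policy.get("isEnabled"):
        requests.patch(
            f"{GRAPH}/policies/identitySecurityDefaultsEnforcementPolicy",
            headers=headers,
            json={"isEnabled": True},
        )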

There isn’t really anything fancy behind this new feature – it’s just a combination of existing security controls applied according to current security best practices. It won’t protect Azure users against 100% of cyberattacks. It’s not even suitable for all users since, if applied, it will conflict with more advanced capabilities like Conditional Access. However, protecting 95% of users against 95% of attacks is miles better than not protecting anyone. Most important, however, is that these settings will be applied to all new tenants as well as to existing ones that have no idea about any advanced security controls.

Time to vaccinate your IT now

In a way, this approach can be compared to vaccinations against a few known dangerous diseases. There will always be a few exemptions and an occasional ill effect, but the notion of population immunity applies to cybersecurity as well. Ask your software vendor or service provider for security defaults! This is the vaccination for IT.

Quantum Computing and Data Security - Pandora's Box or a Good Opportunity?

Not many people had heard of Schroedinger’s cat before the CBS series “The Big Bang Theory” came out. Dr. Sheldon Cooper used this thought experiment to explain to Penny the state of her relationship with Leonard: it could be good and bad at the same time, but you can’t be sure until you’ve started (opened) the relationship.

Admittedly, this is a somewhat simplified version of Schroedinger’s thought experiment by the authors of the series, but the original idea behind it is still relevant today. Schroedinger considered the following: if you put a cat into a sealed box together with a poison that takes effect at a random time, as an observer you cannot tell whether the cat is alive or not. Therefore, it is both until someone opens the box and checks.

Superposition states lead to parallel calculations

This is a metaphor for superposition as it applies to quantum mechanics. One quantum bit (the cat) can be in several states at the same time and is therefore fundamentally different from the classical on/off or 0/1 representation in today’s computer science. Because of these superposition states, calculations can be performed in parallel according to the laws of quantum mechanics, which shortens the time needed for complex calculations. Google announced a few months ago that it had managed to build a quantum computer with 53 qubits, capable of handling certain computations much faster than current supercomputers can; it solved a selected problem in about three minutes instead of an estimated 10,000 years.
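
For readers who like to see the principle in numbers, here is a small classical simulation of superposition – which is exactly the kind of simulation that stops scaling, since a register of n qubits needs 2^n amplitudes:

    import numpy as np

    n = 3                                  # three qubits ...
    state = np.zeros(2 ** n)               # ... are described by 2**n = 8 amplitudes
    state[0] = 1.0                         # start in the classical state |000>

    # Applying a Hadamard-like step to every qubit puts the register into an
    # equal superposition of all 8 classical states at once.
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    transform = hadamard
    for _ in range(n - 1):
        transform = np.kron(transform, hadamard)
    state = transform @ state

    print(np.round(state, 3))                         # eight equal amplitudes of 1/sqrt(8)
    print("probabilities:", np.round(state ** 2, 3))  # measurement picks one state at random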

The way we decrypt data actually is in danger

This is precisely where the danger for our current IT lies. Almost all encryption of data at rest and in transit relies on mathematical problems that cannot be solved efficiently without the right “key”. If quantum computers become able to solve these problems efficiently, our current security concept for data collapses entirely.
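
A deliberately tiny illustration of the point: RSA-style encryption is only secure as long as factoring the public modulus is infeasible, which is exactly what a sufficiently large quantum computer running Shor’s algorithm would change. The numbers below are toy-sized, so even brute force works:

    # Toy RSA key: n = p * q with p and q secret.  Real keys use 2048-bit or larger moduli.
    p, q = 61, 53
    n, e = p * q, 17                                  # public key (n, e)
    d = pow(e, -1, (p - 1) * (q - 1))                 # private exponent (Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)

    # An attacker who can factor n recovers the private key and reads the message:
    fp = next(i for i in range(2, n) if n % i == 0)   # brute force -- only feasible at toy size
    fq = n // fp
    d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
    print(pow(ciphertext, d_recovered, n))            # prints 42, the original plaintext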

Moreover, it would also have a massive impact on cryptographic currencies. Their added value is based on complex calculations in the blockchain, which require a certain amount of computing power. If these could suddenly be done in milliseconds, this market would also become obsolete overnight.

Quantum based calculations offer a lot of potential

Of course, quantum computing also has advantages, because its biggest disadvantage (as things stand today) is also its biggest advantage: complex calculations can be completed in a very short time. Everything that is based on many variables and various parameters could be calculated efficiently and forecast realistically. Good examples are environmental events and weather forecasts, which depend on an extremely large number of variables and are currently predicted using approximate algorithms rather than exact calculation. The same applies to traffic data: cities or navigation systems could use all available parameters to calculate the ideal route for each participant and adjust traffic signals if necessary. This is a dream come true from an environmental and traffic planning perspective: fewer traffic jams, fewer emissions and faster progress.

We need to change the way we encrypt

But what remains, despite all the enthusiasm for the potential, is a bland aftertaste regarding the security of data. Scientists are already working on this as well, developing new algorithms based on other paradigms that remain hard to break even with quantum computers. For example, instead of today’s private keys, encryption can be based on the value system itself: if the algorithm does not know what value the number 4 really represents, it cannot decipher it easily – the key is then the underlying coordinate system. Future algorithms that make use of artificial intelligence will emerge, and of course there are also considerations on how to use quantum effects themselves for encryption.

 

In the end, quantum computing is just one more step towards more efficient computers, which might be replaced by artificial brains in another 100 years, bringing mankind another step forward in technology.

 
