KuppingerCole Blog

Preparation Is Key: Where Prevention Ends, and Business Continuity and Incident Response Management Begins

Ensuring the availability of processes and services in the event of an incident or a cyber attack is a fundamental part of a company’s cybersecurity approach. Two terms commonly used in the context of such cybersecurity strategies are Incident Response Management (IRM) and Business Continuity Management (BCM). Both should be part of a company's cybersecurity strategy, but what is the difference, how are they connected, and at what point in time do they start?

Identification and prevention are fundamental

Every organization is under attack, and there is the risk of being hit by a major attack at any time. Therefore, it is important to have the necessary plans and strategies in place. And to do that, you first need to know where your most critical risks are. Figure 1 shows what this process usually looks like, and that IRM and BCM should start in the “Respond & Recover” phase.


Figure 1: The integrated process for a company's cybersecurity approach

There are processes that are unique to the “Prevent” phase. A company’s IT Risk Management team should identify and rate risks in a global approach as part of the Corporate Risk Management process. For the highly rated risks, prevention mechanisms should be implemented by the IT Security Operations & Configuration team. Current threats that are shared by vendors or have been identified in actual breaches are typically addressed by the Computer Emergency Response Team (CERT), for instance, through the installation of patches or hotfixes. The goal in the prevention phase is to prevent attacks and to continually learn about new attack vectors.

Detection may come months after the attack

In the “Detection” phase, the Cyber Defense Center (CDC) actively tracks current and older log files and correlates information to detect potential attacks or data breaches that may have been missed by the mechanisms used by the CERT and the IT Security Operations team. If the CDC detects an anomaly, it starts an in-depth analysis. If an incident is verified, it is handed over to the Incident Response Management and Business Continuity Management teams, where the “Respond & Recover” phase begins.
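
As a minimal illustration of the kind of log correlation a CDC performs, here is a hypothetical Python sketch that flags an anomalous burst of failed logins for an account within a short time window; the event format, field names, and threshold are illustrative assumptions, not a specific SIEM's API:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Illustrative log events; in practice these would come from a SIEM or log pipeline
    events = [
        {"time": "2020-02-20T10:00:01", "user": "alice", "action": "login_failed"},
        {"time": "2020-02-20T10:00:05", "user": "alice", "action": "login_failed"},
        {"time": "2020-02-20T10:00:09", "user": "alice", "action": "login_failed"},
        {"time": "2020-02-20T10:00:12", "user": "alice", "action": "login_ok"},
        {"time": "2020-02-20T10:03:00", "user": "bob",   "action": "login_ok"},
    ]

    def failed_login_anomalies(events, window=timedelta(minutes=1), threshold=3):
        """Flag accounts with >= threshold failed logins inside a sliding time window."""
        failures = defaultdict(list)
        anomalies = set()
        for e in events:
            if e["action"] != "login_failed":
                continue
            t = datetime.fromisoformat(e["time"])
            failures[e["user"]].append(t)
            recent = [x for x in failures[e["user"]] if t - x <= window]
            if len(recent) >= threshold:
                anomalies.add(e["user"])
        return anomalies

    print(failed_login_anomalies(events))  # {'alice'} -> hand over for in-depth analysis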

It is essential for a company to Respond & Recover after an attack

Incident Response Management’s first step is to rate the criticality of an incident and gather further details about the attack to inform further action. At this point, two parallel streams jump into action: the Incident Response Management (IRM) stream and the Business Continuity Management (BCM) stream.

IRM is responsible for mitigating the effects of the attack. After the evaluation, a team of experts is usually formed to collect and evaluate further forensic details. Affected systems are isolated; in the event of data loss, the damage is assessed and recovery measures are initiated to return to normal operations. Internal and external communication is also an important part of IRM. Especially in the case of a data breach, information must be forwarded to the relevant data protection supervisory authority, depending on the country.

Business Continuity Management, in turn, takes care of the continued availability of business functions in the event of a system failure or loss of data. In the case of a data breach, for example, this can mean restoring from backups, which requires that regular backups are created as part of the BCM strategy. In the case of a ransomware attack, alternative systems or devices can be provided or, in the worst case, business operations can even be switched to manual, analog processes. BCM measures are always temporary and intended only for emergencies.

Use the knowledge after an attack to improve your security

To ensure that a company improves its security in the long term and that incidents with the same cause cannot recur, regular review and improvement after an incident is necessary.

What can a company do to sustainably improve security?

Clearly, a company must invest in a process for Incident Response Management and be prepared for an attack. Once an incident has occurred, it is too late to sort out “who is responsible for what” and “who needs to be informed”. The same applies to Business Continuity Management: once data is lost or no longer accessible, it is too late to worry about data backups or a plan B.

A good point to start could be KuppingerCole’s Master Class about Incident Response Management, which also covers some topics of Business Continuity Management. Meeting, networking, and discussing topics like BCM with peers will be possible at EIC 2020, May 12th to 15th, in Munich.

KuppingerCole specializes in offering advisory services for cybersecurity, artificial intelligence, and identity and access management. If your company needs assistance finding the right toolset or architecture, or deciding what to focus on, KuppingerCole Analysts is happy to support you.


Compromise of IOTA

Turning a blind eye to security in favor of optimism

If you have one take-away from reading KuppingerCole research, hopefully it is that APIs are a critical element to protect. This is true regardless of the industry, even for cryptocurrencies.

IOTA, the blockchain-like cryptocurrency and transaction network, was compromised in mid-February. API access to the IOTA crypto wallet via a payment service was targeted and exploited for potentially two to three weeks. Approximately 50 accounts were compromised, leading to the eventual theft of around 2 million Euros.

There is a risk in trusting the promises of hyped technology. Blockchain is often praised as being tamperproof and highly secure, and in this case the core technology held up: the blockchain – or more specifically, the DAG protocol that is similar to blockchain – didn’t cause the vulnerability. However, somebody – perhaps network overseers, third-party services, or Content Delivery Networks – trusted this claim a little too much and neglected to protect the mundane aspects of the solution.

Do we want decentralization?

A delay in communication allowed the attacker to get away with their payload. The third-party service that was compromised became aware of the breach on February 10th and removed the attacker’s entry point for stealing private key information. Only five days later did the third-party service communicate and collaborate with the IOTA Foundation to freeze the network and all transactions. In that time, the attacker was able to empty the compromised accounts of approximately 2 million Euros.

The damage to individual accounts wasn’t higher because the IOTA Foundation has some degree of control over the network. This level of control allows the network to be halted when necessary and enables the Foundation to implement a claims registration tool to offer some degree of user protection. These basic safeguards are completely absent from fully decentralized blockchain solutions such as Ethereum. But in instances like this, perhaps some centralized support is not amiss.

5G - How Will This Affect Your Organization?

What is it that connects Covent Garden in London, the Roman Baths in Bath, and Los Angeles? The answer is 5G mobile communications used by media organizations. On January 29th I attended the 5G Unleashed event at the IET in London. (The IET is the body that provides professional accreditation for engineers in the UK.) At this event there were several presentations describing real-world use cases of 5G as well as deep dives into the supporting infrastructure. While 5G is being sold to consumers as superfast mobile broadband, there is a lot more to it than that. It has the potential to impact a wide range of organizations as well as public infrastructure. 5G provides the communications needed to enable clouds of things, but it also creates risks which need to be managed.

Key Features of 5G

The key features that 5G offers include the ability to handle up to 1000 times greater data volumes and up to 100 times more connected devices than 4G today. It can also reduce communications latency and enable up to ten-year battery life for low-power devices.


Example Applications of 5G

There are many areas where 5G is set to transform the way things are done. Mobile robots within factories and better control over supply chain processes will benefit manufacturing. Connected and Automated Vehicles will require low-latency, high-volume communication, creating terabytes of data every day. Remote healthcare applications using 5G can enable the delivery of healthcare services at a lower cost. The guaranteed low latency and high bandwidth provided by 5G have the potential to dramatically reduce the cost and equipment needed for real-time news broadcasting; 5G could replace the bulky satellite equipment that is currently used for this.

For example: on May 31st, 2019 the BBC made its first outside news broadcast using 5G from Covent Garden in London. This didn’t go as well as expected! Their SIM ran out of data, and the location was so full of tourists sharing photos and videos that the mobile service they were using could not provide the bandwidth needed. These teething problems will be overcome now that 5G has been fully launched.

During 2018 and 2019 in the 5G Smart Tourism project BBC Research & Development trialled an app at the Roman Baths in Bath, to visualise the Baths at times before and after the Roman period. The app tells the story of three periods: the mythical discovery of the hot springs by King Bladud, the Baths falling into disrepair when the Romans left, and their renovation in Victorian times. This used the so-called ‘magic window’ paradigm where, as the user moves their device around, they see a view appropriate to where they are looking. Once again, this immersive experience depended upon the high bandwidth and low latency provided by a 5G mobile network.

Worcester Bosch has become the first factory in Britain to have 5G wireless access. This has been used in a trial to run sensors in the factory for preventative maintenance and real-time feedback, while also using data analytics to predict any potential failures.

5G and Edge Computing

To obtain maximum benefit from the low communications latency that 5G can offer, the end-to-end system architecture needs to change. Computing power needs to move physically closer to the edge to avoid delays across the backhaul – the network between the cellular radio base station and the data centre. In practical terms this means locating the data centre next to the mobile radio station.
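
A rough back-of-the-envelope illustration of why the backhaul matters (a Python sketch with round assumptions: light travels through optical fibre at roughly 200,000 km/s, and processing and queuing delays are ignored):

    # Approximate propagation speed of light in optical fibre (km/s)
    FIBRE_SPEED_KM_S = 200_000

    def round_trip_ms(distance_km: float) -> float:
        """Round-trip propagation delay over fibre, ignoring processing and queuing."""
        return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

    for distance in (5, 100, 1000):   # edge site vs. regional vs. distant data centre
        print(f"{distance:>5} km backhaul: {round_trip_ms(distance):.2f} ms round trip")

    # Output (propagation only):
    #     5 km backhaul: 0.05 ms round trip
    #   100 km backhaul: 1.00 ms round trip
    #  1000 km backhaul: 10.00 ms round trip

Propagation alone over a 1,000 km backhaul already consumes around 10 ms per round trip, far more than the single-digit-millisecond latencies that demanding 5G use cases target, which is why the compute has to sit next to the radio site.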

This is exactly what AWS has done in partnership with Verizon in Los Angeles. This was announced in December 2019 as the first AWS Local Zone. AWS say that Los Angeles was chosen to support the local industries that would benefit from the high bandwidth and low latency including media & entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning. AWS also has similar partnerships with mobile service providers in other parts of the world.

5G and Security

Given that the data payload carried by 5G will include massive amounts of potentially sensitive data, security is essential. The potential to compromise traffic control across a whole city or to disconnect energy supplied through millions of smart meters means that 5G must be treated as a component of critical national infrastructure.

The 5G standards provide security mechanisms that include enhancements in the areas of encryption, authentication and user privacy. However, they do not protect against all possible threats, for example DDoS and radio jamming. Protecting against these will depend on the actual deployment.

However, many IoT devices contain numerous known technical vulnerabilities, and some even feature a fixed and unchangeable root password. It is essential that IoT devices are designed, built, and deployed with security in mind. One of the opportunities provided by 5G is that, by using an embedded SIM, the device identity can be more rigorously authenticated and the integrity and confidentiality of communication better ensured.

So, you need to consider what opportunities 5G could bring to your industry sector. This technology will provide greater mobile connectivity and capacity – how will this affect your organization? Industry sectors that are likely to see benefits are those that require the new capabilities that this technology will provide. These include logistics, manufacturing, transport, healthcare, the media, and local government.

Make sure that you build in end-to-end security from the start – there is often a tendency when exploring new areas to focus on functionality and consider security as an afterthought. The consequences of security failure in many of the potential use cases go way beyond data leakage to include physical harm and large-scale disruption.

For more details on this subject see KuppingerCole Leadership Brief 5G Impact on Organizations and Security 80238. Also attend the Public & On-Premise Cloud, Core IT Hosting, Edge IoT Track at EIC in Munich on May 13th, 2020.

Ambient Intelligence Can’t Mature Without an Identity Protocol

Every day we are experiencing the intersection of IoT and AI. The interactions of users, sensors, robots, vehicles, smart buildings, and much more are creating a new status quo for digital experiences. This growing range of smart devices – both in the IoT sense and the intelligent AI sense – means we are moving beyond a singular focus on the smartphone. This heightened immersion into increasingly distributed, decentralized digital networks is what KuppingerCole has termed “Ambient Intelligence”.

The synergy of AI and IoT that Ambient Intelligence enables will be a key driver for the machine-to-machine (M2M) economy, with businesses and consumers already demanding it in daily tasks. However, the M2M economy is held back by the lack of a reliable, secure identity protocol for objects and devices. Without this sort of protocol, companies resort to strange workarounds to meet the demands of users – using a smartphone as a proxy for a vehicle, for example.

When a phone isn’t a phone

Artist Simon Weckert publicized photo and video evidence of his performance art piece on 1 February 2020, showing himself pulling a wagon filled with 99 smartphones through Berlin. His demonstration of how the presence of smartphones affects the way traffic is represented in Google Maps received over three million views. With so many smartphones, he easily turned a “green”, traffic-free section of road to “red” and congested, rerouting actual vehicles to side streets. Weckert’s purpose was to illuminate the influence that digital tools such as Google Maps have over our choices in the physical world, but this performance invites other takeaways, such as the misrepresentation of devices.


IOTA’s role in delivering an identity protocol

IOTA has very promising contributions to digital identity, for humans as well as devices and objects. It is an open-source, blockchain-like protocol that, among other things, can host the emerging Unified Identity Protocol that would enable a secure M2M exchange of data – including between ambient devices. Established in Germany as a non-profit foundation to provide “the next generation of protocols for the connected world”, IOTA is both the name of the foundation and the DAG-based technology layer, which is free for public use. It stands out as a digital (and decentralized) identity solution for a number of reasons:

  • The use of Directed Acyclic Graphs (DAG) solves the chronic scalability weakness of typical blockchains: to issue a transaction, a participating node must approve two previous transactions. Therefore, the more transactions are executed with IOTA, the faster each transaction can be approved (see the sketch after this list).
  • Transactions are fee-less, again because of the blockchain-like DAG structure. Unlike the typical proof-of-work consensus mechanism which incentivizes honest participation by awarding cryptocurrencies and charging transaction fees, a participating node’s incentive in IOTA is to have its own transaction approved.
  • Microtransactions (such as sharing the battery status of an electric vehicle every few minutes) are possible because transactions are fee-less. The potential for objects and devices to share data on a constant basis using IOTA is much more feasible if there are not prohibitive costs associated with it.
  • IOTA is bringing Self-Sovereign Identity to humans, devices, and objects. It establishes an identity ecosystem made up of Holder, Issuer, and Verifier roles following emerging standards for Decentralized Identifiers (DIDs) and Verifiable Credentials. By employing cryptographic techniques such as zero-knowledge proofs, users – and objects – can prove an identity attribute is true without over-revealing information.
  • The partner ecosystem includes key players that are highly invested in bringing IoT and Industry 4.0 to maturity. Siemens alone has been granted 13 patents for IOTA-based identification and authentication technologies.
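
To make the DAG idea in the first bullet point more concrete, here is a deliberately simplified, hypothetical Python toy model of a tangle-style ledger; it is not the actual IOTA implementation and ignores real tip selection, proof of work, and signatures:

    import random

    class Transaction:
        def __init__(self, tx_id, parents):
            self.tx_id = tx_id
            self.parents = parents    # the two earlier transactions this one approves
            self.approvers = []       # later transactions that approve this one

    class Tangle:
        """Toy model of a fee-less DAG ledger: there are no blocks and no miners;
        every new transaction contributes by approving two earlier transactions."""
        def __init__(self):
            self.transactions = [Transaction("genesis", [])]

        def attach(self, tx_id):
            # In real tip selection the parents would be unapproved "tips";
            # here we simply pick two earlier transactions at random.
            parents = random.sample(self.transactions, k=min(2, len(self.transactions)))
            tx = Transaction(tx_id, parents)
            for p in parents:
                p.approvers.append(tx)
            self.transactions.append(tx)
            return tx

        def confirmation_weight(self, tx):
            # Rough stand-in for confirmation: how many later transactions
            # directly or indirectly approve this one.
            seen, stack = set(), list(tx.approvers)
            while stack:
                t = stack.pop()
                if t.tx_id not in seen:
                    seen.add(t.tx_id)
                    stack.extend(t.approvers)
            return len(seen)

    tangle = Tangle()
    for i in range(20):
        tangle.attach(f"tx{i}")
    print(tangle.confirmation_weight(tangle.transactions[1]))  # weight of tx0

The more transactions arrive, the more direct and indirect approvers each earlier transaction accumulates, which is the scalability property described above.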

Multiple decentralized identity solutions are under development, but this typically means blockchain. While they have their merits, IOTA is a non-blockchain option for decentralized identity that may outpace them all. To learn more about the future of digital and decentralized identity, join KuppingerCole at the European Identity & Cloud Conference in May 2020 for over 200 sessions and insight from a wide array of global experts.

Top 5 Recommendations for Reducing Cyber Risks in 2020

The turn of the year has been an occasion for many cybersecurity news outlets to talk about trends and challenges in cybersecurity. Knowing the trends and challenges is important, but we also want to give you some hands-on recommendations to increase security for your company. Of course, the following recommendations are just a selection from many possible measures. We are happy to discuss the implications for your concrete business model with you in more detail.

1. Beyond detect, prevent, respond: recovery & Incident Response Management

While AI helps in increasing cyberattack resilience, there is one more thing to look at: recovery. Every organization is under attack, and there is the risk of being hit by a major attack at some time. The most important things then are, in this order: First, recover your IT, or at least its core functions, to become operational again. The time window for a business to survive a severe attack can be very short, sometimes just a few days. Be able to recover, and integrate your cybersecurity efforts with Business Continuity Management. The second thing to do is to prepare for communication and resolution: Incident Response Management. This must be prepared in advance; thinking about it once the disaster has occurred is too late. Join the brand-new KC Master Class Incident Response Management starting on February 18 to learn how to define an incident response strategy to protect your company.

2. Define your Identity & Security Fabric for serving both agility & security

Beyond API security, you need to ensure that your IT can serve the needs of the teams creating the new digital services. This is about agility, about time-to-value. You need to provide consistent, easy-to-use identity and security services via APIs. It is time to build your Identity & Security Fabric that delivers to both the digital services and the need for managing and protecting your legacy IT.

3. Go Adaptive Authentication

Put Adaptive Authentication and Passwordless Authentication at the top of your to-do list. Everything you change and add around authentication must fit these paradigms. Build a central authentication platform, and ensure that you can also work seamlessly with other Identity Providers (IdPs) and understand the authentication assurance level they provide.
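
As a minimal sketch of the adaptive idea (hypothetical Python; the signals, weights, and thresholds are illustrative assumptions, not a product's policy engine), each login is scored from contextual signals and the platform decides whether to allow, step up, or deny:

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool
        impossible_travel: bool      # geo-velocity check failed
        ip_reputation_bad: bool
        requested_assurance: int     # assurance level the application demands (1-3)

    def risk_score(ctx: LoginContext) -> int:
        """Combine contextual signals into a simple additive risk score."""
        score = 0
        if not ctx.known_device:
            score += 30
        if ctx.impossible_travel:
            score += 50
        if ctx.ip_reputation_bad:
            score += 40
        return score

    def decide(ctx: LoginContext) -> str:
        """Adaptive decision: low risk passes, medium risk or a high required
        assurance level triggers step-up (e.g. a FIDO2/WebAuthn factor), high risk is denied."""
        score = risk_score(ctx)
        if score >= 70:
            return "deny"
        if score >= 30 or ctx.requested_assurance >= 2:
            return "step_up"   # ask for a stronger, ideally passwordless, factor
        return "allow"

    print(decide(LoginContext(known_device=False, impossible_travel=False,
                              ip_reputation_bad=False, requested_assurance=1)))  # step_up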

4. Build on Managed SOC & SOC as a Service

Running your own SOC is hard. Look for managed services or SOC as a Service; there are many providers out there already. While it remains hard to build and run your own SOC independently, despite all technology improvements, it is not that hard to find a strong partner to support you.

5. Define your IIoT and OT security approach - together

The biggest challenge in IIoT and OT security is that of mutual understanding and acceptance. IT security and OT security face different challenges, starting with the difference between security and safety. Thus, to make progress, it is vitally important to find a common understanding of targets, terminology, and requirements, and to understand what both sides can contribute to better solutions. It is about people and organization first, then technology.

There are many more recommendations to give, beyond the five key challenges, the top technology trends, and the related recommendations. Let me look at just three more:

1. PAM: Implement a strong PAM for the future

PAM (Privileged Access Management) remains a central technology, not only for identity but also for cybersecurity – it sits somewhere in the middle. You need a strong PAM, and PAM is evolving beyond its traditional scope into areas such as PAM for DevOps and cloud-integrated PAM. Understand what you need and ensure that you have a strong PAM in place for the future. For a deeper understanding, join the KC Master Class PAM for the 2020s.

2. Portfolio Management. The right tools, not many tools

As indicated at the beginning: tools don’t help if they are not supported by people, organization, policies, and processes. And having many tools does not help more than having a good selection of the right tools. Given that budgets are limited, picking the right portfolio is essential. Understand which tools really help in mitigating which risks, and redefine your portfolio, focusing on the tools that really help you mitigate risks. KuppingerCole’s Portfolio Compass provides a proven methodology for optimizing your cybersecurity tools portfolio.

3. C-SCRM: Understand and manage the risks of your Cybersecurity Supply Chain

Finally, there is a new theme to look at closely: C-SCRM, or Cybersecurity Supply Chain Risk Management. This involves both the hardware and software (including cloud services) you procure and the suppliers that might affect your security posture. Pick up this topic, with well-thought-out supplier (cyber) risk management at all levels. For a start, check out this blog post which looks at why C-SCRM is becoming essential for your digital business.

There would be far more information to provide. The good news is: While challenges are increasing, there are ways to keep a grip on the risk. Focus on the major risks, focus your investments, and work with the experts as well as your peers. A good place to meet your peers will be EIC 2020, May 12th to 15th, in Munich.

KuppingerCole specializes in offering advisory services for cybersecurity, artificial intelligence, and identity and access management. If your company needs assistance finding the right toolset or architecture, or deciding what to focus on, KuppingerCole Analysts is happy to support you.

Why C-SCRM Is Becoming so Essential for Your Digital Business

The current discussion around Huawei and whether or not it should be endorsed as a supplier for 5G mobile network hardware and software has reminded us of how dependent we are on the integrity and reliability of such manufacturers, and how difficult it is to trust their products if they are closed source and proprietary or otherwise hard or impossible to examine. Due to its undisputed proximity to the Chinese government, Huawei has come under suspicion, primarily from US authorities, of providing undocumented access capabilities to Chinese intelligence agencies, enabling them to globally wiretap mobile communications in 5G networks.

Lessons learned from the Crypto AG scandal

Such allegations weigh heavily, and if they are more than just politically inspired trade war rhetoric, we would have to profoundly change the way we look at the risks deriving from supply chains for cybersecurity-related equipment. Is it paranoid to think that governments and their secret services are evil enough to make suppliers deliver manipulated hardware and software, or is it real? The recent story about the German intelligence service BND and its US counterpart, the CIA, covertly co-owning the former Swiss manufacturer of cryptographic hardware and software, Crypto AG, shows that some governments go even further.

Germany and the USA secretly took over a leading crypto manufacturer supplying the diplomatic, military, and secret services of more than 120 countries worldwide and weakened its algorithms in such a way that they were able to decrypt messages without the proper key. Needless to say, neither the Soviet Union nor China were careless enough to purchase from Crypto AG, so most affected countries were, in fact, those that considered themselves the USA’s allies.

Paranoia vs. Adequate risk assessment in supplier choice

“Rubikon”, the German codename of this operation, makes us better understand why some governments are more paranoid than others with regard to Huawei: they simply know from their own experience that these threats are real. The fact that Crypto AG was situated in Switzerland and looked like a privately owned, comparatively small company with very high expertise in cryptography should make us think even more about the way we choose our suppliers.

The weak spots of the supply chain  

The risk of purchasing security hardware and software with deliberately or accidentally built-in weaknesses looks higher than we expected – but it is not the only element of supply chain risk. A supply chain can only be as strong as its weakest spot. In a world where enterprises focus on what they can do best and add everything else through supply chains, it is more critical than ever to know these weak spots and to limit the risks arising from them. Some of the most important challenges are:

  1. Selecting suppliers with a low risk profile: It is very complex, expensive, and inefficient to collect all the information needed to evaluate and quantify risks deriving from internal processes and vulnerabilities within the supplier’s organization.
  2. In a networked economy, the number of suppliers is increasing: Even if we manage to assess a relatively small number of suppliers that are not too big and complex, the time and resources consumed by properly risk-assessing an ever-increasing number of cyber suppliers are simply getting too high.
  3. Most organizations underestimate cyber supply chain risks: Cyber incidents happen every day, anywhere in a supply chain. Suppliers are threatened the same way as your own company, and your suppliers’ threats add to your company’s risk profile. Therefore, suppliers and their risks have to be monitored continuously, not just once.
  4. Cyber supply chain risks are multidimensional, with many different stakeholders involved and interfaces to privacy & data protection, risk management, compliance, controlling, and audit. Reliably building continuous assessment strategies and processes on top of such a multidimensional topic is a challenge and remains widely unsolved in many organizations.

Looking at these complex supply chain risk management challenges and adding the increasing maturity and sophistication of cyberattacks to the equation, it is the right time now to add C-SCRM to our core cybersecurity strategy.

Good practices and standards provide guidance

It doesn’t really matter whether a cyberattack or data theft is targeted directly against the infrastructure of your company or whether a supplier’s weakness is exploited to gain unauthorized access. As a first step, good practices and standards will provide enough guidance. ISO/IEC 27036:2013, part of the ISO 27000 series, describes the foundations of information security in supplier relationships.

Furthermore, NIST has updated its Cybersecurity Framework and added a section on Supply Chain Risk Management. Aside from general cyber supply chain risks, version 1.1 of the NIST Cybersecurity Framework also addresses IoT/IIoT-related challenges. For the first time, NIST has added a whole category specifically focused on supply chain risk evaluation and assessment, involving all actors such as hardware manufacturers, software vendors, cloud service providers, and other service suppliers and consumers.

Where KuppingerCole can help you to make your supply chain more secure

Communication and verification of mandatory commitments to cybersecurity requirements between all involved parties is a core aspect of C-SCRM, with regular security assessments and vulnerability scans to make sure that supply chain security standards remain high.

With the Cloud Risks and Controls Matrix (CRCM) KuppingerCole offers both a toolkit and a compendium for assisting cloud customers in assessing the overall security risk resulting from the deployment of services in the cloud.

Cyber Supply Chain Risk Management will be discussed at EIC 2020 on May 13 at 12 pm, in the Digital Enterprise Security Track. An hour-long session dedicated to C-SCRM will kick off with the KuppingerCole analyst talk “Necessary Components of an Effective C-SCRM”. This will be followed by a panel discussion on Managing Cyber Supply Chain Risks and Achieving Digital Business Resilience, with representatives of Huawei and various international cybersecurity organizations participating.

Will 2020 Be the Year of Oracle Cloud?

Recently I had an opportunity to attend the Next Generation Cloud Summit, an event organized by Oracle in Seattle, WA for industry analysts to learn about the latest developments in Oracle Cloud strategy. This was Oracle’s first analyst summit in Seattle and, coincidentally, my first time in the Cloud City as well… Apparently, that has been a legitimate nickname for Seattle for a few years now, since all notable cloud service providers are located there, with Google and Oracle joining AWS and Microsoft at their historical home grounds by opening cloud offices in the city.

Alas, when it comes to weather, Seattle in winter lives up to its nickname as well – it was raining non-stop for the whole three days I spent at the event. Oh well, at least nothing distracted me from learning about and discussing the latest developments in Oracle’s cloud infrastructure, database, analytics, security, and application development portfolios. Unfortunately, some of the things I learned will remain under NDA for some time, but I think that even the things we can already talk about clearly show that Oracle has finally found the right way to reinvent itself.

A veteran database technology vendor, the company has been working hard to establish itself as a prominent cloud service provider in recent years, and the struggle to bridge the cultural gap between the old-school “sealed ecosystem” approach Oracle has been so notorious for and the open, heterogeneous nature of the cloud has been very real.

A latecomer to the cloud market, the company had a unique opportunity not to repeat all the mistakes of its older competitors and to implement their cloud infrastructure with a much higher level of security by design (at least in what Oracle refers to as the “second generation cloud”). Combined with a rich suite of business applications and the industry-leading database to power them, Oracle had all the components of a successful public cloud, but unfortunately, it took them quite some time to figure out how to market it properly.

It was only last year that the company finally stopped trying to fight competing cloud providers on their terms, with tactics like claiming that Oracle Cloud is cheaper than AWS (while that might technically be the case for some scenarios, independent tests by industry analysts usually measure cloud costs with completely different methods). Instead, it finally became clear that the company should focus on its unique differentiators and their added value for Oracle Cloud customers – such as the performance and compliance benefits of the Autonomous Database, the intelligent capabilities of the Oracle Analytics services and, of course, the cutting-edge networking technology of Oracle Cloud Infrastructure.

However, it’s the year 2020 that’s going to be the decisive push for Oracle’s new cloud strategy, and the company demonstrates its commitment with some impressive developments. First of all, by the end of this year, Oracle Cloud will expand from the current 17 regions to 36, including such countries as Israel, Saudi Arabia or Chile, to bring its services to all major markets. In addition, Oracle is expanding the interconnect program with Microsoft, increasing the number of data centers with high-speed direct connections to Azure cloud to six. This strategic partnership with Microsoft finally makes true multi-cloud scenarios possible, where developers could, for example, deploy their frontend applications using Azure services while keeping their data in Autonomous databases on managed Exadata servers in the Oracle Cloud.

Speaking of “autonomous”, the company is continuing to expand this brand and ultimately to deliver a comprehensive, highly integrated and, of course, intelligent suite of services under the Autonomous Data Platform moniker: this will include not only various flavors of the “self-driving” Oracle Database but also a range of data management services for all kinds of stakeholders, from developers and data scientists to business analysts and everyone else. Together with the Oracle Analytics Cloud, the company is aiming to provide a complete solution for all your corporate data in one place, with seamless integrations with Oracle’s own public cloud services, hybrid “at Customer” deployments, and even competitors (now rather partners) like Microsoft.

My personal favorite, however, was Oracle APEX, the company’s low-code development platform that gives mere mortals without programming skills the opportunity to quickly develop simple but useful and scalable business applications. To be honest, APEX has been an integral part of every Oracle database for over 15 years, but for a long time it has remained a kind of hidden gem used primarily by Oracle database customers (I was surprised to learn that Germany has one of the largest APEX communities, with hundreds of developers in my hometown alone). Well, now anyone can start with APEX for free without any prerequisites; you don’t even need an Oracle account for that! I wish Oracle had invested a bit more in promoting tools like this outside of their existing community. I had to travel all the way to Seattle to learn about this, but at least now you don’t have to!

Of course, Oracle still has to learn quite a lot from the likes of Microsoft (how to reinvent its public image for the new generation of IT specialists) and perhaps even Apple (how to charge a premium and still make customers feel happy). But I’m pretty sure they are already on the right track to becoming a proper cloud service provider with a truly open ecosystem and a passionate community. 

Moving Towards AI and IoT Solutions Beyond Machine Learning

Microsoft is currently running ads extolling the virtues of AI and IoT sensors in helping farmers produce more and better crops, with less waste and higher yields. Elsewhere, in manufacturing, supply chain management is being transformed with digital maps of goods and services that reduce waste and logistical delays.

In Finland, a combination of AI and IoT is making life safer for pedestrians. The City of Tampere and Tieto built a pilot system that automatically detects when a pedestrian is planning to cross the street at an intersection. Cameras at intersections feed algorithms trained to detect the shape of pedestrians with 99% accuracy, which then activate the traffic lights to stop traffic.

Low Latency, High Expectations

There is a common thread in all these examples: sensors at the edge send data to algorithms in the cloud that trigger a response. All the while, the data is collected to improve the algorithms, extrapolate trends, and improve future systems. These examples show that IoT and AI already work well when they respond to pre-scripted events such as a pedestrian appearing near a crossing or soil drying out. The machines have already learnt how to deal with situations that would be expected in their environment. They are not so much replacing the human decision-making process as removing the chore of having to make the right decision. All good.

Low latency is essential in any AI and IoT application for industry or agriculture if the right response is to be sent promptly to the edge from an existing library of algorithms. But what if the edge devices had to learn very quickly how to deal with a situation they had not experienced before, such as an out-of-control wildfire or unprecedented flooding on agricultural plains? Here latency is only part of the equation. The other part is the availability at the edge of the massive amounts of data needed to decide what to do, but edge devices by their nature typically cannot store or process data at that scale.

IBM has written a research paper on how edge devices, in this case drones sent to monitor a wildfire, could perform a complex learning operation: simultaneously modelling, testing, and ranking many algorithms before deciding on the appropriate analytics to deploy to the edge and allow firefighters to respond. This is much closer to a truly intelligent model of IoT deployment than our earlier examples.

In the IBM example, Cognitive Processing Elements (CPEs) are used in sequence to assist in making the right decisions to help stop the fire spreading, and to understand how wildfires behave in extremis – in itself a not well understood phenomenon. The question, then, is whether we can create a hybrid IoT/AI/cloud architecture that intelligently processes data at appropriate points in the system depending on circumstances. It is not just in natural disasters that this may help, but also in another great hope for AI and IoT: the fully autonomous vehicle.

Who Goes First, Who Goes Second?

Currently, driverless cars are totally reliant on pre-existing algorithms and learnings – such as recognizing a red light or the shape of a pedestrian in the headlights – to make decisions. We remain a long way from fully autonomous vehicles; in fact, some researchers are now sceptical about whether we will ever reach that point. The reason is that human drivers already act like the intelligent drones featured in IBM’s research paper – only far more capable. They not only have access to massive levels of intelligence but can process it at the edge, in real time, to make decisions based on their experience, intelligence and, crucially, learnt social norms.

Consider the following example, which occurs millions of times every day on Europe’s narrow, crowded suburban streets, to see how this works. Cars will invariably be parked on both sides, with only a gap for one vehicle to pass in the middle. What happens when two cars approach? One or the other must give way – but which one? And how many cars are let through once one driver takes the passive role? Somehow, in 99.9% of cases, it just works. One day we may be able to say the same when two autonomous vehicles meet each other on a European street!

Three Critical Elements Required to Close the Cybersecurity Skills Gap

The status of cybersecurity is fairly clear: 82% of employers report that their cybersecurity skills are not sufficient to handle the rising number of cyber incidents (Center for Strategic & International Studies, 2019. The Cybersecurity Workforce Gap). There is a gap – a gap between the skills needed for strong cybersecurity and the skills you actually have. It is an individual problem, but also an enterprise problem and a global problem. The vast majority of the world simply does not have the skills to keep up with the cyber risks that we know exist.

Three Critical Elements to Closing the Skills Gap

KuppingerCole research shows that there are three critical elements required to close the cybersecurity skills gap: education, tools, and collaboration. Skills require having adequate knowledge: what are the typical attack vectors of a cyber incident? What are the best processes to have in place? Skills also require using the correct tools: a skilled carpenter would never use a welder in his woodwork. So why do many still cut corners by jerry-rigging inadequate tools to fit security purposes? Lastly, these skills require collaboration. Some aspects of cybersecurity should come from in-house; others would be far more efficient coming from a Managed Security Service Provider (MSSP). Deciding what the appropriate balance is requires insight into your own team’s capabilities.

The Role of Organizational Change Management

Closing the cybersecurity skills gap is also an organizational change problem. Very often, incident response management programs do not have the full support of senior management, or face implementation challenges when employees do not fully understand new processes. Experience plays a dominant role here; the misconception is that only a few people are relevant to cybersecurity programs when in fact, every person in an organization should play an active role. Taking the time to build allies in an organization, communicate with and train coworkers, and assess progress is fundamental to building cybersecurity skills in an organization.

This skills shortage paradigm is shifting. Having identified the critical elements to building cybersecurity capacity, KuppingerCole Analysts pulled from years of experience working alongside companies to implement top-of-the-line cybersecurity programs to create a master class bringing pivotal knowledge to the right people. Every individual is a critical actor in a cybersecurity program. The global economy does lack trained cybersecurity professionals, but training for these skills is no longer inaccessible.

A Solution to the Skills Gap

The first steps to building up cybersecurity begin with knowing the organization in question. An analysis of the capabilities already covered in an organization should be made, and careful consideration should be given to where an organization should supplement with MSSPs. KuppingerCole can help support this process. The KC Master Class facilitates a tight relationship with the trainer, a senior KC analyst. Individualized workshops, 1:1 problem-solving sessions, and decision support are built into the master class. A modern learning style combines a digital/analog instructional environment with real-world, bootcamp-style meetings and eLearning materials. The process is conducted in close contact with the trainer and expert community, using standard collaboration software such as MS Teams.

Lead Analyst Alexei Balaganski writes: “the primary reason for not doing security properly is insufficient guidance and a lack of widely accepted best practices in every area of cybersecurity.” Each individual has the capacity to change this reality. KuppingerCole can help do this.

Taking One Step Back: The Road to Real IDaaS and What IAM Really Is About

Shifting IAM to Modern Architecture and Deployment Models

There is a lot of talk about IDaaS (Identity as a Service) these days, as the way to do IAM (Identity and Access Management). There are also fundamental changes in technology, such as the shift to containers (or even serverless) and microservice architectures, which also impact the technology solutions in the IAM market.
However, we should start at a different point: What is it that the business needs from IAM? If we step back and take a broader perspective, it all ends up with a simple picture (figure 1): The job of IAM is to provide access for everyone (and everything) to every service and system, in a controlled manner. That is what we must focus on, and that is where we should start (or restart) our IAM initiatives.

Figure 1: Identity Fabric Lifecycle (by Martin Kuppinger)

Focus on the Business Need, Not on Technology: Deliver Identity Services

Even though this graphic looks simple, there is a lot in it:

  1. It is about all types of identities – employees, partners, customers, consumers, non-human identities e.g. in RPA (Robotic Process Automation), services, or things
  2. It is about an integrated perspective on Access Management (e.g. Identity Federation) and Identity Management (e.g. Lifecycle Management/Provisioning), but also beyond to aspects such as Consent and Privacy Management; however, Access Management is at the core
  3. It is about supporting a variety of Identity Providers, beyond internal directories
  4. It is about collaboration along the value chain and supply chain, with others, well beyond Employee IAM
  5. It is about delivering these services in an agile manner, supporting the demand in creating “identity-enabled” digital services in the digital transformation of businesses
  6. It is about a common set of services, what we call an Identity Fabric

You could argue that IDaaS takes on a different meaning in the model of the Identity Fabric, which is true: it is about providing Identity Services.

Taking a Deeper Look at the Identity Fabric: Identity Services and IDaaS

When we take a deeper look at the Identity Fabric (figure 2), it becomes apparent that there are both aspects of IDaaS integrated into this concept, and even more when looking at the architecture and microservices:

  1. IAM must support flexible operating models, from on-premises to the cloud. Many businesses will run some sort of hybrid mode for their IAM, given that the Identity Fabric will commonly be a mix of existing and new components. But supporting IDaaS in its common understanding – IAM delivered in an “as a Service” operating model – is essential.
  2. IAM must provide services, beyond just managing applications. Currently, IAM is focused on the latter aspect, by creating user accounts, setting entitlements, or acting as a shell for Access Management in front of the applications. However, digital services require a set of identity services (APIs) to consume. This is a fundamentally different concept, and this form of Identity Services must be supported as well (a minimal sketch of such a service follows this list).
  3. Finally, and related to #1, the architecture must be based on microservices. Only this allows for flexible deployments, agile roll-outs, and extensions/customizations. Done right, customization, integration, and orchestration across multiple services take place in separate microservices. That way, they are easy to maintain, and product/service updates will not affect customizations (as long as the APIs remain stable).
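
As announced in item 2, here is a minimal sketch of what identity capabilities exposed as consumable services could look like from a digital service's perspective; it uses only the Python standard library, and the endpoint paths, token format, and data are illustrative assumptions rather than any specific product's API:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Illustrative in-memory stores standing in for the identity fabric's backends
    SESSIONS = {"token-123": {"sub": "alice", "assurance": 2}}
    CONSENTS = {"alice": {"newsletter": True, "profiling": False}}

    class IdentityServiceAPI(BaseHTTPRequestHandler):
        """Two example identity services exposed over HTTP, meant to be consumed
        by digital services rather than by an admin UI."""

        def do_GET(self):
            if self.path.startswith("/v1/token/"):          # e.g. /v1/token/token-123
                payload = SESSIONS.get(self.path.rsplit("/", 1)[-1])
            elif self.path.startswith("/v1/consent/"):      # e.g. /v1/consent/alice
                payload = CONSENTS.get(self.path.rsplit("/", 1)[-1])
            else:
                payload = None
            self.send_response(200 if payload else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(payload or {"error": "not found"}).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), IdentityServiceAPI).serve_forever()

In an Identity Fabric, each such capability (token introspection, consent, lifecycle events) would be its own microservice behind the fabric's API layer, so digital services consume identity functions without depending on how they are implemented.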

Identity Fabrics are, from our perspective, the foundation for a future-proof IAM that serves the business demand. They provide the capabilities required for supporting the business use cases, based on a set of services that are built in a modern architecture.

Figure 2: Identity Fabric (by Martin Kuppinger)

The Road to IDaaS

Moving to an Identity Fabric is a journey that allows building on what you have in IAM and gradually transforming this, while adding new and modern services that rapidly provide the new capabilities required to serve the identity needs of digital services as well as the integration of new SaaS services.

Take a look at our Advisory Services for further decision support for the digital strategy of your business or simply browse our research library KC PLUS to get more insights on digital identity topics.

