Blog posts by Martin Kuppinger
On October 28th IBM announced its intention to acquire Red Hat. At $34 billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for Red Hat? Not surprisingly, there were quite a few negative comments from parts of the Open Source community. However, there is logic behind the intended acquisition.
Aside from the potential it holds for some of IBM's strategic fields such as AI (Artificial Intelligence) and even security (which is among the IBM divisions showing the biggest growth), there is obvious potential in the field of Hybrid Cloud as well as in DevOps.
Red Hat has for a long time been a company that is much bigger than just a Linux company. When you look at their portfolio, Red Hat is strong in middleware and technologies supporting hybrid cloud environments. Technology stacks like JBoss, Ansible, OpenShift or OpenStack are well-established.
Red Hat has also been a longstanding supplier preferred by enterprises. They have a strong position in growth markets that play an important role for businesses, Cloud Service Providers (CSPs), and obviously for IBM itself. Red Hat empowers IBM to deliver better and broader services to its customers, strengthening its role as a provider for Hybrid Cloud and DevOps and thus its competitive position in the battle with companies such as AWS, Microsoft, or Oracle. On the other hand, IBM allows Red Hat to scale its business, by delivering both the organizational structure to grow and a global services team and infrastructure.
From our perspective, there is little risk that Red Hat will lose a significant share of its current business – they are already an enterprise player and selling to enterprise customers, and IBM will strengthen not weaken them.
As with every acquisition, this one also brings some risk for customers. There is some overlap in certain parts of the portfolio, particularly around managing hybrid cloud environments, e.g. Cloud Foundry and OpenShift. While this might affect some customers, the overall risk for customers appears to be limited. On the other hand, the joint potential to support businesses in their Digital Transformation is significant. IBM can increase its offerings and attractiveness for Hybrid Cloud and DevOps, fostered by strong security and with interesting potential for new fields such as AI.
The only question is whether the price tag for Red Hat is too high. While there is huge potential, the combined IBM and Red Hat will still need to monetize it.
Read as well: IBM Acquires Red Hat: The AI potential
Ten years ago, for the second EIC, we published a report and survey on the intersection of IAM and SOA (in German). The main finding back then was that most businesses don’t secure their SOA approaches adequately, if at all.
Ten years later, we are talking Microservices. Everything is DevOps, a small but growing part of it is DevSecOps. And again, the question is, whether we have appropriate approaches in place to protect a distributed architecture. This question is even more important in an age where deployment models are agile and hybrid.
So how to do IAM for this microservices world? Basically, there are two challenges: supporting the environments and supporting the services and applications.
The former is about securing containers. That includes privileged access to the environments the containers run in as well as to the containers themselves, but also fine-grained access management and governance of such environments. It also includes the interesting challenge of segregating access to development, test, and production in the DevOps world, which is an even more demanding task than in traditional IT.
The second challenge is about how to secure communication between microservices. One of the technologies that inevitably comes into play here is API Management & Security. Beyond that, we will have to rethink authorization for services, but also how to manage and govern identities and their access at both the level of individual microservices and the orchestrated services and applications provided to the business.
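To make the service-to-service challenge concrete, here is a minimal sketch of authenticating a call between two microservices with a short-lived, HMAC-signed token (JWT-style). The service names, the shared secret, and the claims layout are illustrative assumptions, not a prescription; a real deployment would use a vendor's token service and asymmetric keys rather than a hard-coded secret.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; in practice this would come from a vault or PAM service.
SERVICE_SECRET = b"demo-secret-do-not-use-in-production"

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(caller: str, audience: str, ttl: int = 60) -> str:
    """Issue a minimal HMAC-signed token for one service calling another."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "iss": caller, "aud": audience, "exp": int(time.time()) + ttl
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(SERVICE_SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_token(token: str, expected_audience: str) -> dict:
    """Check signature, audience, and expiry before the microservice handles the call."""
    header, payload, signature = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SERVICE_SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["aud"] != expected_audience or claims["exp"] < time.time():
        raise PermissionError("wrong audience or expired token")
    return claims

token = issue_token(caller="order-service", audience="identity-service")
claims = verify_token(token, expected_audience="identity-service")
print(claims["iss"])  # order-service
```

The point of the sketch is the governance angle: every call carries an auditable identity (`iss`), an explicit target (`aud`), and a short lifetime, which is exactly the kind of per-service access that needs to be managed and governed.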
Reasonably defined microservices, fully encapsulated and providing their functionality to connected services and applications exclusively via secure, authenticated and auditable APIs, are an important step towards secure architectures “by design”.
Notably, we must also start thinking about deploying security components as services, externalizing and standardizing them. I discussed this topic a while ago in a webinar – you might want to watch the webcast. With the move to a more agile approach to IT, where changes are quickly deployed to production environments, identity and security must become adequately agile. Automation becomes key to success. We see some interesting trends and offerings arriving; however, most of them currently focus on privileged users – which is a good start, but far from the end of our journey towards secure microservices architectures.
It’s about time to make our IAM services ready to support the new way IT is done: agile and modular. Otherwise we will end up in a security nightmare.
For as long as I have been observing the IAM business, it has been under constant change. However, there is a change on its way that is bigger than many of the innovations we have seen over the past decade: IAM adopting the architectural concept of microservices.
This will have a massive impact on the way we can do IAM, and it will impact the type of offerings in the market. In a nutshell: microservices can make IAM far more agile and flexible. But let’s start with the Wikipedia definition of Microservices:
Microservices is a software development technique—a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight. [Source: Wikipedia]
Basically, it is about moving to loosely coupled services and lightweight protocols for integration. Factually, this also includes lightweight APIs these services expose. Each microservice delivers specific, defined capabilities, which are then orchestrated.
When we look at the evolution of the IAM market, most applications have been architected more or less monolithically in the past. Most of them have some “macroservice” model built in, with a couple of rather large functional components such as a workflow engine or an integration layer. Some vendors have already come a bit further in their journey towards microservices, but when looking at today’s on-premises solutions for IAM, for most vendors the journey towards microservices has just started, if at all.
Looking at IDaaS (Identity as a Service), the situation is different. Many (but not all) of the IDaaS solutions on the market have been architected from scratch following the microservices approach. However, in most cases, they do so internally, while still being exposed as a monolithic service to the customer.
The emerging trend – and, even more importantly, the growing demand of customers – now is for IAM to be delivered as a set of microservices and via containers (which might contain multiple microservices or even a few legacy components not yet updated to the new model). Such an approach allows for more flexible deployments, customization, and integration, and delivers the agility businesses are asking for today.
From a deployment point of view, such an architecture gives businesses the option to decide where to run which of the services and, for example, to support a hybrid deployment or a gradual shift from on-premises to private cloud and on to public cloud.
From a customization and integration perspective, orchestrating services via APIs with IAM services and other services such as IT Service Management is more straightforward than coding, and more flexible than just relying on customization. Lightweight APIs and standard protocols help.
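As a rough illustration of that orchestration idea, the sketch below chains two hypothetical IAM API calls with an ITSM ticket creation. All endpoints, payloads, and the `post` transport are assumptions for illustration; real IAM and ITSM APIs differ by vendor, and the transport would typically be a real HTTP client.

```python
import json
from typing import Callable

# Hypothetical endpoints; real IAM/ITSM APIs and payloads will differ by vendor.
IAM_API = "https://iam.example.com/api/v1"
ITSM_API = "https://itsm.example.com/api/v1"

def onboard_user(user: dict, post: Callable[[str, dict], dict]) -> dict:
    """Orchestrate onboarding across IAM and ITSM services via their REST APIs.

    `post` is an injected HTTP transport (e.g. a thin wrapper around an HTTP
    library), which keeps this sketch testable without a live environment.
    """
    # 1. Create the identity in the IAM service.
    identity = post(f"{IAM_API}/identities", {"name": user["name"], "dept": user["dept"]})
    # 2. Assign a baseline role to the new identity.
    post(f"{IAM_API}/identities/{identity['id']}/roles", {"role": "employee-base"})
    # 3. Open an ITSM ticket for the non-IAM part of onboarding.
    ticket = post(f"{ITSM_API}/tickets", {
        "type": "equipment", "subject": f"Workstation for {user['name']}"
    })
    return {"identity": identity["id"], "ticket": ticket["id"]}

# Stub transport that records the calls instead of sending them.
calls = []
def fake_post(url: str, payload: dict) -> dict:
    calls.append((url, payload))
    return {"id": f"obj-{len(calls)}"}

result = onboard_user({"name": "Jane Doe", "dept": "R&D"}, post=fake_post)
print(json.dumps(result))  # {"identity": "obj-1", "ticket": "obj-3"}
```

The orchestration logic stays a thin layer over lightweight APIs, which is exactly why this style is more flexible than coding against a monolithic suite.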
Finally, a microservice-style IAM solution (and the containers its microservices reside in) can be deployed in a far more agile manner by adding services and orchestrations, instead of the “big bang” style rollout of rather complex toolsets we know today.
But as always, this comes at a price. Securing the microservices and their communication, particularly in hybrid environments, is an interesting challenge. Rolling out IAM in an agile approach, integrated with other services, requires strong skills in both IT and security architecture, as well as a new set of tools and automation capabilities. Mixing services of different vendors requires well-thought-out architectural approaches. But it is feasible.
Moving to a microservices approach for IAM provides a huge potential for both the customers and the vendors. For customers, it delivers flexibility and agility. They also can integrate services provided by different vendors in a much better way and they can align their IAM infrastructure with an IT service model much more efficiently.
For vendors, it allows supporting hybrid infrastructures with a single offering, instead of developing or maintaining both an on-premises product and an IDaaS offering. But it also raises many questions, starting with the one on the future licensing or subscription models for microservices – particularly if customers only want some, not all services.
There is little doubt that the shift to microservices architectures in IAM will significantly affect the style of IAM offerings provided by vendors, as it will affect the way IAM projects are done.
During yesterday’s opening keynote at the EIC (European Identity & Cloud Conference), I brought up (and explained) a slide about the areas where Blockchain technology has the potential to help solve existing identity problems, either by doing things just better than today or by delivering entirely new capabilities. Notably: it was about the potential, not a claim that this will inevitably happen.
Not surprisingly – an opening keynote should provoke thoughts and discussions – this led to some discussions on social media right after. Some felt that I had gone over the top with that slide. Honestly, I don’t agree – not when following what I actually said. Yes, if I had stated that all these things are already getting better or will definitely and inevitably get better, that would have been over the top. But in fact, I don’t believe there is a single area marked green in that chart where I’m wrong in predicting a potential for Blockchain technology (or, even broader, Distributed Ledger technology) to improve what we do in identity and to solve some of the open challenges we are facing around identity.
Let’s just look at the left-hand side of the slide. Identification is something outside of technology, unless we are talking DNA. Verification is straightforward – there are so many KYC (Know Your Customer) use cases based on Blockchain these days, with valid business models, that this is already reality or at least close to becoming reality.
Authentication might well become simpler, with various authenticators and IDs, from eIDs to social logins, associated with a wallet. Just one simple store to get access. Yes, there are challenges in creating secure, easy-to-use wallets, but there is potential as well.
Authorization and smart contracts, privacy and smart contracts: obvious potential.
Auditing: there was a cool use case presented by T-Mobile US in that space the evening before during the Blockchain ID Innovation Night.
And finally, all the use cases on the right-hand side are ones closely related to what is discussed as the potential of Blockchain.
Simply said: for all these areas, Blockchain (ID) technology has the potential to solve challenges better. Whether someone can deliver on that potential is a different story. But the potential is there.
And to be very clear: we should not search for problems where we can apply Blockchain as a solution. But in the broad field of identity, we have masses of challenges where Blockchain, as one element of the solution, has a potential to solve the problems. We shouldn’t ignore that potential. Time to think beyond, I’d say.
When new things arrive, which are still in the pioneering stage and far from reaching maturity, there is always a lot of discussion. This is even more true for Blockchain Identity, where the massive hype around Blockchains, a long history of clever ideas failing, and a few interesting technical and security challenges come together. During my keynote at this year’s EIC, I addressed the challenges and success factors for Blockchain ID as well. That led to a discussion on Twitter about whether some of these success factors are contradictory.
That is definitely a good question worth thinking about. So where might the contradiction lie?
- Critical mass vs. interoperability? No conflict.
- Critical mass vs. easy-to-use or secure wallets? No conflict.
- Critical mass vs. affordability? No conflict.
- Privacy by Design and Security by Design? No conflict with these either.
Anyway, if I make such pair-wise comparisons, I don’t find any obvious contradictions. I might have overlooked some, of course.
Obviously, there are some major challenges. Balancing cyberattack resilience against cost and usability is not easy to achieve. That is why it is a challenge.
One factor where we definitely might discuss whether it is a contradiction in itself is the “easy-to-use, easy-to-secure wallet”. Making things both secure and easy to use is a challenge in itself, and I admit it is a success factor for Blockchain ID in general.
However, while it is not easy, I doubt that this is impossible, i.e. contradictory in itself. We have seen many improvements in the usability of more secure solutions in recent years. Fingerprint biometrics might not be perfect, but it is better than a 4-digit PIN. And it is quite easy to use. And that is just one example. In other words: there are ways to combine an acceptable level of usability with good-enough security. Yes, you can always use security as the killer argument. But we also know that there is no 100% security – it is always about finding the right balance.
But what we really should do is actually quite easy: stop arguing what might hinder us in delivering better identity solutions and start figuring out how we can deliver them by using Blockchain technologies wherever appropriate, combining it with what we already have (Identity Relationship Management, OpenID Connect, UMA, PKI, whatever else), and joining our forces.
Yesterday, reports that the German government had become the victim of a cyber-attack made the news. According to these reports, the attack affected the Ministry of Defense and the Department of Foreign Affairs. There is an assumption that the attack was carried out by APT28, a group of Russian hackers. However, only very few details are available to the public.
When reading the news, various points made me raise my eyebrows. These include:
- it has been a group of Russian hackers
- the attack is under control/isolated
- the German government network is well secured
- there has been only one attack
Let’s be realistic and start with the last one. Assuming anything other than continuous attacks against the German government network would be unrealistic. There must be permanent automated attacks, but also manual ones on a regular basis. Most of them will simply bounce off the perimeter, but others will get through undetected. This one large attack has been detected, obviously after already running for quite a while. It might be under control, or not. That is in the nature of APTs (Advanced Persistent Threats), which involve various attack vectors and span multiple systems. Isolating them is not easy at all.
The fact that this attack took place and went undetected for quite a while raises the question whether there are other, undetected attacks still running (or dormant to further evade detection). The probability is high. Notably, the source of the attack remains unclear. Even while there might be hints to a certain group of attackers, it also might turn out that other attackers camouflaged as them. Contrary to some sources quickly jumping to conclusions, cyberattack attribution is a very difficult and unreliable process.
So this leads to the question: is the German government network as secure as they claim?
Obviously not. It might be good at security; it might even be above average. But it is, like every network, vulnerable to attacks. Looking at the IT security spending of the German government, I have massive doubts that it can be secure enough. Security costs money, and the cost of security increases exponentially when approaching 100% security. Notably, the limit is infinite here – in other words, there is no absolute security.
This all should be kept in mind when commenting on the recent attack:
- we can’t be sure who the attackers were or whether they are associated with any state actor;
- even if this particular attack has been isolated (which isn’t necessarily so), there might be other attacks still running, and new attacks will continue on a daily basis;
- the network might be well secured, but there is no 100% security, and its safety should not be a blind assumption.
The essence is: prevention alone is not enough anymore. It is about understanding the weaknesses and potential attack vectors. Modern IT security combines well-designed, multi-layered protection/prevention with advanced detection, response and recovery and is all about continuous improvement. That needs people and costs a lot of money. Time for the German government to review their cyber security spending.
While we still regularly see and hear about IAM (Identity & Access Management) projects that don’t deliver on expectations or are in trouble, we also see and hear about many projects that run well. There are some reasons why IAM projects are more complex than many other IT projects, first and foremost the fact that they are cross-system and cross-organization. IAM integrates a variety of source systems such as HR and target systems, from the mainframe to ERP applications, cloud services, directory services, and many others. They also must connect business and IT, with the business people requesting access, defining business roles, and running recertifications.
In a new whitepaper by One Identity, we compiled both the experience of a number of experts from companies out of different regions and industries, and our own knowledge and experience, to provide concrete, focused recommendations on how to make your IAM project a success. Amongst the top recommendations, we find the need for setting the expectations of stakeholders and sponsors right. Don’t promise what you can’t deliver. Another major recommendation is splitting the IAM initiative/program into smaller chunks, which can be run successfully as targeted projects. Also, it is essential not to run IAM as a technology project only. IAM needs technology, but it needs more – the interaction with the business, well-defined processes, and well-thought-out models for roles and entitlements.
Don’t miss that new whitepaper if you are already working on your IAM program or will have to in the future.
Just two weeks after One Identity acquired Balabit, news spread about the next acquisition in this market segment: Bomgar acquires Lieberman Software. Both vendors have been active in this market. While Bomgar entered the market a couple of years ago, building on a long history in Remote Control solutions, Lieberman Software is one of the Privilege Management veterans.
Looking at their portfolios, there is some functional overlap. However, while the strength of Bomgar comes from Session Management related to their Remote Control features, Lieberman Software is stronger in the Shared Account Password Management and related capabilities. The two companies will be able to deliver strong capabilities in most areas of Privilege Management by joining their forces.
With this second merger in a row, the Privilege Management market dynamics are changing. Aside from the established leaders in the market, there are now two vendors about to bring strong combined offerings to the market. This will foster competition among the leaders, but also increase pressure on smaller vendors, which need to rethink their positioning and strategy to find their sweet spots in the market. From a customer perspective, however, more competition and more choice are always a good thing.
Yesterday, One Identity announced that they have acquired Balabit, a company specialized in Privilege Management, headquartered in Luxembourg but with their main team located in Hungary. One Identity, a Quest Software business, counts amongst the leading vendors in the Identity Management market. Aside from their flagship product One Identity Manager, they deliver a number of other products, including Safeguard as their Privilege Management offering. Balabit, on the other hand, is a pure-play Privilege Management vendor, offering several products with particular strengths around Session Management and Privileged Behavior Analytics.
One Identity already has a technical integration with Balabit’s Session Management product as a part of their Safeguard offering. With the acquisition, One Identity gets direct access to one of the leading Session Management technologies, as well as the Privileged Behavior Analytics capabilities of Balabit. Combined with the One Identity Safeguard capabilities, this results in a comprehensive Privilege Management offering, from Shared Account Password Management to Session Management and Privileged Behavior Analytics. Given that there is already some integration, we expect One Identity to progress fast on creating a fully integrated solution. Another advantage might arise from the fact that a significant portion of the One Identity Manager team is still based in Germany, geographically relatively close to Hungary.
The acquisition strengthens the position of One Identity in both the Privilege Management market and the overall Identity Management market. For Privilege Management, the combined portfolio and the expected close integration moves One Identity into the group of the market leaders, with respect to both the number of customers and technical capabilities. One Identity becomes a clear pick for every shortlist, when evaluating vendors in this market segment.
When looking at the overall Identity Management market, One Identity improves its position as one of the vendors that cover all major areas of that market, with particular strengths in IGA (Identity Governance and Administration, i.e. Identity Provisioning and Access Governance) and Privilege Management, but also in Identity Federation and Cloud SSO, plus other capabilities such as cloud-based MFA (Multi-Factor Authentication). For companies that focus on single sourcing for Identity Management or at least one core supplier, One Identity becomes an even more interesting choice now.
The acquisition underpins the strategy that One Identity had announced after the split of Quest Software from Dell and the creation of One Identity as a separate business of Quest Software: playing a leading role in the overall Identity Management market as a vendor that covers all major areas of this market segment.
A few days ago, Microsoft announced Azure Confidential Computing. As the name implies, the technology is about adding a new layer of protection to cloud services, specifically Microsoft Azure, but also Windows 10 and Windows Server 2016 running in other public cloud infrastructures on specific hardware.
The foundation for Azure Confidential Computing is so-called TEEs (Trusted Execution Environments). Such environments protect the code running in them, and the data used by that code, from access by other parties. Neither administrators, nor people with direct access to the hardware, nor attackers that gain access to administrator accounts can bypass that protection layer – at least that is what the TEE concept promises.
Based on TEEs, data can be held encrypted in the cloud services and their data stores, and is only decrypted and processed within the TEE. That means data is not encrypted at all times, but it remains – if the application is implemented correctly – encrypted in all accessible areas of the public cloud.
For now, there are two supported TEEs. One is Virtual Secure Mode, a software-based TEE that is based on Microsoft Hyper-V in Windows 10 and Windows Server 2016. The other is Intel SGX (Software Guard Extensions), which is a hardware-based TEE. Based on Intel SGX, secure TEEs can be used outside of the Microsoft Azure Cloud.
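The pattern behind this can be illustrated conceptually: data is sealed (encrypted) outside and only decrypted and computed on inside the trusted boundary. The sketch below is purely illustrative – a Python object cannot enforce anything the way SGX hardware or Hyper-V isolation does, and the SHA-256 keystream here is a toy stand-in for real authenticated encryption, not a scheme to use in practice.

```python
import hashlib
import os

class ToyEnclave:
    """Conceptual stand-in for a TEE: the key never leaves this object, and
    sensitive computation happens only inside process(). A real enclave
    (e.g. Intel SGX) enforces this boundary in hardware; here it is only
    illustrated. The XOR keystream stands in for real encryption."""

    def __init__(self):
        self._key = os.urandom(32)  # sealed inside the "enclave"

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        # Derive a deterministic keystream from key + nonce (toy construction).
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(self._key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def seal(self, plaintext: bytes) -> bytes:
        """Encrypt data so it can rest in ordinary cloud storage."""
        nonce = os.urandom(16)
        ks = self._keystream(nonce, len(plaintext))
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def process(self, sealed: bytes) -> int:
        """Decrypt and compute only inside the enclave boundary."""
        nonce, ct = sealed[:16], sealed[16:]
        ks = self._keystream(nonce, len(ct))
        plaintext = bytes(a ^ b for a, b in zip(ct, ks))
        return len(plaintext.split())  # e.g. a computation over sensitive data

enclave = ToyEnclave()
sealed = enclave.seal(b"top secret payroll record")
# Outside the enclave, only ciphertext is visible; the word count is
# computed without the plaintext ever leaving the trusted boundary.
print(enclave.process(sealed))  # 4
```

What the application gets back is only the computation result, never the decrypted data – which is the essential promise of confidential computing.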
Microsoft has been using such technologies as part of its Coco Framework for enterprise blockchain networks for some weeks already, and is now bringing support to Microsoft SQL Server and Azure SQL Database. This is achieved by delegating computations on sensitive data to an “enclave”, which is based on a TEE. Azure Confidential Computing, however, supports broader use of this capability for various types of data.
Microsoft Azure Confidential Computing, currently available in an early adopters program, is a great improvement for security and confidentiality in public cloud environments and will enable customers to port workloads to the cloud that, so far, have been considered too sensitive. The announcement is in line with the recent IBM announcement of its IBM Z14 systems, where effectively the entire system acts as a TEE. While the use of TEEs in Azure Confidential Computing is restricted to the parts of an application that are specifically moved into the TEE, both announcements are about significantly increasing the level of security in computing. That is good news.