Blog posts by Martin Kuppinger
Don’t Run into Security Risks by Managing Robot Accounts the Wrong Way
Robotic Process Automation (RPA) is one of the hot IT topics these days. By using robots that automatically perform tasks that humans executed before, companies unlock a significant potential for cost savings. AI (Artificial Intelligence) helps in realizing RPA solutions. However, if done wrong, the use of RPA can cause severe security challenges.
It starts with the definition of the accounts used by the robots. There appears to be a tendency to create a sort of “super-robot” account – an account that is used by various robots and accumulates the entitlements of multiple robots. Obviously, such approaches stand in stark contrast to the Least Privilege principle. They are the direct opposite of what IAM and specifically Access Governance focus on: mitigating access-related risks.
The only argument for such a solution is that managing the robot accounts appears easier, because accounts can be re-used across robots. But that is just a consequence of managing RPA the wrong way.
RPA accounts are, in effect, technical (functional) user accounts, which are already heavily used in IT. In contrast to common scenarios for such technical accounts, there is an individual “user” available: the robot. Each robot can run in the context of its own account, which has exactly the entitlements required for the (commonly very narrow) tasks that robot executes.
Using standard IGA (Identity Governance & Administration) functions, robots can be managed as a specific type of user. Each robot needs to be onboarded, a process that is very similar to onboarding (and changing and offboarding) an external user. There is nothing fundamentally new or different compared to existing IAM processes. The robot should have a responsible manager, and changes in management can be handled via well-defined mover processes – as they should be for every technical account and every external user.
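The joiner/mover/leaver lifecycle described above can be sketched in a few lines. This is a minimal illustration, not the API of any real IGA product; all class and function names (RobotIdentity, RobotRegistry, and so on) are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Illustrative sketch: robots managed as a specific user type with
# joiner/mover/leaver processes, analogous to external users.
# All names here are hypothetical, not from any IGA product.

@dataclass
class RobotIdentity:
    robot_id: str
    owner: str                      # the responsible manager
    entitlements: set = field(default_factory=set)
    active: bool = True

class RobotRegistry:
    def __init__(self):
        self._robots = {}

    def onboard(self, robot_id, owner, entitlements):
        """Joiner: one account per robot, with only the entitlements it needs."""
        robot = RobotIdentity(robot_id, owner, set(entitlements))
        self._robots[robot_id] = robot
        return robot

    def transfer_owner(self, robot_id, new_owner):
        """Mover: changes in management go through a defined mover process."""
        self._robots[robot_id].owner = new_owner

    def offboard(self, robot_id):
        """Leaver: deactivate and strip entitlements instead of orphaning."""
        robot = self._robots[robot_id]
        robot.active = False
        robot.entitlements.clear()
```

The point of the sketch is that nothing here is RPA-specific: the same onboard/transfer/offboard operations apply to any technical account or external user.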
The entitlements can be granted based on common entitlement structures such as roles and groups, or – in sophisticated IAM infrastructures – be based on runtime authorization, i.e. Dynamic Authorization Management.
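As a rough illustration of the dynamic-authorization alternative, each request can be evaluated against policy at call time rather than via statically assigned groups. The policy format and attribute names below are invented for the example; real Dynamic Authorization Management systems use richer policy languages.

```python
# Minimal sketch of runtime (dynamic) authorization: a request is checked
# against policies when it happens, not via pre-assigned group membership.
# Policies and attribute names are hypothetical.

POLICIES = [
    # (required subject attributes, action, resource)
    ({"type": "robot", "task": "invoice-processing"}, "read", "invoices"),
    ({"type": "robot", "task": "invoice-processing"}, "write", "payment-queue"),
]

def is_authorized(subject: dict, action: str, resource: str) -> bool:
    """Return True if any policy grants this action on this resource."""
    for required_attrs, allowed_action, allowed_resource in POLICIES:
        if (action, resource) != (allowed_action, allowed_resource):
            continue
        if all(subject.get(k) == v for k, v in required_attrs.items()):
            return True
    return False
```

A robot attributed with the invoice-processing task can read invoices, but any request outside its narrow task list is denied – Least Privilege enforced at runtime.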
As with other technical accounts, in the rare case that someone else needs access to a robot’s account, Privileged Access Management (PAM) comes into play. Access to that – in some sense shared – account can be handled via Shared Account Password Management. The behavior of the accounts can be tracked via the UBA (User Behavior Analytics) capabilities found in several of the PAM solutions on the market, in dedicated UBA products, or in other solutions.
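The core of Shared Account Password Management can be sketched as a check-out/check-in flow with rotation and an audit trail. This is a toy model under assumed names; a real PAM product adds approval workflows, session recording, and much more.

```python
import secrets
from datetime import datetime, timezone

# Sketch of Shared Account Password Management for the rare case that a
# human needs a robot's account. Class and method names are illustrative.

class SharedAccountVault:
    def __init__(self):
        self._passwords = {}    # account -> current password
        self._checked_out = {}  # account -> user currently holding it
        self.audit_log = []

    def register(self, account):
        self._passwords[account] = secrets.token_urlsafe(16)

    def check_out(self, account, user):
        """Hand out the credential to exactly one user at a time, audited."""
        if account in self._checked_out:
            raise RuntimeError(f"{account} is already checked out")
        self._checked_out[account] = user
        self.audit_log.append((datetime.now(timezone.utc), user, account, "check_out"))
        return self._passwords[account]

    def check_in(self, account, user):
        """Release the account and rotate the password so the old one is useless."""
        if self._checked_out.get(account) != user:
            raise RuntimeError("not the current holder")
        del self._checked_out[account]
        self._passwords[account] = secrets.token_urlsafe(16)
        self.audit_log.append((datetime.now(timezone.utc), user, account, "check_in"))
```

The audit log is what makes the shared access governable: every human touch of a robot account is attributable to an individual.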
There are no identity- and access-related functionalities specific to RPA that can’t be managed well by relying on the standard IAM capabilities of IGA (Identity Governance & Administration) and PAM. And if accounts are managed and associated per robot, instead of creating “super-robot” accounts, RPA is not an IAM challenge – rather, IAM helps in managing RPA well, without growing security risks.
Smart Manufacturing or, as the Germans tend to say, Industry 4.0, has already become a reality for virtually every business in manufacturing. However, as recently demonstrated by the attack on Norsk Hydro, this evolution comes at a price: it creates and opens doors for attackers that are not easy to close again.
These new challenges are not a surprise when looking at what the quintessence of Smart Manufacturing is from a security perspective. Smart Manufacturing is about connecting business processes to manufacturing processes or, in other words, the (business) value chain to the physical processes (or process chains) on the factory floor.
The factory floor had seen some cyber-attacks even before Smart Manufacturing became popular. However, these were rare attacks, some of them highly targeted at specific industries. Stuxnet, while created in the age of Smart Manufacturing, is an example of such an attack targeted at non-connected environments – in that case, nuclear plants.
In contrast, cyber-attacks on business IT environments are common, with numerous established attack vectors but also a high degree of “innovation” in the attacks. Smart Manufacturing, by connecting these two environments, opens these new doors – at the network level as well as at the application layer. The quintessence of Smart Manufacturing, from the IT perspective, is thus “connecting everything = everything is under attack”. Smart Manufacturing extends the reach of cybercriminals.
But how to lock these doors again? It all starts with communication, and communication starts with a common language. The most important words here are not SCADA or ICS or the like, but “safety” and “security”. Manufacturing is driven by safety. IT is driven by security. Both can align, but both also need to understand the differences and how one affects the other. Machines that are under attack due to security issues might cause safety issues. Beyond that, aspects such as availability differ in their relevance and characteristics between the OT (Operational Technology) and IT worlds. If an HR system is down for a day, that is annoying, but most people will not notice. If a production line is down for a day, that might cause massive costs.
Thus, as always, it begins with people – knowing, understanding, and respecting each other – and processes. The latter includes risk management, incident handling, etc. But, as is also common, there is a need for technology (or tools). Basically, this involves a combination of two groups of tools: specific solutions for OT networks such as unidirectional gateways for SCADA environments, and the well-thought-out use of standard security technologies. This includes Patch Management, which is more complex in OT environments due to the restrictions regarding availability and planned downtimes. This includes the use of Security Intelligence Platforms and Threat Intelligence to monitor and analyze what is happening in such environments and identify anomalies and potential attacks. It also includes various IAM (Identity & Access Management) capabilities. Enterprise Single Sign-On, while no longer a hyped technology, might help in moving from open terminals to individual access, using fast user switching as in healthcare environments. Privileged Access Management might help in restricting privileged user access to critical systems. Identity Provisioning can be used to manage users and their access to such environments.
There are many technologies from IT security that can help lock the doors in OT environments again. It is about time for people from OT and IT to start working together, communicating and learning from each other. Smart Manufacturing is here to stay – now it is time to do it right, not only from a business but also from a security perspective.
Figure: Connecting Everything = Everything is Under Attack
One of the slides I use most frequently these days is about Identity Brokers or Identity Fabrics, which manage the access of everyone to every service. This slide is based on recent experience from several customer advisories, with these customers needing to connect an ever-increasing number of users to an ever-increasing number (and complexity) of services, applications, and systems.
This reflects the complex reality of most businesses. Aside from the few “cloud born” businesses that don’t have factory floors, large businesses commonly have a history in their IT. Calling this “legacy” ignores that many of these platforms deliver essential capabilities to run the business. They can neither be replaced easily, nor are there always simple “cloud born” alternatives that deliver even the essential capabilities. Businesses must check whether all capabilities of existing tools are essential. Simple answer: they are not. Complex answer: not all; but identifying and deciding on the essentials is not that easy. Thus, businesses today just can’t do everything they need with the shiny, bright cloud services that are hyped.
There are two aspects to consider: one is the positive side of maturity (yes, there is a downside: being overloaded with features, monolithic, hard to maintain, …), the other is the need to support an existing environment of services, applications, and systems ranging from public cloud services to on-premises applications that might even rely on a mainframe.
When looking at the hyped cloud services, they always start lean – in the positive sense of being not overly complex, overloaded with features, hard to maintain, etc. Unfortunately, these services also start lean in the sense of focusing on some key features, but frequently falling short in support for the more complex challenges such as connecting to on-premises systems or coming with strong security capabilities.
Does that mean you shouldn’t look for innovative cloud services? No, on the contrary, they can be good options in many areas. But keep in mind that there might be a price to pay for capabilities. If these are not essential, that’s fine. If you consider them essential, you best first check whether they really are. If they remain essential after that check, think about how to deal with that. Can you integrate with existing tools? Will these capabilities come soon, anyway? Or will you finally end up with a shiny, bright point solution or, even worse, a zoo of such shiny, bright tools?
I’m an advocate of the shift to the cloud. And I believe in the need to get rid of many of the perceived essential capabilities that aren’t essential. But we should not be naïve regarding the hybrid reality of the businesses we need to support. That is the complex part when building services: integrating and supporting hybrid IT. Just know the price and do it right (which here equals “well-thought-out”).
Figure: Identity Fabrics: Connecting every user to every service
Blockchain - Just a Hype?
What AI is and what not
Where to put your focus on in 2019
On October 28th, IBM announced its intention to acquire Red Hat. At $34 billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for Red Hat? Not surprisingly, there were quite a few negative comments from parts of the Open Source community. However, there is logic behind the intended acquisition.
Aside from the potential it holds for some of IBM’s strategic fields such as AI (Artificial Intelligence) and even security (which is among the IBM divisions showing the biggest growth), there is obvious potential in the field of Hybrid Cloud as well as DevOps.
Red Hat has long been much more than just a Linux company. When you look at its portfolio, Red Hat is strong in middleware and technologies supporting hybrid cloud environments. Technology stacks like JBoss, Ansible, OpenShift, and OpenStack are well-established.
Red Hat has also been a longstanding supplier preferred by enterprises. It has a strong position in growth markets that play an important role for businesses, Cloud Service Providers (CSPs), and obviously for IBM itself. Red Hat empowers IBM to deliver better and broader services to its customers and to strengthen its role as a provider for Hybrid Cloud and DevOps, and thus its competitive position in the battle with companies such as AWS, Microsoft, or Oracle. On the other hand, IBM allows Red Hat to scale its business by delivering both the organizational structure to grow and a global services team and infrastructure.
From our perspective, there is little risk that Red Hat will lose a significant share of its current business – they are already an enterprise player and selling to enterprise customers, and IBM will strengthen not weaken them.
As with every acquisition, this one also brings some risk for customers. There is some overlap in certain parts of the portfolio, particularly around managing hybrid cloud environments, e.g. Cloud Foundry and OpenShift. While this might affect some customers, the overall risk for customers appears to be limited. On the other hand, the joint potential to support businesses in their Digital Transformation is significant. IBM can increase its offerings and attractiveness for Hybrid Cloud and DevOps, fostered by strong security and with interesting potential in new fields such as AI.
The only question is whether the price tag for Red Hat is too high. While there is huge potential, the combined IBM and Red Hat will still need to monetize it.
Read as well: IBM Acquires Red Hat: The AI potential
Ten years ago, for the second EIC, we published a report and survey on the intersection of IAM and SOA (in German). The main finding back then was that most businesses don’t secure their SOA approaches adequately, if at all.
Ten years later, we are talking microservices. Everything is DevOps, and a small but growing part of it is DevSecOps. And again, the question is whether we have appropriate approaches in place to protect a distributed architecture. This question is even more important in an age where deployment models are agile and hybrid.
So how to do IAM for this microservices world? Basically, there are two challenges: supporting the environments and supporting the services and applications.
The former is about securing containers. That includes privileged access to the environments the containers run in as well as to the containers themselves, but also fine-grained access management and governance of such environments. It also includes the interesting challenge of segregating access to development, test, and production in the DevOps world, which is an even more demanding task than in traditional IT.
The second challenge is about how to secure communication between microservices. One of the technologies that inevitably comes into play here is API Management & Security. Beyond that, we will have to rethink authorization for services, but also how to manage and govern identities and their access at both the level of individual microservices and the orchestrated services and applications provided to the business.
Reasonably defined microservices, fully encapsulated and providing their functionality to connected services and applications exclusively via secure, authenticated and auditable APIs, are an important step towards secure architectures “by design”.
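As one concrete illustration of authenticated service-to-service APIs, a caller can present a signed, expiring token that the receiving microservice verifies before serving the request. The sketch below uses HMAC-SHA256 over a JWT-style structure with a shared secret to stay dependency-free; in practice, asymmetric keys and a standard token library would be preferable, and all names here are assumptions for the example.

```python
import base64
import hashlib
import hmac
import json
import time

# Toy sketch of authenticating calls between microservices with a signed,
# expiring token (HS256, JWT-style). Not production code.

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(secret: bytes, service: str, ttl: int = 300) -> str:
    """Issued by the identity service: who is calling, and until when."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": service, "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(secret: bytes, token: str) -> dict:
    """Run by the receiving microservice before it serves the request."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):       # constant-time comparison
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

Because every call is authenticated and the claims are signed, such API traffic is also auditable: the receiving service always knows which service identity made the request.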
Notably, we must also start thinking about deploying security components as services, externalizing and standardizing them. I discussed this topic a while ago in a webinar – you might want to watch the webcast. With the move to a more agile approach to IT, where changes are quickly deployed to production environments, identity and security must become adequately agile. Automation becomes key to success. We see some interesting trends and offerings arriving; however, most of them are currently focused on privileged users – a good start, but by far not the end of our journey towards secure microservices architectures.
It’s about time to make our IAM services ready to support the new way IT is done: agile and modular. Otherwise, we will end up in a security nightmare.
For as long as I have been observing the IAM business, it has been under constant change. However, there is a change on its way that is bigger than many of the innovations we have seen over the past decade: IAM adopting the architectural concept of microservices.
This will have a massive impact on the way we can do IAM, and it will impact the type of offerings in the market. In a nutshell: microservices can make IAM far more agile and flexible. But let’s start with the Wikipedia definition of Microservices:
Microservices is a software development technique—a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight. [Source: Wikipedia]
Basically, it is about moving to loosely coupled services and lightweight protocols for integration. In effect, this also includes the lightweight APIs these services expose. Each microservice delivers specific, defined capabilities, which are then orchestrated.
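The idea of fine-grained services composed through their APIs can be shown in miniature. The service functions below are invented stand-ins for what would be independent deployments with HTTP APIs; the point is only the shape of the orchestration.

```python
# Toy sketch of microservice orchestration: each service owns one narrow
# capability, and an orchestration layer composes them via their APIs.
# All service and function names are hypothetical.

def user_service_get(user_id: str) -> dict:
    """Microservice 1: owns user records and nothing else."""
    return {"id": user_id, "name": f"user-{user_id}"}

def entitlement_service_get(user_id: str) -> list:
    """Microservice 2: owns entitlement assignments and nothing else."""
    return ["read:invoices"]

def access_profile(user_id: str) -> dict:
    """Orchestration: combine the fine-grained services into one answer."""
    profile = user_service_get(user_id)
    profile["entitlements"] = entitlement_service_get(user_id)
    return profile
```

Either service can be replaced, scaled, or redeployed independently, as long as its small API contract stays stable – which is exactly the loose coupling the definition describes.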
When we look at the evolution of the IAM market, most applications have been architected more or less monolithically in the past. Most of them have some “macroservice” model built in, with a couple of rather large functional components such as a workflow engine or an integration layer. Some vendors have already come a bit further in their journey towards microservices, but when looking at today’s on-premises solutions for IAM, for most vendors the journey towards microservices has just started, if at all.
Looking at IDaaS (Identity as a Service), the situation is different. Many (but not all) of the IDaaS solutions on the market have been architected from scratch following the microservices approach. However, in most cases, they do so internally, while still being exposed as a monolithic service to the customer.
The emerging trend – and, even more important, the growing demand of customers – now is for IAM delivered as a set of microservices and via containers (which might contain multiple microservices or even a few legacy components not yet updated to the new model). Such an approach allows for more flexible deployment, customization, and integration, and delivers the agility businesses are asking for today.
From a deployment point of view, such an architecture gives businesses the option to decide where to run which of the services and, for example, to support a hybrid deployment or a gradual shift from on-premises to private cloud and on to public cloud.
From a customization and integration perspective, orchestrating services via APIs with IAM services and other services such as IT Service Management is more straightforward than coding, and more flexible than just relying on customization. Lightweight APIs and standard protocols help.
Finally, a microservice-style IAM solution (and the containers its microservices reside in) can be deployed in a far more agile manner by adding services and orchestrations, instead of the “big bang” style rollout of rather complex toolsets we know today.
But as always, this comes at a price. Securing the microservices and their communication, particularly in hybrid environments, is an interesting challenge. Rolling out IAM in an agile approach, integrated with other services, requires strong skills in both IT and security architecture, as well as a new set of tools and automation capabilities. Mixing services of different vendors requires well-thought-out architectural approaches. But it is feasible.
Moving to a microservices approach for IAM provides huge potential for both customers and vendors. For customers, it delivers flexibility and agility. They can also integrate services provided by different vendors in a much better way, and they can align their IAM infrastructure with an IT service model much more efficiently.
For vendors, it allows supporting hybrid infrastructures with a single offering, instead of developing or maintaining both an on-premises product and an IDaaS offering. But it also raises many questions, starting with the one on the future licensing or subscription models for microservices – particularly if customers only want some, not all services.
There is little doubt that the shift to microservices architectures in IAM will significantly affect the style of IAM offerings provided by vendors, as it will affect the way IAM projects are done.
AI for the Future of your Business: Effective, Safe, Secure & Ethical
Everything we admire, love, need to survive, and that brings us further in creating a better future with a human face is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential to lead us into a new era of prosperity like we have not seen before, if we succeed in keeping AI Safe, Secure and Ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that it becomes accessible for [...]