Blog posts by Martin Kuppinger

Vista Equity Partners to Acquire Ping Identity

Yesterday, Ping Identity announced Vista Equity Partners' intent to acquire it. Ping Identity has been a privately held, venture-backed company and will now be acquired by a private equity firm.

This acquisition is no real surprise to me. We have seen a few private equity deals in the past years, with SailPoint and NetIQ being acquired by Thoma Bravo (most of NetIQ's business is now part of Micro Focus) and Courion being acquired by K1 Investment Management. Ping Identity has grown massively over the past years, so this acquisition is a logical step in their evolution and, as Ping states in the press release, "the acquisition by Vista does not preclude us from the option of IPO".

Anyway, the main questions are about the impact on customers and on the competition. For customers, there is no direct impact. The shareholder structure of Ping Identity has changed; however, the intent is to keep the management team, including founder Andre Durand, in place and to grow the team further. There are growth plans, with a focus on both organic and inorganic growth. From my perspective, Ping Identity benefits from this acquisition, because it can now focus on innovation and growth. Mid-term, Ping Identity has the opportunity to grow their portfolio, continuing their journey from a vendor in a particular segment of the IAM (Identity and Access Management) market, i.e. Federation and Web Access, towards a platform provider with more products and services. The recent innovations and enhancements in their portfolio have already set the direction.

For the competitors, the days when they could call Ping Identity a niche vendor are finally over. Notably, Ping Identity has already grown into a 400+ employee company, well beyond the level of a start-up or a niche vendor. With the new owner, Ping Identity will be able to focus even more on extending their position in the market.

From my perspective, this deal is positive for both Ping Identity and their customers. I expect the company to further strengthen their market position, beyond pure-play Identity Federation and Access Management.

Oh, and Vista Equity Partners announced on May 31, 2016 that it has entered into a definitive agreement to acquire Marketo. While Ping Identity plays an important role on the IAM side of CIAM (Customer Identity and Access Management) and KYC (Know Your Customer), Marketo is a strong player in marketing automation and analytics.

IoT (or IoEE): Product Security Is Becoming a Strategic Risk

For a long time, IT risks were widely ignored by business people, including Corporate Risk Officers (CROs) and C-level management. This has changed recently with the increasing perception of cyber-security risks. With the move to the IoT (Internet of Things) or, better, the IoEE (Internet of Everything and Everyone), we are entering a new level of risk.

When a company starts selling and deploying connected things, this also raises product liability questions. Obviously, goods that are connected are more exposed than goods that aren't. Connecting things creates a new type of product liability risk by creating a specific attack surface over the Internet. Thus, when enthusiastically looking at the new business potential of connecting things, organizations must also analyze the impact on product liability. If things go really wrong, this might put the entire organization at risk.

Product security inevitably becomes a #1 topic for any organization that starts selling connected things. These things contain some software – let’s call this a “thinglet”. It’s not an app with a user interface. It is a rather autonomous piece of code that connects to apps and to backend services – and vice versa. Such thinglets must be designed following the principles of Security by Design and Privacy by Design. They also must be operated securely, including a well thought-out approach to patch management.
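
To make this concrete, here is a minimal sketch of what Security by Design can mean for a thinglet's patch management: the device accepts only firmware updates that verify against a vendor public key provisioned at manufacturing time. This illustrates the principle only – it is not any specific product's mechanism, all names are invented, and it assumes the Python cryptography package.

```python
# Sketch: a "thinglet" applies a firmware update only if the vendor's
# signature verifies. Illustrative, not a real product mechanism.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In production, the private key stays in the vendor's build pipeline;
# only the public key is baked into the device at manufacturing time.
vendor_private_key = Ed25519PrivateKey.generate()
vendor_public_key = vendor_private_key.public_key()

def apply_update(firmware: bytes, signature: bytes) -> bool:
    """Apply a firmware image only if the vendor signature is valid."""
    try:
        vendor_public_key.verify(signature, firmware)
    except InvalidSignature:
        return False  # reject tampered or unsigned updates
    # ... write to the inactive partition, reboot, keep a rollback image ...
    return True

firmware = b"thinglet firmware v1.1"
assert apply_update(firmware, vendor_private_key.sign(firmware))
```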

It’s past time for vendors to analyze the relationship of the IoEE, product security, and product liability risks.

Sounds like "security as the notorious naysayer"? Sounds like "security kills agility"? Yes, but only at first glance. If you use the security argument for blocking innovation, then security stays in its well-known, negative role. However, as I have written in a recent post (and, in more detail, in some other posts linked from that post), security and privacy, if done right, are an opportunity, not a threat. Security by Design and Privacy by Design drive Agility by Design. A shorter time-to-market results from consistently following these principles. If you don't, you will have to choose between the security risk and the risk of being too late – but only then. Security done right is a key success factor nowadays.

Complexity Kills Agility: Why the German Reference Architecture Model for Industry 4.0 Will Fail

The German ZVEI (Zentralverband Elektrotechnik- und Elektronikindustrie), the association of the electrical and electronics industries, and the VDI (Verein Deutscher Ingenieure), the association of German engineers, have published a concept called RAMI (Referenzarchitekturmodell Industrie 4.0). This reference architecture model has a length of about 25 pages, which is OK. The first target listed for RAMI 4.0 is "providing a clear and simple architecture model as reference".

However, when analyzing the model, there is little clarity or simplicity to be found. The model is full of references to other norms and standards, and full of multi-layered, sometimes three-dimensional architecture models. On the other hand, it provides few answers on details, and only a few links to more detailed documents.

RAMI 4.0 says, for example, that the minimal infrastructure of Industry 4.0 must fulfill the principles of Security by Design. There is no doubt that Industry 4.0 should consistently implement these principles. Unfortunately, there is not even a link to a description of what Security by Design concretely means.

Notably, security (and safety) are covered in a section spanning not even 1% of the entire document. In other words: security is widely ignored in this reference architecture – in these days of ever-increasing cyber-attacks against connected things.

RAMI 4.0 has three fundamental faults:

  1. It is not really concrete. It lacks details in many areas and doesn't even provide links to more detailed information.
  2. While only being 25 pages in length and not being very detailed, it is still overly complex, with multi-layered, complex models.
  3. It ignores the fundamental challenges of security and safety.

Hopefully, we will soon see better concepts that focus on the challenges of agility and security, instead of over-engineering the world of things and Industry 4.0.

There Is No Such Thing as an API Economy

Martin Kuppinger explains why there is no API economy.

Free MFA on Windows for the masses

Cion Systems, a US-based IAM (Identity and Access Management) vendor, recently released a free service that allows Windows users to implement two-factor authentication. It works only for Windows and supports either a PIN code sent via SMS (not available in all countries) or a standard USB storage key as the second factor. Thus, users can increase the security of their Windows devices in a simple and efficient manner, without paying for a service or purchasing specialized hardware.
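
For illustration, the usual pattern behind such an SMS-PIN second factor looks roughly like the sketch below: a short-lived one-time code is generated server-side, delivered out of band, and compared in constant time. This is a generic sketch, not Cion Systems' actual implementation; the SMS gateway is stubbed out.

```python
# Generic SMS one-time PIN sketch (not a specific vendor's implementation).
import hmac
import secrets
import time

PIN_TTL_SECONDS = 300  # codes expire after five minutes

def send_sms(phone_number: str, text: str) -> None:
    """Stub; a real service would call an SMS provider's API here."""
    print(f"SMS to {phone_number}: {text}")

def issue_pin(phone_number: str) -> tuple[str, float]:
    pin = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    send_sms(phone_number, f"Your login code: {pin}")
    return pin, time.time() + PIN_TTL_SECONDS

def verify_pin(expected: str, expires_at: float, supplied: str) -> bool:
    if time.time() > expires_at:
        return False  # code expired
    return hmac.compare_digest(expected, supplied)  # constant-time compare
```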

Their free service is based on the same technology Cion Systems is using for their enterprise self-service and multi-factor authentication solutions and services, which, e.g., also support password synchronization with Microsoft Office 365.

While such a service is not unique, it is a convenient way for Windows users to increase the security of systems that are still commonly protected only by username and password.

"Disruptive Change": The right time to rethink security

Is "Digital Transformation" something of the future? Definitely not. It has long since become reality. With connected things and connected production, the business models of enterprises are already changing profoundly. Communication with customers no longer happens only over traditional websites; it encompasses apps and, increasingly, connected things as well. Rapidly changing business models and partnerships lead to new application architectures such as microservices, and especially to a more intensive use of APIs (Application Programming Interfaces) to combine functions of various internal and external services into new solutions.

This rapid change is often used as an argument that security can't be improved, based on the belief that doing so would prevent meeting the functional and time-to-market requirements of the business. Due to this alleged time pressure, no new, better, up-to-date, and future-oriented security concepts are implemented in applications. However, exactly the opposite is the case: precisely this change is the chance to implement security faster than ever before. In any case, for communication from apps to backend and external systems, for user authentication, and certainly for the complete handling of connected things, one can't use the same concepts that were introduced for websites five, ten, or fifteen years ago.

Furthermore, there is by now a whole set of established standards, from the more traditional SAML (Security Assertion Markup Language) to more modern worldwide standards in which REST-based access from apps to services and between services is the norm. OAuth 2.0 and OpenID Connect are good examples. Or, in other words: mature building blocks for better security solutions are already a reality, both as standards and on a conceptual level.
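
As a small, concrete example of these standards at work, the sketch below shows the OAuth 2.0 client credentials grant – the typical pattern for the service-to-service REST calls mentioned above. The endpoint URL and credentials are placeholders, and it assumes the Python requests library.

```python
# OAuth 2.0 client credentials grant (RFC 6749, section 4.4) - a sketch.
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # placeholder

def get_access_token(client_id: str, client_secret: str, scope: str) -> str:
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),  # HTTP Basic client authentication
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The token is then sent as a Bearer token on each REST call:
#   requests.get(api_url, headers={"Authorization": f"Bearer {token}"})
```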

Another good example is the new (and not yet really established) UMA (User Managed Access) standard of the Kantara Initiative. With this standard, users can share “their” data purposefully with applications beyond the basic OAuth 2.0 functions. If you look for example at some of the data challenges associated with the “connected car”, it soon becomes clear how useful new concepts can be.

UMA and other new standards make it easy to control who gets access to which data, and when. Traditional concepts don't allow this – as soon as diverse user groups need access to diverse data sources in diverse situations, one hits a wall or has to "tinker" solutions together with much effort. If you look, for example, at the crash data recorder – to which insurers, manufacturers, and the police need access, however not always and definitely not to all data – it becomes clear how laboriously some of the new challenges of the digital transformation have to be solved if they are not built on modern security concepts.
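
To make the crash data recorder example concrete, here is an illustrative sketch of the kind of owner-defined policy a central authorization server would evaluate: which party gets which data scopes, and under which conditions. This is a simplification of the idea, not the UMA wire protocol, and all parties and scopes are invented.

```python
# Sketch of owner-defined access rules for crash recorder data.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    party: str          # "insurer", "manufacturer", "police"
    scope: str          # "crash_events", "location_history", "diagnostics"
    crash_reported: bool

# party -> (permitted scopes, condition the owner has laid down)
POLICY = {
    "insurer":      ({"crash_events"}, lambda r: r.crash_reported),
    "manufacturer": ({"diagnostics"}, lambda r: True),
    "police":       ({"crash_events", "location_history"},
                     lambda r: r.crash_reported),
}

def authorize(req: AccessRequest) -> bool:
    scopes, condition = POLICY.get(req.party, (set(), lambda r: False))
    return req.scope in scopes and condition(req)

assert authorize(AccessRequest("insurer", "crash_events", crash_reported=True))
assert not authorize(AccessRequest("insurer", "location_history", True))
```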

"Disruption", the fundamental change we are experiencing in many places in the digital transformation – in contrast to the slow, continual development that was the rule in many industries for years – is the chance to become faster, more agile, and more secure. For this, we need to deploy new concepts oriented towards these new requirements. Often you are quicker with this approach already in the first project than by trying to adapt old concepts to new problems. We should use this chance to make security stronger, especially in the digital transformation. The alternative is to risk not being agile enough to withstand the competition, due to outdated software and old security architectures.

Thycotic acquires Arellia – moving beyond pure Privilege Management

On February 23rd, 2016, Thycotic, one of the leading vendors in the area of Privilege Management (also commonly referred to as Privileged Account Management or Privileged Identity Management), announced the acquisition of Arellia. Arellia delivers Endpoint Security functionality and, in particular, Application Control capabilities. Both Thycotic and Arellia have built their products on the Microsoft Windows platform, which will allow more rapid integration of the two offerings.

Thycotic, with its Secret Server product, has evolved over the past years from an entry-level solution into an enterprise-level product, with significant enhancements in functionality. With the addition of the Arellia products, Thycotic will be able not only to protect access to shared accounts, discover privileged identities, and manage sessions, but also to control what users actually do with their privileged accounts and to restrict account usage. Applications can be whitelisted or blacklisted, further enhancing control.
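
As a simple illustration of the application control idea – emphatically not the Thycotic or Arellia implementation – the sketch below checks an executable's hash against an allow list before a privileged account may launch it. The hash value is a placeholder.

```python
# Hash-based application whitelisting sketch (illustrative only).
import hashlib
from pathlib import Path

ALLOWED_SHA256 = {
    # SHA-256 digests of approved binaries (placeholder value shown)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(executable: Path) -> bool:
    """Allow execution only if the binary's hash is on the allow list."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in ALLOWED_SHA256
```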

With this acquisition, another vendor is combining Privilege Management and Application Control, following CyberArk's acquisition of Viewfinity some months ago. While it might be too early to call this a trend, there is logic in extending Privilege Management beyond the account and session management aspects. Protecting not only the access to privileged accounts but also limiting and controlling their use had already become part of Privilege Management through Session Management capabilities, and more commonly in Unix and Linux environments through restrictions on the use of shell commands. Thus, adding Application Control and other Endpoint Security features is just a logical next step.

Our view of Privilege Management has always extended beyond pure Shared Account Password Management. The current evolution towards integration with Application Control and other features fits our broader view of protecting all accounts with elevated privileges at any time, both for access and for use.

Beyond Datacenter Micro-Segmentation – start thinking about Business Process Micro-Segmentation!

Sometime last autumn I started researching the field of Micro-Segmentation, particularly as a consequence of attending a Unisys analyst event and, subsequently, VMworld Europe. Unisys talked a lot about their Stealth product, while at VMworld there was much talk about the VMware NSX product and its capabilities, including security by Micro-Segmentation.

The basic idea of Datacenter Micro-Segmentation, the most common approach to Micro-Segmentation, is to split the network into small (micro) segments for particular workloads, based on virtual networks with additional capabilities such as integrated firewalls, access control enforcement, etc.

Using Micro-Segmentation, there might even be multiple segments for a particular workload, such as the web tier, the application tier, and the database tier. This further strengthens security, because different access and firewall policies can be applied to the various segments. In virtualized environments, such segments can be created and managed far more easily than in physical environments with a multitude of disparate elements, from switches to firewalls.
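
Expressed as data, the tiered-segmentation idea boils down to a default-deny matrix of permitted flows between segments, as in this small sketch (segment names and ports are illustrative):

```python
# Default-deny flow matrix between micro-segments (illustrative).
ALLOWED_FLOWS = {
    ("internet", "web_tier"): {443},
    ("web_tier", "app_tier"): {8443},
    ("app_tier", "db_tier"):  {5432},
}

def flow_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    """Anything not explicitly whitelisted is denied."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

assert flow_permitted("web_tier", "app_tier", 8443)
assert not flow_permitted("web_tier", "db_tier", 5432)  # no tier skipping
```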

Obviously, by having small, well-protected segments with well-defined interfaces to other segments, security can be increased significantly. However, it is not only about datacenters.

The applications and services running in the datacenter are accessed by users. This might happen through fat-client applications or web interfaces; furthermore, we see a massive uptake in the use of APIs, both by client-side apps and by backend applications consuming and processing data from other backend services. In addition, there is a variety of services where data is, for example, stored or processed locally, starting with downloading documents from backend systems.

Obviously, not everything can be protected perfectly. Data accessed through browsers is out of control once it reaches the client – unless the client can become part of the secure environment as well.

Still, there are more options – particularly within organizations with good control of everything inside the perimeter and at least some level of control over the devices. Ideally, everything becomes protected across the entire business process, from the backend systems to the clients. Within that segmentation, other segments can exist, such as micro-segments at the backend. Such "Business Process Micro-Segmentation" does not stand in contrast to Datacenter Micro-Segmentation, but extends the concept.

From my perspective, we will need two major extensions for moving beyond Datacenter Micro-Segmentation to Business Process Micro-Segmentation. One is encryption. While there is limited need for encryption within the datacenter (don't consider your datacenter to be 100% safe!) due to the technical approach to network virtualization, the client resides outside the datacenter. The minimal approach is protecting the transport by means such as TLS. More advanced encryption is available in solutions such as Unisys Stealth.
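
For that minimal approach, a client-side TLS connection that verifies the server certificate and hostname is the baseline. A short sketch using Python's standard ssl module, with a placeholder hostname:

```python
# Baseline transport protection: verified TLS from client to backend.
import socket
import ssl

HOST = "backend.example.com"  # placeholder

context = ssl.create_default_context()            # verifies cert and hostname
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        tls.sendall(b"GET /status HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls.recv(4096))
```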

The other area for extension is policy management. When looking at the entire business process – and not only the datacenter part – protecting the clients by integrating areas like endpoint security into the policy becomes mandatory.

Neither Business Process Micro-Segmentation nor Datacenter Micro-Segmentation will solve all of our Information Security challenges. Both are only building blocks within a comprehensive Information Security strategy. In my opinion, thinking beyond Datacenter Micro-Segmentation towards Business Process Micro-Segmentation is also a good example of the fact that there is not a “holy grail” for Information Security. Once organizations start sharing information with external parties beyond their perimeter, other technologies such as Information Rights Management – where documents are encrypted and distributed along with the access controls that are subsequently enforced by client-side applications – come into play.

While there is value in Datacenter Micro-Segmentation, it is clearly only a piece of a larger concept – in particular because the traditional perimeter no longer exists, which also makes it more difficult to define the segments within the datacenter. Once workloads are flexibly distributed between various datacenters in the Cloud and on-premises, pure Datacenter Micro-Segmentation reaches its limits anyway.

Secured by Design: The smart streets of San Francisco

On April 18th, 1906, an earthquake and the fires that followed destroyed nearly three quarters of San Francisco. Around 3,000 people lost their lives. Right up to the present, many other, less severe tremors have followed, and the danger of another catastrophe can't be ignored. In a city like San Francisco, however wonderful it might be to live there, people always have to be aware that their whole world can change in an instant. Now the Internet of Things (IoT) can help make alarm systems better. People in this awesome city can at least be sure that the mayor and his office staff are doing their best to keep them safe and secure in all respects.

Not only that: with the help of the IoT, they're also looking for new ways to make the lives of the citizens more convenient. That became clear to me when I saw ForgeRock's presentation about their IoT and identity projects in San Francisco. I noticed with pleasure that Lasse Andresen, ForgeRock's CTO and founder, confirmed what I have been saying for quite some time: security and privacy must not be an afterthought. Designed in from the start, they do not hinder successful new business models but actually enable them. In the IoT, security and privacy are integral elements. They lead to more agility and less risk.

Andresen says in the presentation that identity, security, and privacy are core to the IoT: "It's kind of what makes IoT work or not work. Or making big data valuable or not valuable." San Francisco is a great example of what that means in practice. Everything – "every thing" – in this huge city shall have its own unique identity, from utility meters to traffic lights and parking spaces to the police, firefighters, and ambulances. This allows fast, secure, and orderly action in case of emergencies. Because of these identities, combined with geolocation, the current position of each vehicle is always exactly known to the emergency coordinators. Firefighters identify themselves with digital key cards at the scene to show that they are authorized to be there. Thus everything and everyone becomes connected with each other – people, things, and services – with identity as the glue.

Identity information enables business models that, for example, improve life in the city. The ForgeRock demonstration shows promising examples, such as optimizing traffic flow and road planning with big data, street lights that reduce power consumption by turning on and off automatically, smart parking that allows a driver to reserve a space online in advance, combined with demand-based pricing of parking spaces, and, last but not least, live optimization of service routes.

The ForgeRock solution matches the attributes and characteristics of human users to those of things, devices, and apps, collects the notifications in a big data repository, and then flexibly manages the relationships between all entities – people and things – from this central authoritative source. Depending on his or her role, each user is carefully provisioned with access to certain devices as well as certain rights and privileges. That is why identity is a prerequisite for secure relationships. Things are just another channel demanding access to the Internet, and it has to be clear what they are allowed to do: may item A send sensitive data to a certain server B? If so, does the information have to be encrypted? Without a concept for identities, their relationships, and the management of their access, there are too many obstacles to successfully changing business models and regulations.
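
To illustrate those relationship questions in code (all names invented – this is not ForgeRock's API), a registry might record which thing may talk to which server and whether the channel must be encrypted:

```python
# Identity relationship registry sketch: may thing A send to server B,
# and must the data be encrypted?
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Relationship:
    allowed: bool
    require_encryption: bool

REGISTRY = {
    # (thing identity, target identity) -> relationship attributes
    ("meter-0042", "utility-backend"): Relationship(True, require_encryption=True),
    ("streetlight-17", "city-ops"):    Relationship(True, require_encryption=False),
}

def check_send(thing: str, server: str) -> Optional[Relationship]:
    """None means no relationship exists, so the request is denied."""
    return REGISTRY.get((thing, server))

rel = check_send("meter-0042", "utility-backend")
assert rel is not None and rel.require_encryption  # encrypt before sending
```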

Besides the questions about security and privacy, the lack of standards has long been the biggest challenge for a fully functioning IoT. Manifold platforms, various protocols, and many different APIs have made the overall integration of IoT systems problematic – indeed, there are many competing "standards". However, with User Managed Access (UMA), a new standard has eventually evolved that takes care of the management of access rights. With UMA, millions of users can manage their own access rights and keep full control over their own data without handing it to the service provider. They alone decide which information they share with others. While the resources may be stored on several different servers, a central authorization server ensures that the rules laid down by the owner are reliably applied. Any enterprise that adopts UMA early now has the chance to build a new, strong, and long-lasting relationship with customers, built on security and privacy by design.

Why Distributed Public Ledgers such as Blockchain will not solve the identification and thus the authentication problem

There is a lot of talk about Blockchain and, more generally, Distributed Public Ledgers (DPLs) these days. Some try to position DPLs as a means for better identification and, in consequence, authentication. Unfortunately, this will not really work. We might see a few approaches for stronger or “better” identification and authentication, but no real solution. Not even by DPLs, which I see as the most disruptive innovation in Information Technology in a very, very long time.

Identification is the act of finding out whether someone (or something) is really the person (or thing) he (it) claims to be. It is about knowing whether the person claiming to be Martin Kuppinger is really Martin Kuppinger or in fact someone else.

Authentication, in contrast, is the proof that you hold such a proof – a key, a password, a passport, or whatever. The quality of authentication depends, on the one hand, on the quality of identification (to obtain the proof) and, on the other hand, on aspects such as protection against forgery and the overall strength of the authentication mechanism.

Identification is, basically, the challenge in the enrollment process for an authenticator. There are various ways of doing it. People might be identified by their DNA or fingerprints – which works as long as you know whom the DNA or fingerprint belongs to. But even then, you might not have the real name of that person. People might be identified by showing their ID cards or passports – which works well unless they use faked ID cards or passports. People might be identified by linking profiles of social networks together – which, to be honest, doesn't help much: they might use fake profiles, or fake names in real profiles. There is no easy solution for identification.

In the end, it is about trust: do we trust the identification performed when rolling out authenticators enough to trust those authenticators?

Authentication can be performed with a variety of mechanisms. Again, this is about trust: how much do we trust a certain authenticator? However, authentication does not identify you. It proves that you know a username and password, that you possess a token, or that someone has access to your fingerprints. Some approaches are more trustworthy; others are less so.
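
A short worked sketch of that distinction: standard salted-password verification proves only that the caller knows the secret enrolled earlier; it says nothing about who the caller actually is.

```python
# Password verification proves possession of a secret, not identity.
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Identification happens here, at enrollment; store salt and digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Whoever supplies the right password passes - that is all we learn."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(digest, stored)
```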

So why don't DPLs such as Blockchain solve the challenge of identification and authentication? For identification, this is obvious: they might provide better proof that an ID is linked to various social media profiles (such as with Onename), but they don't solve the underlying identification challenge.

DPLs also don’t solve the authentication issue. If you have such an ID, it either must be unlocked in some way (e.g. by password, in the worst case) or bound to something (e.g. a device ID). That is the same challenge as we have today.
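
A sketch of why that is: a DPL-based ID ultimately boils down to a key pair, and proving control of it means signing a challenge with the private key – which in turn must be unlocked or bound to a device, exactly the challenge we have today. (Assumes the Python cryptography package; the challenge value is invented.)

```python
# A "DPL identity" is a key pair; proving control is just a signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()  # the on-ledger identity's key
challenge = b"login-nonce-1234"              # invented challenge value

signature = identity_key.sign(challenge)     # proves control of the key...
identity_key.public_key().verify(signature, challenge)
# ...but how was identity_key unlocked? A password? A device keystore?
# The same enrollment and protection questions remain.
```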

DPLs can help improve trust, e.g. by proving that the same social media profiles are still linked. They can support non-repudiation, which is an essential element, and the trust level will increase as more parties participate in a DPL. But they can't solve the underlying challenges of identification and authentication. Simply put: technology will never know exactly who someone is.
