
Blog posts by Martin Kuppinger

There is no Consumer Identity & Access Management at all – at least not as a separate discipline

Mar 01, 2016 by Martin Kuppinger

These days, there is a lot of talk about Consumer Identity & Access Management, or CIAM. However, there is no such thing as CIAM, at least not as a separate discipline within IAM. There are technologies that are of higher relevance when dealing with customers and consumers than when dealing with employees. But there are neither technologies that are required only for CIAM, nor is there any benefit in trying to set up a separate CIAM infrastructure.

This does not mean that IAM should not or must not focus on consumers – on the contrary. But it is about extending and, to some extent, renovating the existing on-premises IAM, which is commonly focused on employees and some business partners. It is about one integrated approach for all identities (employees, partners, consumers, …), managing their access to all services regardless of the deployment model, using all types of devices and things. It is about seamlessly managing all access of all identities in a consistent way. Notably, a “consistent way” is not the same as “from a single platform”.

So why don’t we need a separate CIAM? The easiest answer is found by asking a simple question: “Is there any single application in your organization that is only accessed by consumers?” This implies “and not by at least some of your employees, e.g. for customer service, administration and operations, or analyzing the data collected.” The obvious answer to that question is that there is no such application. There are applications that are used only by employees, but not the other way round. So why should there be separate IAM deployments for applications that are used by a common group of users? That could only result in security issues and management trouble.

The other aspect is that the way applications are used within the enterprise is changing anyway. Mobile users access cloud applications without even touching the internal network anymore. Thus, technologies such as Adaptive Authentication, Cloud IAM or IDaaS (Identity Management as a Service), Identity Federation, etc. are relevant not only for consumer-facing solutions but for all areas of IAM.
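
To make “Adaptive Authentication” a bit more concrete, here is a minimal, purely illustrative Python sketch of risk-based authentication: the required authentication strength depends on contextual signals rather than on whether the user is an employee or a consumer. The signals, weights, and thresholds are assumptions for this example, not any product’s actual logic.

```python
# Illustrative sketch of adaptive (risk-based) authentication.
# Signal names, weights and thresholds are assumptions, not a product's logic.
def risk_score(known_device: bool, on_corporate_network: bool,
               unusual_geolocation: bool, privileged_app: bool) -> int:
    score = 0
    if not known_device:
        score += 30
    if not on_corporate_network:
        score += 20
    if unusual_geolocation:
        score += 30
    if privileged_app:
        score += 20
    return score

def required_authentication(score: int) -> str:
    """Map the contextual risk score to an authentication requirement."""
    if score < 30:
        return "password"
    if score < 60:
        return "password + one-time code"
    return "deny or require step-up via registered authenticator"

# An employee on a managed PC in the corporate LAN vs. a consumer on a new phone abroad.
print(required_authentication(risk_score(True, True, False, False)))
print(required_authentication(risk_score(False, False, True, False)))
```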

Finally, there is the aspect that users frequently have multiple digital identities, even in relation to their employer. Many employees of car manufacturers are also customers of the company they work for. Many employees of insurance companies also have insurance contracts with those companies, and some even act as freelance insurance brokers. Managing such complex relationships becomes far easier with one IAM for all – employees, partners, and consumers. One IAM that serves all applications, on-premises and in the Cloud. And one IAM that supports all types of access.

That might still result in projects that focus on managing consumer access to services, IAM for cloud services, and so on. But all these projects should be part of moving IAM to the next level: an IAM that serves all requirements, from the traditional access of an employee using a PC in the corporate LAN to reach a legacy backend system, to the mobile consumer coming in via a social login and accessing a cloud service.



“Disruptive Change”: the right time to rethink security

Feb 29, 2016 by Martin Kuppinger

Is “Digital Transformation” something of the future? Definitely not – it has long since become reality. With connected things and connected production, the business models of enterprises are already changing profoundly. Communication with customers no longer happens only over traditional websites; it encompasses apps and, increasingly, connected things as well. Rapidly changing business models and partnerships lead to new application architectures such as microservices, and above all to a more intensive use of APIs (Application Programming Interfaces – interfaces that expose application functions to external callers), in order to combine functions of various internal and external services into new solutions.

This rapid change is often used as an argument that security cannot be improved, based on the belief that better security would prevent business requirements – deadlines and functionality, let alone both at once – from being met. Due to alleged time pressure, no new, better, up-to-date and future-oriented security concepts are implemented in applications. Yet exactly the opposite is the case: precisely this change is the chance to implement security faster than ever before. In any case, for communication from apps to backend and external systems, for user authentication, and certainly for the complete handling of connected things, one cannot use the same concepts that were introduced for websites five, ten or fifteen years ago.

Furthermore, there is by now a whole range of established standards, from the more traditional SAML (Security Assertion Markup Language) to more modern, globally adopted standards in which REST-based access from apps to services and between services is the norm. OAuth 2.0 and OpenID Connect are good examples. In other words: mature building blocks for better security solutions are already a reality, both as standards and on a conceptual level.
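
As a concrete illustration of the REST-based pattern these standards support, here is a short, hedged Python sketch using the requests library: an application obtains an OAuth 2.0 access token via the client credentials grant and then calls an API with it as a bearer token. The endpoints, client credentials, and scope are hypothetical placeholders.

```python
import requests

# Hypothetical endpoints and scope, for illustration only.
TOKEN_URL = "https://idp.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/orders"

def get_access_token(client_id: str, client_secret: str) -> str:
    """OAuth 2.0 client credentials grant: the app authenticates itself, no user involved."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "orders.read"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_orders(token: str) -> list:
    """REST call protected by a bearer token instead of a session cookie."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```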

Another good example is the new (and not yet really established) UMA (User Managed Access) standard of the Kantara Initiative. With this standard, users can share “their” data purposefully with applications beyond the basic OAuth 2.0 functions. If you look for example at some of the data challenges associated with the “connected car”, it soon becomes clear how useful new concepts can be.

UMA and other new standards make it easy to control who gets access to which data, and when. Traditional concepts do not allow this: as soon as diverse user groups need access to diverse data sources in diverse situations, one hits a wall or has to cobble together solutions with considerable effort. Consider, for example, a crash data recorder to which insurers, manufacturers and the police all need access – but not at all times and definitely not to all data. It quickly becomes clear how laborious and expensive some of the new challenges of digital transformation become to solve if they are not built on modern security concepts.
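
To illustrate the idea (this is not the UMA protocol itself, just a sketch of the kind of owner-defined policy a central authorization server could evaluate), consider a vehicle owner who grants different parties different scopes on the crash data recorder. The party names, scopes, and crash condition below are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    party: str         # e.g. "insurer", "manufacturer", "police"
    scope: str         # e.g. "read:crash_summary", "read:raw_telemetry"
    crash_reported: bool

# Policy the vehicle owner registers once at a central authorization server.
# Parties and scopes are hypothetical names for illustration.
OWNER_POLICY = {
    "insurer":      {"read:crash_summary"},
    "manufacturer": {"read:diagnostics"},
    "police":       {"read:crash_summary", "read:raw_telemetry"},
}

def authorize(req: AccessRequest) -> bool:
    """Grant only the scopes the owner allowed for that party,
    and only after a crash has actually been reported."""
    if not req.crash_reported:
        return False
    return req.scope in OWNER_POLICY.get(req.party, set())

print(authorize(AccessRequest("insurer", "read:crash_summary", True)))  # True
print(authorize(AccessRequest("insurer", "read:raw_telemetry", True)))  # False
```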

“Disruption” – the fundamental change we are experiencing in many places of the digital transformation, in contrast to the slow, continual development that was the rule in many industries for years – is the chance to become faster, more agile and more secure. For this, we need to deploy new concepts that are oriented towards these new requirements. Often you are quicker with this approach even in the first project than by trying to adapt old concepts to new problems. We should use this chance to make security stronger, especially in the digital transformation. The alternative is to risk not being agile enough to withstand the competition, due to outdated software and old security architectures.



Thycotic acquires Arellia – moving beyond pure Privilege Management

Feb 24, 2016 by Martin Kuppinger

On February 23rd, 2016, Thycotic, one of the leading vendors in the area of Privilege Management (also commonly referred to as Privileged Account Management or Privileged Identity Management), announced the acquisition of Arellia. Arellia delivers Endpoint Security functionality and, in particular, Application Control capabilities. Both Thycotic and Arellia have built their products on the Microsoft Windows platform, which should allow for more rapid integration of the two offerings.

Thycotic, with its Secret Server product, has evolved over the past years from an entry-level solution towards an enterprise-level product, with significant enhancements in functionality. With the addition of the Arellia products, Thycotic will not only be able to protect access to shared accounts, discover privileged identities, and manage sessions, but can also control what users actually do with their privileged accounts and restrict account usage. Applications can be whitelisted or blacklisted, further enhancing control.
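
For readers unfamiliar with the mechanism, application whitelisting in general terms often boils down to allowing only known-good executables to run. The following sketch is purely illustrative and is not how Thycotic or Arellia implement it; the hash value is a placeholder.

```python
import hashlib
from pathlib import Path

# Purely illustrative allow list: SHA-256 hashes of executables an
# administrator has approved for use under a privileged account.
ALLOWED_HASHES = {
    "5f3e...placeholder...",  # hash of an approved admin tool (placeholder)
}

def is_execution_allowed(executable: Path) -> bool:
    """Whitelist check: only binaries whose hash is on the allow list may run."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in ALLOWED_HASHES
```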

With this acquisition, another vendor is combining Privilege Management and Application Control, following CyberArk’s acquisition of Viewfinity some months ago. While it might be too early to call this a trend, there is logic in extending Privilege Management beyond account management and session management. Protecting not only the access to privileged accounts but also limiting and controlling the use of such accounts had already become part of Privilege Management through Session Management capabilities, and, more commonly in Unix and Linux environments, through restrictions on the use of shell commands. Thus, adding Application Control and other Endpoint Security features is just a logical next step.

Our view of Privilege Management has always extended beyond pure Shared Account Password Management. The current evolution towards integration with Application Control and other features fits our broader view of protecting all accounts with elevated privileges at all times, for both access and use.



Beyond Datacenter Micro-Segmentation – start thinking about Business Process Micro-Segmentation!

Feb 04, 2016 by Martin Kuppinger

Sometime last autumn I started researching the field of Micro-Segmentation, particularly as a consequence of attending a Unisys analyst event and, subsequently, VMworld Europe. Unisys talked a lot about their Stealth product, while at VMworld there was much talk about the VMware NSX product and its capabilities, including security by Micro-Segmentation.

The basic idea of Datacenter Micro-Segmentation, the most common approach to Micro-Segmentation, is to split the network into small (micro) segments for particular workloads, based on virtual networks with additional capabilities such as integrated firewalls, access control enforcement, etc.

Using Micro-Segmentation, there might even be multiple segments for a particular workload, such as the web tier, the application tier, and the database tier. This allows security to be strengthened further by applying different access and firewall policies to the various segments. In virtualized environments, such segments can be created and managed easily, far better than in physical environments with a multitude of disparate elements from switches to firewalls.
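
Expressed as a sketch, a micro-segmentation policy for such a three-tier workload can be thought of as a small set of explicit allow rules with an implicit default deny. The segment names and ports below are assumptions for illustration, not a specific product’s configuration.

```python
# Illustrative only: per-segment allow rules for a three-tier workload,
# with an implicit default deny. Segment and port names are assumptions.
SEGMENT_POLICY = [
    # (source segment, destination segment, destination port)
    ("web", "app", 8443),  # web tier may call the application tier
    ("app", "db", 5432),   # application tier may reach the database
]

def is_flow_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow passes only if an explicit rule permits it."""
    return (src, dst, port) in SEGMENT_POLICY

print(is_flow_allowed("web", "app", 8443))  # True
print(is_flow_allowed("web", "db", 5432))   # False: web tier cannot bypass the app tier
```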

Obviously, by having small, well-protected segments with well-defined interfaces to other segments, security can be increased significantly. However, it is not only about datacenters.

The applications and services running in the datacenter are accessed by users. This might happen through fat-client applications or web interfaces; furthermore, we see a massive uptake in the use of APIs, both by client-side apps and by backend applications consuming and processing data from other backend services. In addition, there is a variety of scenarios where data is stored or processed locally, starting with documents downloaded from backend systems.

Obviously, not everything can be protected perfectly. Data accessed through browsers is out of control once it reaches the client – unless the client can become part of the secure environment as well.

Still, there are more options, particularly within organizations that have good control of everything inside the perimeter and at least some level of control over the devices. Ideally, everything becomes protected across the entire business process, from the backend systems to the clients. Within that segmentation, other segments can exist, such as micro-segments at the backend. Such “Business Process Micro-Segmentation” does not stand in contrast to Datacenter Micro-Segmentation, but extends the concept.

From my perspective, we will need two major extensions to move beyond Datacenter Micro-Segmentation to Business Process Micro-Segmentation. One is encryption. While the technical approach of network virtualization means there is limited need for encryption within the datacenter (though don’t consider your datacenter 100% safe!), the client resides outside the datacenter. The minimal approach is to protect the transport by means such as TLS. More advanced encryption is available in solutions such as Unisys Stealth.
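
As a minimal example of that “protect the transport” baseline, the following Python sketch opens a connection that enforces certificate validation and a modern TLS version using only the standard library. The host name is a placeholder.

```python
import socket
import ssl

# Minimal sketch: enforce TLS with certificate and hostname verification
# for traffic leaving the datacenter segment. Host name is a placeholder.
HOST = "backend.example.com"

context = ssl.create_default_context()            # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version())  # e.g. 'TLSv1.3'
```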

The other area for extension is policy management. When looking at the entire business process – and not only the datacenter part – it becomes mandatory to protect the clients by integrating areas such as endpoint security into the policy.

Neither Business Process Micro-Segmentation nor Datacenter Micro-Segmentation will solve all of our Information Security challenges. Both are only building blocks within a comprehensive Information Security strategy. In my opinion, thinking beyond Datacenter Micro-Segmentation towards Business Process Micro-Segmentation is also a good example of the fact that there is not a “holy grail” for Information Security. Once organizations start sharing information with external parties beyond their perimeter, other technologies such as Information Rights Management – where documents are encrypted and distributed along with the access controls that are subsequently enforced by client-side applications – come into play.

While there is value in Datacenter Micro-Segmentation, it is clearly only a piece of a larger concept – in particular because the traditional perimeter no longer exists, which also makes it more difficult to define the segments within the datacenter. Once workloads are flexibly distributed between various datacenters in the Cloud and on-premises, pure Datacenter Micro-Segmentation reaches its limits anyway.



Secured by Design: The smart streets of San Francisco

Jan 15, 2016 by Martin Kuppinger

On April 18th, 1906, an earthquake and the fires that followed destroyed nearly three quarters of San Francisco. Around 3,000 people lost their lives. Many other, less severe tremors have followed right up to the present, and the danger of another catastrophe cannot be ignored. In a city like San Francisco, however wonderful it might be to live there, people always have to be aware that their whole world can change in an instant. Now the Internet of Things (IoT) can help to make warning systems better. People in this awesome city can at least be sure that the mayor and his office staff are doing their best to keep them safe and secure in every respect.

Not only that: with the help of the IoT, they are also looking for new ways to make citizens’ lives more convenient. That became clear to me when I saw ForgeRock’s presentation about their IoT and identity projects in San Francisco. I noticed with pleasure that Lasse Andresen, ForgeRock’s CTO and founder, confirmed what I have been saying for quite some time: security and privacy must not be an afterthought. Properly designed in from the start, both do not hinder successful new business models but actually enable them. In IoT, security and privacy are integral elements. They lead to more agility and less risk.

Andresen says in the presentation that identity, security and privacy are core to IoT: “It’s kind of what makes IoT work or not work. Or making big data valuable or not valuable.” San Francisco is a great example of what that means in practice. Everything – “every thing” – in this huge city is to have its own unique identity, from utility meters to traffic lights and parking spaces to the police, firefighters and ambulances. This allows fast, secure and orderly action in case of emergency. Because of their identities, combined with geolocation, the current position of each vehicle is always known exactly to the emergency coordinators. Firefighters identify themselves with digital key cards at the scene to show that they are authorized to be there. Thus everything and everyone becomes connected: people, things and services, with identity as the glue.

Identity information enables business models that, for example, improve life in the city. The ForgeRock demonstration shows promising examples such as optimizing traffic flow and road planning with big data, street lights that reduce power consumption by turning on and off automatically, smart parking that lets drivers reserve a space online in advance, combined with demand-based pricing of parking spaces, and, last but not least, live optimization of service routes.

The ForgeRock solution matches the attributes and characteristics of human users to those of things, devices, and apps, collects the notifications in a big data repository, and then flexibly manages the relationships between all entities – people and things – from this central authoritative source. Depending on his or her role, each user is carefully provisioned with access to certain devices as well as certain rights and privileges. That is why identity is a prerequisite for secure relationships. Things are just another channel demanding access to the Internet. It has to be clear what they are allowed to do: may item A send sensitive data to a certain server B? If so, does the information have to be encrypted? Without a concept for identities, their relationships, and the management of their access, there are too many hindrances to successfully changing business models and regulations.
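
As a purely illustrative sketch (assumed names, not ForgeRock’s actual API or data model), managing such relationships from a central authoritative source can be pictured as a registry of identity-to-thing relationships plus role-based permissions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relationship:
    identity: str  # person or thing, e.g. "firefighter_042" or "streetlight_17"
    target: str    # the device or service being accessed
    role: str      # e.g. "operator", "maintainer", "reader"

# Central, authoritative registry of relationships (names are hypothetical).
RELATIONSHIPS = {
    Relationship("firefighter_042", "building_77_door", "operator"),
    Relationship("streetlight_17", "telemetry_service", "reader"),
}

# Which actions each role permits.
ROLE_PERMISSIONS = {
    "operator": {"unlock", "read_status"},
    "maintainer": {"read_status", "update_firmware"},
    "reader": {"send_telemetry"},
}

def is_allowed(identity: str, target: str, action: str) -> bool:
    """Allow an action only if a registered relationship grants a role
    whose permissions include that action."""
    for rel in RELATIONSHIPS:
        if rel.identity == identity and rel.target == target:
            return action in ROLE_PERMISSIONS.get(rel.role, set())
    return False

print(is_allowed("firefighter_042", "building_77_door", "unlock"))  # True
print(is_allowed("streetlight_17", "telemetry_service", "unlock"))  # False
```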

Besides the questions about security and privacy, the lack of standards has long been the biggest challenge for a fully functioning IoT. Manifold platforms, various protocols and many different APIs have made the overall integration of IoT systems problematic – indeed, there are even many competing “standards”. However, with User Managed Access (UMA), a new standard has finally evolved that takes care of the management of access rights. With UMA, millions of users can manage their own access rights and keep full control over their own data without handing it over to the service provider. They alone decide which information they share with others. While the resources may be stored on several different servers, a central authorization server ensures that the rules laid down by the owner are reliably applied. Any enterprise that adopts UMA early now has the chance to build new, strong and long-lasting customer relationships based on security and privacy by design.



Why Distributed Public Ledgers such as Blockchain will not solve the identification and thus the authentication problem

Dec 17, 2015 by Martin Kuppinger

There is a lot of talk about Blockchain and, more generally, Distributed Public Ledgers (DPLs) these days. Some try to position DPLs as a means for better identification and, in consequence, authentication. Unfortunately, this will not really work. We might see a few approaches for stronger or “better” identification and authentication, but no real solution. Not even by DPLs, which I see as the most disruptive innovation in Information Technology in a very, very long time.

Identification is the act of finding out whether someone (or something) is really the person (or thing) he (it) claims to be. It is about knowing whether the person claiming to be Martin Kuppinger is really Martin Kuppinger or in fact someone else.

Authentication, in contrast, is the proof that you possess such a credential: a key, a password, a passport, or whatever. The quality of authentication depends on the one hand on the quality of the identification performed to issue that credential, and on the other hand on aspects such as protection against forgery and the strength of the authentication mechanism itself.

Identification is, basically, the challenge in the enrollment process for an authenticator. There are various ways of doing it. People might be identified by their DNA or fingerprints – which works as long as you know whom that DNA or fingerprint belongs to, and even then you might not have the person’s real name. People might be identified by showing their ID cards or passports – which works well unless the documents are forged. People might be identified by linking social network profiles together – which, to be honest, doesn’t help much: the profiles may be fake, or real profiles may carry fake names. There is no easy solution for identification.

In the end, it is about trust: do we trust the identification performed when rolling out authenticators enough to trust those authenticators?

Authentication can be performed with a variety of mechanisms. Again, this is about trust: How much do we trust a certain authenticator? However, authentication does not identify you. It proves that you know the username and password; that you possess a token; or that someone has access to your fingerprints. Some approaches are more trustworthy; others are less trustworthy.

So why don’t DPLs such as Blockchain solve the challenge of identification and authentication? For identification, this is obvious. They might provide a better proof that an ID is linked to various social media profiles (such as with Onename), but they don’t solve the underlying identification challenge.

DPLs also don’t solve the authentication issue. If you have such an ID, it must either be unlocked in some way (e.g., in the worst case, by a password) or bound to something (e.g. a device ID). That is the same challenge we have today.
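
A small sketch may help to make the point (it assumes the Python cryptography package and is not any specific ledger’s mechanism): verifying a signature against a public key that could be anchored in a DPL proves that the signer controls the corresponding private key, and nothing more.

```python
# Minimal sketch, not any specific blockchain's mechanism: a key pair whose
# public part could be anchored in a distributed ledger. Verifying a signature
# proves possession of the private key; it says nothing about who the holder is.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by "someone"
public_key = private_key.public_key()       # this part could sit in a ledger

challenge = b"login-challenge-2016-02-17"
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)
    print("Signature valid: the signer controls the registered key.")
    print("Whether that signer is really 'Martin Kuppinger' remains unproven.")
except InvalidSignature:
    print("Signature invalid.")
```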

DPLs can help to improve trust, e.g. by showing that the same social media profiles are still linked to an ID. They can support non-repudiation, which is an essential element, and the trust level will increase as more parties participate in a DPL. But they cannot solve the underlying challenges of identification and authentication. Simply put, technology will never know exactly who someone is.



Cyber Security: Why Machine Learning is Not Enough

Dec 09, 2015 by Martin Kuppinger

Currently, there is a lot of talk about new analytical approaches in the field of cyber security. Anomaly detection and behavioral analytics are some of the overarching trends along with RTSI (Real Time Security Intelligence), which combines advanced analytical approaches with established concepts such as SIEM (Security Information and Event Management).

Behind all these changes and other new concepts, we find a number of buzzwords such as pattern-matching algorithms, predictive analytics, or machine learning. Aside from the fact that such terms frequently aren’t used correctly or precisely, some of these concepts – machine learning, for example – have limitations by design.

Machine learning implies that the “machine” (a piece of software) is able to “learn”. In effect, this means that the machine can improve its results over time by analyzing the effect of previous actions and then adjusting future actions.

One of the challenges in cyber security is that new attack vectors appear continuously. Some are just variants of established patterns; some are entirely new. In an ideal world, a system is able to recognize unknown vectors. Machine learning per se doesn’t – the concept is about learning from things that have already gone wrong.

This is different from anomaly detection, which identifies unknown or changing patterns. Here, something new is exactly what gets flagged as an anomaly.
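
A toy example illustrates the difference: the following sketch flags a value as anomalous simply because it deviates strongly from a baseline of normal behavior, without ever having been trained on a known-bad pattern. The numbers and threshold are made up for illustration.

```python
import statistics
from typing import List

# Baseline of "normal" behavior, e.g. failed logins per hour. Values are invented.
baseline = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]

def is_anomalous(observed: int, baseline: List[int], threshold: float = 3.0) -> bool:
    """Flag an observation that deviates from the baseline by more than
    `threshold` standard deviations, even if that pattern was never seen before."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(13, baseline))   # False: within the normal range
print(is_anomalous(250, baseline))  # True: an unseen spike, flagged anyway
```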

Interestingly, some of the technologies whose marketing talks about “machine learning” in fact do far more than after-the-fact machine learning. Frequently, it is not a matter of the technology but of the wrong use of buzzwords in marketing. In any case, customers should be careful with buzzwords: ask the vendor what is really meant by them, and ask yourself whether the information the vendor provides is really valid and solves your challenge.



Security and Privacy: An opportunity, not a threat

Dec 08, 2015 by Martin Kuppinger

One of the lessons I have learned over the years is that it is far easier to “sell” things by focusing on the positive aspects than by merely explaining that risk can be reduced. This is particularly true for Information Security, and it also applies to privacy as a concept. A few days ago I had a conversation about the opportunities organizations have to sell their software or services better by supporting advanced privacy features. The argument was that organizations can achieve better competitive positioning by supporting high privacy requirements.

Unfortunately, this is only partially true. It is true in areas with strong compliance regulations, and it is true for the part of the customer base that is privacy-sensitive. However, it might even become an inhibitor in countries with different regulations and expectations.

There are three different groups of arguments for implementing more security and privacy in applications and services:

  1. Security and regulatory requirements – even though they must be met, these arguments are about something that has to be done, with no business benefit of its own.
  2. Competitive differentiation – an opportunity; however, as described above, this argument is commonly relevant only for certain areas and some of the potential customers. For these, it is either a must-have (regulations) or a positive argument, a differentiator (security- and privacy-sensitive people).
  3. Security and privacy as a means of becoming more agile in responding to business requirements. Here we are talking about positive aspects: software and services that can be as secure as they need to be (depending on regulations or customer demand), or as open as the market requires, allow organizations to react flexibly to changing requirements.

The third approach is obviously the most promising one when trying to sell your project internally as well as your product to customers.



Microsoft to offer cloud services from German datacenters

Dec 02, 2015 by Martin Kuppinger

With a recent announcement, Microsoft is reacting both to customers’ privacy and security concerns and to the continuing uncertainty around a still-pending lawsuit in the U.S. The latter concerns an order Microsoft received to turn over a customer’s emails stored in Ireland to the U.S. government.

The new datacenters will operate from two locations within Germany, Frankfurt/Main and Magdeburg. They will run under the control of T-Systems, a subsidiary of Deutsche Telekom. Thus an independent German company acts as the “data trustee”, as Microsoft has named the role. Microsoft itself will not be able to access the data without the permission of the customer or the data trustee, and if permission is granted, will do so only under the trustee’s supervision.

In concrete terms, customers can access the Microsoft cloud services from a non-Microsoft datacenter that operates locally. They get the full functionality of the Microsoft cloud services, but do not deal with Microsoft as a U.S.-based company.

Microsoft’s announcement is not the first of its kind. T-Systems, for example, already operates Cisco cloud services, while the Microsoft cloud services are expected to become available in the second half of 2016. VMware also works with independent service providers to deliver its cloud services.

Basically, we observe a growing trend of U.S. cloud service providers offering delivery options together with partners from other countries, to serve customer demands for privacy, security, and independence from U.S. court decisions. On the one hand, U.S. cloud providers going down that path can now address their customers’ needs better. On the other hand, this creates tremendous potential for locally operating enterprise-class cloud providers, which can act as the local partners delivering the services. They might even combine such services with value-added services and integrations, e.g. complete offerings for medium-sized businesses covering all major enterprise functions from email to ERP, CRM, and other areas.

There is no doubt that such offerings will come at a price – but I’m sure that many customers will be willing to pay it, not only in Germany and other European countries but also in many other regions worldwide that prefer to rely on locally delivered, well-segregated services.



Security is part of the business. Rethink your organization for IoT and Smart Manufacturing

Dec 01, 2015 by Martin Kuppinger

IoT (Internet of Things) and Smart Manufacturing are part of the ongoing digital transformation of businesses. IoT is about connected things, from sensors to consumer goods such as wearables. Smart Manufacturing, also sometimes called Industry 4.0, is about bridging the gap between business processes and production processes, i.e. the actual manufacturing of goods.

In both areas, security is a key concern. When connecting things, both the things themselves and the central systems receiving data back from them must be sufficiently secure. When connecting business IT and operational IT (OT, for Operational Technology), systems that were formerly behind an “air gap” frequently become directly connected. The simple rule behind all this is: “Once a system is connected, it can be attacked” – via that connection. Connecting things and moving towards Smart Manufacturing thus inevitably increases the attack surface.

Traditionally, if there is a separate security (and not only a “safety”) organization in OT at all, it is segregated from the (business) IT department and from the Information Security and IT Security organization. For things, there commonly is no defined security department. The apparently logical solution when connecting everything is a central security department that oversees all security – in business IT, in OT, and in things. However, this is only partially correct.

Things must be constructed following the principles of security by design and privacy by design from the very beginning. Security must not be an afterthought. Notably, this also increases agility. Thus, the people responsible for implementing security must reside in the departments creating the “things”. Security must become an integral part of the organization.

In OT, there is a common gap between OT’s safety view and IT’s security perspective. However, safety and security are not a dichotomy – we need to find ways of supporting both, in particular by modernizing the architecture of OT, well beyond security alone. Again, security has to be considered at every stage. Thus, its execution should also be an integral part of, for example, planning plants and production lines.

Notably, the same applies to IT. Security must not be an afterthought; it must become part of the DNA of the entire organization. Software development, procurement, system management, etc. all have to think about security as part of their daily work.

Simply put: major parts of security must move into the line-of-business departments. Some cross-functional areas, e.g. around the underlying infrastructure, still need to be executed centrally (plus, potentially, service centers, e.g. for software development) – but particularly where things are concerned, security must become an integral part of R&D.

On the other hand, the new organization also needs a strong central element. While the “executive” element will become increasingly decentralized, the “legislative” and “judicial” elements must remain central – across all functions, i.e. business IT, OT, and IoT. In other words: governance, setting the guidelines and governing their correct execution, is a central task that must span and cover all areas of the connected enterprise.



