‘Know your customer’ started as an anti-money laundering (AML) initiative in the financial industry. Regulators insisted that banks establish customer ‘due-diligence’ processes to ensure that all bank accounts could be traced back to the entities that owned them. The intent was to make it difficult to launder the proceeds of illegal activity through a seemingly legitimate commercial business. But while they focus on AML regulation, banks often miss the opportunity to know, and serve, their customers.
Increasingly, businesses are realizing that the demographics of their customers are changing. The market is moving away from the ‘baby-boomers’, who are focused on value, to ‘millennials’, who are focused on experience.
Baby-boomers have grown up in a relatively stable environment, with a stable family life and long-term employment. They value ‘best practices’ and loyalty. Millennials, those coming of age at the turn of the century, have experienced a much more fluid upbringing. Their family life has been fractured and inconsistent, and they have no expectation of, nor desire for, long-term employment. They are more interested in flex-time, job-sharing arrangements and sabbaticals.
More importantly, millennials want experience over value. They are less concerned with what they pay for something than with their experience in purchasing it. They will not tolerate a bad experience, whether in-store or on-line. And they have the technology to let others know about their experience.
There are two approaches to this situation: become despondent and despair of ever attracting this market sector, or consider the vast opportunity of hundreds of millennials posting and tweeting about the fantastic service they experienced when they did business with you.
From Knowing to Serving
So – how do we ‘serve’ our customers? Firstly, we need to know them and then we need to align our marketing practices to them.
Knowing them requires us to build a picture of our customer base and segment them into groups according to their propensity to purchase our products and services. This will likely require an analysis of CRM data and potentially some big-data analysis of customer transaction records. Engaging a cloud service provider and using their Hadoop services and map-reduce functionality may assist. The intent is to build a customer identity management service that can be used for product/service development and automated marketing. Customer analytics and user-managed access, i.e. giving users control of their data and management of their transactions with your organisation, are both enabled by a good customer identity and access management (CIAM) facility.
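The segmentation step can be sketched as a simple map-reduce-style aggregation. The record fields, customer IDs and spend thresholds below are purely illustrative; a real analysis would run over far larger CRM or transaction datasets, e.g. on a Hadoop cluster:

```python
from collections import defaultdict

# Hypothetical transaction records, e.g. exported from a CRM system.
transactions = [
    {"customer": "c1", "amount": 120.0},
    {"customer": "c1", "amount": 80.0},
    {"customer": "c2", "amount": 15.0},
    {"customer": "c3", "amount": 300.0},
]

# "Map" phase: emit (customer, amount) pairs.
mapped = [(t["customer"], t["amount"]) for t in transactions]

# "Reduce" phase: total spend per customer.
totals = defaultdict(float)
for customer, amount in mapped:
    totals[customer] += amount

# Segment customers by propensity to purchase, using total spend
# as a crude proxy (the thresholds are made up for this sketch).
def segment(total_spend: float) -> str:
    if total_spend >= 250:
        return "high"
    if total_spend >= 100:
        return "medium"
    return "low"

segments = {c: segment(t) for c, t in totals.items()}
print(segments)  # {'c1': 'medium', 'c2': 'low', 'c3': 'high'}
```

The same map/reduce split is what Hadoop parallelises across nodes; only the scale changes.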
Once we know our customers, we can tailor our marketing program to ‘serve’ them. This means that we need to modify our product or service to suit their requirements. There is no point in offering something that they don’t want, and you can’t rely on history; as the baby-boomer segment inevitably declines, its purchasing patterns become irrelevant. Millennials will gladly tell you what they want if they are asked; putting some effort into understanding them will not go unrewarded.
Pricing must also be commensurate with the product or service being offered. As noted earlier, millennials are far less price-conscious than baby-boomers, so a ‘differentiation’ strategy is advised. Make your product or service special, and charge for it.
Promotion should be targeted too. Hardcopy media is of little use. Focus on social networks and on-line advertising. Google AdWords does work and can be money well spent. Make sure your website is responsive; millennials are lost on anything bigger than a 12cm screen.
There is no doubt that doing business is becoming much more interesting. The potential for attracting new customers has never been greater and the opportunities are vast. The only question is “are we agile enough to exploit them?”
Providing a corporate IT infrastructure is a strategic challenge. Delivering all the services needed and fulfilling all the requirements raised by stakeholders is certainly one side of the coin. Understanding which services customers, and all users in general, are using, and what they are doing within the organisation’s infrastructure, no matter whether it is on-premises, hybrid or in the cloud, is an equally important requirement. And it is more and more built into the process framework within customer-facing organisations.
The main drivers behind this are typically business-oriented aspects, such as customer relationship management (CRM) processes for the digital business and, increasingly, compliance purposes. So we see many organisations currently learning much about their customers and site visitors, their detailed behaviour and their individual needs. They do this to improve their products, their service offerings and their overall efficiency, which is of course directly business-driven. Understanding your customers comes with the immediate promise of improved business and increased current and future revenue.
But the other side of the coin is often ignored: while customers and consumers are typically kept within clearly defined network areas and online business processes, there are other areas within your corporate network (on-premises and distributed) where different types of users often act much more freely and are much less monitored.
Surprisingly, a growing number of organisations know more about their customers than about their employees. But this is destined to prove short-sighted: maintaining compliance with legal and regulatory requirements is only possible when all-embracing and robust processes for the management and control of access to corporate resources by employees, partners and the external workforce are established as well. Preventing, detecting and responding to threats from inside and outside attackers alike is a constant technological and organisational challenge.
So, do you really know your employees? Most organisations stop once they have recertification campaigns scheduled and some basic SoD (Segregation of Duties) rules implemented. But that does not really help when, for example, a privileged user with rightfully assigned, critical access abuses that access for illegitimate purposes, or a business user account has been hacked.
KYE (Know Your Employee, although this acronym might still require some more general use) needs to go far beyond traditional access governance. Identifying undesirable behaviour, and ideally preventing it as it happens, requires technologies and processes that are able to review current events and activities within the enterprise network. Unexpected changes in user behaviour and modified access patterns are indicators either of inappropriate behaviour by insiders or of intruders within the corporate network.
Adequate technologies are making their way into organisations, although it has to be admitted that “User Activity Monitoring” is a downright inadequate name for such an essential security mechanism. Contrary to what the name suggests, it is not meant to implement a fully comprehensive, corporate-wide, personalized user surveillance layer. Every solution that aims at identifying undesirable behaviour in real time needs to satisfy the high standards imposed by many accepted laws and standards, including data protection regulations, labour law and general respect for user privacy.
Nevertheless, the deployment of such a solution is possible and often necessary. To achieve this, the solution needs to be strategically well-designed from a technical, legal and organisational point of view. All relevant stakeholders, from business to IT and from the legal department to the workers’ council, need to be involved from day one of such a project. A typical approach is that all users are pseudonymized and all information is processed on the basis of data that cannot be traced back to actual user IDs. Outlier behaviour and inadequate changes in access patterns can be identified without looking at an individual user. The outbreak of a malware infection or the takeover of a privileged account can usually be identified without looking at the individual user either. And in the rare case that de-pseudonymization of a user is required, adequate processes have to be in place. This might include the four-eyes principle for actual de-cloaking and the involvement of the legal department, the workers’ council and/or a lawyer.
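A minimal sketch of the pseudonymization step, assuming a keyed hash (HMAC) whose key is held under controlled conditions (e.g. by the legal department under the four-eyes principle): the same user always yields the same pseudonym, so behaviour can be correlated across events, but tracing a pseudonym back to a user requires the key.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be held by a trusted
# party and never be available to the analysts viewing the events.
PSEUDONYM_KEY = b"keep-this-secret"

def pseudonymize(user_id: str) -> str:
    """Keyed hash (HMAC-SHA256) of the user ID: stable across events,
    so behaviour can be correlated, but not reversible without the key."""
    digest = hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

events = [("alice", "login"), ("bob", "login"), ("alice", "db_export")]
pseudonymized = [(pseudonymize(user), action) for user, action in events]

# The same user always maps to the same pseudonym...
assert pseudonymized[0][0] == pseudonymized[2][0]
# ...but different users do not collide.
assert pseudonymized[0][0] != pseudonymized[1][0]
```

De-pseudonymization then amounts to re-hashing known user IDs with the key and matching, which is exactly why access to the key must be process-controlled.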
Targeted access analytics algorithms can nowadays assist in the identification of security issues. Thus they can help organisations get to know their employees, especially their privileged business users and administrators. Correlating this information with other data sources, for example threat intelligence data and real-time security intelligence (RTSI), can form the basis for identifying Advanced Persistent Threats (APTs) traversing a corporate network infrastructure from the perimeter through the use of account information and actual access to applications.
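As a toy illustration of such access analytics, outlier days in a (pseudonymized) user's resource-access counts can be flagged with a simple z-score test. Real products use far richer behavioural models; the counts and threshold here are made up:

```python
import statistics

def access_outliers(counts, threshold=2.0):
    """Flag indices whose access count deviates from the baseline
    by more than `threshold` sample standard deviations."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # perfectly uniform behaviour, nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# A pseudonymized user's daily access counts; day 6 spikes sharply,
# e.g. a bulk export or a compromised account.
daily = [12, 15, 11, 14, 13, 12, 480, 13]
print(access_outliers(daily))  # [6]
```

Note that with small samples a single extreme value inflates the standard deviation itself, which is why the threshold here is deliberately modest.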
KYE will become as important as KYC, but for different reasons. Both rely on intelligent analytics algorithms and a clever design of infrastructure, technology and processes. Both transform big-data technology, automation and a well-executed approach to business and security into essential solutions for sustainability, improved business processes and adequate compliance. We expect that organisations leveraging existing information and modern technology, operationalising both for constant improvement of security and the core business, can draw substantial competitive advantages from doing so.
Martin Kuppinger talks about firewalls and the fact that they are not really dead.
Today, Ping Identity announced the acquisition of UnboundID. The two companies have already been partnering for a while, with a number of joint customers. After the recent acquisition of Ping Identity by Vista Equity Partners, a private equity firm, this first acquisition by Ping Identity can be seen as a result of the company’s new setup. The initial announcement by Vista Equity Partners already included the information that both organic and inorganic growth, as has now happened with UnboundID, was planned.
The acquisition of UnboundID is interesting from two perspectives. One concerns the capabilities of the UnboundID Platform in managing identity data at scale: capturing, storing, syncing, and aggregating data from a variety of sources such as directories, CRM systems, and others. The other involves the capabilities UnboundID provides for multi-channel customer engagement. This, for example, includes an analytics engine for analyzing customer behavior trends.
Combined with Ping Identity’s proven strength in the Identity Federation and Access Management market, this allows the companies to extend their offering particularly towards the currently massively growing market of CIAM (Customer Identity and Access Management). Furthermore, the technical platform that Ping Identity provides is complemented with an underlying large-scale directory and synchronization service.
Because both companies have been working closely together for a while, we expect that existing and new customers will benefit rapidly from Ping Identity’s expanded offering.
There is probably no single thing in Information Security that has been declared dead as frequently as the password. Unfortunately, it isn’t dead yet, and it is far from dying. The password will survive all of us.
That thesis seems to stand in stark contrast to the rise of strong online identities. Nor will weak online identities, such as device IDs or the identifiers of things, make the password obsolete as alternatives to username and password.
We all know that passwords aren’t really safe. Weak passwords, such as the one reportedly used by Mark Zuckerberg (“dadada”), are commonly used. Passwords are either complex and hard to remember, or long and annoying to type, or short, easy to type, and weak.
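A rough character-pool entropy estimate shows why a password like “dadada” is weak. This is an optimistic upper bound (dictionary attacks do far better than brute force), and the function is an illustration only, not a real strength meter:

```python
import math
import string

def entropy_bits(password: str) -> float:
    """Crude upper bound on password entropy: length times log2 of the
    character-class pool size. Dictionary attacks beat this easily,
    so treat the result as optimistic."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(round(entropy_bits("dadada")))  # 28 bits: trivially crackable
print(round(entropy_bits("correct horse battery staple")))  # long beats complex
```

Even by this generous measure, a six-letter lowercase password offers only about 28 bits, within easy reach of offline cracking.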
However, what are the alternatives? We can use biometrics. But even with upcoming standards such as those of the FIDO Alliance, there are still many scenarios where biometrics do not work well, aside from the fact that most aren’t perfectly safe either. Then there are the approaches where you have to pick known faces from a number of photos. That takes longer than typing in a password, so it adds inconvenience.
Yes, we are becoming more flexible in choosing the authenticator that works best for us. In both Enterprise IAM and Consumer IAM, adaptive authentication and support for a broad variety of authenticators are on the rise. But even there, the password remains a simple and convenient option. Other options such as OTP (One Time Password) hardware tokens are not that convenient: they are expensive, their logistics are complex, and if we lose a device or a token, we still might fall back to the password (or some password-like construct such as security questions).
Using many weak authenticators is also an option. But again: what is our fallback when there aren’t sufficient authenticators available for a certain interaction or transaction, or not enough proof for the associated risk?
There is no doubt that we can construct scenarios where we do not need passwords at all. There is also no doubt that we will see more such scenarios in the future. But we will not get fully rid of passwords. Think of access to legacy systems that don’t support anything other than passwords (and even if you put something in front, there will still be the username and password of the functional account); of the passwords used to identify us when calling our mobile phone providers; of the passphrases and security questions; of all the websites and services that still don’t support anything else: there are too many scenarios where passwords will live on. For many, many years.
We will observe an uptake of alternative, strong authenticators as well as the use of a combination of weak authenticators e.g. for continuous authentication. But we will not get rid of passwords. Not in one year, not in five years, not in ten years.
Hopefully, we will be able to use better approaches than username and password for all the websites we access and the services we use. Today, we are far from that. But even then, username and password will remain a supported approach in most scenarios, sometimes combined with, e.g., an out-of-band OTP. Why? Simply because vendors will rarely lock out customers. If you raise the bar for strong authentication too high, it will cost you business. Username and password aren’t a good, secure approach. But we are all used to them, so they aren’t an inhibitor.
What is a strong online identity? A strong online identity can be defined as a combination of identification and authentication technologies, along with personal identity data store capabilities, that enables a strong and resilient correlation of digital identities to a physical person, entity or organisation, thus enabling trusted interaction and communication between individuals and organisations. Strong online identities with full user identity sovereignty can be considered a subset of the functionality that a fully-fledged Life Management Platform would provide.
While this definition immediately brings to mind social networks and social authentication, such as Google, Facebook and LinkedIn to name the most popular, the concept of data sovereignty further strengthens the concept of strong online identities and eliminates these popular services as potential contenders. The principle of data sovereignty can be summed up by the foundational belief that individuals and organisations should be the ultimate owners of, and have total control over, their personal information.
As with any definition of sovereignty today, sovereignty and custodianship are often treated separately. For example, a patient might have legally-defined sovereignty over their body as far as their freedom to choose which medical treatments to undergo is concerned, yet once under treatment, custodianship of their body to a large degree falls under the responsibility of the medical professionals performing the treatment.
How does the above example apply to strong online identities? Let’s take the revised EU General Data Protection Regulation (GDPR2) as an example. The GDPR2 provides the legal principle of personal information sovereignty, and then proceeds to define the custodianship responsibilities of all organisations which store and/or process this personal data.
While the social networking giants will assure users that they remain in control (sovereign) of their personal information, and that they will not misuse it (custodianship), users must simply trust that these statements are true. The upcoming GDPR2 provides additional legal protection with regard to personal information, but again this comes down to how effective the EU and its member states will be at enforcing the regulation.
So how can a sovereign, strong online identity solution or vendor provide proof of trustworthiness rather than simple assurances of trust? The goal of many blockchain-based identity solutions is to give an individual or organisation better control over the custodianship of their digital identity, by using consensus algorithms to provide mathematical proof of custodianship, and by eliminating, as much as possible, centralised, trusted third parties.
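One ingredient of such mathematical proof is tamper evidence, which a plain hash chain already illustrates: each identity record embeds the hash of its predecessor, so any later modification is detectable. This sketch deliberately omits the digital signatures and distributed consensus that real blockchain identity systems add on top:

```python
import hashlib
import json

def chain_append(chain, record):
    """Append an identity record; each entry embeds the hash of its
    predecessor, so later tampering breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash},
                         sort_keys=True).encode()
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any edit to an earlier record invalidates
    all entries that follow it."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev},
                             sort_keys=True).encode()
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
chain_append(chain, {"id": "alice", "email": "alice@example.org"})
chain_append(chain, {"id": "alice", "email": "alice@example.com"})
print(verify(chain))   # True
chain[0]["record"]["email"] = "mallory@example.org"
print(verify(chain))   # False: tampering detected
```

Distributing such a chain across many independent nodes, with consensus on which chain is authoritative, is what removes the need to trust any single custodian.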
Ultimately these projects aim to eliminate the distinction between sovereignty and custodianship. These are ambitious goals and arguably more to be considered as ideals or design standards than non-negotiable requirements. This is due to the difficulty of entirely doing away with trust in third parties in favour of fully decentralised systems based on consensus algorithms.
How can the individual become the sovereign over her/his identity and why is that of growing importance?
The concerns that have driven the upcoming GDPR2 have been noted for some time now by technologists and customers. They largely stem from the recognition that most personal online identity information is not actually owned by the users themselves. The internet giants today own and control most of this information, and this is cause for privacy and security concerns. One’s personal identity information is only as safe as its third-party custodian.
Which forms exist today?
An interesting initiative is ID3 (ID cubed), a non-profit which aims to establish new trust frameworks and digital ecosystems in order to enable the use of sovereign online identities. Evernym is a project which uses its own permissioned blockchain to create an open-source sovereign identity platform. Microsoft Azure’s blockchain initiatives are also focusing on using blockchains to provide sovereign identity, along with humanitarian ambitions to address the problem of under-identification in the developing world.
While these are all great initiatives, there are still a number of challenges which tend to plague all emerging technologies and mostly come down to standardisation and adoption. Also, given how complex and multi-faceted the digital identity dilemma is, so far there is no single solution that can meet all the requirements of a strong digital identity store whilst also remaining fully user-sovereign.
What does the future look like?
It is highly unlikely we will ever see a single identity solution, even if it is completely user-controlled. This is simply down to the complexity of human identity and contexts, as well as the conflict between national legislation and the international nature of the online world. For example, many national governments today have online digital identity services for access to government services, and it is highly unlikely that in the near future we will see these national schemes integrate with say, blockchain-based solutions which primarily focus on decentralised social login replacements and secure digital communication between individuals.
Yet it remains highly likely that we will see a proliferation of competing standards and approaches to strong online identification and authentication/authorisation. The determining success factor will be usability and adoption by mainstream online services. Usability has been the key success factor of the internet giants, and we have signed away our privacy to many of these organisations simply because of how easy it is to use their services. Unless sovereign alternatives for online identity can provide similar ease of use, and convince popular services to integrate with them, their use will remain limited to technology-savvy power users, not the public at large.
In the 35 years we’ve had personal computers, tablets and smartphones, authentication has meant a username and password (or Personal Identification Number, PIN) for most people. Yet other methods, and other schemes for using those methods, have been available for at least the past 30 years. As we look to replace, or at least augment, passwords, it’s time to re-examine these methods and schemes.
Multi-factor refers to using at least two of the three generally agreed authentication methods: something you know; something you have; and something you are.
Something you know: the most widely used factor, because it includes passwords. It refers to what is called a “shared secret”: something known to the user and the system they are authenticating to. Also included are PINs, passphrases, security questions, etc. Security questions come in two types: those previously configured (mother’s maiden name, first car, city of birth, etc.) and those the authenticator gleans from public records (usually multiple choice, such as “which was your address when you lived in London”, with one choice being “I never lived in London”).
Something you have: usually a token of some kind. The RSA SecurID is perhaps the most widely known, but there are many others. Proximity cards, for example, or your smartphone. In one scenario, you log in with a username and password and the system sends a code via text to your phone. You then enter that code to complete the authentication. Note that the US National Institute of Standards and Technology (NIST) has just deprecated the use of SMS messaging as a second factor due to security issues.
Something you are: usually a biometric of some type: fingerprint, retina scan, facial scan, etc. It can also be a measure of your typing, swiping, or even walking. Handwriting is also included, but is now mostly just a subset of swiping. Other, more exotic schemes include palm scans and vein readings.
Any of these can be used for authentication. For a stronger system, you would choose one each from two or all three groups. Two types from the same group (say, a password and a PIN, or a PIN and a security question) do not constitute multi-factor authentication.
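A common two-factor combination pairs a password (“something you know”) with a one-time code from a token or phone app (“something you have”). The code generation typically follows RFC 6238 TOTP, sketched here on top of RFC 4226 HOTP; the secret below is the RFC’s published test key, not a real one:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter),
                      hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time: int, step: int = 30,
         digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter derived from Unix time."""
    return hotp(secret, at_time // step, digits)

# RFC 6238 Appendix B test vector: T=59 with this secret yields 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is derived from a shared secret on the device, intercepting one code is of little value to an attacker.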
Dynamic, or adaptive, authentication involves having the system check the context of the login (who is it, where are they, what platform, etc.) and decide which factor or factors (and which methods of those factors) should be applied in the given situation. This is an essential element of risk-based access control.
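A minimal sketch of such context-based factor selection follows; the signals, weights and thresholds are hypothetical and would be tuned per organisation:

```python
def required_factors(context: dict) -> list:
    """Pick authentication factors from the login context (adaptive
    authentication). Risk signals and weights are illustrative."""
    risk = 0
    if context.get("new_device"):
        risk += 2   # unrecognised platform
    if context.get("country") not in context.get("usual_countries", []):
        risk += 3   # login from an unusual location
    if not 8 <= context.get("hour", 12) <= 18:
        risk += 1   # outside normal working hours
    if risk >= 4:
        return ["password", "otp", "biometric"]
    if risk >= 2:
        return ["password", "otp"]
    return ["password"]

# Familiar context: password alone suffices.
print(required_factors({"new_device": False, "country": "DE",
                        "usual_countries": ["DE"], "hour": 10}))
# High-risk context: step up to all three factor groups.
print(required_factors({"new_device": True, "country": "CN",
                        "usual_countries": ["DE"], "hour": 3}))
```

The point is that friction is applied only where the risk warrants it, which keeps low-risk logins convenient.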
Finally, there’s continuous authentication. Passwords could be requested periodically (irritating to the user); the presence of a proximity card could be detected periodically (with the session timed out if it’s not present); or the keyboarding could be constantly checked against the user’s baseline, with the session timed out or the user asked to input something they know so that the session can continue.
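The keystroke-baseline idea can be sketched as follows; the timings and threshold are illustrative, and real keystroke-dynamics products model far more than mean inter-key intervals:

```python
import statistics

def session_still_trusted(baseline_ms, observed_ms, max_z=2.5):
    """Compare observed inter-keystroke intervals (milliseconds) against
    the user's baseline; a large drift triggers re-authentication."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms) or 1.0  # avoid division by zero
    drift = abs(statistics.mean(observed_ms) - mean)
    return drift / stdev <= max_z

baseline = [110, 95, 120, 105, 100, 115, 98, 107]  # user's typing rhythm
print(session_still_trusted(baseline, [108, 102, 111]))  # True: same rhythm
print(session_still_trusted(baseline, [310, 295, 320]))  # False: step up
```

When the check fails, the session would be timed out or a “something you know” prompt issued, as described above.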
We recommend that you look into adaptive and/or continuous authentication as an integral part of your access control system.
Last week, Microsoft announced the general availability of Azure Security Center, the company’s integrated solution for monitoring, threat detection and incident response for Azure cloud resources. Initially announced last year as part of Microsoft’s new cross-company approach to information security, Azure Security Center has been available as a preview version since December 2015. According to Microsoft, the initial release has been used to monitor over 100 thousand cloud subscriptions and has identified over a million and a half vulnerabilities and security threats.
So, what is it all about anyway? In short, Azure Security Center is a security intelligence service built directly into the Azure cloud platform.
- It provides security monitoring and event logging across Azure Cloud Services and Linux-based virtual machines, as well as various partner solutions;
- It enables centralized management of security policies for various resource groups, depending on business requirements or compliance regulations;
- It provides automated recommendations on addressing most common security problems, such as configuring network security groups, installing missing system updates or automatically deploying antimalware, web application firewall or other security tools in your cloud infrastructure;
- It analyzes and correlates various security events in near real-time, fuses them with the latest threat intelligence from Microsoft’s own and third-party security intelligence feeds, and generates prioritized security alerts when threats are detected;
- It provides a number of APIs, an interface to Microsoft Power BI and a SIEM connector to access and analyze security events from the Azure cloud using existing tools.
In other words, Microsoft Azure Security Center is a full-featured Real-Time Security Intelligence solution “in the cloud, for the cloud”. Sure, other SIEM and security analytics solutions provide integrations with cloud resources as well, but, being a native component of the Azure cloud infrastructure, Microsoft’s own solution has several obvious benefits, such as better integration with other Azure services, more efficient resource utilization and much lower deployment effort.
In fact, there is nothing to deploy at all: one can activate the Security Center directly in the Azure Portal. Moreover, basic security features and partner integrations are available for free; only advanced threat detection (such as threat intelligence, behavior analysis, and anomaly detection) is priced per monitored resource.
With Azure Security Center now available to all Azure subscribers, offering new partner integrations (for example, vulnerability assessment by companies like Qualys) and new threat detection algorithms, there is really no reason why you should not immediately turn it on for your subscription. Even with the basic free functions, it provides a useful layer of security for your cloud infrastructure, and with the full range of behavior-based and anomaly-detection algorithms and a rich set of integration options, Azure Security Center can serve either as the center of your cloud security platform or as a means of extending your existing SIEM-based security operations center to the Azure cloud.
Martin Kuppinger talks about Cloud IAM and the fact that it is more than CSSO.
Back in 2014, a US court decision ordered Microsoft to turn over a customer’s emails stored in Ireland to a US government agency. The order was temporarily suspended from taking effect to allow Microsoft time to appeal to the 2nd US Circuit Court of Appeals.
I wrote a post on that issue back then and described the pending decision as a Sword of Damocles hanging over all US Cloud Service Providers (CSPs). While that decision raised massive awareness in the press back then, the news that hit my desk a few days ago didn’t get much attention. In the so-called “search warrant case”, the 2nd US Circuit Court of Appeals ruled in favor of Microsoft, overturning an earlier ruling from a lower court.
The blog post Brad Smith, President and Chief Legal Officer at Microsoft, published is very well worth reading, particularly the part about the support Microsoft has experienced from other parties and the section that points out that legislation needs to be updated to reflect the world that exists today. The latter is currently on its way in the EU, with the upcoming EU GDPR becoming effective in 2018.
From the perspective of US CSPs and their customers, the court decision is definitely good news. Despite the fact that it is “only” a court decision and updated legislation is still missing, it mitigates some of the risk that EU, but also e.g. APAC, customers perceived when relying on US CSPs. This helps US CSPs with their business by removing barriers to rapid cloud adoption. It helps customers because the risk of data held in non-US data centers being requested by US governmental agencies is reduced significantly. So it’s no longer a Sword of Damocles hanging overhead. Maybe it’s still a knife, so to speak, but the risk is far lower now.
What I definitely find interesting to observe is the rather low attention the good news received. But that’s not too surprising. Bad news always sells better than good news.
The decision, from my perspective, can have a significant impact on further speeding up the shift of customers from on-premises solutions to the cloud. Most are on their way anyway. Each risk that is mitigated eases customers’ decisions. Anyway, the next challenge for US CSPs (and all other CSPs that do any business with the EU) will be to comply with the EU GDPR. But there, at least, we have the legislation and do not rise or fall with court decisions.
In order to improve and tighten business relationships, today’s connected businesses need to communicate, collaborate and interact with their customers faster and more flexibly than ever before. Knowing consumers and their identities spot-on also allows businesses to serve them optimally. To use this key to success in the era of digital transformation, enterprises today often think about employing Customer Identity & Access Management (CIAM), complementary to traditional IAM for employees and business partners. They often forget, however, that, forced by regulation [...]