The proverbial Computing Troika that KuppingerCole has been writing about for years shows no signs of slowing down. The technological trio of Cloud, Mobile and Social computing, as well as their younger cousin, the Internet of Things, have profoundly changed the way our society works. Modern enterprises were quick to adopt these technologies, which enable great new business models, open up numerous communication paths to their partners and customers, and, last but not least, provide substantial cost savings. We are moving full speed ahead towards the Digital Era, and the future is full of promise. Or is it?
Unfortunately, the Digital Transformation does not only enable a whole range of business prospects; it also exposes a company’s most valuable assets to new security risks. Since those digital assets are nowadays often located somewhere in the cloud, with an increasing number of people and devices accessing them anywhere at any time, the traditional notion of a security perimeter ceases to exist, and traditional security tools cannot keep up with new, sophisticated cyberattack methods.
In recent years, the industry has come up with a new generation of security solutions, which KuppingerCole has dubbed “Real-Time Security Intelligence”. Thanks to a technological breakthrough that finally commoditized Big Data analytics technologies previously affordable only to large corporations, it became possible to collect, store, and analyze huge amounts of security data across multiple sources in real time. Various correlation algorithms have been implemented to find patterns in the data, as well as to detect anomalies, which in most cases indicate some kind of malicious activity.
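To illustrate how mechanical this kind of anomaly detection really is, consider a minimal sketch in Python. The event counts and the threshold below are purely hypothetical; the point is only that "detecting an anomaly" can be as simple as flagging a value that strays too far from a statistical baseline:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it deviates by more than `threshold`
    standard deviations from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical hourly failed-login counts for one account
baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
print(is_anomalous(baseline, 40))  # a sudden spike stands out
print(is_anomalous(baseline, 3))   # normal activity does not
```

A real RTSI product correlates many such signals across many sources, but the underlying principle – deviation from a learned baseline – is the same.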
Such security analytics solutions have been hailed (quite justifiably) by the media as the ultimate solution to most modern cybersecurity problems. Some even go as far as referring to these technologies as “machine learning” or even “artificial intelligence”. It should be noted, however, that detecting patterns and anomalies in data sets has very little to do with true intelligence – in fact, if the “IQ level” of a traditional signature-based antivirus can be compared to that of an insect, then the correlation engine of a modern security analytics solution is about as “smart” as a frog catching flies.
Unfortunately, strong artificial intelligence, comparable in skill and flexibility to a human, is still purely a subject of theoretical academic research. Its practical applications, however, are no longer a science fiction topic. On the contrary, these applied cognitive technologies have been actively developed for quite some time, and the exponential growth of cloud computing has been a major boost for their further development in recent years. Technologies such as computer vision, speech recognition, natural language processing and machine learning have found practical use in many industries, and cybersecurity is the most recent field where they promise to achieve a major breakthrough.
You see, the biggest problem information security is now facing has nothing to do with computers. In fact, the vast majority (over 80%) of security-related information in the world remains completely inaccessible to computers: it exists only in an unstructured form spread across tens of thousands of publications, conference presentations, forensic reports and other sources – spoken, written or visual.
Only a human can read and interpret those data sources, but we do not have nearly enough humans trained as security analysts to cope with the amount of new security information produced daily.
This is where Cognitive Security, a new practical application of existing cognitive technologies, comes into play. A cognitive security solution would be able to utilize natural language processing and machine learning methods to analyze both structured and unstructured security information the way humans do. It would be able to read texts (or even look at pictures and listen to speech) and not just recognize patterns within them, but also interpret and organize the information, explain its meaning, postulate hypotheses and provide reasoning based on evidence.
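The gap between that vision and today’s reality is easy to demonstrate: much of the “reading” current tools do amounts to pattern extraction rather than interpretation. A purely illustrative Python sketch – the patterns and the sample report text below are assumptions, not any vendor’s method – might pull structured indicators out of unstructured prose like this:

```python
import re

# Hypothetical patterns for a few common indicator-of-compromise types
PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "cve":  re.compile(r"\bCVE-\d{4}-\d{4,}\b"),
    "md5":  re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_indicators(text):
    """Turn free-form report text into structured indicator lists."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

report = ("The dropper (MD5 d41d8cd98f00b204e9800998ecf8427e) exploited "
          "CVE-2016-0101 and beaconed to 203.0.113.42.")
print(extract_indicators(report))
```

Extraction like this makes text machine-readable; actually interpreting it, weighing evidence and forming hypotheses, is the part that still requires cognitive technology – or a human.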
This may feel like science fiction to some, but the first practical cognitive security solutions are already appearing on the market. A major player and one of the pioneers in this field is undoubtedly IBM with their Watson platform. Originally created back in 2005 to compete with human players in the game of Jeopardy, over the years Watson has expanded significantly and found many practical applications in business analytics, government, legal and even healthcare services.
In May 2016, IBM announced Watson for Cyber Security, a completely new field for their natural language processing and machine learning platform. However, IBM is definitely not a newcomer to cyber security. In fact, their own X-Force research library is being used as the primary source of security information to be fed into the specialized instance of the platform running in the cloud. Although the learning process is still in progress, the ultimate goal is to process the 80% of security intelligence data that is currently unstructured and make it available in structured form.
Of course, Watson for Cyber Security will never replace a human security analyst, but that is not its goal. First, making this “dark security data” accessible for automated processing by current security analytics solutions can greatly improve their efficiency as well as provide additional external threat intelligence. Second, cognitive security would provide analysts with powerful decision support tools, simplifying and speeding up their work and thus reducing the skills gap haunting the security industry today. In the future, the same cognitive technologies may be also applied to a company’s own digital assets to provide better analytics and information protection. Potentially, they may even make developing malware capable of evading detection too costly, thus turning the tide of the ongoing battle against cybercrime.
There are good reasons for the move towards “Cognitive Security”. The skills gap in Information Security is amongst the most compelling ones. We just don’t have enough skilled people. If we can make computers step in here, we might close that gap.
On the other hand, a lot of what we see being labeled “Cognitive Security” is still far away from really advanced, “cognitive” technologies. Marketing tends to exaggerate. At the same time, there is a growing number of examples of advanced approaches, such as IBM Watson – the latter focusing on filtering unstructured information and delivering exactly what an Information Security professional needs.
A challenge we must not ignore is the fact that these technologies are based on what is called “machine learning”. The machines must learn before they can do their job. That is no different from humans: an experienced security expert first needs experience. With machines, however, this leads to two challenges.
One is that machines, if used in Information Security, first must learn about incidents and attacks. In other words: they can only identify attacks after learning. Potentially, that means some attacks must occur before the machine can identify and protect against them. There are ways to address this. Machines can share their “knowledge” better than humans, so the time until they can react to attacks can be massively shortened. Furthermore, the more “cognitively” the machines behave, the better they might detect new attacks by identifying analogies and similarities in patterns, without knowing the specific attack.
On the other hand, training the machines bears the risk that they learn the wrong things. Attackers might even systematically train cognitive security systems in wrong behavior. Botnets might be used for such sophisticated “training” before the actual attacks occur.
While there is strong potential for Cognitive Security, we are still in the very early stages of evolution. However, I see strong potential in these technologies – not in replacing humans, but in complementing them. Systems can run advanced analysis on masses of data and help find the few needles in the haystack: the signs of severe attacks. They can help Information Security professionals make better use of their time by focusing on the most likely traces of attacks.
Traditional SIEM (Security Information and Event Management) will be replaced by such technologies – an evolution that is already under way, applying Big Data and advanced analytical capabilities to the field of Information Security. We at KuppingerCole call this Real-Time Security Intelligence (RTSI). RTSI is a first step on the journey towards Cognitive Security. Given that security is amongst the most complex challenges to solve, and that attacks cause massive damage, this is one of the fields where the evolution of cognitive technologies will take place. It is not as popular as playing Go or chess, but it is a huge market with massive demand. Today, we can observe the first examples of “Cognitive Security”. By 2025, such solutions will be mainstream.
Intel Security recently released an in-depth survey of the cybersecurity industry, looking at the causes of the shortage of people with training and professional accreditation in computer security. The global report, titled “Hacking the Skills Shortage”, concludes: “The cybersecurity workforce shortfall remains a critical vulnerability for companies and nations”.
Most respondents to the survey considered the ‘cybersecurity skills gap’ to have a negative effect on their company, three quarters felt that governments were not investing appropriately in developing cybersecurity talent, and a whopping 82% reported that they cannot get the cybersecurity skills they need.
Only one in five believed current cybersecurity legislation in their country is sufficient. Over half thought current legislation could be improved and a quarter felt current regulation could be significantly enhanced.
From an education viewpoint, the study concluded that colleges are not preparing their students well for a career in cybersecurity. It suggested relaxing the requirement for a graduate degree for cybersecurity positions and placing greater stock on professional certifications. Cybersecurity appreciation should start at an earlier age, and we need to move with the times, targeting a more diverse, multicultural and mixed-gender audience.
But what does this mean for companies needing assistance with their cybersecurity requirements now? How should we respond to a known deficiency in available expertise? Given that companies are increasingly relying on consultants and analysts, we need to prepare our staff and suppliers to step into the gap and assist us in identifying requirements, analysing potential solutions and developing a roadmap to follow, so that we can maintain our computer security, minimise data loss and protect our intellectual property.
There’s potentially another option to fill the cybersecurity expertise gap in the future – Cognitive Security.
The term Cognitive Security refers to an increasingly important technology that combines self-learning systems with artificial intelligence to look for patterns and identify situations that meet predefined conditions. These can be used to indicate network compromise activity and to provide expert advice for diagnostic activity.
While artificial intelligence has had a chequered past, it is likely to significantly impact society over the next decade. It has started to be deployed in big data analysis, enabling us to identify trends and understand consumer behaviour; it provides the ability to automate promotional activity; and it enables us to better meet customer expectations even as marketing budgets are constrained. In the cybersecurity space it can be used to identify potentially nefarious activity and to make decisions on how to respond to events. Increasingly, automated data analysis allows us to detect network compromise, and artificial intelligence provides assistance in taking remedial action.
A number of large research organisations are at the forefront of Cognitive Computing. IBM is very active via the Watson initiative, which is pioneering data mining, pattern recognition and natural language processing. IBM Watson for Cyber Security, the one solution that has already been announced, focuses on collecting unstructured information and providing the required information to Information Security professionals, giving them the background they need without having to search for it. Google DeepMind demonstrated how far the field has advanced towards the Singularity when AlphaGo beat a Go grandmaster earlier this year. Microsoft is also heavily involved in the sector with the release of their first set of Cognitive Services – in effect, APIs for facial recognition, facial tracking, speech recognition, spell checking and smile prediction software.
So what does this have to do with security on our company’s network?
We’re seeing the beginning of cognitive security in the rapidly developing field of threat analytics, with innovative solutions that monitor corporate networks and ‘learn’ what normal network traffic looks like. Nefarious activity can then be detected via anomalies in network traffic. If an account that normally accesses a departmental subnet for its work applications suddenly attempts to access another server outside its normal activity, threat analytics will identify such events and act in accordance with the established policy, either issuing a notification for follow-up or disabling the account pending investigation. Many suppliers also maintain, or subscribe to, community threat signature services that identify known attack vectors and can automatically alert on their occurrence on the network being monitored. These systems can also provide triage services to assist in determining remedial action, and forensic analysis to aid in developing preventative maintenance processes.
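The baseline-learning behaviour described above can be sketched in a few lines of Python. Everything here – the account names, server names and the policy decision – is hypothetical, and real products learn far richer traffic models, but the learn-then-flag pattern is the same:

```python
from collections import defaultdict

class AccessBaseline:
    """Learn which servers each account normally touches,
    then flag accesses that fall outside that baseline."""

    def __init__(self):
        self.baseline = defaultdict(set)

    def learn(self, account, server):
        # Training phase: record observed, presumed-normal activity
        self.baseline[account].add(server)

    def check(self, account, server):
        # Detection phase: True means the access deviates from the baseline
        return server not in self.baseline[account]

monitor = AccessBaseline()
for server in ["hr-app-01", "hr-files"]:
    monitor.learn("alice", server)

print(monitor.check("alice", "hr-app-01"))   # normal activity
print(monitor.check("alice", "finance-db"))  # anomaly: notify or disable per policy
```

Note that this toy model also shows the training risk discussed earlier: whatever the system observes during the learning phase becomes its definition of “normal”.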
While there’s still a long way to go, the technology holds significant benefit. If we extrapolate the findings of the Intel survey to our own situations, it is unlikely that we will be able to fill the need for human cybersecurity experts. It is therefore prudent to track developments in cognitive security as it applies to our network monitoring and incident response requirements.
While it must not be our only approach to network protection and data loss prevention it holds significant potential to be a major component of our corporate security arsenal in the future.
‘Know your customer’ started as an anti-money laundering (AML) initiative in the financial industry. Regulators insisted that banks establish customer ‘due-diligence’ processes to ensure that all bank accounts could be traced back to the entities that owned them. The intent was to make it difficult to establish a business that launders money from illegal activity through a legitimate commercial operation. But while they focus on AML regulation, banks often miss the opportunity to know, and serve, their customers.
Increasingly, businesses are realizing that the demographics of their customers are changing. The market is moving away from the ‘baby-boomers’, who are focused on value, to ‘millennials’, who are focused on experience.
Baby-boomers have grown up in a relatively stable environment, with a stable family life and long-term employment. They value ‘best practices’ and loyalty. Millennials, those coming-of-age at the turn of the century, have experienced a much more fluid upbringing. Their family life has been fractured and inconsistent and they have no expectation, nor desire for, long-term employment. They are more interested in flex-time, job-sharing arrangements and sabbaticals.
More importantly, millennials want experience over value. They are less concerned with what they pay for something than with their experience in purchasing it. They will not tolerate a bad experience, whether in-store or on-line. And they have the technology to let others know about their experience.
There are two approaches to this situation: become despondent and despair of ever attracting this market sector, or consider the vast opportunity of hundreds of millennials posting and tweeting about the fantastic service they experienced when they did business with you.
From Knowing to Serving
So – how do we ‘serve’ our customers? Firstly, we need to know them and then we need to align our marketing practices to them.
Knowing them requires us to build a picture of our customer base and segment them into groups according to their propensity to purchase our products and services. This will likely require an analysis of CRM data and potentially some big-data analysis of customer transaction records. Engaging a Cloud service provider and using their Hadoop services and map-reduce functionality may assist. The intent is to build a customer identity management service that can be used for product/service development and automated marketing. Customer analytics and user-managed access – i.e. giving users control of their data and management of their transactions with your organisation – are enabled by a good customer identity and access management (CIAM) facility.
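The segmentation step can be illustrated with a toy map-reduce style aggregation in Python. The transactions, thresholds and segment names below are invented for illustration; at scale, the same shape of computation (map records to key/value pairs, reduce per key) is what a Hadoop cluster would run:

```python
from functools import reduce

# Hypothetical transaction records: (customer_id, purchase amount)
transactions = [
    ("c1", 120.0), ("c2", 15.0), ("c1", 80.0),
    ("c3", 500.0), ("c2", 25.0),
]

# "Reduce" phase: sum spend per customer
def reducer(acc, record):
    customer, amount = record
    acc[customer] = acc.get(customer, 0.0) + amount
    return acc

totals = reduce(reducer, transactions, {})

# Segment by propensity to spend (threshold is illustrative)
segments = {c: ("high-value" if total >= 200 else "regular")
            for c, total in totals.items()}
print(segments)
```

The output of such a job – customer segments rather than raw transactions – is what feeds product development and automated marketing in the CIAM picture described above.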
Once we know our customers, we can tailor our marketing program to ‘serve’ them. This means that we need to modify our product or service to suit their requirements. There is no point in offering something that they don’t want, and you can’t rely on history: as the baby-boomer segment inevitably declines, their purchasing patterns become irrelevant. Millennials will gladly tell you what they want if they are asked; putting some effort into understanding them will not go unrewarded.
Pricing must also be commensurate with the product or service being offered. As noted earlier millennials are far less price-conscious than baby-boomers so a ‘differentiation’ strategy is advised. Make your product or service special, and charge for it.
Promotion should also be targeted. Hardcopy media is of little use; focus on social networks and on-line advertising. Google AdWords do work, and it can be money well spent. Make sure your website is responsive: millennials are lost on anything bigger than a 12cm screen.
There is no doubt that doing business is becoming much more interesting. The potential for attracting new customers has never been greater and the opportunities are vast. The only question is “are we agile enough to exploit it?”
Providing a corporate IT infrastructure is a strategic challenge. Delivering all the services needed and fulfilling all the requirements raised by all stakeholders is certainly one side of the coin. Understanding which services customers and users in general are using, and what they are doing within the organisation’s infrastructure – no matter whether it is on premises, hybrid or in the cloud – is an equally important requirement, and it is more and more built into the process framework within customer-facing organisations.
The main drivers behind this are typically business-oriented aspects, like customer relationship management (CRM) processes for the digital business and, increasingly, compliance purposes. So we see many organisations currently learning much about their customers and site visitors, their detailed behaviour and their individual needs. They do this to improve their products, their service offerings and their overall efficiency, which is of course directly business driven. Understanding your customers comes with the immediate promise of improved business and increased current and future revenue.
But the other side of the coin is often ignored: while customers and consumers are typically kept within clearly defined network areas and online business processes, there are other or additional areas within your corporate network (on-premises and distributed) where different types of users often act much more freely and are much less monitored.
Surprisingly enough, there is a growing number of organisations which know more about their customers than about their employees. But this is destined to prove short-sighted: maintaining compliance with legal and regulatory requirements is only possible when all-embracing and robust processes for the management and control of access to corporate resources by employees, partners and the external workforce are established as well. Preventing, detecting and responding to threats from inside and outside attackers alike is a constant technological and organisational challenge.
So, do you really know your employees? Most organisations stop when they have recertification campaigns scheduled and some basic SoD (Segregation of Duties) rules are implemented. But that does not really help, when e.g. a privileged user with rightfully assigned, critical access abuses that access for illegitimate purposes or a business user account has been hacked.
KYE (Know Your Employee – an acronym that has yet to come into general use) needs to go far beyond traditional access governance. Identifying undesirable behaviour, and ideally preventing it as it happens, requires technologies and processes that are able to review current events and activities within the enterprise network. Unexpected changes in user behaviour and modified access patterns are indicators of either inappropriate behaviour by insiders or that of intruders within the corporate network.
Adequate technologies are on their way into organisations, although it has to be admitted that “User Activity Monitoring” is a downright inadequate name for such an essential security mechanism. Contrary to what the name suggests, it is not meant to implement a fully comprehensive, corporate-wide, personalized user surveillance layer. Every solution that aims at identifying undesirable behaviour in real time needs to satisfy the high standards imposed by many accepted laws and standards, including data protection regulations, labour law and the general respect for user privacy.
Nevertheless, the deployment of such a solution is possible and often necessary. To achieve this, it needs to be strategically well-designed from a technical, legal and organisational point of view. All relevant stakeholders, from business to IT and from the legal department to the workers’ council, need to be involved from day one of such a project. A typical approach is to pseudonymize all users and process all information on the basis of data that cannot be traced back to actual user IDs. Outlier behaviour and inadequate changes in access patterns can be identified without looking at an individual user. The outbreak of a malware infection or a privileged account being taken over can likewise usually be identified without looking at the individual user. And in the rare case that the de-pseudonymization of a user is required, there have to be adequate processes in place. This might include the four-eyes principle for the actual de-cloaking and the involvement of the legal department, the workers’ council and/or a lawyer.
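Pseudonymization of this kind can be sketched with a keyed hash: the analytics layer works only with stable pseudonyms, while the key needed to link them to real users is held elsewhere. This is a minimal illustration under assumed names, not a complete solution – note in particular that an HMAC is one-way, so the controlled de-pseudonymization described above would in practice require a protected lookup table on top:

```python
import hashlib
import hmac

# Secret key held outside the analytics system, e.g. under the control
# of the legal department or the workers' council (hypothetical setup)
SECRET_KEY = b"keep-me-out-of-the-analytics-system"

def pseudonymize(user_id: str) -> str:
    """Derive a stable pseudonym; without the key, it cannot be
    traced back to the real user ID."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# The monitoring layer only ever sees pseudonyms, yet the same user
# always maps to the same pseudonym, so per-user behaviour patterns
# remain analysable without revealing who the user is
event = {"user": pseudonymize("j.doe"), "action": "login", "host": "srv-042"}
print(event["user"])
```

The design choice here is deliberate: because the mapping is stable, outlier detection per (pseudonymous) user still works, while de-cloaking remains a separate, governed step.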
Targeted access analytics algorithms can nowadays assist in the identification of security issues. Thus they can help organisations in getting to know their employees, especially their privileged business users and administrators. By correlating this information with other data sources, for example threat intelligence data and real-time security intelligence (RTSI) this might act as the basis for the identification of Advanced Persistent Threats (APT) traversing a corporate network infrastructure from the perimeter through the use of account information and the actual access to applications.
KYE will become as important as KYC, but for different reasons. Both rely on intelligent analytics algorithms and a clever design of infrastructure, technology and processes. Both transform big data technology, automation and a well-executed approach towards business and security into essential solutions for sustainability, improved business processes and adequate compliance. We expect that organisations leveraging existing information and modern technology – operationalising both for the constant improvement of security and the core business – can draw substantial competitive advantages from that.
Martin Kuppinger talks about firewalls and the fact that they are not really dead.
Today, Ping Identity announced the acquisition of UnboundID. The two companies have already been partnering for a while, with a number of joint customers. After the recent acquisition of Ping Identity by Vista Equity Partners, a private equity firm, this first acquisition by Ping Identity can be seen as a result of the company’s new setup. The initial announcement by Vista Equity Partners already included the information that both organic and inorganic growth – as has now happened with UnboundID – is planned.
The acquisition of UnboundID is interesting from two perspectives. One concerns the capabilities of the UnboundID Platform in managing identity data at scale and in capturing, storing, syncing, and aggregating data from a variety of sources such as directories, CRM systems, and others. The other involves the capabilities UnboundID provides for multi-channel customer engagement. This, for example, includes an analytics engine for analyzing customer behavior trends.
Combined with the proven strength of Ping Identity in the Identity Federation and Access Management market, this allows the companies to extend their offering particularly towards the currently massively growing market of CIAM (Customer Identity and Access Management). Furthermore, the technical platform that Ping Identity provides is complemented with an underlying large scale directory and synchronization service.
Due to the fact that both companies have been working closely together for a while, we expect that existing and new customers will benefit rapidly from Ping Identity’s expanded offering.
There is probably no single thing in Information Security that has been declared dead as frequently as the password. Unfortunately, it isn’t dead yet, and it is far from dying. The password will survive all of us.
That thesis seems to stand in stark contrast to the rise of strong online identities. But even weak online identities, such as device IDs or the identifiers of things, as an alternative to username and password, will not make the password obsolete.
We all know that passwords aren’t really safe. Weak passwords such as the one used by Mark Zuckerberg – reportedly “dadada” – are commonly used. Passwords are either complex and hard to keep in mind, or long and annoying to type, or short, easy to type, and weak.
However, what are the alternatives? We can use biometrics. But even with upcoming standards such as those of the FIDO Alliance, there are still many scenarios where biometrics do not work well, aside from the fact that most also aren’t perfectly safe. Then there are the approaches where you have to pick known faces from a number of photos. That takes longer than typing in a password, so it adds inconvenience.
Yes, we are becoming more flexible in choosing the authenticator which works best for us. In both Enterprise IAM and Consumer IAM, adaptive authentication and support for a broad variety of authenticators are on the rise. But even there, the password remains a simple and convenient option. Other options such as OTP (One Time Password) hardware tokens are not that convenient: they are expensive, their logistics are complex, and in case we lose a device or a token or whatever else, we still might come back to the password (or some password-like construct such as security questions).
Using many weak authenticators is also an option. But again: what is our fallback when there aren’t sufficient authenticators available for a certain interaction or transaction – when there is not enough proof for the associated risk?
There is no doubt that we can construct scenarios where we do not need passwords at all. There is also no doubt that we will see more such scenarios in future. But we will not get fully rid of passwords. Starting with access to legacy systems that don’t support anything other than passwords (oh, and even if you put something in front, there will still be the username and password of the functional account); with the passwords used for identifying us when calling our mobile phone providers; with the passphrases and security questions; with all the websites and services that still don’t support anything other than passwords: there are too many scenarios where passwords will live on. For many, many years.
We will observe an uptake of alternative, strong authenticators as well as the use of a combination of weak authenticators e.g. for continuous authentication. But we will not get rid of passwords. Not in one year, not in five years, not in ten years.
Hopefully, we will be able to use better approaches than username and password for all the websites we access and the services we use. Today, we are far from that. But even then, username and password will remain a supported approach in most scenarios, sometimes combined e.g. with an out-of-band OTP or whatever else. Why? Simply because vendors will rarely lock out customers. When you raise the bar for strong authentication too high, it will cost you business. Username and password aren’t a good, secure approach. But we are all used to them, thus they aren’t an inhibitor.
What is a strong online identity? A strong online identity can be defined as a combination of identification and authentication technologies, along with personal identity data store capabilities, which enables a strong and resilient correlation of digital identities to a physical person, entity or organisation, thus enabling trusted interaction and communication between individuals and organisations. Strong online identities with full user identity sovereignty can be considered as providing a subset of the functionality that a fully-fledged Life Management Platform would provide.
While this definition immediately brings social networks and social authentication to mind, such as Google, Facebook and LinkedIn to name the most popular, the concept of data sovereignty further strengthens the concept of strong online identities and eliminates these popular services as potential contenders. The principle of data sovereignty can be summed up by the foundational belief that individuals and organisations should be the ultimate owners of, and have total control over, their personal information.
As with any definition of sovereignty today, sovereignty and custodianship are often treated separately. For example, a patient might have legally defined sovereignty over their body as far as their freedom to choose which medical treatments to undergo is concerned, yet once under treatment, the custodianship of their body to a large degree falls under the responsibility of the medical professionals performing the treatment.
How does the above example apply to strong online identities? Let’s take the revised EU General Data Protection Regulation (GDPR2) as an example. The GDPR2 provides the legal principle of personal information sovereignty, and then proceeds to define the custodianship responsibilities of all organisations which store and/or process this personal data.
While the social networking giants will assure users that they remain in control (sovereign) of their personal information, and that they will not misuse this personal information (custodianship), users must simply trust that these statements are true. The upcoming GDPR2 provides additional legal protection in regard to personal information, but again this comes down to how effective the EU and its member states will be at enforcing this regulation.
So how can a sovereign, strong online identity solution or vendor provide proof of trustworthiness rather than simple assurances of trust? The goal of many blockchain-based identity solutions is to allow an individual or organisation better control over the custodianship of their digital identity, by using consensus algorithms to provide mathematical proof of custodianship, as well as eliminate – as much as possible – centralised, trusted third parties.
Ultimately these projects aim to eliminate the distinction between sovereignty and custodianship. These are ambitious goals and arguably more to be considered as ideals or design standards than non-negotiable requirements. This is due to the difficulty of entirely doing away with trust in third parties in favour of fully decentralised systems based on consensus algorithms.
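To illustrate what "mathematical proof of custodianship" can mean in practice, the sketch below anchors the digest of a keyed proof over an identity claim on a ledger, so that the key holder can later demonstrate custody of the claim without a trusted third party vouching for them. This is a minimal illustration only: the in-memory dict stands in for a distributed ledger, and an HMAC key stands in for the asymmetric key pair that real self-sovereign identity systems would use.

```python
import hashlib
import hmac
import json
import os

def anchor_claim(ledger: dict, claim: dict, user_key: bytes) -> str:
    """Compute a keyed proof over the claim and anchor its digest on the ledger.

    Only the digest and proof become public; the claim itself stays with the user.
    """
    payload = json.dumps(claim, sort_keys=True).encode()
    proof = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    digest = hashlib.sha256(payload + proof.encode()).hexdigest()
    ledger[digest] = proof
    return digest

def verify_custody(ledger: dict, claim: dict, user_key: bytes, digest: str) -> bool:
    """Check that the holder of user_key is the one who anchored this claim."""
    payload = json.dumps(claim, sort_keys=True).encode()
    proof = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    return (ledger.get(digest) == proof
            and hashlib.sha256(payload + proof.encode()).hexdigest() == digest)

# Example: Alice anchors a claim, then proves custody of it.
ledger = {}
alice_key = os.urandom(32)
claim = {"name": "Alice", "attribute": "over-18"}
digest = anchor_claim(ledger, claim, alice_key)
assert verify_custody(ledger, claim, alice_key, digest)
```

Note that verification here requires the original key, which is exactly why production systems use public-key signatures instead: anyone can then verify the proof while only the sovereign user can produce it.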
How can the individual become the sovereign over her/his identity and why is that of growing importance?
The concerns that have driven the upcoming GDPR have long been noted by technologists and customers. They stem largely from the recognition that most personal online identity information is not actually owned by the users themselves. The internet giants today own and control most of this information, and this is cause for privacy and security concerns. One’s personal identity information is only as safe as its third-party custodian.
Which forms exist today?
An interesting initiative is ID3 (ID cubed), a non-profit which aims to establish new trust frameworks and digital ecosystems that enable the use of sovereign online identities. Evernym is a project which uses its own permissioned blockchain to create an open-source sovereign identity platform. Microsoft Azure’s blockchain initiatives also focus on using blockchains to provide sovereign identity, along with humanitarian ambitions to address the problem of under-identification in the developing world.
While these are all great initiatives, there are still a number of challenges which tend to plague all emerging technologies and mostly come down to standardisation and adoption. Also, given how complex and multi-faceted the digital identity dilemma is, so far there is no single solution that can meet all the requirements of a strong digital identity store whilst also remaining fully user-sovereign.
What does the future look like?
It is highly unlikely we will ever see a single identity solution, even if it is completely user-controlled. This is simply down to the complexity of human identity and contexts, as well as the conflict between national legislation and the international nature of the online world. For example, many national governments today have online digital identity services for access to government services, and it is highly unlikely that in the near future we will see these national schemes integrate with say, blockchain-based solutions which primarily focus on decentralised social login replacements and secure digital communication between individuals.
Yet it remains highly likely that we will see a proliferation of competing standards and approaches to strong online identification and authentication/authorisation. The determining success factor will be usability and adoption by mainstream online services. Usability has been the key success factor of the internet giants, and we have signed away our privacy to many of these organisations simply because of how easy it is to use their services. Unless sovereign alternatives to online identity can provide similar ease of use, and convince popular services to integrate with them, their use will remain limited to technology-savvy power users rather than the public at large.
In the 35 years we’ve had personal computers, tablets and smartphones, authentication has meant a username and password (or Personal Identification Number, PIN) for most people. Yet other methods, and other schemes for using those methods, have been available for at least the past 30 years. As we look to replace, or at least augment, passwords, it’s time to re-examine these methods and schemes.
Multi-factor authentication refers to using at least two of the three generally agreed authentication factors: something you know, something you have, and something you are.
Something you know: the most widely used factor, because it includes passwords. It refers to what is called a “shared secret”: something known to both the user and the system they are authenticating to. Also included are PINs, pass phrases, security questions, etc. Security questions come in two types: those previously configured (mother’s maiden name, first car, city of birth, etc.) and those the authenticator gleans from public records (usually multiple choice, such as “which was your address when you lived in London”, with one choice being “I never lived in London”).
Something you have: usually a token of some kind. The RSA SecurID is perhaps the most widely known, but there are many others; a proximity card, for example, or your smartphone. In one scenario, you log in with a username and password and the system sends a code to your phone via text message; you then enter that code to complete the authentication. Note that the US National Institute of Standards and Technology (NIST) has recently deprecated the use of SMS messaging as a second factor due to security issues.
Something you are: usually a biometric of some type: fingerprint, retina scan, facial scan, etc. It can also be a measure of your typing, swiping, or even walking. Handwriting is also included, but is now mostly just a subset of swiping. Other, more exotic schemes include palm scans and vein readings.
Any of these can be used for authentication. For a stronger system, you would choose one each from two or all three groups. Two types from the same group (say, a password and a PIN, or a PIN and a security question) do not constitute multi-factor authentication.
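The grouping rule above, that factors must come from distinct groups, can be expressed as a short check. The factor names and mapping below are illustrative, not drawn from any standard:

```python
# Hypothetical mapping of authentication methods to their factor groups.
FACTOR_GROUPS = {
    "password": "knowledge",
    "pin": "knowledge",
    "security_question": "knowledge",
    "hardware_token": "possession",
    "sms_code": "possession",
    "fingerprint": "inherence",
    "face_scan": "inherence",
}

def is_multi_factor(presented: list[str]) -> bool:
    """True only if the presented methods span at least two distinct groups."""
    groups = {FACTOR_GROUPS[m] for m in presented if m in FACTOR_GROUPS}
    return len(groups) >= 2

print(is_multi_factor(["password", "pin"]))       # same group: not multi-factor
print(is_multi_factor(["password", "sms_code"]))  # two groups: multi-factor
```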
Dynamic, or adaptive, authentication involves having the system check the context of the login (who is it, where are they, what platform, etc.) and deciding which factor or factors (and which methods of those factors) should be applied in the given situation. This is an essential element of risk-based access control.
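A minimal sketch of such a risk-based decision might look like the following; the context signals, risk weights and thresholds are entirely made up for illustration:

```python
def required_factors(context: dict) -> list[str]:
    """Pick authentication factors based on a toy risk score of the login context."""
    risk = 0
    if context.get("new_device"):
        risk += 2   # unrecognised platform
    if context.get("foreign_network"):
        risk += 2   # unusual location
    if context.get("off_hours"):
        risk += 1   # atypical time of day
    if risk >= 4:
        return ["password", "hardware_token", "fingerprint"]
    if risk >= 2:
        return ["password", "sms_code"]
    return ["password"]

print(required_factors({}))                                        # low risk
print(required_factors({"new_device": True, "foreign_network": True}))  # high risk
```

A production system would of course feed far richer signals (device fingerprinting, IP reputation, behavioural history) into a tuned model rather than a hand-written score.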
Finally, there’s continuous authentication. A password could be requested periodically (irritating to the user); the presence of a proximity card could be detected periodically, with the session timed out if it is absent; or keyboarding could be constantly checked against the user’s baseline, with the session timed out, or the user asked to re-enter something they know, whenever it deviates.
We recommend that you look into adaptive and/or continuous authentication as an integral part of your access control system.
Traditional tools are still widely deployed by many organizations and in certain scenarios serve as a useful part of enterprise security infrastructures, but recent trends in the IT industry have largely made them obsolete. The continued deperimeterization of corporate networks driven by the adoption of cloud and mobile services, as well as the emergence of many new legitimate communication channels with external partners, has made the task of protecting sensitive corporate information more and more difficult. The focus of information security has gradually shifted from perimeter protection towards [...]