Blog posts by Dave Kearns
In the 35 years we’ve had personal computers, tablets and smartphones, authentication has meant a username and password (or Personal Identification Number, PIN) for most people. Yet other methods, and other schemes for using those methods, have been available for at least the past 30 years. As we look to replace, or at least augment, passwords, it’s time to re-examine these methods and schemes.
Multi-factor refers to using at least two of the three generally agreed authentication methods: something you know; something you have; and something you are.
Something you know: the most widely used factor, because it includes passwords. It refers to what is called a “shared secret”: something known to both the user and the system they are authenticating to. Also included here are PINs, pass phrases, security questions, etc. Security questions come in two types: those previously configured (mother’s maiden name, first car, city of birth, etc.) and those the authenticator gleans from public records (usually multiple choice, such as “which was your address when you lived in London?” with one choice being “I never lived in London”).
Something you have: usually a token of some kind. The RSA SecurID token is perhaps the most widely known, but there are many others: proximity cards, for example, or your smartphone. In one scenario, you log in with a username and password and the system sends a code to your phone via text message; you then enter that code to complete the authentication. Note that the US National Institute of Standards and Technology (NIST) has recently deprecated the use of SMS messaging as a second factor due to security issues.
Something you are: usually a biometric of some type: fingerprint, retina scan, facial scan, etc. It can also be a measure of your typing, swiping, or even walking! Handwriting is also included, but is now mostly just a subset of swiping. Other, more exotic schemes include palm scans and vein readings.
Any of these can be used for authentication. For a stronger system, you would choose one method each from two, or all three, of the groups. Two methods from the same group (say, a password and a PIN, or a PIN and a security question) do not constitute multi-factor authentication.
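As a concrete illustration, this short Python sketch checks whether a set of presented methods truly spans more than one factor group. The method catalogue and names are invented for the example, not taken from any standard:

```python
# Hypothetical catalogue mapping authentication methods to their
# factor group -- the names here are illustrative only.
FACTOR_OF = {
    "password": "know", "pin": "know", "security_question": "know",
    "hardware_token": "have", "smartphone_otp": "have",
    "fingerprint": "are", "face_scan": "are", "typing_rhythm": "are",
}

def is_multi_factor(methods):
    """True only if the methods span at least two distinct factor groups."""
    return len({FACTOR_OF[m] for m in methods}) >= 2

print(is_multi_factor(["password", "pin"]))             # False: both "know"
print(is_multi_factor(["password", "hardware_token"]))  # True: "know" + "have"
```

A password plus a PIN fails the check, just as the text says, while a password plus a token passes.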
Dynamic, or adaptive, authentication involves having the system check the context of the login (who is it, where are they, what platform, etc.) and decide which factor or factors (and which methods of those factors) should be applied in the given situation. This is an essential element of risk-based access control.
Finally, there’s continuous authentication. Passwords could be requested periodically (irritating to the user); the presence of a proximity card could be checked at intervals, with the session timed out if it’s absent; or keystroke dynamics could be constantly compared against the user’s baseline, with the session timed out, or the user asked to input something they know, whenever the match fails.
We recommend that you look into adaptive and/or continuous authentication as an integral part of your access control system.
To understand what this article is about it’s important that we have an agreement on what we mean when we use the term “adaptive authentication”. It isn’t a difficult concept, but it’s best if we’re all on the same page, so to speak.
First, the basics: authentication is the ceremony in which someone presents credentials in order to gain access to something. Typically and traditionally this is a username/password combination. But username/password is only one facet of one factor of authentication, and we usually speak of three possible factors, identified as:
- Something you know (e.g., a password)
- Something you have (e.g., a token such as a SecurID fob)
- Something you are (e.g., a biometric such as a fingerprint)
There are multiple facets to each of these, of course, such as the so-called “security questions” (mother’s maiden name, first pet’s name, city you were born in, etc.), which are part of the Something you know factor.
Beginning around 30 years ago, it was suggested that multi-factor authentication – using two of the three factors, or even all three – made for stronger security. Within the last ten years, on-line organizations (such as financial businesses) and even social networks (Google+, Facebook, etc.) have suggested users move to two-factor authentication.
While this is good practice, this multi-factor authentication is static. Every time you access the service, you present the same two credentials in order to log in. It’s always the same. Once a hacker (usually through what’s called “phishing”) knows the two factors, your account is as open to them as if there were no security at all.
Within the past five years we at KuppingerCole have advocated moving to what we called “dynamic” authentication: authentication that could change “on the fly”. But because we advocated much more than a change in how the authentication credentials were established, we now call the technology “adaptive” authentication.
It’s called “adaptive” because it adapts to the circumstances at the time of the authentication ceremony, dynamically adjusting both the authentication factors and the facet(s) of the factors chosen. This is all done as part of the risk analysis of what we call the Adaptive Policy-based Access Management (APAM) system. It’s best to show an example of how this works.
Let’s say that the CFO of a company wishes to access the company’s financial data from his desktop PC in his office on a Monday afternoon. The default authentication is a username, password and hardware token. The CFO presents these, and is granted full access. Now let’s say the CFO of another company wishes to access that company’s financial data. But she’s not in the office, so she’s using a computer at an internet café on a Caribbean island where she’s vacationing. The access control system notes the “new” hardware’s footprint, its previously unknown IP address and the general location. Based on these (and other) context data from the transaction, the access control system asks for additional factors and facets for authentication: perhaps password, token, security questions and more. Even so, once the CFO presents these facets and factors, she is only given limited read access to the data.
The authentication is dynamically changed and adapted to the circumstances. That’s what we’re discussing here.
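The two CFO logins can be sketched as a toy risk calculation. The context signals, weights, thresholds and factor names below are all invented for illustration; a real APAM system would draw on far richer context and policy:

```python
# Toy adaptive-authentication sketch: choose factors and an access level
# from the login context. All signals and thresholds are invented.
def required_factors(context: dict):
    risk = 0
    if not context.get("known_device"):
        risk += 2           # unrecognized hardware footprint
    if not context.get("known_ip"):
        risk += 1           # previously unknown IP address
    if context.get("location") != context.get("usual_location"):
        risk += 2           # unusual general location

    factors = ["password", "hardware_token"]   # the default ceremony
    if risk >= 3:
        factors.append("security_questions")   # step up under high risk
    access = "limited_read" if risk >= 3 else "full"
    return factors, access

office = {"known_device": True, "known_ip": True,
          "location": "HQ", "usual_location": "HQ"}
cafe = {"known_device": False, "known_ip": False,
        "location": "Caribbean", "usual_location": "HQ"}
print(required_factors(office))  # default factors, full access
print(required_factors(cafe))    # extra factors, limited read access
```

The office login sails through with the default ceremony; the internet-café login triggers extra factors and only limited read access, mirroring the scenario above.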
A Life Management Platform (LMP) allows individuals to access all relevant information from their daily life and manage its lifecycle, in particular data that is sensitive and typically paper-bound today, like bank account information, insurance information, health information, or the key number of their car.
Three years ago, at EIC 2012, one of the major topics was Life Management Platforms (LMPs), which was described as “a concept which goes well beyond the limited reach of most of today’s Personal Data Stores and Personal Clouds. It will fundamentally affect the way individuals share personal data and thus will greatly influence social networks, CRM (Customer Relationship Management), eGovernment, and many other areas.”
In talking about LMPs, Martin Kuppinger wrote: “Life Management Platforms will change the way individuals deal with sensitive information like their health data, insurance data, and many other types of information – information that today frequently is paper-based or, when it comes to personal opinions, only in the mind of the individuals. They will enable new approaches for privacy and security-aware sharing of that information, without the risk of losing control of that information. A key concept is “informed pull” which allows consuming information from other parties, neither violating the interest of the individual for protecting his information nor the interest of the related party/parties.” (Advisory Note: Life Management Platforms: Control and Privacy for Personal Data - 70608)
It’s taken longer than we thought, but the fundamental principle that a person should have direct control of the information about themselves is finally taking hold. In particular, the work of the User Managed Access (UMA) group through the Kantara Initiative should be noted.
Fueled by the rise of cloud services (especially personal, public cloud services) and the explosive growth of the Internet of Things (IoT), which together lead to the concept we’ve identified as the Internet of Everything and Everyone (IoEE), Life Management Platforms, although not known by that name, are beginning to take their first, hesitant baby steps with everyone from the tech guru to the person in the street. The “platform” tends to be a smart, mobile device with a myriad of applications and services (known as “apps”), but the bottom line is that the data, the information, is, at least nominally, under the control of the person using that platform. And the real platform is the set of cloud-based services, fed and fueled by public, standard Application Programming Interfaces (APIs), which provide the data for the mobile device everyone is using.
Social media, too, has had an effect. Using Facebook login, for example, to access other services, people are learning to look closely at what those services are requesting (“your timeline, list of friends, birthday”) and, especially, what the service won’t do (“the service will not post on your behalf”). There’s still work to be done there, as the conditions are not yet negotiable but must be either accepted or rejected wholesale; more flexible protocols will emerge to cover that. There’s also, of course, the fact that Facebook itself “spies” on your activity. Slowly, grudgingly, that is changing, but we’re not there yet. The next step is for enterprises to begin to provide the necessary tools that will enable the casual user to more completely control their data, and the release of their data to others, while protecting their privacy. Google (via Android), Apple (via iOS) and even Microsoft (through Windows Mobile) are all in a position to become the first mover in this area, but only if they’re ready to fundamentally change their business models, or complement them with an alternative approach. Indeed, some have taken tentative steps in that direction, while others seem headed in the opposite direction. Google and Facebook (and Microsoft, via Bing) all attempt to monetize your data. Apple tends to tell you what you want, then not allow you to change it.
But there are suggestions that users may be willing to pay for more control over their information, either in cash or by licensing its re-use under strict guidelines. So who will step up? We shouldn’t ignore Facebook, of course, but without a mobile operating system they are at a disadvantage compared to the other three. And maybe, lurking in the wings, there’s an as-yet undiscovered (or overlooked; yes, there are some interesting approaches) vendor ready to jump in and seize the market. After all, that’s what Google did (surprising Yahoo!) and Facebook did (supplanting MySpace), so there is precedent for a well-designed (i.e., using Privacy by Design principles) start-up to sweep this market. Someone will; we’re convinced of that. And just as soon as we’ve identified the key player, we’ll let you know so you can be prepared.
This article originally appeared in the KuppingerCole Analysts' View newsletter.
For the past few years BYOD (Bring Your Own Device) has been a hot topic, often leading to shouting matches between IT and users who want to use their own mobile devices to access corporate assets. Lately, it’s become a more generic “BYO” (Bring Your Own) theme, with the aforementioned D (device) joined by A (apps), I (identity) and P (platform), as well as countless others churned out by vendors’ marketing machines.
In fact, little of this is new. Over 30 years ago users were bringing their own devices (PCs) and apps (VisiCalc, Lotus 1-2-3, etc.) into the office to get better control over corporate data. And IT (called IS, or Information Services, in those days) was just as irate then.
IS lost the fight then; IT is losing the fight now. IT is always going to lose these fights.
Departments that generate revenue (sales, marketing, etc.) are always going to have more clout than those seen as a cost center, such as IT. Clients and customers will always have their issues addressed, no matter what IT says. Some issues, such as compliance (with a risk of fines or jail for senior execs) or security (with its risk of loss to both assets and reputation) can provide a temporary boost for IT’s arguments but, in the end, revenue and customer service will win out.
The rise of smart mobile devices, the coming dominance of cloud computing, the Internet of Everything and Everyone (IoEE) and ubiquitous published APIs for access to all those things require different thinking on the part of IT.
Too often, IT thinks in terms of fighting “the last war”: they want to build “bigger and better” firewalls without realizing that getting around a firewall is child’s play these days.
Instead, IT should concentrate on providing platforms that most users can reach, and on Access Control (AC): the means of Authentication and Authorization that give the right people the right access to corporate data at the right time and place, whether they are employees, contractors, vendors, clients, customers or partners. Dynamic Access Control and Attribute-based Access Control (see Leadership Compass: Dynamic Authorization Management - 70966) and Context- and Risk-based Access Control (see Getting the security you need) are what IT should be focusing on.
Traditionally, IT liked (and in many cases, still likes) to provide static AC – network login accounts with hard to change attributes, permissions based on Access Control Lists (ACLs) that are also difficult to keep updated and firewalls with hard-and-fast rules for who (and what) can pass through. Spending time with those things is like trying to design better buggy whips for automobiles.
When properly implemented, RiskBAC (Risk-Based Access Control) collects context data from the transaction (Who, What, When, Where, Why, Which, How) and then can either:
- Approve authentication;
- Deny authentication;
- Request further authentication factors.
If the authentication is approved, the RiskBAC system assigns, or causes to be assigned, authorizations dynamically, consistent with the risk associated with the authentication and its context. If the authentication isn’t approved, then a different reaction can occur depending on the perceived threat.
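That three-way outcome can be sketched in a few lines. The context signals, weights and thresholds below are invented for illustration; a real RiskBAC engine would evaluate many more answers to the Who, What, When, Where, Why, Which and How questions:

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    STEP_UP = "request further authentication factors"
    DENY = "deny"

# Invented weights for a few context signals.
RISK_WEIGHTS = {
    "unknown_device": 2,
    "unusual_location": 2,
    "outside_business_hours": 1,
    "sensitive_resource": 2,
}

def decide(context: dict):
    score = sum(w for signal, w in RISK_WEIGHTS.items() if context.get(signal))
    if score >= 6:
        return Decision.DENY, None
    if score >= 2:
        return Decision.STEP_UP, "limited"  # authorizations narrowed to match risk
    return Decision.APPROVE, "full"         # authorizations assigned dynamically

print(decide({}))                                    # low risk: approve, full access
print(decide({"unknown_device": True,
              "unusual_location": True,
              "sensitive_resource": True}))          # high risk: deny
```

Note that the returned authorization level varies with the risk score, mirroring the dynamic assignment described above.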
For now, we recommend that you define a BYOx strategy that is open but risk-based, allowing graded access based on the level of trust and risk. This is where risk- and context-based, versatile authentication and authorization come into play. We cannot overstress the importance of hybrid solutions that account for all platforms, even those not yet delivered. And, while often overlooked, such solutions should offer your users choices that are better, perhaps more integrated with the enterprise, than those available as BYOA.
This article originally appeared in the KuppingerCole Analysts' View newsletter.
It’s a new year, and there are some new changes coming to KuppingerCole, especially in the material that will come into your inbox.
First, some background. Over the past year or so we’ve been growing by leaps and bounds, with new offices in Europe and the Asia-Pacific region as well as new analysts all over the map. With that has come an increase in the amount of email we’ve sent out, so now it’s time to get a better handle on that. From now on you’ll receive, aside from webinar and event invitations, two emails a month: our standard newsletter and a new format with up to three articles related to our research. In this new publication, you can look forward to major discussions of the topics we’ve identified as the “hot topics” for the coming months, such as:
- Software Defined Networking (SDN) – Software Defined Computing Infrastructures (SDCI);
- Integrated Real-time Network Security Analytics;
- Cloud IAM (Identity and Access Management) – both on its own and as part of an all-encompassing security structure;
- Digital, Smart Manufacturing & Smart Infrastructure: ICS & SCADA;
- The API Economy and how it affects security and access;
- IoEE (Internet of Everything and Everyone) including Life Management Platforms;
- BYOI (Bring Your Own Identity) and Biometric Authentication;
- Big Data and other threats to privacy;
- Cloud Service Provider Selection and Assurance; and
- Ubiquitous Encryption as part of information stewardship.
I’ll still regularly pop up in these publications, just not so often. My thoughts on some of these topics will still be available on the KuppingerCole blog site, as well as on my personal site and my Twitter feed.
While I’ve ranged far and wide in this newsletter over the past two years, the major points of emphasis have revolved around authentication, authorization and access control. These are subjects near and dear to my heart, and they will continue to be at the forefront of what I write about, and speak about as well.
We’ve already started filling in the agenda for the upcoming European Identity & Cloud Conference, coming in May in Munich. So far, I’ll be contributing to “Authentication Trends” and “Supporting Social Login: Risks and Challenges” but you can be sure I’ll add more access control sessions, if I can.
There’s also an exciting new project coming up that Martin Kuppinger and I will be undertaking, but it would be premature to go into details right now. Still, I can say that it will be in the general area of Identity and Access Management (IAM). Note how “access” shows up there, also.
For those of you in Eastern Europe, or close by, let me remind you that KuppingerCole’s Identity, Cloud Risk & Information Security Summit is coming to Moscow in April. This conference will revolve around five topics in a round-table style where you can be part of the dialog. Topics will be:
- Extended Enterprise - the new Scope of Information Security. How to Deal with the Challenges of the Computing Troika - Cloud, Mobile and Social Computing
- How Mature is Your Cloud? Defining your own Benchmark on Cloud Maturity, Measuring and Enforcing it.
- Identity Information Quality - Foundation for reliable Access Control. How to Handle this in the Decade of the Identity Explosion, with Social Logins, BYOI (Bring Your Own Identity) etc.?
- From Preventive to Pro-Active: Big Data for Network Security Analytics in Realtime.
- Assessing your Information Security Infrastructure and Understanding your Biggest Risks
So watch for more information on these and other topics in your inbox, but organized into fewer emails so as not to intrude on your time quite so much. Enjoy!
Happy New Year everyone! We’ve just come through what’s probably the biggest gift giving month of the year – most of you, I’m sure, unwrapped more than one present. So let me ask a couple of questions.
If there was a pretty package, with no tag identifying the giver – would you open it?
If the tag said it was from a friend, using their Facebook name – would you open it?
If the tag said it was from a co-worker, but not one you’re very close to – would you open it?
Let’s change the scene just a bit, and imagine that it was an email you received, with an attachment that the email asked you to open – under what circumstances would you actually open the attachment?
Many of you will say that you’d analyze the message and make a judgment based on the words, the spelling, the grammar (malware merchants are all notoriously bad at spelling and grammar. No, really!) and how you (the recipient) and the sender were identified. But in a survey conducted earlier this year, Courion Corp. found that 1 in 5 respondents would “have opened an email at work they suspected to be fake or a phishing scam – without notifying the IT department”.
1 in 5. Over 19%, and that’s just the ones who thought the email might be problematic. I can only imagine that the number would be much higher for those who didn’t suspect the email was a phishing expedition.
So, what can you do about it? How can you protect the company from its own innocent, but curious, employees?
Malware – Trojans, viruses and the like – is usually handled by anti-virus packages, either centrally (on the mail server, for example) or on each desktop, or both. These tools, if kept up to date, are quite effective. But phishing is a different problem.
There are three general vectors for a phishing email: it might contain a link to a URL (including URLs that resemble those you normally visit, such as your bank) that will collect protected information (usernames, passwords and PII – Personally Identifiable Information); it might contain an attachment that the user should fill out and email or fax back; or the note may simply ask for information to be sent in a reply. Alternatively, the URL or attachment could install active malware that would gather authentication or PII data (such as with a keylogger).
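One small piece of the URL vector can be automated. The sketch below is a naive lookalike-domain heuristic of the kind a mail filter might apply; the trusted-domain list and the similarity threshold are invented for the example, and a real filter would combine many stronger signals (and user education, as argued below):

```python
# Naive lookalike-domain check: flag hosts that are close to, but not
# exactly, a trusted domain. List and threshold are illustrative only.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = {"mybank.com", "paypal.com"}

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    domain = ".".join(host.split(".")[-2:])  # crude "registered domain" guess
    if domain in TRUSTED:
        return False
    return any(SequenceMatcher(None, domain, t).ratio() > 0.8 for t in TRUSTED)

print(looks_suspicious("https://mybank.com/login"))   # False: exact match
print(looks_suspicious("https://rnybank.com/login"))  # True: "rn" mimics "m"
```

The classic “rn” for “m” substitution scores as highly similar but not identical, which is exactly the pattern a phisher’s lookalike bank URL exploits.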
You could intercept all emails that contain attachments or URLs and quarantine them, notify the intended recipient and have a security expert review the email before allowing the intended recipient to see it. How long would this delay delivery, do you think? How long would the CEO put up with this?
You could intercept only those which came from outside the organization’s domain. That should cut down on the volume of email that needs to be reviewed, but might actually be more insecure than allowing everything to go through. Emails purporting to come from others in the organization (those, for example, whose credentials had been compromised) would most likely be willingly opened by all and sundry. At the same time, missives of a private nature coming from outside would be subject to intense scrutiny by a security clerk, perhaps one who couldn’t resist gossiping. Not a recipe for success!
No, there really is no substitute for education – teaching your people how to recognize potentially hazardous communications and how to handle them. Especially the part about letting security/IT staff examine questionable emails.
It’s going to take more than a memo and some “be aware” posters, though. What I’m talking about is a real education campaign with actual teaching, perhaps some mentoring and periodic testing. The occasional “pop quiz” via a phishing-style email should be part of your proactive anti-malware campaign. Those that fail the quiz should be required to take refresher courses.
Technology can help, but only well trained, fully-informed and security aware employees can keep your organization safe.
In January, NSA contractor Edward Snowden made contact with The Guardian and the Washington Post and by now we’re all familiar with the revelations of state-sponsored surveillance he revealed. Primarily concerned with the US government, and secondarily with the UK government, Snowden’s leaks also implicated other governments including Spain, France and Germany. Everyone, it seems, was spying on everyone else.
In December, InfoWorld’s Robert X. Cringely published a column headlined “Welcome to the Internet of things. Please check your privacy at the door.”
InformationWeek’s Kristin Burnham wrote: “Facebook privacy changes seem to never end, which can make tackling your privacy settings a daunting task.”
Just last month the Dutch Data Protection Authority found Google to be in violation of its data protection law.
At various points throughout the year, newspaper headlines screamed out about data breaches:
- Conventioneers' credit card data stolen in Boston
- Kaiser Permanente Notifies 49K Patients of Data Breach in Anaheim
All of this could lead us to believe that 2013 was the Year We Lost our Privacy. But we’d be wrong.
It was 14 years ago, in 1999, that then-Sun Microsystems CEO Scott McNealy said “You have zero privacy anyway, get over it.” The next year, according to one of the documents released by Snowden, a report by the NSA about its mission for the 21st century noted: "The volumes of routing of data make indexing and processing nuggets of intelligence information more difficult. To perform both its offensive and defensive mission, NSA must 'live on the network.'" McNealy was either right or prescient. The loss of privacy is not new. The knowledge of the loss of privacy is, perhaps, what’s new to many people. Rather than labeling 2013 as The End of Privacy, we’d do better to refer to it as The End of Innocence.
So what can we do?
In the fall of 2012, before any of the Snowden revelations, I wrote: “Thirteen years after McNealy’s proclamation we still are trying to keep at least some parts of our lives private. We also seem to believe that there is a technological solution that will help us maintain our privacy. That’s not going to happen. Get over it. In fact, technology is a greater aid to those looking to violate our privacy than to those looking to protect it.” That’s certainly proven to be true.
Many people and companies, over the past year, have offered new, innovative and (sometimes) weird ways to protect your data and privacy. Most, if not all, have been shown to be flawed when trying to protect against state-sponsored surveillance. Should we stop looking? No, while we may never be able to stop the intrusion of state-sponsored surveillance, we can keep the criminals – who don’t have the same resources – out, and that’s a desirable result.
Many tech companies (such as Google, Microsoft, Apple, Yahoo!, etc.) are petitioning and lobbying governments to put an end to state-sponsored mass surveillance. Of course, most of those governments already have laws in place to control this – they just either aren’t enforced or are interpreted in convoluted ways. Should the techs stop doing this? Probably not, as it does keep the issue in front of everybody.
The big takeaway here is that state-sponsored surveillance didn’t start in 2013. Neither did data breaches. If you’ve come this far without leaking data and without being visited by the spooks and spies, then either you’ve been doing the right thing or you’ve been very lucky. Hopefully you’ve been keeping your protections up to date, and you haven’t been drawing attention to your activities. Remember, there’s been no new loss of privacy, only the discovery of what went before.
But you really can’t afford to rest on your laurels, patting yourself on the back because nothing bad has happened to your organization. You need to keep moving forward, doing what needs to be done to continue to protect your assets: the data and information that give your enterprise its value.
Here at KuppingerCole we take data security and privacy very seriously. We think you need to consider the full lifecycle of information, from creation through to its final disposition. We call this Information Stewardship (see “From Data Leakage Prevention (DLP) to Information Stewardship”) and following its guidelines will keep you from an overactive paranoia about your data and information.
If you haven’t already, you need to move from a technology-centric idea of security to an information-centric approach. The basic objectives of information centric security are:
- Availability: individuals are able to access the business data and applications they need to perform their business functions when and where they need it, and without delay.
- Integrity: individuals are only able to manipulate data (create, change or delete) in ways that are authorized.
- Confidentiality: data and applications can only be accessed by authorized individuals and these are not able to pass data to which they have legitimate access to other individuals who are not authorized.
There’s a lot more of course, and we’ll be telling you more in the coming year. But if you start working now, you may be able to call 2013 the year you began to protect yourself and your organization.
In my last post (“Dogged Determination”) I briefly mentioned the FIDO (Fast Identity Online) Alliance, with the promise to take a closer look this time at its emerging password-replacing internet authentication system. So I will.
But first, an aside. It’s quite possible that the alliance chose the acronym “FIDO” first, then found words to fit the letters. Fido, at least in the US, is a generic name for a dog which came into general use in the mid 19th century when President Abraham Lincoln named his favorite dog Fido. Choosing a word associated with dogs harkens back to the internet meme “On the internet nobody knows you’re a dog”. With the FIDO system, no one except those you intended would know who you are. That’s my theory and I’m sticking to it.
FIDO was in the news last week, when Fingerprint Cards (FPC) and Nok Nok Labs announced an infrastructure solution for strong and simple online authentication using fingerprint sensors on smartphones and tablets. The two companies have initially implemented the joint solution using Nok Nok Labs' client and server technology and commercially available Android smartphones with the FPC1080 fingerprint sensor, in order to demonstrate readiness to support the emerging FIDO-based ecosystem.
That should give you an idea of the thrust of the Alliance.
The FIDO system doesn’t require a biometric component, but it appears to be highly recommended. From the Alliance’s literature:
“The FIDO protocols use standard public key cryptography techniques to provide stronger authentication. During registration with an online service, the user's client device creates a new key pair. It retains the private key and registers the public key with the online service. Authentication is done by the client device proving possession of the private key to the service by signing a challenge. The client's private keys can be used only after they are unlocked locally on the device by the user. The local unlock is accomplished by a user-friendly and secure action such as swiping a finger, entering a PIN, speaking into a microphone, inserting a second-factor device or pressing a button.
The FIDO protocols are designed from the ground up to protect user privacy. The protocols do not provide information that can be used by different online services to collaborate and track a user across the services. Biometric information, if used, never leaves the user's device.”
FIDO is, first and foremost, about strong authentication. Two-factor authentication is a requirement. A biometric component (fingerprint, voiceprint, etc.) is highly recommended.
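The registration and challenge flow described in that excerpt can be sketched end to end. The example below uses a toy RSA key with hard-coded textbook primes so that it runs without any crypto library; it is hopelessly insecure and purely illustrative (real FIDO authenticators generate, e.g., ECDSA keys in secure hardware), but it shows the shape of the protocol: the server stores only the public key, issues a random challenge, and verifies the device’s signature, while the private key never leaves the device:

```python
import hashlib
import secrets

# Toy RSA key with tiny textbook primes -- insecure, illustration only.
p, q = 61, 53
n = p * q        # modulus: 3233
e = 17           # public exponent
d = 2753         # private exponent: e * d == 1 (mod (p-1)*(q-1))

def digest(data: bytes) -> int:
    # Hash the challenge and reduce it into the toy key's range.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# 1. Registration: the device creates a key pair; the server stores
#    ONLY the public part, bound to this account.
server_registry = {"alice": (n, e)}
device_private_key = d           # stays on the device, unlocked locally

# 2. Authentication: the server issues a fresh random challenge...
challenge = secrets.token_bytes(16)

# 3. ...the device signs it after the local unlock (PIN, finger swipe)...
signature = pow(digest(challenge), device_private_key, n)

# 4. ...and the server verifies with the registered public key.
pub_n, pub_e = server_registry["alice"]
print("authenticated:", pow(signature, pub_e, pub_n) == digest(challenge))
```

Because each service registers its own key pair, nothing in this exchange lets two services correlate a user, which is the privacy property the Alliance highlights.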
President of the Alliance is Michael Barrett, formerly CISO for PayPal, formerly president of the Liberty Alliance and before that VP, Security & Privacy Strategy for American Express. Interestingly, the VP of FIDO is Brett McDowell, currently Head of Ecosystem Security at PayPal, who was previously Executive Director of the Liberty Alliance and its successor, the Kantara Initiative. He also served as Management Council chair of the USA’s NSTIC (National Strategy for Trusted Identities in Cyberspace) Identity Ecosystem Steering Group. These are two guys who know identity systems inside out.
PayPal (which is always looking for stronger authentication methods) and Nok Nok Labs (which is always looking for better ways to use biometrics as well as strong authentication) were two of the founders of the alliance which has now grown to over 50 members including such big names as Google, Blackberry, Lenovo, MasterCard and Yubico as well as just about everyone in the biometric device space.
It’s a good cast of characters, but is that enough?
The presence of so many biometric-friendly members means that the Alliance has to first answer (again) all the questions about the “problems” with biometric authentication. Now, if you know me at all, you know that “I ♥ Biometrics”, but getting others to like them is an uphill battle. In fact, the continuous (I’ve been involved in it for 15 years!) argument about the security of passwords is really a side issue for the FIDO Alliance. More important, I think, is its reliance on the Online Secure Transaction Protocol (OSTP).
OSTP is a protocol designed and issued by FIDO (they say they will turn it over to a public standards body once it is fully “baked”). It’s explained in a white paper (“The Evolution of Authentication,” a PDF file) where it’s generally referred to as the “FIDO protocol”. The heart of the system is the FIDO Authenticator, which the white paper explains:
“The FIDO Authenticator is a concept. It might be implemented as a software component running on the FIDO User Device, it might be implemented as a dedicated hardware token (e.g. smart card or USB crypto device), it might be implemented as software leveraging cryptographic capabilities of TPMs or Secure Elements or it might even be implemented as software running inside a Trusted Execution Environment.
The User Authentication method could leverage any hardware support available on the FIDO User Device, e.g. Microphones (Speaker Recognition), Cameras (Face Recognition), Fingerprint Sensors, or behavioral biometrics, see (M. S. Obaidat) (BehavioSec, 2009).”
As I said, biometrics are strongly recommended.
Read the paper for more details of how it works.
Can the FIDO proposal succeed? Yes, it’s a well thought-out system that does provide strong authentication with a high degree of confidence that the user is who they claim to be.
Will the FIDO proposal succeed? That’s much more problematic. It requires that relying parties and Identity Providers (which can be the same entity) install specific server software and that users install specific client software. The client part could be an easier “sell” if it comes along with the biometric devices and services that FIDO members provide. Easier, certainly, in a smartphone environment, less so in a desktop/browser environment. History says that anything requiring users to voluntarily install something, or requiring relying parties to buy, install and maintain single-purpose services, is a long shot. And the FIDO solution requires both. Still, if the members of the FIDO Alliance provide the software and compel their clients to install it, a tipping point could be reached. If so, I’d applaud it.
I will note that a number of my colleagues believe I’m reading too much into the so-called “biometric requirements” of FIDO, noting that hardware tokens (represented by Yubico and other members) are an even easier implementation since most modern smartphones can handle a microSD card, which could act as a hardware token – or, at least, turn the phone into a hardware token. It would be protected by a PIN, which users are familiar with entering for all sorts of services.
While I do agree with all that, the typical PIN is 4 digits, so there are 10,000 possible combinations (0000 to 9999). That’s not strong enough for my taste. An automated brute-force attack could try all possibilities within a few minutes, and – since some combinations (1234, 1111, 1379, 1397, etc.) are more popular than others – it could take only a few seconds before the code is broken. Nevertheless, if this would increase the uptake of the FIDO system, I’d be behind it – at least as a good beginning.
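The arithmetic behind that objection is easy to check; the guess rate below is a purely hypothetical assumption for illustration.

```python
# Keyspace of an n-digit numeric PIN: 10**n possible codes.
def pin_keyspace(digits: int) -> int:
    return 10 ** digits

# Worst-case time to exhaust the space at a hypothetical, illustrative
# rate of 10 automated guesses per second (real rates vary widely, and
# the skew toward popular PINs makes the expected time far shorter).
GUESSES_PER_SECOND = 10
for digits in (4, 6):
    space = pin_keyspace(digits)
    minutes = space / GUESSES_PER_SECOND / 60
    print(f"{digits}-digit PIN: {space} codes, ~{minutes:.0f} minutes worst case")
```

Lockout policies and rate limiting change the picture entirely, which is why a PIN that is weak on its own can still be acceptable as the local unlock for a hardware-bound key.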
Some colleagues and I got into a short discussion about the FIDO alliance last week. That’s the Fast Identity Online Alliance, which was formed in July 2012 with the aim of addressing the lack of interoperability among strong authentication devices. They also wish to do something about the problems users face with creating and remembering multiple usernames and passwords. According to their web site, “the FIDO Alliance plans to change the nature of authentication by developing specifications that define an open, scalable, interoperable set of mechanisms that supplant reliance on passwords to securely authenticate users of online services. This new standard for security devices and browser plugins will allow any website or cloud application to interface with a broad variety of existing and future FIDO-enabled devices that the user has for online security.”
There’ll be more from us in the coming weeks and months about FIDO and other emerging protocols, but our discussion got me thinking about something else.
There are now more than 50 members of the alliance, and someone mentioned that Google was an early joiner – not a founding member, but getting on board soon after. The question was why Google would do this and what that portended for FIDO’s future.
Obviously it’s good to have a big name like Google on board. Along with other well-known names (Lenovo, Blackberry, MasterCard, Yubico and more) the Google brand makes for a bigger “buzz” and makes getting press (and, let’s face it, analyst) coverage easier. There’s no question that having Google on board is a major plus for FIDO. But is FIDO a plus for Google?
Google is known for supporting many authentication and identity protocols and services. They were on board early with OpenID, OpenID Connect, SCIM (System for Cross-domain Identity Management), OAuth and OAuth 2.0, SAML and more. In fact, there are few internet-friendly identity-related mechanisms that Google doesn’t support. But why?
One clue is the person responsible for all this activity. Eric Sachs is Google’s Group Product Manager for Identity. On his Google+ site he identifies his goal rather succinctly: Eliminate Internet passwords. He’s very serious about that.
A first step was the implementation of two-factor authentication (2FA) by the search giant last year. Rumor has it that in the not-too-distant future two-factor AuthN will be required by the company. FIDO, of course, has two-factor authentication as one of the ways it wishes to strengthen online account access.
Why is Google doing this? 2FA is not as easy to use as a simple username/password. It is, though, more secure. And Google wants you to think that their services are more secure than their competitors’. If you feel your account (and your content) is more secure with Google Docs, Google Drive, YouTube, Google+, etc., then you are more likely to use those services rather than another. Since Google’s business is based on advertising, the more “eyes” it gets on its services, the more ads it sells and for more dollars per ad.
The second part of the equation is that by supporting strong authentication Google is more assured that you are who you say you are. You may remember the brouhaha that ensued when Google required so called “real names” for Google+ back in 2011 (the aptly named “Nymwars”). Google’s advertising revenue is enhanced when it can deliver ads to you that resonate with your tastes, your wishes and your lifestyle – unlike Walmart, whose marketing gaffe (selling smoked hams to Jews) went viral. By making sure it’s really you, and by tying together everything you do on Google properties, the company ensures that it has the best targeted marketing in cyberspace.
Targeted advertising is another can of worms, which I won’t get into here. But, if that’s what keeps Google working for stronger authentication and password elimination, then I’m OK with that.
Martin Kuppinger suggested another possible reason: Google not only wants to eliminate username/password (which makes sense), they want to end up with strong authentication that does not require massive cost and logistics (deploying hard tokens etc.) for them. This is just not feasible for Google. And it is a problem all larger organizations are facing, while smaller organizations can handle it. Thus Google is looking for standards that allow them to reuse existing strong authentication in a flexible way. Makes sense to me.
To get back to the beginning of this piece, it appears that Google supports any protocol or system that shows any promise of bringing stronger authentication. So the question remains, is FIDO the right vehicle to do that? I’ll look at that question next time. For now, take a look at what Martin had to say last spring in “The FIDO Alliance – game changer for Internet Security?”
Some time ago, in the wake of Wired journalist Mat Honan’s story of his account compromise (“How Apple and Amazon Security Flaws Led to My Epic Hacking”), I wrote about BYOI – Bring Your Own Identity – and how “In the enterprise, there’s even less reason to support today’s BYOI.” Some time before that, my colleague Martin Kuppinger had also addressed this issue (“Bring Your Own Identity? Yes. And No”), dismissing the BYOI idea as simply a small piece of a much larger system.
But I think we need to re-address this issue.
First, the term “BYOI” as it’s commonly used is misleading. It’s not your “identity” you bring with you (everyone brings their own identity wherever they go), but a third-party authentication that you bring to the table, such as Facebook, Apple, Google, Amazon, etc. (generally referred to as “social” logins), as well as other third-party systems (government eID, healthcare identities, etc.). So let’s be sure to keep that context in mind.
Likewise, from the Enterprise’s point of view, there’s the question of who is bringing this third-party AuthN to the table: an employee, contractor, vendor, client, partner, customer – or a visitor who might potentially fill one of these roles. Each of these roles has different requirements for authentication, and each will look for different authorizations, so each will be scrutinized differently by whatever passes for our Risk-Based Access Control (Risk-BAC) system. And make no mistake about it, we all (that is, our organizations) have a Risk-BAC system. It might be highly sophisticated, automated and dynamic, or it might be a simple, static, implemented-by-hand system based on little more than a username/password combination for access (just because it’s high risk doesn’t mean it isn’t risk-based).
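A Risk-BAC policy of the simple, static kind described above can be sketched as little more than a lookup table. All role names and strength levels below are illustrative assumptions, not a recommendation.

```python
# Minimal sketch of a static risk-based access policy: each class of
# user maps to a minimum acceptable authentication method. Everything
# here (names, ordering) is an illustrative assumption.
REQUIRED_AUTH = {
    "visitor":    "social_login",     # low value: social BYOI is enough
    "customer":   "password",
    "partner":    "federated_login",  # vouched for by their own organization
    "contractor": "enterprise_mfa",
    "employee":   "enterprise_mfa",
}

# Methods in ascending order of assumed strength.
STRENGTH = ["social_login", "password", "federated_login", "enterprise_mfa"]

def access_allowed(role: str, presented: str) -> bool:
    """Allow access when the presented method is at least as strong as required."""
    return STRENGTH.index(presented) >= STRENGTH.index(REQUIRED_AUTH[role])

print(access_allowed("visitor", "social_login"))   # True
print(access_allowed("employee", "social_login"))  # False
```

A dynamic Risk-BAC system would replace the static table with per-request risk scoring, but the shape of the decision – role plus presented authentication in, allow/deny out – stays the same.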
For visitors who may or may not be potential vendors, clients or employees, the use of a social login is probably sufficient. We want these people to be able to access the resources they need with a minimum of fuss, but with a certain amount of information collected (name, email, physical address, age, and possibly other details). Asking the person to fill out a long form just to view job openings or download marketing materials is going to turn off some otherwise desirable potential employees or customers. Fortunately, we can use the API (often called the “social graph” API) made available by the social login provider to gather this information simply by asking the person for their approval. So, yes, BYOI works in this case, and works better than creating our own authentication system for this class of users. However, a difficulty could arise when this initial contact is then extended to a full-scale client (or vendor or employee) account – how do we tie the initially collected information to this new, higher-value account?
For existing vendors, partners, suppliers and others, whose organization has a current relationship with our organization, the best result would come through federated login. That is, the person would login to their own organization’s system and we would accept that the person is an authorized representative of that organization. We really don’t need any other information about them. We’ve previously negotiated the authorizations that the user would have, which could be adjusted based on the information sent along with the federation credentials. For example, a large supplier might have multiple people needing access to our inventory of various items and would send along qualifying information so that our system would give the correct authorization for that inventory. All the user account maintenance is done by our federation partner, so there’s a sense that the data quality is better than if we tried to maintain it in our system. Using a social BYOI login could be disastrous as we’d have no way of knowing if the person was still employed by the partner.
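The supplier example above amounts to mapping attributes asserted by the federation partner onto local authorizations. The sketch below invents attribute and permission names for illustration; a real deployment would carry these in SAML or OpenID Connect claims.

```python
# Sketch: deriving local permissions from a federation partner's
# assertion, as in the supplier example. All attribute and permission
# names are made up for illustration.
def authorize_federated(assertion: dict) -> set:
    """Map a partner's asserted attributes to local permissions."""
    perms = {"inventory:read"}                    # baseline for any partner user
    if assertion.get("department") == "purchasing":
        perms.add("inventory:order")              # qualifying info adjusts access
    if assertion.get("role") == "account_manager":
        perms.add("pricing:read")
    return perms

# The partner's IdP vouches for the user and maintains these attributes;
# we never create or maintain the account ourselves.
print(authorize_federated({"subject": "bob@supplier.example",
                           "department": "purchasing"}))
```

The key property is the one the post names: when the partner terminates the employee, the federated login stops working immediately, with no deprovisioning required on our side.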
Then there are the cases of employees and contractors. First we’ll divide contractors into two groups – independent, individual contractors and those working for (or with) a contracted agency.
Those people under contract to a third-party agency, who do work for us under the control of the agency, are probably best handled as with other partners – by federation. The situation is a bit different, as we’d probably need to adjust authorizations for each individual depending on the work they were doing, but it’s also probably best that we let the contracting agency handle the initial authorization, especially as it’s that agency’s HR department that would hold all of the individual’s relevant identity data. Of course, should that contractor become a regular employee, we would also be faced with converting any data collected about that person from the federated system into identity data within our enterprise system.
For those contractors who are directly contracted by our organization – and not by way of a third-party agency – we should use the same controls we would for employees. Generally, the only difference is in the tax status of the employee or the legal status (e.g., “employment at will” statutes), both of which, from our perspective (Identity and Security), are irrelevant.
Employees and direct contractors are the ones who need individual accounts within our enterprise. These are the people we need to log in directly using the credentials we provide. These are the ones we need to scrutinize most, using multi-factor authentication when our risk-based systems suggest that we do. These are also the individuals who should be subjected to the most rigorous identity validation when they are first enrolled in our system, something the HR department should handle. By no stretch of the imagination should we ever consider using a social “BYOI” login for these people.
BYOI – especially in the case of social logins – by its very nature has a low level of assurance for identity when compared to enterprise-controlled systems (all else being equal). It’s useful for low-value transactions but – at least as it’s constituted today – should give security personnel nightmares if ever used for access to the organization’s valuable resources.
So to the question “does BYOI have a place in the enterprise?” we can answer with a qualified yes, but also a qualified no. The Information Risk & Security Summit 2013, coming up in Frankfurt Nov 27-28, will go into BYOI in much more detail. You should register now to attend.