2FA, it’s an abbreviation (word? acronym?) I see a lot these days. But it’s not, as I first thought, teenage texting slang (“OMG, that’s 2FA!”) for “too freakin’ amazing”. No, it’s short for “two-factor authentication”, which has been a hot topic and buzzword since Google announced its offering (although it calls it “two-step verification”) after the now-infamous hack that struck Wired magazine’s Mat Honan (see “The Honan Hack and the BYOI meme”) last summer. Suddenly everyone is writing about 2FA. Of course, they rarely mention that two weak factors can be worse than one strong factor – arguably the case with Google’s scheme.
But two-factor authentication is really only one case within the more established paradigm of multi-factor authentication (MFA), where “multi” stands for “more than one” and might be two but could be three, four or more. And multi-factor authentication is hardly the new kid on the block – I’ve been writing about it since last century.
Yes, it was in January 2000 that I wrote two newsletters about Novell’s new release, NMAS – Novell Modular Authentication Services. As I said at the time:
“NMAS lets network administrators choose among different authentication methods, including traditional password control and adding biometric and smart card methods. While biometric and smart card access isn't new, it's the control over the methods used, and the subsequent access granted, which makes NMAS a major addition to NDS security.”

With NMAS, you could specify one, two or even three factors to use for authentication, and the factors could be any of what you know (password), what you have (smart card) or what you are (biometric). Depending on the factor or factors used, the administrator could further restrict the user’s access rights. Nice to see that Google, Apple and others are finally climbing onto the MFA bandwagon.
MFA is, of course, an integral part of Risk-Based Access Control (RBAC), especially when it is invoked selectively depending on the risk factors involved in an authentication session.
You’ll remember, I hope (if not, go read “Passwords & Tokens & Eye Scans, Oh My!” – we’ll wait), that the calculated risk factor for an authentication/authorization event can be used to trigger multiple factors for verification in the authentication ceremony. It might simply be that someone is requesting access to high-value resources, or they may be requesting access from an unfamiliar location or platform. Or the access requested might fall outside the user’s standard pattern of time of day or time of year (e.g., tax season). Whatever the case, a calculation of high risk should lead to multi-factor authentication for that user at that time.
In some cases (attempts to log in as root or admin, for example) you should always look to MFA, because the risk is always going to be high.
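The step-up logic described above can be sketched in a few lines. This is a minimal illustration only – the signal names, weights and thresholds are my own assumptions, not taken from any particular product:

```python
# Sketch of risk-triggered step-up authentication.
# Signal names, weights and thresholds are illustrative assumptions.

def risk_score(ctx: dict) -> float:
    """Combine contextual signals into a 0.0-1.0 risk score."""
    score = 0.0
    if not ctx.get("known_device"):
        score += 0.3          # unfamiliar endpoint
    if not ctx.get("usual_location"):
        score += 0.3          # unfamiliar location
    if ctx.get("high_value_resource"):
        score += 0.3          # sensitive target
    if ctx.get("privileged_account"):
        score = 1.0           # root/admin: always high risk
    return min(score, 1.0)

def required_factors(ctx: dict) -> int:
    """Map the risk score to the number of factors demanded."""
    r = risk_score(ctx)
    if r >= 0.6:
        return 3              # e.g. password + token + out-of-band OTP
    if r >= 0.3:
        return 2
    return 1
```

So a root login always demands the full ceremony, while a routine login from a known office desktop might get by on a password alone.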
But it’s not just hardware tokens, biometrics and passwords that should make up the MFA mix. A lot of the contextual items you look at when evaluating risk can also be considered a second (or third) factor in the authentication ceremony.
If, for example, the user is accessing the network from their typical endpoint (office desktop PC, home PC, laptop, smartphone, etc.), then that can count almost as much as a hardware token. If your system then sends an out-of-band SMS to the user with a one-time password (OTP) to be entered during authentication, you might call this 3FA.
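One-time passwords like the one mentioned above are commonly generated with the HOTP algorithm standardized in RFC 4226 (HMAC over a moving counter, then dynamic truncation). A minimal sketch, using the RFC’s own test secret:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Generate an RFC 4226 one-time password."""
    # HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks the offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The server and the token share the secret and the counter; the SMS variant simply delivers the generated code out of band instead of computing it on a device.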
But how secure is 2FA, or MFA?
Noted security expert Bruce Schneier wrote back in 2009 (referencing something he’d written in 2005!) about hacking two-factor authentication:
“Here are two new active attacks we're starting to see:

Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank's real website. Done right, the user will never realize that he isn't at the bank's website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user's banking transactions while making his own transactions at the same time.

Trojan attack. Attacker gets Trojan installed on user's computer. When user logs into his bank's website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.”

Then why does everyone, it seems, believe that using two factors for authentication is better than using only one? It’s simple: if implemented properly, 2FA does reduce the risk of unauthorized access. Let’s say that the risk of unauthorized access using just a password is 1 chance in 20 (5%), which is probably a little high. Then let’s say that the risk when using a different factor (a hardware token, say) is lower, perhaps 1 in 1,000 (0.1%). What’s the risk when both are used? Assuming the two factors fail independently, you multiply the first factor (5%) by the second (0.1%), which yields 0.005%, or 1 in 20,000 – a much better risk factor, I think you’ll agree! Of course, if you use a higher-risk second factor (say 1%, or 1 in 100), then the overall risk is 1 in 2,000 (5% times 1%), which isn’t as secure as with the hardware token we postulated.
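The risk arithmetic above can be checked in a couple of lines (the figures are the same hypothetical ones used in the text, and the multiplication assumes the factors fail independently):

```python
# Combined risk of bypassing two independent authentication factors.
# These probabilities are the hypothetical figures from the text.

password_risk = 0.05   # 1 in 20
token_risk = 0.001     # 1 in 1,000
weak_second = 0.01     # 1 in 100

both = password_risk * token_risk
print(f"password + token: {both:.5%} (1 in {1 / both:,.0f})")
# → password + token: 0.00500% (1 in 20,000)

weaker = password_risk * weak_second
print(f"password + weak factor: {weaker:.3%} (1 in {1 / weaker:,.0f})")
# → password + weak factor: 0.050% (1 in 2,000)
```

Note that the independence assumption is exactly what the attacks quoted above break: a man-in-the-middle who phishes the password can often phish the OTP in the same session, so the real-world combined risk is higher than the naive product.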
The important thing to remember, though, is that you need to set a realistic risk factor for each authentication factor in your ceremony. The same realistic view should also govern how you look at the various context factors when weighing the risk involved in any particular transaction.
The bottom line is that it’s all about the risk, and your job is to minimize the risk either through strengthened authentication protocols or through reduced authorization rights – or both. I’ll be going into more depth on this when I present “Versatile Authentication, Risk- and Context-Based Authentication: Why you need these Concepts” along with some lively panel discussion on the topic at the European Identity & Cloud Conference 2013 coming up next month. I hope you’ll be there.