Blog posts by John Tolbert
As cybercrime and concerns about it grow, tools for preventing and interdicting cybercrime, specifically for reducing online fraud, are proliferating in the marketplace. Many of these new tools bring real value: they do in fact make it harder for criminals to operate, and they do reduce fraud.
Several categories of tools and services compose this security ecosystem. On the supply side there are various intelligence services. The forms of intelligence provided may include information about:
- Users: Users and associated credentials, credential and identity proofing results, user attributes, user history, behavioral biometrics, and user behavioral analysis. Output format is generally a numerical range.
- Devices: Device type, device fingerprint from Unified Endpoint Management (UEM) or Enterprise Mobility Management (EMM) solutions, device hygiene (operating system patch versions, anti-malware and/or UEM/EMM client presence and versions, and Remote Access Trojan detection results), Mobile Network Operator carrier information (SIM, IMEI, etc.), jailbreak/root status, and device reputation. Output format is usually a numerical range.
- Cyber Threat: IP and URL blacklisting status and mapped geo-location reputation, if available. STIX and TAXII are standards used for exchanging cyber threat intel. Besides these standards, many proprietary exchange formats exist as well.
- Bot and Malware Detection: Analysis of session and interaction characteristics to assess the likelihood of manipulation by bots or malware. Output format can be Boolean, or a numerical range of probabilities, or even text information about suspected malware or botnet attribution.
Risk-adaptive authentication and authorization systems are the primary consumers of these types of intelligence. Conceptually, risk-adaptive authentication and authorization functions can be standalone services or can be built into identity and web access management solutions, web portals, VPNs, banking apps, consumer apps, and many other kinds of applications.
Depending on the technical capabilities of the authentication and authorization systems, administrators can configure risk engines to evaluate one or more of these different kinds of intelligence sources in accordance with policies. For example, consider a banking application. In order for a high-value transaction (HVT) to be permitted, the bank requires a high assurance that the proper user is in possession of the proper registered credential, and that the requested transaction is intended by this user. To accomplish this, the bank’s administrators subscribe to multiple “feeds” of intelligence which can be processed by the bank’s authentication and authorization solutions at transaction time.
The results of a runtime risk analysis that yields ‘permit’ may be interpreted as “yes, there is a high probability that the proper user has authenticated using a high assurance credential from a low risk IP/location, the request is within previously noted behavioral parameters for this user, and the session does not appear to be influenced by malware or botnet activity.”
This is great for the user and for the enterprise. However, it can be difficult to implement by the administrators because there are few standards for representing the results of intelligence-gathering and risk analysis. The numerical ranges mentioned above vary from service to service. Some vendors provide scores from 0 to 99 or 999. Others range from -100 to 100. What do the ranges mean? How can the scores be normalized across vendors? Does a score of 75 from intel source A mean the same as 750 from intel source B?
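To see why cross-vendor scores are tricky, consider a naive linear rescaling sketch; the vendor ranges below are the hypothetical ones mentioned above, and the mapping is purely illustrative:

```python
# Sketch: naively normalizing risk scores from different intel vendors onto a
# common 0-99 scale. Vendor ranges here are hypothetical illustrations.
def normalize(score, lo, hi, target_lo=0, target_hi=99):
    """Linearly map a vendor score from [lo, hi] onto [target_lo, target_hi]."""
    fraction = (score - lo) / (hi - lo)
    return round(target_lo + fraction * (target_hi - target_lo))

# Vendor A scores 0-99, vendor B scores 0-999, vendor C scores -100 to 100.
print(normalize(75, 0, 99))      # 75 -> already on the target scale
print(normalize(750, 0, 999))    # 74 -> roughly comparable to A's 75
print(normalize(50, -100, 100))  # 74 -> but does C's "50" really mean that?
```

Linear rescaling makes the numbers comparable, but not necessarily the meanings: whether a "50" on a signed scale expresses the same assurance as a "74" elsewhere is exactly the semantic question standardization would need to answer.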
Perhaps there is room for a little more standardization. What if a few attribute name value pairs were introduced and ranges limited to improve interoperability and to make it easier for policy authors to implement? Consider the following claims set, which could be translated into formats such as JWT, SAML, XACML, etc :
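As a hypothetical sketch (the user ID, timestamps, and attribute values are illustrative), such a claims set might be assembled like this:

```python
import json
import time

# Hypothetical claims set combining intelligence attributes for a risk engine.
# Attribute names mirror the proposal in the text; values are illustrative.
now = int(time.time())
claims = {
    "iss": "IntelSource",           # issuer of the intelligence
    "iat": now,                     # issued-at timestamp
    "exp": now + 300,               # short expiry keeps the intel fresh
    "aud": "RiskEngine",            # intended consumer
    "sub": "jane.doe@example.com",  # subject: the user ID (illustrative)
    "UserAssuranceLevel": 92,       # 0-99; high = likely the proper user
    "DeviceAssuranceLevel": 88,     # 0-99; high = healthy, recognized device
    "BotProbability": 3,            # 0-99; low = little sign of automation
}
print(json.dumps(claims, indent=2))
```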
The above example* shows an Issuer of “IntelSource”, with timestamp and expiry, Audience of “RiskEngine”, Subject (user ID), and 3 additional attributes: “UserAssuranceLevel”, “DeviceAssuranceLevel”, and “BotProbability”. These new attributes are composites of the information types listed above for each category. Ranges for all 3 attributes are 0-99. In this example, the user looks legitimate. Low user and device assurance levels and/or high bot probability would make the transaction look like a fraud attempt.
KuppingerCole believes that standardization of a few intelligence attributes as well as normalization of values may help with implementation of risk-adaptive authentication and authorization services, thereby improving enterprise cybersecurity posture.
*Thanks to http://jwtbuilder.jamiekurtz.com/ for the JWT sample.
The EU's European Banking Authority (EBA) issued clarifications about what constitutes Strong Customer Authentication (SCA) back in late June. The definition states that two or more of the following categories are required: inherence, knowledge, and possession. These are often interpreted as something you are, something you know, and something you have, respectively. We have compiled and edited the following table from the official EBA opinion:
| Element | Compliant with SCA? |
|---|---|
| **Inherence elements** | |
| Hand and face geometry | Yes |
| Retina and iris scanning | Yes |
| Behavioral biometrics, including keystroke dynamics, heart rate or other body movement patterns that uniquely identify PSUs (Payment Service Users), and mobile device gyroscopic data | Yes |
| Information transmitted using EMV 3-D Secure 2.0 | No |
| **Knowledge elements** | |
| Password, passphrase, or PIN | Yes |
| Knowledge-based authentication (KBA) | Yes |
| Memorized swiping path | Yes |
| Email address or username | No |
| Card details (including CVV codes on the back) | No |
| **Possession elements** | |
| Possession of a device evidenced by an OTP generated by, or received on, a device (hardware/software token generator, SMS OTP) | Yes |
| Possession of a device evidenced by a signature generated by a device (hardware or software token) | Yes |
| Card or device evidenced through a QR code (or photo TAN) scanned from an external device | Yes |
| App or browser with possession evidenced by device binding, such as through a security chip embedded in a device, a private key linking an app to a device, or the registration of a web browser linking the browser to a device | Yes |
| Card evidenced by a card reader | Yes |
| Card with possession evidenced by a dynamic card security code | Yes |
| App installed on the device | No |
| Card with possession evidenced by card details (printed on the card) | No |
| Card with possession evidenced by a printed element (such as an OTP list, e.g. "Grid Cards") | No |
The list and details about implementations are subject to change. Check the EBA site for updates. KuppingerCole will also follow and provide updates and interpretations.
The EBA appears to be rather generous in what can be used for SCA, especially considering the broad range of biometric types on the list. However, a recent survey by GoCardless indicates that not all consumers trust and want to use biometrics, and these attitudes vary by country across the EU.
Although KBA is still commonly used, it should be deprecated due to the ease with which fraudsters can obtain KBA answers. The acceptance of smart cards or other hardware tokens is unlikely to make much of an impact, since most consumers aren’t going to carry special devices for authenticating and authorizing payments. Inclusion of behavioral biometrics is probably the most significant and useful clarification on the list, since it allows for frictionless and continuous authentication.
In paragraph 13, the EBA opinion opened the door for possible delays in SCA implementation: “The EBA therefore accepts that, on an exceptional basis and in order to avoid unintended negative consequences for some payment service users after 14 September 2019, CAs may decide to work with PSPs and relevant stakeholders, including consumers and merchants, to provide limited additional time to allow issuers to migrate to authentication approaches that are compliant with SCA…”
Finextra reported this week that the UK Financial Conduct Authority has announced an extension to March 2021 for all parties to prepare for SCA. The Central Bank of Ireland is following a similar course of delays. Given that various surveys place awareness of and readiness for PSD2 SCA on the part of merchants between 40-70%, it is not surprising to see such extensions. In fact, it is likely that the Competent Authorities in more member states will follow suit.
While these moves are disappointing in some ways, they are also realistic. Complying with SCA provisions is not a simple matter: many banks and merchants still have much work to do, including modernizing their authentication and CIAM infrastructures to support it.
Account Takeover (ATO) attacks are on the rise. The 2019 Forter Fraud Attack Index shows a 45% increase in this type of attack on consumer identities in 2018. ATOs are just what they sound like: cybercriminals gain access to accounts through various illegal means and use these taken-over accounts to perpetrate fraud. How do they get access to accounts? There are many technical methods that bad actors can use, such as phishing emails; pharming through fake websites; collection of credentials from keyloggers, rootkits, or botnets; harvesting cookie data using spyware; credential stuffing; brute-force password guessing; or perusing compromised credential lists on the dark web. However, they don’t even have to use sophisticated means. Sometimes account information can be found on paper, so variations of “dumpster diving” still work.
Once cybercriminals have account information, depending on the type of account, they can use it for many different kinds of fraud. Of course, financial fraud is a top concern. A banking overlay is a type of mobile malware that looks like a legitimate banking site but is designed to capture bank customer credentials. Banking overlays usually pass on user interaction to the underlying banking app, but also pass on the captured credentials to the malicious actors. Some are sophisticated enough to grab SMS OTPs, thereby defeating that form of 2FA. This problem is more acute on Android than iOS. Using mobile anti-malware and ensuring that users get apps from trusted app stores can help prevent this kind of attack.
Consumer banking is not the only kind of financial industry targeted by cybercriminals. B2B banks, mortgage brokers, investment banks, pension fund managers, payment clearing houses, and cryptocurrency exchanges are also under attack. From the cybercriminals’ point of view, it is easier to attack the end user and the often-less-secured apps they use than to attack financial industry infrastructure.
Just about any online site that exchanges anything of value has become a target for fraudsters. Airline frequent flyer programs and other kinds of travel/hospitality loyalty programs made up 13% of all accounts for sale on the dark web as of the end of 2018. Other consumer rewards programs that can be monetized are also being stolen and brokered. Digital goods, such as in-game purchases, can be highly sought-after, so there are black markets for gamer credentials.
ATO fraud has hit the insurance sector in a big way in recent years. Fraudsters use ATO methods to get insurance customer credentials to submit claims and redirect payouts. Some malicious actors go after insurance agent credentials to facilitate claims processing and get even bigger gains.
Though these stories have been circulating for years, real estate and escrow agents are still occasionally getting ATO’d, such that the home buyers are deceived into transferring large sums to fraudsters during real estate closing deals.
Consumer-facing businesses need to take two major steps to help reduce ATO fraud.
Implement MFA, and not just SMS OTP. This is the biggest bang for the buck. Passwords are ineffective, and SMS OTP can be compromised. Use securely designed mobile apps, built with mobile security SDKs. In-app push notifications and native biometrics are a better choice than passwords and texted passcodes. The FIDO Alliance has standardized 2FA and mobile-based MFA, and FIDO2, released this year, greatly improves interoperability with web applications. Use FIDO authentication mechanisms for not only better security, but also enhanced privacy and a more pleasant consumer experience. For comprehensive reviews of MFA products, see our Leadership Compasses on Cloud-based MFA and Adaptive Authentication (on-premises products).
Use fraud reduction intelligence services for real-time analysis of many pertinent behavioral and environmental factors to reduce fraud risk. Examples of factors that fraud reduction platforms evaluate include user behavior, behavioral biometrics, device hygiene, device reputation, geo-location, geo-velocity, bot intelligence, and cyber threat intelligence. These solutions employ machine learning (ML) techniques to more efficiently identify potentially malicious behavior.
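As a sketch of how such a service might combine factors into a single decision (the factor names, weights, and threshold below are illustrative, not any vendor's actual model):

```python
# Sketch: combining fraud-intelligence factors into one risk score.
# Factor names and weights are illustrative, not any vendor's actual model.
WEIGHTS = {
    "user_behavior_anomaly": 0.30,
    "device_hygiene_risk":   0.20,
    "geo_velocity_risk":     0.25,
    "bot_probability":       0.25,
}

def risk_score(factors: dict) -> float:
    """Weighted sum of factor scores, each expressed on a 0.0-1.0 scale."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

# A session that looks normal except for an implausible location jump:
session = {"user_behavior_anomaly": 0.1, "device_hygiene_risk": 0.2,
           "geo_velocity_risk": 0.9, "bot_probability": 0.05}
score = risk_score(session)
action = "step_up_auth" if score > 0.3 else "allow"
print(score, action)
```

In a real deployment the weights would be learned by ML models from labeled fraud outcomes rather than fixed by hand, and the action would feed a policy engine rather than a simple threshold.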
ATOs and how to mitigate them will be one of the main topics discussed at our upcoming Consumer Identity World event in Seattle from September 25-27, 2019. For more information, please see the event page at https://www.kuppingercole.com/events/ciwusa2019. KuppingerCole will be publishing additional research on Fraud Reduction Intelligence Technologies in the near future. Stay tuned.
This week Skylight Cyber disclosed that they were able to fool a popular “AI”-based Endpoint Protection (EPP) solution into incorrectly marking malware as safe. While trying to reverse-engineer the details of the solution's Machine Learning (ML) engine, the researchers found that it contained a secondary ML model added specifically to whitelist certain types of software like popular games. Supposedly, it was added to reduce the number of false positives their "main engine" was producing. By dumping all strings contained in such a whitelisted application and simply appending them to the end of a known piece of malware, the researchers were able to avoid its detection completely, as shown in their demo video.
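As a toy illustration of the technique (this is not the vendor's actual model), consider a naive scorer in which a whitelist override rewards "known good" strings; appending those strings to a piece of malware flips the verdict:

```python
# Toy illustration only -- not the vendor's actual engine. A naive score that
# rewards "known good" strings can be gamed by appending them to malware.
GOOD_STRINGS = {b"GameEngine.dll", b"Copyright PopularGameCo"}
BAD_STRINGS = {b"keylogger", b"ransom_note.txt"}

def maliciousness(sample: bytes) -> int:
    score = sum(3 for s in BAD_STRINGS if s in sample)
    score -= sum(5 for s in GOOD_STRINGS if s in sample)  # whitelist override
    return score

malware = b"...keylogger...ransom_note.txt..."
assert maliciousness(malware) > 0                  # detected as malicious
evasive = malware + b"GameEngine.dll Copyright PopularGameCo"
assert maliciousness(evasive) <= 0                 # now scored as benign
```

The real engine was far more sophisticated, but the structural flaw is the same: an override layer bolted on after the main model gives attackers a deterministic lever to pull.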
This finding is just another confirmation of the inherent challenges of designing ML-based cybersecurity products. Here are some of the issues:
- The advantages that ML-enhanced cybersecurity tools provide can be easily defeated if overrides are used to eliminate false positives rather than proper training of ML algorithms. ML works best when fed as much data as possible, and when products are implemented using the right combination of supervised and unsupervised ML methods. It’s possible that whitelisting would not have been necessary if sufficient algorithmic training had been performed.
- ML can be gamed. Constraining data sets or simply not having enough data piped through the appropriate mix of ML algorithms can lead to bias, which can lead to missed detections in the cybersecurity realm. This can be either intentional or unintentional. In cases of intentional gaming, malicious actors select subsets of data with which to train the discriminator, while purposely omitting others. In the unintentional case, software developers may not have access to a full sample set or may simply choose to not use a full sample set during the construction of the model.
- Single-engine EPP products are at a disadvantage compared to multi-engine products. Using “AI” techniques in cybersecurity, especially in EPP products, is an absolute necessity. With millions of new malware variants appearing monthly, human analysts can’t analyze and build signatures fast enough. It is infeasible to rely on signature-based AV alone, and this has been true for years. However, just because signature-based engines are not completely effective doesn’t mean that products should abandon that method in favor of a different single method. The best endpoint protection strategy is to use a mixture of techniques, including signatures, ML-enhanced heuristics, behavioral analysis, sandboxing, exploit prevention, memory analysis, and micro-virtualization. Even with an assortment of malware detection/prevention engines, EPP products will occasionally miss a piece of malicious code. For those rare cases, most endpoint security suite vendors have Endpoint Detection & Response (EDR) products to look for signs of compromise.
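The multi-engine argument can be sketched as follows; the engine logic is mocked for illustration, but the point holds: any single engine can miss, while independent detection methods together raise coverage:

```python
# Sketch: a multi-engine EPP verdict. Each engine is a mock stand-in for a
# real detection method (signatures, ML heuristics, behavioral analysis).
def signature_engine(sample):  return "known_malware_sig" in sample
def heuristic_engine(sample):  return sample.count("obfuscated") >= 2
def behavior_engine(sample):   return "deletes_shadow_copies" in sample

def verdict(sample):
    engines = (signature_engine, heuristic_engine, behavior_engine)
    hits = [e.__name__ for e in engines if e(sample)]
    return ("block", hits) if hits else ("allow", hits)

# A novel variant with no known signature is still caught by behavior analysis.
print(verdict("obfuscated payload deletes_shadow_copies"))
```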
- Marketing ML-enhanced tools as an “AI” panacea has drawbacks. ML tools are a commodity now. ML techniques are used in many cybersecurity tools, not just EPP. ML is in most data analytics programs as well. It’s a necessary component to deal with enormous volumes of data in most applications. The use of the term “AI” in marketing today suggests infallibility and internal self-sufficiency. But such tools can make mistakes, and they don’t eliminate the need for human analysts.
KuppingerCole is hosting an AImpact Summit in Munich in November where we’ll tackle these very issues. The Call for speakers is open.
For an in-depth comparison of EPP vendors, see our Leadership Compass on Endpoint Security: Anti-Malware.
This week, Facebook announced details about its cryptocurrency project, Libra. They expect it to go live for Facebook and other social media platform users sometime in 2020. The list of initial backers, the Founding Members of the Libra Association, is quite long and filled with industry heavyweights such as Coinbase, eBay, Mastercard, PayPal, and Visa. Other tech companies including Lyft, Spotify, and Uber are Founding Members, as well as Andreessen Horowitz and Thrive Capital.
Designed to be a peer-to-peer payment system, Libra will be backed by a sizable reserve and pegged to physical currencies to dampen wild currency fluctuations and speculation. The Libra Association will manage the reserve, and it will not be accessible to users. The Libra Association will mint and destroy Libra Coins in response to demand from authorized resellers. Founding Members will have validator voting rights. As we can see from the short list above, Libra Founding Members are large organizations, and they will have to buy in with Libra Investment Tokens. This investment is intended to incentivize Founding Members to adequately protect their validators. Libra eventually plans to transition to a proof-of-stake system in which Founding Members will receive voting rights proportional to their Investment Tokens (capped at 1%). They expect this to facilitate the move to a permissionless blockchain at some point in the future. The Libra blockchain will therefore start off as permissioned and closed. The Libra roadmap can be found here.
Let’s look at some of the interesting technical details related to security that have been published at https://libra.org. The Libra protocol takes advantage of lessons learned over the last few years of blockchain technologies. For example, unlike Bitcoin, which depends on the accumulation of transactions into blocks before they are committed, in Libra individual transactions compose the ledger history. The consensus protocol handles aggregation of transactions into blocks; thus, sequential transactions and events can be contained in different blocks.
Authentication to accounts will use private key cryptography, and the ability to rotate keys is planned. Multiple Libra accounts can be created per user, and user accounts will not necessarily be linked to other identities. This follows the Bitcoin and Ethereum model of pseudonymity. Libra accounts will be collections of resources and modules. Libra “serialize(s) an account as a list of access paths and values sorted by access path. The authenticator of an account is the hash of this serialized representation. Note that this representation requires recomputing the authenticator over the full account after any modification to the account... Furthermore, reads from clients require the full account information to authenticate any specific value within it.”
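A simplified sketch of the quoted scheme (this is not Libra's actual byte-level encoding; the access paths and hash choice are illustrative):

```python
import hashlib

# Illustrative sketch of the quoted scheme: serialize an account as a list of
# (access path, value) pairs sorted by access path, then hash the result.
def account_authenticator(account: dict) -> str:
    serialized = "|".join(f"{path}={value}"
                          for path, value in sorted(account.items()))
    return hashlib.sha3_256(serialized.encode()).hexdigest()

account = {"/balance": "120", "/module/wallet": "0xABCD"}
before = account_authenticator(account)
account["/balance"] = "95"  # any modification changes the whole authenticator
assert account_authenticator(account) != before
```

The consequence noted in the quote falls out directly: because the hash covers the full sorted serialization, any change to any path forces recomputation over the whole account, and verifying a single value requires the full account data.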
Transaction fees in Libra will adhere to an Ethereum-like “gas” model: senders name a price they are willing to pay, and if the cost to the validators exceeds the number of units at that price, the transaction aborts. The ledger won’t be changed, but the sender will still be charged the fee. This is designed to keep fees low during times of high transaction volumes. Libra foresees that this may help mitigate against DDoS attacks. It also will prevent senders from overdrawing their accounts, because the Libra protocol will check to make sure there is enough Libra coin to cover the cost of the transaction prior to committing it.
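The fee model described above can be sketched as follows (function and parameter names, and the numbers, are illustrative):

```python
# Sketch of the gas-style fee check described above. The sender names a
# maximum number of fee units and a price; names/numbers are illustrative.
def execute(sender_balance, max_units, unit_price, actual_units):
    """Return (new_balance, committed?) for one transaction attempt."""
    max_fee = max_units * unit_price
    if sender_balance < max_fee:
        return sender_balance, False  # rejected up front: cannot cover the fee
    if actual_units > max_units:
        # Cost exceeded the named units: ledger unchanged, fee still charged.
        return sender_balance - max_fee, False
    return sender_balance - actual_units * unit_price, True

print(execute(1000, max_units=10, unit_price=2, actual_units=8))   # (984, True)
print(execute(1000, max_units=10, unit_price=2, actual_units=15))  # (980, False)
print(execute(10,   max_units=10, unit_price=2, actual_units=8))   # (10, False)
```

Charging the fee even on abort is what makes flooding the network with deliberately failing transactions expensive, which is the DDoS-mitigation angle mentioned above.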
The Libra Protocol will use a new programming language, called Move, which will be designed to be extensible to allow user-defined data-types and smart contracts. There will be no copy commands in Move, only create/destroy, to avoid double-spend. Programmers will be able to write in higher level source and intermediate representation languages, which will be output to a fairly simple and constrained bytecode which can be type- and input-verified for security. Transactions are expected to be atomic, in that each should contain a single operation. In Libra, modules will contain code and resources will have data, which is in contrast to Ethereum, where a smart contract contains both code and data.
Another interesting concept in Libra is the event. An event is defined as a change of state resulting from a transaction. Each transaction can cause multiple events. For example, payments result in corresponding increases and decreases in account balances. Libra will use a variant of the HotStuff consensus protocol called LibraBFT (Byzantine Fault Tolerance), which is architected to withstand multiple malicious actors attempting to hack or sabotage validators. The HotStuff consensus protocol is not based on proof-of-work, thereby avoiding performance and environmental concerns. Libra intends to launch with 100 validators, and eventually increase to 500-1,000 validator nodes.
Libra Core code will be written in Rust and open sourced. Facebook and Libra acknowledge that security of the cryptocurrency exchange depends on the correct implementation of validator node code, Move apps, and the Move VM itself. Security must be a high priority, since cryptocurrency exchanges are increasingly under attack.
Facebook’s new subsidiary Calibra will build the wallet app. Given that Coinbase and others in the business are on the board, it’s reasonable to expect that other cryptocurrency wallets will accept Libra too. Facebook and other cryptocurrency wallet makers must design security and privacy into these apps as well as the protocol and exchange. Wallets should take advantage of features such as TPMs on traditional hardware and Secure Elements / Trusted Execution Environment and Secure Enclave on mobile devices. Wallets should support strong and biometric authentication options.
Users will have no guarantees of anonymity due to international requirements for AML and KYC. Facebook claims social media profile information and Libra account information will be kept separate, and only shared with user consent. Just how Facebook will accomplish this separation remains to be seen, and the global public has legitimate trust issues with Facebook. Nevertheless, Facebook, WhatsApp, and Instagram have 2.3B, 1.6B, and 1.0B user accounts respectively. Even allowing for overlap, that user base is larger than the populations of the two most populous countries combined.
The moral argument in favor of cryptocurrencies has heretofore been that blockchain technologies will benefit the “unbanked”, the roughly 2B people who do not have bank accounts. If Libra takes off, more of the unbanked may gain access to affordable payment services, provided there is a sizable intersection between those who have social media accounts and those who are unbanked.
Much work, both technical and political, remains to be done if Libra is to come to fruition. Government officials have already spoken out against it in some areas. Libra will have to be regulated in many jurisdictions. An open/permissionless blockchain model would help with transparency and independent audit, but it could be years before Libra moves in that direction. While Libra runs as closed/permissioned, they will face more resistance from regulators around the world.
Facebook and the Libra Association will have to handle not only a mix of financial regulations such as AML and KYC, but also privacy regulations like GDPR, PIPEDA, CCPA, and others. There was no mention in the original Libra announcement about support for EU PSD2, which will soon govern payments in Europe. PSD2 mandates Strong Customer Authentication and transactional risk analysis for payment transactions. Besides the technical and legal challenges ahead, Facebook and Libra will then have to convince users to actually use the service.
Initiating payments from a social media app has been done already: WeChat, for example. So it’s entirely possible that Libra will succeed in some fashion. If Libra does take off in the next couple of years, expect massive disruption in the payments services market. It is too early to accurately predict the probability of success or the long-term impact if it is successful. KuppingerCole will follow and report on relevant developments. This is sure to be a topic of discussion at our Digital Finance World and Blockchain Enterprise Days events coming up in September in Frankfurt, Germany.
It seems almost every week in cybersecurity and IAM we read of a large company buying a smaller one. Many times, it is a big stack vendor adding something missing from its catalog, or buying a regional competitor. Sometimes it’s a medium-sized technology vendor picking up a promising start-up. In the olden days (15+ years ago), start-ups hoped to go public via an IPO. IPOs are far less common today. Why? Mostly because an IPO is an expensive, time-consuming process that doesn’t achieve the returns it once did. Going public was often just an interim step to being acquired by a large vendor, so why not skip ahead?
Mergers are not common for a few reasons. A merger implies a coming together of near-equals, and executives and boards of directors don’t usually see it this way. So even when mergers happen, they’re often spun as simple acquisitions, where one brand survives while the other fades away. Mergers also mean de-duplication of products and services, and downsizing of workforces. Mergers can be difficult for customers of both former brands to endure as well.
In the last few years, we’ve increasingly seen equity firms purchase mature start-ups and assemble portfolios of tech vendors. I say “mature start-up” because, instead of the “3 years and out” that occasionally worked in the early 2000s, now vendors are often taking investment (Series A, B, C, D, etc.) 5-7 years or more after founding. When equity firms pick up such companies, the purchased vendor generally retains their brand in the marketplace. The equity firms typically have 3-5 year plans to streamline the operations of the components in their portfolios, make each company profitable, build value, and then sell again.
Other times large companies spin off divisions that are “not part of their core competencies”. Maybe those divisions are not doing well under current management and might fare better in the market where they can have some brand separation and autonomy.
What motivates acquisitions? There are four major reasons companies merge with or buy others:
- To acquire technology
- To acquire customers
- To acquire territory
- Unknown (deals whose rationale is unclear even to industry observers)
Acquiring a new technology to integrate into an existing suite is a straightforward motivation. Picking up a smaller competitor to access their customer base is also a common strategy, provided it doesn’t run afoul of anti-trust laws. Large regional vendors will sometimes buy or merge with similar companies in other regions to gain overall market share. These can be smart strategies for building a global footprint in the market.
Every now and then, however, we read about deals that don’t make sense in the industry. This is the unknown category. Sometimes big companies do acquire smaller competition, but do not integrate, extend, or service the purchased product. Dissatisfied customers leave. Overall brand reputation suffers. These deals turn out to be mistakes in the long run, only benefitting the owners of the purchased company. A better plan is to out-compete rather than buy-out the competition.
Customers of vendors that are being bought or divested have questions: what will happen to the product I use? Will it be supported? Will it go away? Will I have to migrate to a combined offering? If so, is now the time to do an RFP to replace it?
IT executives in end-user organizations may hold conflicting views about M&A activities. On the one hand, consolidation in the market can make vendor and service management easier: fewer products to support and fewer support contracts to administer. On the other hand, innovation in large companies tends to be slower than in smaller companies. It’s a momentum thing. As an IT manager, you need your vendor to support your use cases. Use cases evolve. New technical capabilities are needed. Depending on your business requirements and risk tolerance, you may occasionally have to look for new vendors to meet those needs, which means more products to support and more contracts to manage. Beware the shiny, bright thing!
Recommendation: executives in companies that are acquiring others or are being divested need to
- Quickly develop, or at least sketch, roadmaps for the products/services that are being acquired or divested. Sometimes plans change months or years after the event; when they do, let customers know.
- Communicate those roadmaps, as well as they are known at the time of acquisition or divestiture. Explain the expected benefits of the M&A activity and the new value proposition. This will help reduce uncertainty in the market and perhaps prevent premature customer attrition.
In summary: there will always be mergers, acquisitions, and divestitures in the security and identity market. Consolidation happens, but new startups emerge every quarter with new products and services to address unmet business requirements. IT managers and personnel in end-user organizations need to be aware of changes in the market and how those changes may impact their businesses.
Likewise, executives in vendor companies, investors, VCs, and equity firms need to be cognizant of current market trends as well as make predictions about the impact and success of proposed ventures. This can help to avoid those deals that leave everyone scratching their heads, wondering why they did that. At KuppingerCole, we understand the cyber and IAM markets, and know the products and services in those fields. Stay on top of the latest security and identity product evaluations at www.kuppingercole.com.
Digital Transformation is one of those buzzwords (technically a buzzphrase, but buzzphrase isn’t a buzzword yet) that gets used a lot in all sorts of contexts. You hear it from IT vendors, at conferences, and in the general media. But Digital Transformation, or DT as we like to abbreviate it, is much more than that. DT is commonly regarded as a step or process that businesses go through to make better use of technology to deliver products and services to customers, consumers, and citizens. This is true for established businesses, but DT is enabling and creating entirely new businesses as well.
When we hear about DT, we think of smart home products, wearable technologies, connected cars, autonomous vehicles, etc. These are of course mostly consumer products, and most have digital device identity of some type built in. Manufacturers use device identity for a variety of reasons: to track deployed devices and utilization, to push firmware and software updates, and to associate devices with consumers.
To facilitate secure, privacy-respecting, and useful interactions with consumers of DT technologies, many companies have turned to Consumer Identity and Access Management (CIAM) solutions. CIAM solutions can provide standards-based mechanisms for registering, authenticating, authorizing, and storing consumer identities. CIAM solutions usually offer identity and marketing analytics, or APIs, to extract more value from consumer business. CIAM is foundational: an absolutely necessary component of DT.
CIAM solutions differ from traditional IAM solutions in that they take an “outside-in” as opposed to an “inside-out” approach. IAM stacks were designed from the point of view that an enterprise provisions and manages all the identities of its employees. HR is responsible for populating most basic attributes, and then managers add other attributes for employee access controls. This model was extended to business partners and B2B customers throughout the 1990s and early 2000s, and in some cases, to consumers. Traditional IAM was often found lacking by consumer-driven businesses in terms of managing their end-user identities. HR and company management don’t provision and manage consumer identities. Moreover, the types of attributes and data about consumers that businesses need today were not well suited to being serviced by enterprise IAM systems.
Thus, CIAM systems began appearing in the 2010s. CIAM solutions are built to allow consumers to register with their email addresses, phone numbers, or social network credentials. CIAM solutions progressively profile consumers so as not to overburden users at registration time. Most CIAM services provide user dashboards for data usage consent, review, and revocation, which aids in compliance with regulations such as EU GDPR and CCPA.
CIAM services generally accept a variety of authenticators that can be used to match identity and authentication assurance levels with risk levels. CIAM solutions can provide better – more usable and more secure – authentication methods than old password-based systems. Consumers are tired of the seemingly endless trap of creating new usernames and passwords, answering “security questions” that are inherently insecure, and getting notified when their passwords and personal data are breached and published on the dark web. Companies with poor implementations of consumer identity miss out on marketing opportunities and sales revenue; they also can lose business altogether when they inconvenience users with registration and password authentication, and they suffer reputation damage after PII and payment card breaches.
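The idea of matching authenticators to risk levels can be sketched in a few lines. This is a purely illustrative, hypothetical example: the authenticator names, score thresholds, and function names below are assumptions for demonstration, not any vendor's actual API.

```python
# Hypothetical sketch of risk-adaptive authentication: map a numeric
# risk score (such as one produced by device or behavioral
# intelligence feeds) to the minimum acceptable authenticator.
# All names and thresholds here are illustrative assumptions.

# Authenticators ranked by assurance level (higher = stronger).
ASSURANCE = {"password": 1, "otp": 2, "fido2": 3}

def required_assurance(risk_score: float) -> int:
    """Translate a 0.0-1.0 risk score into a minimum assurance level."""
    if risk_score < 0.3:
        return 1   # low risk: a password alone may suffice
    if risk_score < 0.7:
        return 2   # medium risk: step up to an OTP or better
    return 3       # high risk: require a phishing-resistant factor

def is_sufficient(method: str, risk_score: float) -> bool:
    """Decide whether the presented authenticator meets the risk level."""
    return ASSURANCE[method] >= required_assurance(risk_score)
```

In practice the thresholds and the authenticator catalog would be policy-driven and tuned per application, but the core pattern is the same: the riskier the session looks, the stronger the factor demanded.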
In addition to common features, such as registration and authentication options, consider the following functional selection criterion from our newly published Buyer’s Guide to CIAM: compromised credential intelligence, which can lower the risk of fraud. Millions of username/password combinations, illegally acquired through data breaches, are available on the dark web for use by fraudsters and other malefactors. Compromised credential intelligence services alert subscribers to the attempted use of known bad credentials. All organizations deploying CIAM should require and use this feature. Some CIAM solutions, primarily the SaaS vendors, detect and aggregate compromised credential intelligence from across all tenants on their networks. The effectiveness of this approach depends on the size of their combined customer base. On-premises CIAM products should allow for consumption of third-party compromised credential intelligence.
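A compromised-credential check can be sketched with the k-anonymity pattern popularized by breach-intelligence APIs, in which only a short hash prefix ever leaves the CIAM system. The breach set and function names below are illustrative stand-ins; a real deployment would query a third-party intelligence feed rather than a local set.

```python
# Illustrative sketch of compromised-credential screening. The
# BREACHED_HASHES set simulates a breach-intelligence feed; in a live
# integration only the 5-character hash prefix would be sent to the
# feed, and the suffix comparison would happen locally.
import hashlib

# Simulated feed: SHA-1 hashes of known-breached passwords.
BREACHED_HASHES = {
    hashlib.sha1(pw.encode()).hexdigest().upper()
    for pw in ("password", "123456", "qwerty")
}

def is_compromised(password: str) -> bool:
    """Return True if the password's hash appears in the breach set."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A real feed would return every suffix matching `prefix`;
    # here we filter the local stand-in set the same way.
    candidates = {h[5:] for h in BREACHED_HASHES if h.startswith(prefix)}
    return suffix in candidates
```

On a positive match, a CIAM service would typically force a password reset or step-up authentication rather than silently reject the login.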
Lastly, CIAM solutions can scale much better than traditional IAM systems. Whereas IAM stacks were architected to handle hundreds of thousands of users with often complex access control use cases, some CIAM services can store billions of consumer identities and process millions to hundreds of millions of login events and transactions.
Over the last few years, enterprise IAM vendors have gotten in on the CIAM market. In many cases they have extended or modified their “inside-out” model to be more accommodating of the “outside-in” reality of consumer use cases. Additionally, though traditional IAM was usually run on-premises, pure-play CIAM started out in the cloud as SaaS. Today almost all CIAM vendors, including those with an enterprise IAM heritage, offer CIAM as SaaS.
Thus, CIAM is a real differentiator that can help businesses grow through the process of DT by providing better consumer experiences, enhanced privacy, and more security. Without CIAM, in the age of DT, businesses face stagnation, lost revenues, and declining customer bases. To learn more about CIAM, see the newly updated KuppingerCole Buyer’s Guide to CIAM.
Figure: The key to success in Digital Business: Stop thinking inside-out – think outside-in. Focus on the consumer and deliver services the way the consumer wants.
#RSAC2019 is in the history books, and thanks to the expansion of the Moscone Center, there was ample space in the expo halls to house vendor booths more comfortably. In fact, there seemed to be a record number of exhibitors this year. As always, new IAM and cybersecurity products and services make their debut at RSAC.
Despite the extra room, it can be difficult for the security practitioner and executive to navigate the show floor. Some plan ahead and make maps of which booths to visit; others walk from aisle 100 to the end. It can take a good deal of time to peruse and discover what’s new. But most difficult of all is digesting what we’ve seen and heard, considering it in a business context, and prioritizing possible improvement projects.
Security practitioners tend to hit the booths of vendors they have worked with, those with competing products, and others in their areas of specialty, including startups. For example, an identity architect will likely keep on walking past the “next gen” anti-malware and firewall booths but will stop at the booth offering a new identity proofing service. If a product does something novel or perhaps better than their current vendor’s product, they’ll know it and be open to it, even if it’s a small vendor and it means managing another product or service.
Executives gravitate toward the stack vendors in the front and middle, ignoring the startups on the sides and back. (It’s also increasingly likely execs will have meetings with specific vendors in the hotels surrounding Moscone, and not even set foot in the halls.) Why? IT execs and particularly CISOs are concerned with reducing complexity as well as securing the enterprise. A few stack vendors with consolidated functionality are easier to manage than dozens of point solutions.
Who is right? Well, it depends. Sometimes both, sometimes neither. It depends on knowing your cyber risk in relation to your business and understanding which technology enhancements will decrease your cyber risk and by approximately how much. Oftentimes practitioners and executives disagree on the cyber risk analysis and priorities set as a result.
Risk is the conjunction of consequence and likelihood. At RSAC and other conferences we hear anecdotes of consequences and see products that reduce the likelihood and severity of those consequences. Executives and practitioners alike have to ask, “Are the threats addressed by product X something we realistically face?” If not, implementing it won’t reduce your cyber risk. Or, if there are two or more similar products, which one offers the most possible risk reduction?
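That comparison can be made concrete with back-of-the-envelope arithmetic on annualized expected loss (likelihood times consequence). Every figure below is invented purely for illustration.

```python
# Compare two hypothetical controls by how much annualized expected
# loss they remove. All numbers are illustrative assumptions.

def expected_loss(annual_likelihood: float, consequence: float) -> float:
    """Annualized expected loss = probability of incident x cost of incident."""
    return annual_likelihood * consequence

# Threat: account takeover, estimated at $500k per incident,
# with a 20% chance of occurring in a given year if nothing changes.
baseline = expected_loss(0.20, 500_000)

# Product X cuts the annual likelihood to 5%; Product Y only to 8%.
with_x = expected_loss(0.05, 500_000)
with_y = expected_loss(0.08, 500_000)

risk_reduction_x = baseline - with_x   # roughly $75,000/year
risk_reduction_y = baseline - with_y   # roughly $60,000/year
```

If Product X costs less than the extra $15,000/year of risk it removes over Product Y, it is the better buy; if the threat is one you do not realistically face, neither purchase reduces your risk at all.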
The biggest risk is that the decision-makers don’t truly understand the threats and risks they face. There are cases where SMBs have built defenses against zero-day APTs that will never come their way yet have neglected to automate patch management or user de-provisioning. In other cases, a few big enterprises have naively dismissed the possibility that they could be the target of corporate or foreign state espionage and failed to protect against such attacks.
The riskiest time for organizations is the period when executive leadership changes and for 12-18 months afterward, or even longer. If an organization brings in a CIO or CISO from a different industry, it takes time for the person to learn the lay of the land and the unique environment in which that organization operates. Long-held strategies and roadmaps get re-evaluated and changed. Mid-level managers and practitioners may leave during this time. That org’s overall cybersecurity posture is weakened during the transition, and adversaries know this too.
Risk is a difficult subject for humans to grasp. No one gets it right all the time. Risk involves processing probabilities, and our brains didn’t really evolve to do that well. For an excellent in-depth look at that subject, read Leonard Mlodinow’s book The Drunkard’s Walk.
External risk assessments and benchmarks can be good mechanisms to overcome these circumstances, such as when tech teams and management disagree on priorities, when one or more parties is unsure of the likelihood of threats and risks, and when executive leadership changes. Having an objective view from advisors experienced in your particular industry can facilitate the re-alignment of tactics and strategies that can reduce cyber and overall risk. For information on the types of assessments and benchmarking KuppingerCole offers, see our advisory offerings.
2019 started off with a very noteworthy acquisition in the identity and security space: the purchase of Janrain by Akamai. Janrain is a top vendor in the Consumer Identity market, as recognized in our recent Leadership Compass: https://www.kuppingercole.com/report/lc79059. Portland, OR-based Janrain provides strong CIAM functionality delivered as SaaS for a large number of Global 2000 clients. Boston-based Akamai has a long history of providing web acceleration and content delivery services. Last year, they entered into a partnership whereby Akamai provided network layer protection for Janrain assets.
Akamai has lately been focusing on increasing its market share of web security services in order to grow revenue. This acquisition will add identity layer functionality and increase visibility for the infrastructure company.
New account fraud and account takeover fraud are two of the chief concerns that companies in many industries, particularly finance and retail, must guard against. Bot management has been one of Akamai’s fastest growing services. The further integration of Akamai’s threat intelligence capabilities with Janrain’s CIAM solution has the potential to enhance consumer security for their clients.
As with all such acquisitions, there are two major possible routes their combined service roadmap can take:
- Integrate Janrain's CIAM functionality into Akamai services in a purely supportive way, or
- Integrate Janrain's CIAM functionality into Akamai services while continuing to promote and sell the CIAM services as a standalone solution
In many cases, purchasers in the IT business take the first option. The second option is more difficult to execute, but often offers a better long-term investment for both the purchaser and their clients. Akamai has a defined, well thought-out plan to pursue option 2, to extend the Janrain solution and continue to market it as a CIAM SaaS branded under Akamai.
Given the size of the CIAM market, KuppingerCole expects to see additional M&A activity as well as new entrants in this space in the next 12-18 months. Keep up to date with the latest developments and research in cybersecurity and identity management by watching our blog: https://www.kuppingercole.com/blog.
2018 was a year of sweeping changes in Consumer Identity Management products and services. CIAM continues to be a fast-growing market. Research indicates that about half of all CIAM deals still originate outside the tent of the CISO and IAM support organizations. More vendors entered the market, and there were some noteworthy acquisitions. Lastly, many innovative improvements occurred across almost all solutions, due in part to GDPR.
What is driving CIAM growth? Businesses are realizing that efficient and effective digital identity solutions lead to more consumer engagement and a better consumer experience, which in turn generates additional revenue. CIAM deployments will continue to outpace IAM deployments in 2019.
GDPR took effect on May 25, 2018. The response by CIAM vendors in the run-up to GDPR was mixed. Some were proactive, seeing it as a competitive advantage. Others played catch-up. However, by the end of 2018, most vendors offered consent management features that can allow diligent customers to comply with GDPR in terms of consent collection, data export, and data deletion. There is still wide variety in the approaches taken, and some CIAM services are more advanced and easier to administer in this regard. Meanwhile, the world waits to see if and how GDPR will be enforced.
Consumer identities are a top target for cyber criminals. Consumers are phished for their credentials. Banking trojans are a leading form of malware. Account takeover fraud is growing and is eating into bank profits. Fraud of all types is a growing concern, and not just for the financial sector. Customer loyalty programs (one of the many drivers for deploying CIAM) are increasingly under attack. The recent Marriott/Starwood breach netted 500M accounts for the perpetrators. Airlines’ frequent flyer miles are also regularly stolen. In short, any online asset that is convertible to cash or cryptocurrency is a target. Fortunately, some CIAM vendors put an emphasis on fraud risk reduction by including user behavioral analytics and by processing compromised credential and other threat intelligence sources in real time. The need to reduce fraud spurred innovation in CIAM in 2018. Biometrics, mobile apps/SDKs, and risk-adaptive authentication are “must have” functions within CIAM solutions for 2019.
The need to associate IoT device identities with consumer identities is an expanding and evolving use case within CIAM. Not enough has been standardized in this field, so there is a lot of variation in IoT device identity support still. Look for additional growth and perhaps standardization in the years ahead.
From a market perspective, the year started out with a major acquisition of Gigya by SAP. As an independent company, Gigya was a leader in CIAM. The acquisition was beneficial for SAP, which was missing a fully functional CIAM capability. SAP, now powered by a rapidly-integrated Gigya, has become a major player in the consumer identity market. Later in the year Exostar acquired Pirean. This transaction will give Exostar, a secure business collaboration service provider, stronger IAM and CIAM features. The move also serves to increase the reach of both companies. More companies entered the CIAM market as well, and gained prominence in the field. No doubt there will be more acquisitions and entrants in 2019. For the latest information on this market, including technical details on how the solutions differ, see our just-published Leadership Compass.
AI for the Future of your Business: Effective, Safe, Secure & Ethical Everything we admire, love, need to survive, and that brings us further in creating a better future with a human face is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential to lead us into a new era of prosperity like we have not seen before, if we succeed in keeping AI safe, secure, and ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that it becomes accessible for [...]