Healthcare organizations must make Identity and Access Management (IAM) an integral part of their IT infrastructure in order to cope with challenges in fields such as compliance, regulation, cybersecurity, and Digital Transformation. In this respect, IAM serves not only as a security technology but also as a mechanism that helps respond to new business challenges and drivers.
While every industry currently has to deal with the disruptive nature of the Digital Transformation and ever-increasing cyberattacks, some developments are endemic to healthcare organizations, for instance complying with new regulations such as Electronic Prescribing for Controlled Substances (EPCS) or the well-known Health Insurance Portability and Accountability Act (HIPAA).
Due to their sensitivity, patient data are a highly valuable target for hackers, which is why the healthcare industry is among the most regulated ones.
In order to protect sensitive patient data, it is inevitable for any given healthcare organization to implement a strong Identity and Access Management as a central pillar of the IT infrastructure. Control and restriction of access with the help of a sophisticated IAM is a prerequisite to the protection of information that network firewalls and single sign-on (SSO) cannot deliver.
Altogether, there are five areas a strong IAM infrastructure will bolster:
- Compliance & Regulations
- Security
- Organizational & Financial Challenges
- M&A Activity
- Digital Transformation
Compliance & Regulations: In recent years, the regulatory bar that healthcare organizations have to clear has been raised. HIPAA, EPCS, and state-level regulations such as the California Consumer Privacy Act (CCPA) are just a few examples. These regulations have strong authentication and refined access control at their core, complemented by detailed logging of user activities. Unlike most other industries, healthcare providers routinely face emergency situations in which data must be accessed quickly. IAM delivers the infrastructure for such regulation-compliant emergency access ("break-glass") processes.
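The emergency-access pattern described above can be sketched in a few lines. This is a minimal illustration only; the function names and the policy structure are hypothetical, not taken from any specific IAM product:

```python
import time

AUDIT_LOG = []  # in a real system: tamper-evident, centrally monitored storage

def request_access(user, resource, normal_grants, emergency=False):
    """Grant access via normal entitlements; allow break-glass access in
    emergencies, but record it for mandatory after-the-fact review."""
    if resource in normal_grants.get(user, set()):
        return True
    if emergency:
        AUDIT_LOG.append({
            "user": user,
            "resource": resource,
            "time": time.time(),
            "reason": "break-glass",
            "review_required": True,
        })
        return True
    return False

grants = {"dr_smith": {"emr:ward-3"}}
assert request_access("dr_smith", "emr:ward-3", grants)                   # normal grant
assert not request_access("nurse_lee", "emr:ward-3", grants)              # denied
assert request_access("nurse_lee", "emr:ward-3", grants, emergency=True)  # break-glass, audited
```

The point of the design is that emergency access is never silently granted: it always leaves an audit trail that compliance teams must review afterwards.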
Security: As cyberattacks grow in number and intensity, organizations have to take precautions more than ever. IT security teams in healthcare organizations must prioritize this kind of risk, especially given the rising incidence of ransomware attacks against hospitals.
Most healthcare organizations are still caught up in prevention and focus too little on detection. External prevention is necessary, but it does not protect against internal attacks. It is therefore essential to restrict access to sensitive data and critical infrastructure, particularly once an attacker is already inside the system. IAM does not replace firewalls and other preventive measures; rather, it should always go hand in hand with them.
Organizational & Financial Challenges: Apart from caring for their patients’ wellbeing, healthcare organizations are ultimately also businesses and must keep an eye on profitability. Here, IAM helps increase efficiency and improve the user experience, for instance with an SSO portal.
The number of user types accessing systems and data within healthcare organizations is all but incalculable: doctors, nurses, students, and patients are just a few, and the number keeps growing. The careful distribution of user rights faces its litmus test when emergency situations arise and electronic medical record (EMR) data must be accessed quickly.
Like businesses in any other industry, healthcare organizations often lack the resources and clear ownership needed for IAM. Ownership must be clearly defined, and projects need support from sponsors within the organization.
M&A Activity: Healthcare is not exempt from M&A activity, and IAM infrastructures should be designed to make a merger go as smoothly as possible. The merging organizations must federate identities and give employees, patients, and contractors adequate levels of access.
Digital Transformation: Telemedicine, EMR, and Patient Access Management all increase the number of identities and the complexity of access entitlements that must be controlled. The role of IAM is to make these processes work together seamlessly. Where SSO is not enough, IAM takes center stage: provisioning and deprovisioning of accounts, management of access entitlements, audit and governance, and granular access control.
As healthcare services become more digitalized and “consumerized”, it is IAM that must support the resulting hybrid environments and multi-cloud infrastructures. Different types of users bring different types of devices, all of which have to be considered when setting up an IAM solution, especially one that goes beyond SSO. This is the only way to address the challenges of digitization while providing secure access at the same time. Ultimately, IAM is the foundation for supporting the consumerization of healthcare businesses.
This week, Facebook announced details about its cryptocurrency project, Libra. The company expects it to go live for Facebook and other social media platform users sometime in 2020. The list of initial backers, the Founding Members of the Libra Association, is long and filled with industry heavyweights such as Coinbase, eBay, Mastercard, PayPal, and Visa. Other tech companies, including Lyft, Spotify, and Uber, are Founding Members, as are Andreessen Horowitz and Thrive Capital.
Designed to be a peer-to-peer payment system, Libra will be backed by a sizable reserve and pegged to physical currencies to dampen wild exchange-rate swings and speculation. The Libra Association will manage the reserve, which will not be accessible to users. The Association will mint and destroy Libra Coins in response to demand from authorized resellers. Founding Members will have validator voting rights. As the short list above shows, Libra Founding Members are large organizations, and they will have to buy in with Libra Investment Tokens. This investment is intended to incentivize Founding Members to adequately protect their validators. Libra eventually plans to transition to a proof-of-stake system in which Founding Members will receive voting rights proportional to their Investment Tokens (capped at 1%). They expect this to facilitate a move to a permissionless blockchain at some point in the future. The Libra blockchain will therefore start off as permissioned and closed. The Libra Association has published a roadmap on its website.
Let’s look at some of the interesting security-related technical details published at https://libra.org. The Libra protocol takes advantage of lessons learned over the last few years of blockchain technology. For example, unlike Bitcoin, which accumulates transactions into blocks before committing them, in Libra individual transactions compose the ledger history; the consensus protocol handles the aggregation of transactions into blocks. Thus, sequential transactions and events can be contained in different blocks.
Authentication to accounts will use private key cryptography, and the ability to rotate keys is planned. Multiple Libra accounts can be created per user, and user accounts will not necessarily be linked to other identities; this follows the Bitcoin and Ethereum model of pseudonymity. Libra accounts will be collections of resources and modules. Libra “serialize(s) an account as a list of access paths and values sorted by access path. The authenticator of an account is the hash of this serialized representation. Note that this representation requires recomputing the authenticator over the full account after any modification to the account... Furthermore, reads from clients require the full account information to authenticate any specific value within it.”
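The serialization scheme in the quote above can be illustrated with a few lines of Python. This is a sketch only: the JSON encoding and the choice of SHA-256 are stand-ins for illustration, not Libra's actual representation or hash function:

```python
import hashlib
import json

def account_authenticator(account):
    """Serialize an account as (access path, value) pairs sorted by access
    path, then hash the whole serialization (illustrative encoding/hash)."""
    serialized = json.dumps(sorted(account.items()), separators=(",", ":"))
    return hashlib.sha256(serialized.encode()).hexdigest()

acct = {"/balance": 100, "/sequence": 7}
before = account_authenticator(acct)
acct["/balance"] = 90                         # any modification to the account...
assert account_authenticator(acct) != before  # ...forces recomputing the authenticator
# Sorting by access path makes the authenticator independent of insertion order:
assert account_authenticator({"/a": 1, "/b": 2}) == account_authenticator({"/b": 2, "/a": 1})
```

The sketch makes the quoted trade-off visible: any change to any value requires rehashing the full account, and a client must fetch the full account to authenticate a single value within it.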
Transaction fees in Libra will adhere to an Ethereum-like “gas” model: senders name a price they are willing to pay per unit of computation, and if the cost to the validators exceeds the number of units the sender offered at that price, the transaction aborts. The ledger won’t be changed, but the sender will still be charged the fee. This is designed to keep fees low during times of high transaction volume, and Libra foresees that it may help mitigate DDoS attacks. It will also prevent senders from overdrawing their accounts, because the Libra protocol will check that there is enough Libra coin to cover the cost of the transaction before committing it.
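The fee logic described above can be sketched as follows. The function name, return values, and numbers are illustrative, not Libra's actual protocol code:

```python
def try_commit(balance, gas_price, max_gas_units, gas_used, tx_cost):
    """Sketch of an Ethereum-like gas check.
    Returns (committed, fee_charged, new_balance)."""
    max_fee = gas_price * max_gas_units
    # Reject up front unless the sender can cover the worst-case fee plus
    # the transfer amount, so the account can never be overdrawn.
    if balance < max_fee + tx_cost:
        return (False, 0, balance)
    if gas_used > max_gas_units:
        # Computation exceeded the units the sender paid for: abort the
        # transaction, leave the ledger unchanged, but still charge the fee.
        return (False, max_fee, balance - max_fee)
    return (True, gas_price * gas_used, balance - gas_price * gas_used - tx_cost)

assert try_commit(1000, 1, 100, 50, 500) == (True, 50, 450)   # normal commit
assert try_commit(100, 1, 100, 150, 0) == (False, 100, 0)     # aborted, fee still charged
assert try_commit(50, 1, 100, 10, 0) == (False, 0, 50)        # rejected: cannot cover max fee
```

Charging the fee even on abort is what discourages flooding validators with deliberately expensive transactions.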
The Libra protocol will use a new programming language called Move, designed to be extensible with user-defined data types and smart contracts. There will be no copy commands in Move, only create and destroy, to avoid double-spending. Programmers will write in higher-level source and intermediate-representation languages, which compile to a fairly simple and constrained bytecode that can be type- and input-verified for security. Transactions are expected to be atomic, in that each should contain a single operation. In Libra, modules contain code and resources hold data, in contrast to Ethereum, where a smart contract contains both code and data.
Another interesting concept in Libra is the event: a change of state resulting from a transaction. Each transaction can cause multiple events; for example, a payment results in corresponding increases and decreases in account balances. Libra will use a variant of the HotStuff consensus protocol called LibraBFT (Byzantine Fault Tolerance), which is architected to withstand multiple malicious actors attempting to hack or sabotage validators. HotStuff is not based on proof-of-work, thereby avoiding its performance and environmental concerns. Libra intends to launch with 100 validators and eventually grow to 500-1,000 validator nodes.
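The relationship between one transaction and its multiple events can be shown in a short sketch; the event names and ledger structure are hypothetical, chosen only to mirror the payment example above:

```python
def apply_payment(ledger, sender, receiver, amount):
    """One payment transaction changes state and emits multiple events."""
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0) + amount
    return [
        {"type": "SentPayment", "account": sender, "amount": amount},
        {"type": "ReceivedPayment", "account": receiver, "amount": amount},
    ]

ledger = {"alice": 100, "bob": 0}
events = apply_payment(ledger, "alice", "bob", 40)
assert [e["type"] for e in events] == ["SentPayment", "ReceivedPayment"]
assert ledger == {"alice": 60, "bob": 40}
```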
Libra Core code will be written in Rust and open sourced. Facebook and Libra acknowledge that security of the cryptocurrency exchange depends on the correct implementation of validator node code, Move apps, and the Move VM itself. Security must be a high priority, since cryptocurrency exchanges are increasingly under attack.
Facebook’s new subsidiary Calibra will build the wallet app. Given that Coinbase and others in the business are on the board, it’s reasonable to expect that other cryptocurrency wallets will accept Libra too. Facebook and other cryptocurrency wallet makers must design security and privacy into these apps as well as the protocol and exchange. Wallets should take advantage of features such as TPMs on traditional hardware and Secure Elements / Trusted Execution Environment and Secure Enclave on mobile devices. Wallets should support strong and biometric authentication options.
Users will have no guarantees of anonymity due to international AML and KYC requirements. Facebook claims social media profile information and Libra account information will be kept separate and shared only with user consent. Just how Facebook will accomplish this separation remains to be seen; the global public has legitimate trust issues with Facebook. Nevertheless, Facebook, WhatsApp, and Instagram have 2.3B, 1.6B, and 1.0B user accounts respectively. Even allowing for overlap, that user base is larger than the populations of the two largest countries combined.
The moral argument in favor of cryptocurrencies has heretofore been that blockchain technologies will benefit the “unbanked”, the roughly 2B people who do not have bank accounts. If Libra takes off, more of the unbanked may gain access to affordable payment services, provided there is a sizable intersection between those who have social media accounts and those who are unbanked.
Much work, both technical and political, remains to be done if Libra is to come to fruition. Government officials in some jurisdictions have already spoken out against it, and Libra will have to be regulated in many of them. An open/permissionless blockchain model would help with transparency and independent audit, but it could be years before Libra moves in that direction. As long as Libra runs closed/permissioned, it will face more resistance from regulators around the world.
Facebook and the Libra Association will have to handle not only a mix of financial regulations such as AML and KYC, but also privacy regulations like GDPR, PIPEDA, CCPA, and others. The original Libra announcement made no mention of support for EU PSD2, which will soon govern payments in Europe and which mandates Strong Customer Authentication and transactional risk analysis for payment transactions. Beyond the technical and legal challenges ahead, Facebook and Libra will then have to convince users to actually use the service.
Initiating payments from a social media app has been done before: WeChat, for example. So it’s entirely possible that Libra will succeed in some fashion. If Libra does take off in the next couple of years, expect massive disruption in the payment services market. It is too early to accurately predict the probability of success or the long-term impact if it is successful. KuppingerCole will follow and report on relevant developments. This is sure to be a topic of discussion at our Digital Finance World and Blockchain Enterprise Days events coming up in September in Frankfurt, Germany.
One of my favorite stories is of a pen-test team who were brought in and situated next door to the SOC (Security Operations Centre). After a week on-site, they were invited for a tour of the SOC, where they queried a series of alarms [that they had obviously caused], only to be told: “oh, that’s normal, we’ve been getting these continuously all week”.
People perform penetration tests (pen-tests) for a multitude of reasons; “I inherited a budget with an annual pen-test” or “it’s required by the audit committee” are the most common. Security teams use them to justify their budgets and, hopefully, to show how good they are. In the worst cases, people are paying consultancy rates for a simple vulnerability scan; only rarely are pen-tests used to find the “known unknowns”, let alone the “unknown unknowns”.
“because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns”
- United States Secretary of Defense Donald Rumsfeld, 2002
Aside from a specific pen-test to test, say, a new website before it goes live to the Internet, pen-tests will vary in scope and quality.
So, what does a bad pen-test look like? Every pen-test company will tell you of the phone calls that start with “we need a pen-test” but, when quizzed, the caller has no idea what needs testing. They just need a “standard” pen-test because they’ve been told (by their management, auditors, or audit committee) that they need to perform one: the “tick in the box”.
Conversely, a good pen-test will be part of a series, each testing specific aspects of security against a plan informed by a business-wide threat-assessment process. That process strives to understand who would want to harm your business or steal your data, and what motivates your various adversaries: everyone from script kiddies who do it for a laugh, to organized crime that wants to ransom your data, to the competitor funded by state intelligence that wants to steal your pre-patent intellectual property.
The best pen-tests are those that augment a security regime where the basics are already in place and part of everyday life. Why pay for a pen-test that conducts a one-off manual vulnerability assessment, when for probably less money you can implement continuous automated vulnerability assessment?
Going back to my initial story: every pen-tester will tell you how they have NEVER failed to gain physical access to a building, and almost all companies run flat internal networks. Therefore, simply assume the testers will get in, and give them a room with a network jack for the duration of the test.
The best pen-tests, in my opinion, use your threat assessment to identify “targets” or “flags to capture”, give the pen-testers access to the output of your automated vulnerability scanning system as well as physical access to your network, and then pay a bonus for every flag they capture, website they actually modify, or senior executive they manage to socially engineer. Only this type of pen-test will help identify both the “known unknowns” and the “unknown unknowns”.
Finally, before you start any pen-test, you need to understand what you will do with the results: all pen-tests will expose some level of failing; such is the nature of modern IT systems. If you are able to pat yourself on the back and report to management that “they did not manage to get in”, then you either designed the pen-test wrong or hired the wrong company to perform it.
Microservice-based architectures allow businesses to develop and deploy their applications in a much more flexible, scalable, and convenient way, across multiple programming languages, frameworks, and IT environments. As with any other new technology that DevOps and security teams have begun to explore in recent years, there is still quite a lot of confusion about the capabilities of new platforms, misconceptions about new attack vectors, and renewed discussion about balancing security with the pace of innovation. And perhaps the biggest myth of microservices is that their security somehow takes care of itself.
Let’s get this thing out of the way first: microservices on their own are nothing more than a method of designing applications as an interconnected system of loosely coupled business-focused components. There is nothing inherent to microservices that would make them more resilient against cyber threats or prevent sensitive data from being stolen. On the contrary, microservice-based architectures rely on new tools and technologies, and those bring in new security challenges and new skills needed to mitigate them efficiently.
In fact, even if we disregard the “architectural” risks of microservices, like cascading failures or service discovery abuse, we have to agree that a modern loosely coupled application is subject to the same risks as a traditional monolithic one – ranging from low-level infrastructure exploits to the communication layer and all the way up to attacks targeting the application’s users. And perhaps no attack vector is more critical than APIs.
As we have discussed in a recent KuppingerCole webinar, even in more traditional scenarios, API security is still something that many businesses tend to underestimate and neglect, hoping that existing tools like web application firewalls will be sufficient to secure their business APIs. Unfortunately, this could not be further from the truth – APIs are subject to numerous risks that can only be successfully mitigated with a properly designed strategy that covers the whole API lifecycle – even before any code is written, let alone deployed to a backend.
In microservice-based applications, where hundreds of individual microservices are communicating with each other and with the outside world exclusively through APIs, the difficulty of securing all those interactions increases exponentially. Due to the nature of these applications, individual API endpoints become ephemeral, appearing as new containers are spun up, migrating between environments and disappearing again. And yet each of them must be secured by proper access control, threat protection, input validation, bot mitigation, and activity monitoring solutions – all those jobs which are typically performed by an API gateway. How many API gateways would you need for that?
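To make that list of gateway duties concrete, here is a deliberately minimal sketch of the checks a per-service microgateway might perform on every request: access control, input validation, and activity logging. All names are made up for illustration; real microgateways also handle threat protection, bot mitigation, rate limiting, and more:

```python
def microgateway(handler, valid_tokens, max_body_len=1024):
    """Wrap a service handler with gateway-style checks and an activity log."""
    log = []
    def wrapped(token, body):
        if token not in valid_tokens:           # access control
            log.append(("deny", "bad token"))
            return 401, None
        if len(body) > max_body_len:            # input validation
            log.append(("deny", "oversized input"))
            return 400, None
        log.append(("allow", len(body)))        # activity monitoring
        return 200, handler(body)
    wrapped.log = log
    return wrapped

service = microgateway(lambda body: body.upper(), valid_tokens={"t-123"})
assert service("t-123", "hello") == (200, "HELLO")
assert service("bogus", "hello") == (401, None)
assert ("deny", "bad token") in service.log
```

Because each microservice needs this wrapper, the "sidecar" microgateway pattern deploys one such enforcement point next to every service instance rather than routing all traffic through one central gateway.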
Another challenge of microservice-based architectures is their diversity – when individual microservices are written using different development frameworks and deployed to different platforms, providing consistent authentication and authorization becomes a problem – ensuring that all components agree on a common access rights model, that they understand the same access token format, that this token exchange scales properly, and that sensitive attributes flowing between services are not exposed to the outside world. The same considerations apply to network-level communications: isolation, segmentation, traffic encryption – these are just some issues developers have to think about. Preferably, in advance.
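One of the problems mentioned above, agreeing on a common access token format that every service can verify locally, can be illustrated with a small sketch. Real deployments would typically use standard signed JWTs (OAuth 2.0 / OpenID Connect) with properly managed keys rather than this hand-rolled HMAC format; everything below is illustrative:

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustrative; real systems use managed keys or PKI

def mint_token(claims):
    """Encode claims and append an HMAC so any service holding the key
    can verify the token without calling back to the issuer."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token):
    """Return the claims if the signature checks out, else None."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload))

token = mint_token({"sub": "svc-billing", "scope": ["read:orders"]})
assert verify_token(token) == {"sub": "svc-billing", "scope": ["read:orders"]}
tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
assert verify_token(tampered) is None
```

The design choice to highlight: because verification is local, token checks scale with the number of services rather than funneling every request through a central authorization server.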
Does all this mean that making microservices secure is too much of a hassle that undoes all the speed and convenience of the architecture? Not at all, but the key point here is that you need to do it the right way from the very beginning of your microservice journey. And luckily, you do not have to walk alone – everyone has faced the same challenges, and many have already figured them out. Some have even come up with convenient tools and frameworks that will take care of these problems for you.
Consider modern API security solutions that do not just focus on static infrastructure, but cover everything from proactive risk assessment of your API contracts to ensuring that each of your microservices is secured by a tiny centrally managed API microgateway. Or the protocols and standards designed specifically for microservices like Secure Production Identity Framework for Everyone (SPIFFE) – essentially the “next-gen PKI” for dynamic heterogeneous software systems. Or even full-featured service mesh implementations that provide a control and security foundation for your microservices – reinventing the wheel is the last thing you need to think about.
In fact, the only thing you absolutely must do yourself is to keep an open mind and never stop learning – about the recent technologies and tools, about the newest design patterns and best practices, and, of course, about the latest cyber threats and other risks. Needless to say, we are here to support you on this journey. See you at one of our upcoming events!
It seems almost every week in cybersecurity and IAM we read of a large company buying a smaller one. Many times, it is a big stack vendor adding something that may be missing from its catalog, or buying a regional competitor. Sometimes it’s a medium-sized technology vendor picking up a promising start-up. In the olden days (15+ years ago), start-ups hoped to go public in an IPO. IPOs are far less common today. Why? Mostly because going public is an expensive, time-consuming process that doesn’t achieve the returns it once did. Often, an IPO was merely an interim step toward being acquired by a large vendor, so why not just skip ahead?
Mergers are not common for a few reasons. Merger implies a coming together of near-equals, and executives and boards of directors don’t usually see it this way. So even when mergers happen, they’re often spun as simply acquisitions, and one brand survives while the other fades away. Mergers also mean de-duplication of products, services, and downsizing of the workforces. Mergers can be difficult for customers of both former brands to endure as well.
In the last few years, we’ve increasingly seen equity firms purchase mature start-ups and assemble portfolios of tech vendors. I say “mature start-up” because, instead of the “3 years and out” that occasionally worked in the early 2000s, now vendors are often taking investment (Series A, B, C, D, etc.) 5-7 years or more after founding. When equity firms pick up such companies, the purchased vendor generally retains their brand in the marketplace. The equity firms typically have 3-5 year plans to streamline the operations of the components in their portfolios, make each company profitable, build value, and then sell again.
Other times large companies spin off divisions that are “not part of their core competencies”. Maybe those divisions are not doing well under current management and might fare better in the market where they can have some brand separation and autonomy.
What motivates acquisitions? There are four major reasons companies merge with or buy others:
- To acquire technology
- To acquire customers
- To acquire territory
- Unknown
Acquiring a new technology to integrate into an existing suite is the most straightforward motive. Picking up a smaller competitor to access their customer base is also a common strategy, provided it doesn’t run afoul of anti-trust laws. Large regional vendors will sometimes buy or merge with similar companies in other regions to gain overall market share. These can be smart strategies for building a global footprint in the market.
Every now and then, however, we read about deals that don’t make sense in the industry. This is the unknown category. Sometimes big companies do acquire smaller competition, but do not integrate, extend, or service the purchased product. Dissatisfied customers leave. Overall brand reputation suffers. These deals turn out to be mistakes in the long run, only benefitting the owners of the purchased company. A better plan is to out-compete rather than buy-out the competition.
Customers of vendors that are being bought or divested have questions: What will happen to the product I use? Will it be supported? Will it go away? Will I have to migrate to a combined offering? If so, is now the time to run an RFP to replace it?
IT executives in end-user organizations may hold conflicting views about M&A activities. On the one hand, consolidation in the market can make vendor and service management easier: fewer products to support and fewer support contracts to administer. On the other hand, innovation in large companies tends to be slower than in smaller companies. It’s a momentum thing. As an IT manager, you need your vendor to support your use cases. Use cases evolve. New technical capabilities are needed. Depending on your business requirements and risk tolerance, you may occasionally have to look for new vendors to meet those needs, which means more products to support and more contracts to manage. Beware the shiny, bright thing!
Recommendation: executives in companies that are acquiring others or are being divested need to
- Quickly develop, or at least sketch, roadmaps of the product/services that are being acquired or divested. Sometimes plans change months or years after the event. When they do, let customers know.
- Communicate those roadmaps, as well as they are known at the time of acquisition or divestiture. Explain the expected benefits of the M&A activity and the new value proposition. This will help reduce uncertainty in the market and perhaps prevent premature customer attrition.
In summary: there will always be mergers, acquisitions, and divestitures in the security and identity market. Consolidation happens, but new startups emerge every quarter with new products and services to address unmet business requirements. IT managers and personnel in end-user organizations need to be aware of changes in the market and how they may impact their businesses.
Likewise, executives in vendor companies, investors, VCs, and equity firms need to be cognizant of current market trends as well as make predictions about the impact and success of proposed ventures. This can help to avoid those deals that leave everyone scratching their heads, wondering, “why did they do that?” At KuppingerCole, we understand the cyber and IAM markets, and know the products and services in those fields. Stay on top of the latest security and identity product evaluations at www.kuppingercole.com.
Like many people with a long career in IT, I have numerous small computer-related side duties I’m supposed to perform for my less skilled friends and relatives. Among those, I’m helping manage a G Suite account for a small business a friend of mine has. Needless to say, I was a bit surprised to receive an urgent e-mail alert from Google yesterday, telling me that several users in that G Suite domain were impacted by a password storage problem.
Turns out, Google has just discovered that it accidentally stored some of those passwords unencrypted, in plain text. Apparently, the problem can be traced back to a bug in the G Suite admin console that has been around since 2005 (which, if I remember correctly, predates not just the “G Suite” brand, but the whole idea of offering Google services to businesses).
Google is certainly not the first large technology vendor caught violating one of the most basic security hygiene principles – just a couple of months earlier we heard the same story about Facebook. I’m pretty sure they won’t be the last, either – with the ever-growing complexity of modern IT infrastructures and the abundance of legacy IAM systems and applications, how can you be sure you don’t have a similar problem somewhere?
In Google’s case, the problem wasn’t even in their primary user management and authentication frameworks – it only affected the management console where admins typically create new accounts and then distribute credentials to their users. Including the passwords in plain text. In theory, this means that a rogue account admin could have access to other users’ accounts without their knowledge, but that’s a problem that goes way beyond just e-mail…
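The hygiene principle being violated here is simple: store only a random salt and a slow, salted hash, so that not even an admin console can ever display a password. A minimal sketch using Python's standard library (the cost parameters are illustrative; tune them for your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest); the password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

With this scheme, an admin console can reset a password or verify a login attempt, but it has nothing in plain text to distribute in the first place.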
So, what can normal users do to protect themselves from this bug? Not much, actually – according to the mail from the G Suite team, Google will be forcing a password reset for every affected user as well as terminating all active user sessions, starting today. Combined with fixing the vulnerability in the console, this should prevent further potential exploits.
However, considering the number of similar incidents at other companies, this should be another compelling reason for everyone to finally activate Multi-Factor Authentication for each service that supports it, including Google. Anyone who is already using a reliable MFA method – ranging from smartphone apps like Google Authenticator to FIDO2-based Google Security Keys – is automatically protected from this kind of credential abuse. Just don’t use SMS-based one-time passwords, OK? They were compromised years ago and should no longer be considered secure.
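App-based one-time passwords of the kind Google Authenticator generates follow the open TOTP standard (RFC 6238, built on the HOTP algorithm of RFC 4226). A compact sketch, checked against the test vectors published in those RFCs:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238: HOTP computed over the current 30-second time step."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step, digits)

# Test vectors from RFC 4226 and RFC 6238 (SHA-1 variant):
assert hotp(b"12345678901234567890", 0) == "755224"
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

Because the code is derived from a shared secret and the current time, it never travels over the phone network, which is exactly what makes it resistant to the SIM-swapping and SS7 attacks that broke SMS-based one-time passwords.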
As for service providers themselves – how do you even start protecting sensitive information under your control if you do not know all the places it can be stored? A comprehensive data discovery and classification strategy should be the first step towards knowing what needs to be protected. Without it, both large companies like Google and smaller ones like the one that just leaked 50 million Instagram account details will remain not just subjects of sensationalized press coverage, but also constant targets for lawsuits and massive fines for compliance violations.
Remember, the rumors of the password’s death are greatly exaggerated – and protecting these highly insecure but oh-so-convenient bits of sensitive data is still everyone’s responsibility.
Digital Transformation is one of those buzzwords (technically a buzzphrase, but buzzphrase isn’t a buzzword yet) that gets used a lot in all sorts of contexts. You hear it from IT vendors, at conferences, and in the general media. But Digital Transformation, or DT as we like to abbreviate it, is much more than that. DT is commonly regarded as a step or process that businesses go through to make better use of technology to deliver products and services to customers, consumers, and citizens. This is true for established businesses, but DT is enabling and creating entirely new businesses as well.
When we hear about DT, we think of smart home products, wearable technologies, connected cars, autonomous vehicles, etc. These are mostly consumer products, and most have some type of digital device identity built in. Manufacturers use device identity for a variety of reasons: to track deployed devices and utilization, to push firmware and software updates, and to associate devices with consumers.
To facilitate secure, privacy-respecting, and useful interactions with consumers of DT technologies, many companies have turned to Consumer Identity and Access Management (CIAM) solutions. CIAM solutions provide standards-based mechanisms for registering, authenticating, authorizing, and storing consumer identities, and usually offer identity and marketing analytics or APIs to extract more value from the consumer business. CIAM is foundational and an absolutely necessary component of DT.
CIAM solutions differ from traditional IAM solutions in that they take an “outside-in” as opposed to the “inside-out” approach. IAM stacks were designed from the point of view that an enterprise provisions and manages all the identities of employees. HR is responsible for populating the most basic attributes, and managers then add other attributes for employee access controls. This model was extended to business partners and B2B customers throughout the 1990s and early 2000s, and in some cases, to consumers. Traditional IAM was often found lacking by consumer-driven businesses when it came to managing their end-user identities: HR and company management do not provision and manage consumer identities. Moreover, the types of attributes and data about consumers that businesses need today were not well suited to being serviced by enterprise IAM systems.
Thus, CIAM systems began appearing in the 2010s. CIAM solutions are built to allow consumers to register with their email addresses, phone numbers, or social network credentials. CIAM solutions progressively profile consumers so as not to overburden users at registration time. Most CIAM services provide user dashboards for data usage consent, review, and revocation, which aids in compliance with regulations such as EU GDPR and CCPA.
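Progressive profiling can be sketched as a simple rule: require almost nothing at registration, then ask for at most one missing attribute per later interaction. The Python sketch below is purely illustrative – the field names and the `next_profile_prompt` helper are assumptions, not any real CIAM product’s API:

```python
from typing import Optional

# Illustrative progressive-profiling sketch: only the email is mandatory at
# sign-up; remaining attributes are requested one at a time on later visits.
PROGRESSIVE_FIELDS = ["display_name", "phone", "birthday", "interests"]

def register(email: str) -> dict:
    """Minimal registration record - just the email, nothing else up front."""
    return {"email": email}

def next_profile_prompt(profile: dict) -> Optional[str]:
    """Return the next attribute to ask the consumer for, or None if complete."""
    for field in PROGRESSIVE_FIELDS:
        if not profile.get(field):
            return field
    return None

profile = register("alice@example.com")
print(next_profile_prompt(profile))  # -> display_name
profile["display_name"] = "Alice"
print(next_profile_prompt(profile))  # -> phone
```

The point of the pattern is visible in the two calls at the end: the consumer is never confronted with more than one request at a time, which keeps registration friction low while the profile still fills out over time.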
CIAM services generally accept a variety of authenticators that can be used to match identity and authentication assurance levels with risk levels. CIAM solutions can provide better – more usable and more secure – authentication methods than old password-based systems. Consumers are tired of the seemingly endless trap of creating new usernames and passwords, answering “security questions” that are inherently insecure, and getting notified when their passwords and personal data are breached and published on the dark web. Companies with poor implementations of consumer identity miss out on marketing opportunities and sales revenue; they also can lose business altogether when they inconvenience users with registration and password authentication, and they suffer reputation damage after PII and payment card breaches.
In addition to common features, such as registration and authentication options, consider the following functional selection criterion from our newly published Buyer’s Guide to CIAM. Compromised credential intelligence can lower the risks of fraud. Millions of username/password combinations, illegally acquired through data breaches, are available on the dark web for use by fraudsters and other malefactors. Compromised credentials intelligence services alert subscribers to the attempted use of known bad credentials. All organizations deploying CIAM should require and use this feature. Some CIAM solutions, primarily the SaaS vendors, detect and aggregate compromised credential intelligence from across all tenants on their networks. The effectiveness of this approach depends on the size of their combined customer base. On-premises CIAM products should allow for consumption of third-party compromised credential intelligence.
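The client side of such a compromised-credential check can work without ever sending the password, or even its full hash, to the intelligence service. The sketch below assumes the “SUFFIX:COUNT” k-anonymity range format popularized by Have I Been Pwned’s Pwned Passwords API; the sample response body is fabricated for illustration:

```python
import hashlib

def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-char k-anonymity prefix
    (the only part sent to the service) and the 35-char suffix (kept local)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_compromised(password: str, range_response: str) -> bool:
    """Match the locally kept suffix against a 'SUFFIX:COUNT' response body."""
    _, suffix = sha1_prefix_suffix(password)
    return any(
        line.split(":", 1)[0].strip() == suffix
        for line in range_response.splitlines() if line.strip()
    )

# Fabricated response for prefix "5BAA6" (the SHA-1 of "password" begins 5BAA6).
sample_response = (
    "1E4C9B93F3F0682250B6CF8331B7EE68FD8:10437277\n"
    "0000000000000000000000000000000000A:1\n"
)
print(is_compromised("password", sample_response))  # True
```

Because only the five-character prefix leaves the client, the service learns neither the password nor its full hash – which is what makes this style of compromised credential intelligence acceptable even for privacy-sensitive deployments.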
Lastly, CIAM solutions can scale much better than traditional IAM systems. Whereas IAM stacks were architected to handle hundreds of thousands of users with often complex access control use cases, some CIAM services can store billions of consumer identities and process millions to hundreds of millions of login events and transactions.
Over the last few years, enterprise IAM vendors have gotten in on the CIAM market. In many cases they have extended or modified their “inside-out” model to be more accommodating of the “outside-in” reality of consumer use cases. Additionally, though traditional IAM was usually run on-premises, pure-play CIAM started out in the cloud as SaaS. Today almost all CIAM vendors, including those with an enterprise IAM history, offer CIAM as SaaS.
Thus, CIAM is a real differentiator that can help businesses grow through the process of DT by providing better consumer experiences, enhanced privacy, and more security. Without CIAM, in the age of DT, businesses face stagnation, lost revenues, and declining customer bases. To learn more about CIAM, see the newly updated KuppingerCole Buyer’s Guide to CIAM.
Figure: The key to success in Digital Business: Stop thinking inside-out – think outside-in. Focus on the consumer and deliver services the way the consumer wants
Getting competitive advantage from data is not a new idea; however, the volume of data now available, and the way in which it is being collected and analysed, has led to increasing concerns. As a result, there are a growing number of regulations over its collection, processing and use. Organizations need to take care to ensure compliance with these regulations as well as to secure the data they use. This is increasing costs, and organizations need a more sustainable approach.
In the past, organizations could be confident that their data was held internally and was under their own control. Today this is no longer true: internally generated data is held in cloud services, while the Internet of Things and social media provide rich external sources of valuable data. These innovations have increased the opportunities for organizations to use this data to create new products, get closer to their customers and improve efficiency. However, this data needs to be controlled to ensure security and to comply with regulatory obligations.
The recent EU GDPR (General Data Protection Regulation) provides a good example of the challenges organizations face. Under GDPR the definition of data processing is very broad: it covers any operation that is performed on personal data or on sets of personal data, from the initial collection of personal data through to its final deletion. Processing covers every operation on personal data, including storage, alteration, retrieval, use, transmission, dissemination or otherwise making it available. The sanctions for non-compliance are very severe and, critically, the burden of proof to demonstrate compliance with these principles lies with the Data Controller.
There are several important challenges that need to be addressed – these include:
- Governance – externally sourced and unstructured data may be poorly governed, with no clear objectives for its use;
- Ownership – external and unstructured data may have no clear owner to classify its sensitivity and control its lifecycle;
- Sharing - Individuals can easily share unstructured data using email, SharePoint and cloud services;
- Development use - developers can use and share regulated data for development and test purposes in ways that may breach the regulatory obligations;
- Data Analytics - the permissions to use externally acquired data may be unclear and data scientists may use legitimately acquired data in ways that may breach obligations.
To meet these challenges in a sustainable way, organizations need to adopt Good Information Stewardship. This is a subject that KuppingerCole have written about in the past. Good Information Stewardship takes an information-centric, rather than technology-centric, approach to security; information-centric security starts with good data governance.
Figure: Sustainable Data Management
Access controls are fundamental to ensuring that data is only accessed and used in ways that are authorized. However, additional kinds of controls are needed to protect against illegitimate access, since cyber adversaries now routinely target access credentials. Further controls are also needed to ensure the privacy of personal data when it is processed, shared or held in cloud services. Encryption, tokenization, anonymization and pseudonymization technologies provide such “data centric” controls.
Encryption protects data against unauthorized access but is only as strong as the control over the encryption keys. Rights Management using public / private key encryption is an important control that allows controlled sharing of unstructured data. DLP (Data Leak Prevention) technology and CASBs (Cloud Access Security Brokers) are also useful to help to control the movement of regulated data outside of the organization.
Pseudonymisation is especially important because GDPR explicitly recognizes it as a measure supporting data protection by design and by default. Pseudonymized data still counts as personal data under GDPR – only fully anonymized data falls outside the scope of its obligations – but pseudonymization substantially reduces risk and allows operational data to be processed while it is still protected. This is very useful, since pseudonymized personal data can be used for development and test purposes, as well as for data analytics. Indeed, according to the UK Information Commissioner’s Office, “To protect privacy it is better to use or disclose anonymized data than personal data”.
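As a sketch of how such a data-centric control can preserve utility: a keyed hash (here HMAC-SHA-256 – an assumption; real deployments may instead use format-preserving tokenization) yields deterministic pseudonyms, so records can still be joined for test or analytics purposes, while the key, held separately from the data, is required to link a pseudonym back to a person:

```python
import hashlib
import hmac

# Illustrative key only - in production this would live in a KMS or HSM,
# stored and governed separately from the pseudonymized data itself.
SECRET_KEY = b"example-pseudonymization-key"

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace an identifier with a keyed HMAC-SHA-256 pseudonym.
    Deterministic, so joins across datasets keep working; without the
    key the pseudonym cannot be traced back to the original value."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-1001", "email": "alice@example.com", "basket_value": 42.50}
safe_record = {
    "customer_id": pseudonymize(record["customer_id"]),
    "email": pseudonymize(record["email"]),
    "basket_value": record["basket_value"],  # non-identifying payload stays usable
}
```

A useful consequence of separating the key from the data: destroying the key later effectively anonymizes every dataset already pseudonymized with it.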
In summary, organizations need to take a sustainable approach to managing the potential risks both to the organization and to those outside (for example to the data subjects) from their use of data. Good information stewardship based on a data centric approach, where security controls follow the data, is best. This provides a sustainable approach that is independent of the infrastructure, tools and technologies used to store, analyse and process data.
For more details see: Advisory Note: Big Data Security, Governance, Stewardship - 72565
Don’t Run into Security Risks by Managing Robot Accounts the Wrong Way
Robotic Process Automation (RPA) is one of the hot IT topics these days. By using robots that automatically perform tasks that humans executed before, companies unlock a significant potential for cost savings. AI (Artificial Intelligence) helps in realizing RPA solutions. However, if done wrong, the use of RPA can cause severe security challenges.
It starts with the definition of the accounts used by the robots. There appears to be a tendency to create a sort of “super-robot” account: an account that is used by various robots and that accumulates the entitlements of multiple robots. Obviously, such approaches stand in stark contrast to the Least Privilege principle. They are the direct opposite of what IAM, and specifically Access Governance, focuses on: mitigating access-related risks.
The only argument for such a solution is that managing the robot accounts appears easier, because accounts can be reused across robots. But that is just a consequence of doing the management of RPA wrong.
RPA accounts are factually technical (functional) user accounts, which are heavily used in IT anyway. In contrast to common approaches for such technical accounts, however, there is an individual “user” available: the robot. Each robot can run in the context of its own account, which has exactly the entitlements required for the (commonly very narrow) tasks that robot executes.
Using standard functions of IGA, robots can be managed as a specific type of user. Each robot needs to be onboarded, which is a process that is very similar to onboarding (and changing and offboarding) an external user. There is nothing fundamentally new or different, compared to existing IAM processes. The robot should have a responsible manager, and changes in management can be handled via well-defined mover processes – as they should be for every technical account and every external user.
The entitlements can be granted based on common entitlement structures such as roles and groups, or – in sophisticated IAM infrastructures – be based on runtime authorization, i.e. Dynamic Authorization Management.
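The per-robot model described above reduces to a few lines: one account per robot, a responsible owner, and an entitlement check enforcing least privilege. The names and entitlement strings below are illustrative assumptions, not any specific IGA product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class RobotAccount:
    """One technical account per robot - the opposite of a shared 'super-robot' account."""
    robot_id: str
    owner: str                    # responsible manager; changed via a mover process
    entitlements: set = field(default_factory=set)

def authorize(account: RobotAccount, action: str) -> bool:
    """Least privilege: a robot may only perform actions it was explicitly granted."""
    return action in account.entitlements

invoice_bot = RobotAccount(
    robot_id="rpa-042",
    owner="j.smith",
    entitlements={"erp:read-invoices", "erp:post-payment"},
)

print(authorize(invoice_bot, "erp:read-invoices"))  # True
print(authorize(invoice_bot, "hr:read-salaries"))   # False - outside this robot's narrow scope
```

Because each robot carries only its own narrow entitlement set, a compromised or misbehaving robot cannot exercise the accumulated privileges that a shared “super-robot” account would expose.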
As for technical accounts, in the rare case that someone else needs access to that account, Privileged Access Management (PAM) comes into play. Access to that in some sense shared account can be handled via Shared Account Password Management. Behavior of the accounts can be tracked via the UBA (User Behavior Analytics) capabilities found in several of the PAM solutions in the market, in specific UBA products, or in other solutions.
There are no identity and access related functionalities specific to RPA that can’t be managed well by relying on standard IAM capabilities of IGA (Identity Governance & Administration) and PAM. And if the accounts are managed and associated per robot, instead of creating the “super-robot” accounts, RPA is not an IAM challenge, but IAM helps in managing RPA well, without growing security risks.
Data, a massive amount of data, seems to be the holy grail in building more sophisticated AIs, creating human-like chatbots and selling more products. But is more data actually better? With GDPR significantly limiting the way we generate intelligence through collecting personally identifiable data, what is next? How can we create a specific understanding of our customers to exceed their expectations and needs with less data?
Many of us collect anything we can get our hands on, from personal information and behavioral data to “soft” data that one might run through a natural language processing (NLP) program – examining interactions between devices and humans to pull meaning from conversations and drive sales. We believe the more we collect, the more we know. But is that true, or merely a belief?
Today, an estimated 2.7 Zettabytes of data exist in the digital world, but only 0.5% of this data is analyzed. With all of this data being collected and only a fraction of it being analyzed, it is no wonder that 85% of data science projects fail, according to a recent analysis by Gartner.
Data science is utterly complex. If you are familiar with the data science hierarchy of needs, AI and machine learning, you know that for a machine to process data and learn on its own, without our constant supervision, it needs massive amounts of quality data. And within that lies a primary question: even if you have an enormous amount of quality data, is it the right data to drive the insights we need to truly know our customers?
A client of ours shared their dilemma, which might sound familiar to you. They collected all possible data points about their customers, data that was bias-free and clean.
“We know where the user came from, what they bought, how often they used our product, how much was paid, when they returned, etc. We created clusters within our database, segmented them, overlaid them with additional identifiable data about the user, and built our own ML algorithm so we could push the right marketing message and price point to existing and new customers to increase sales volume.”
All this effort and all this data did not lead to increased sales; they still struggled to convert customers. Was their data actionable? Our client assumed that the data available was directly correlated to driving the purchases. And here lies the dilemma with our assumptions and datasets – we believe the data we collect has something to do with the purchase, when in reality the answer is: not always.
Humans are “wired up” to use multiple factors to form a buying decision. Those decisions are not made in a vacuum, and they don’t just happen online or offline. Each product, service or brand is surrounded by a set of factors or needs that play a vital role in the customer’s decision-making process – which in most cases has little or nothing to do with one’s demographics, lifestyle, the price of the product, etc.
This idea is based on the principles of behavioral economics, which explain that multiple factors – cognitive, social, environmental and economic – play a role in one’s decision process and directly correlate with how we decide what to spend our money on and how we choose between competing products and experiences.
All these factors combined allow customers to individually determine if a brand can fulfill their expectations or why they may prefer one brand over another.
Product preference is directly linked to market share in sales volume. An analysis by MASB found a direct linkage between brand preference and market share across 120 brands and 12 categories. So, if the data that is collected does not directly link to preference, how can any data model, ML algorithm, or AI be useful in stimulating sales?
Stephen Diorio, from Forbes, argues, “if more executives understood the economics of customer behavior and the financial power of brand preference they would be better armed to work with CMOs to generate better financial performance.” To remain competitive and stop the rat race for more data, many urge that “companies must apply the latest advances in decision science and behavioral economics to ensure that investment in market research and measurement will yield metrics that isolate the most critical drivers of brand preference.”
In the future, collecting data will become more complicated and, in some senses, limited by GDPR, CCPA, and other privacy laws worldwide. To get ahead of the curve, companies should quickly shift their perspective from gathering data about who, where, when, what and how, to a broader understanding of why customers prefer what they prefer and what drives those preferences. Understanding your customers from a behavioral economics point of view will deliver superiority over the competition, showing how to exceed your shoppers’ expectations and how to lean their preference in your favor.
AI for the Future of your Business: Effective, Safe, Secure & Ethical
Everything we admire, love and need to survive, and everything that brings us further in creating a better future with a human face, is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential to lead us into a new era of prosperity like we have not seen before – if we succeed in keeping AI safe, secure and ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that makes it accessible for [...]