Blog posts by John Tolbert
This month we launched our Cybersecurity Leadership Summit in Berlin. A pre-conference workshop entitled “Focusing Your Cybersecurity Investments: What Do You Really Need for Mitigating Your Cyber-risks?” was held on Monday. The workshop was both business-oriented and technical in nature. Contemporary CISOs and CIOs must apply risk management strategies, and it can be difficult to determine which cybersecurity projects should be prioritized. Leaders in attendance heard the latest applied research from Martin Kuppinger, Matthias Reinwarth, and Paul Simmonds.
Tuesday’s opening keynote was delivered by Martin Kuppinger on the topic of User Behavioral Analytics (UBA). UBA has become both a successor and an adjunct to SIEMs, and as such serves as a link between traditional network-centric cybersecurity and identity management. Torsten George of Centrify pitched the importance of zero-trust concepts. Zero-trust can be seen as improving security by requiring risk-adaptive and continuous authentication. But trust is also a key component of things like federation architecture, so it won’t be going away altogether.
Innovation Night was held on Tuesday. In this event, a number of different speakers competed by describing how their products successfully incorporated Artificial Intelligence / Machine Learning techniques. The winner was Frederic Stallaert, Machine Learning Engineer/Data Scientist at ML6. His topic was the adversarial uses of AI, and how to defend against them.
Here are some of the highlights. In the social engineering track, Enrico Frumento discussed the DOGANA project. This is the Advanced Social Engineering and Vulnerability Analysis Framework. They have been performing Social Driven Vulnerability Assessments and have interesting but discouraging results. In a recent study, 59% of users tested in an energy sector organization fell prey to a phishing training email. Malicious actors use every bit of information about targets available to them, regardless of legality. Organizations providing anti-phishing training are encumbered by GDPR.
In Threat intelligence, we had a number of good speakers and panelists. Ammi Virk presented on Contextualizing Threat Intelligence. One of his excellent points was recognizing the “con in context”, or guarding against bias, assumptions, and omissions. Context is essential in turning information into intelligence. This point was also made strongly by John Bryk in his session.
JC Gaillard posed a controversial question in his session, “Is the role of CISO outdated?”. He looked at some of the common problems CISOs face, such as being buried in an org chart, inadequate funding, and lack of authority to solve problems. His recommendations were to 1) elevate the CISO role and give it political power, 2) move the purely technical IT Security functions under the CIO or CTO, and 3) put CISOs on par with newer positions such as CDOs and DPOs.
Internet Balkanization was a topic in the GDPR and Cybersecurity session. Daniel Schnok gave a thought-provoking presentation on the various political, economic, and technological factors that are putting up barriers and fragmenting the Internet today. For example, we know that countries such as China, Iran, and Russia have politically imposed barriers and content restrictions. GDPR is limiting the flow of personal information in Europe, and in some cases, overreaction to GDPR is impairing the flow of other types of data as well. The increasing consolidation of data under large, US-based tech firms is another example of balkanization.
In my final keynote I described the role that AI and ML are playing in cybersecurity today. These technologies are not merely nice-to-haves but are essential components, particularly for anti-malware, EDR/MDR, traffic analysis, etc. Nascent work on using ML techniques to facilitate understanding of access control patterns is underway by some vendors. These techniques may lead to a breakthrough in data governance in the mid-term. AI and ML based solutions are subject to attack (or “gaming”). Determined attackers can fool ML enhanced tools into missing detection of malware, for example. Lastly, Generative Adversarial Networks (GANs) serve as an example of how bad actors can use AI technologies as a means to advance attacks. GAN-based tools exist for password-cracking, steganography, and creating fake fingerprints for fooling biometric readers. In short, ML can help, but it can also be attacked and used to create more powerful cyber attacks.
We would like to thank our sponsors: iC Consult, Centrify, Cisco, One Identity, Palo Alto Networks, Airlock, Axiomatics, BigID, ForgeRock, Nexis, Ping Identity, SailPoint, MinerEye, PlainID, FireEye, Varonis, Thycotic, and Kaspersky Lab.
We will return to Berlin for CSLS 2019 on 12-14 November of next year.
Fall is Consumer Identity Season at KuppingerCole, just in time for holiday shopping. Last week we kicked off our 2018 tour in Seattle. The number of attendees and sponsors was well up over last year, indicating the significant increase in interest in the Consumer Identity and Access Management (CIAM) subject. CIAM is one of the fastest growing market segments under IAM, and with good reason. Companies that deploy CIAM solutions find that they can connect with their consumers better, deliver a more positive experience, and generate additional revenue. CIAM can also aid with regulatory compliance, such as privacy regulations (GDPR, CCPA, etc.) and financial regulations (AML, KYC, PSD2, etc.).
Some of the big topics last week were authentication methods for CIAM, particularly biometrics, GDPR and privacy regulations around the world, consumer preferences for identity, and blockchain identity.
CIAM requires thinking “outside-in” about authentication. The FIDO Alliance held a workshop on Wednesday. FIDO was a particularly relevant topic for CIW, as there were many discussions on the latest authentication methods and techniques. The turnout was excellent, and attendees heard from some of the leaders and active members of the organization. I believe that FIDO will play a key role in modernizing authentication technology, especially for consumer-facing applications. FIDO specifications have been maturing rapidly. The FIDO2 specifications, comprising the W3C WebAuthn and CTAP protocols, are exactly what has been needed to speed adoption. Expect to see FIDO deployments increasing as the major browsers fully support the standard. We can also expect to see higher consumer satisfaction as FIDO rolls out widely, due to ease of use, and better security and privacy. For an overview of how FIDO works, see Alex Takakuwa’s presentation.
Mobile biometric solutions are enjoying popularity, as many companies want to find out how to reduce friction for consumers in the authentication process. We considered risk-adaptive and continuous authentication as means to right-size authentication to specific use cases, such as finance and health care.
I noted that the “C” in CIAM can also apply to “citizens” as well as customers and consumers. State and local government agencies are exploring Government-to-Citizen (G2C) identity paradigms, and in some cases CIAM solutions are a good fit.
Privacy is an ever-present concern for consumer-facing systems. GDPR is in effect in Europe, and companies around the world must now abide by it when processing personal data of European persons. Tim Maiorino gave an update on the state of GDPR. The subject of California’s upcoming privacy law arose in some panels. Will the California model be adopted across the US? Probably not at the federal level, at least not in the foreseeable future. However, other states are likely to enact similar privacy laws, leading to discrepancies and possible difficulties in complying with similar but different regulations. We learned from Marisa Rogers that there is a call for participation for an ISO group on privacy by design for consumer services.
There were several speakers and panels addressing consumer wants and preferences with regard to CIAM. We had a few sessions on blockchain and identity. Didier Collin de Causabon gave a good example of how blockchain may be able to aid with KYC. Sarah Squire, co-founder and vice-chair of IDPro, gave a great talk on the role of identity professionals in business. Her keynote also contains a lot of practical advice on IAM/CIAM implementations and where we as an industry can go from here.
We are already actively planning on CIW for 2019. Join us at the Motif Hotel in Seattle next September 25-27 for the next edition.
Thanks to all of our speakers and panelists for sharing their knowledge. Also thanks to our event sponsors Gigya – SAP Customer Data Cloud, WSO2, Radiant Logic, Nok Nok Labs, Trusted Key, iWelcome, Auth0 and Uniken.
Entrust Datacard, founded in 1969 and headquartered in Minnesota, announced today that it is making a strategic investment in CensorNet and acquiring the SMS Passcode business from CensorNet (originally a Danish company). Entrust Datacard is a strong brand in IAM, with card and certificate issuance, and financial and government sector business.
CensorNet was founded in 2007 in the UK. Their original product was a secure web gateway. It now includes multi-mode in-line and API-based CASB service. It also has an email security service, which utilizes machine learning algorithms to scan email for potentially malicious payloads. Entrust Datacard already has substantial capabilities in the adaptive and multi-factor authentication areas, and the SMS Passcode product line will add to that. With this investment and acquisition, Entrust Datacard plans to move beyond digital transformation to realize continuous authentication and enhance its e-government offerings.
The results of the acquisition will be reflected in product roadmaps, likely starting in 2019. Entrust Datacard products and services will continue to handle initial authentication, and CensorNet’s capabilities will be able to add user activity monitoring through the CASB piece. The integration of identity-linked event data from CensorNet CASB will help security analysts to know, for example, which files users are moving around, and who and what are users emailing. This functionality will help administrators reduce the possibility of fraud and data loss.
Why does it seem to be getting harder to delete information online? GDPR will take effect in just a few days. GDPR empowers EU people to take control of their personal information. When in force, GDPR will mandate that companies and other organizations which control or process personal information must comply with delete requests. Users around the world are more cognizant of the data they create and leave online. Even outside the EU, people want to be able to delete data which they deem is no longer useful.
Enter the “archive” button. On some social media sites and other popular applications, the archive button appears to have replaced the old familiar “delete” button. Why? It is ostensibly to make it easier for users to retrieve information that they want out of sight. App makers reason that users don’t always truly want something gone when they hit delete. Sometimes, they’re right. But most of the time, “delete” should mean delete. If one searches hard enough, one can usually find ways to actually delete data, even though the top-level UIs only show options to archive.
Another reason “archive” has replaced “delete” is that all information has some value, or at least that is the guiding principle in Big Data circles. Just because a user wants data removed doesn’t mean that it doesn’t have value for others. Social network operators make money off user data, so they believe it must be retained for historical analysis.
Turbulence in the markets and bad press for social media companies may be a leading indicator as to the importance of personal data control for an increasing number of users worldwide. In advance of GDPR, and for the benefit of all users, we urge app makers to bring back the delete button.
As the May 25th, 2018 GDPR enforcement date approaches, more and more companies are actively taking steps to find, evaluate, and protect the personally identifiable information (Personal Data) of EU persons. Organizations that do business with EU persons are conducting data protection impact assessments (DPIAs) to find Personal Data under their control. Many are also asking “do we need to keep the data?” and putting into practice data minimization principles. These are good measures to take.
IT and privacy professionals are inventorying HR, CRM, CIAM, and IAM systems, which is reasonable since these likely contain Personal Data. Administrators should also consider performing DPIAs on security solutions.
Security solutions such as SIEMs, EMMs, and Endpoint Security/EDR tools collect lots of data, including Personal Data, for analysis. Many of the following types of Personal Data (as defined by GDPR) are routinely harvested for ongoing security and risk analysis:
- Email address
- User attributes, including organizational affiliations, citizenship, group membership
- IP address
- User-created data files
Most security solutions offer options for on-premises analysis or cloud-based analysis. As an example, most anti-malware products "scoop up" files for deep inspection at the vendor's cloud, which may be outside of the EU. Some vendor solutions are configurable in terms of which attributes can be collected and/or sent elsewhere for analysis; some are not.
Any processing of Personal Data is controlled under GDPR. The definition of processing is so broad that it likely includes these forms of scanning and analysis.
In light of GDPR, one question administrators should ask is, “Is this information collected with user consent?” In some cases, user consent will be required. However, according to GDPR Article 6, personal information collection may proceed for the following purposes:
- for the performance of a contract or legal obligation;
- to protect the vital interests of the data subject;
- for a task in the public interest;
- or where processing is necessary for the legitimate interests of the controller.
Moreover, there will be situations in which Personal Data may be processed by more than one Data Processor. In these joint-processor scenarios, all entities involved in processing share responsibility for ensuring that the use of Personal Data is authorized under one of the GDPR-specified purposes above.
Security administrators should work with their DPOs and legal team to address the following additional points:
- Determine which of your deployed security solutions collect which kinds of data; in effect, do DPIAs on security solutions.
- Ascertain where this data goes: local storage? Telemetry transmitted to the cloud? If so, does it stay in the EU? Could it go outside the EU? GDPR defines the notion of data protection adequacy with regard to countries and organizations outside the EU. The Official Journal of the EU will publish and maintain a list of locations for which no additional data transfer agreements will be required.
- If the security scanning or analysis is performed by a third party or cloud provider, there must be a written legal agreement as set out in Article 28(3), irrespective of where the processing is done.
- Do your security solutions permit Personal Data anonymization? GDPR Recital 26 states that data which is sufficiently masked to prevent the identification of the user will not be subject to the data protection mandates. However, SIEMs and forensic tools sometimes need to be able to pinpoint users. Specifically, IP addresses and user credentials are almost always necessary and serve as “primary keys” on which security analyses are based. Within your security solutions, is it possible to mask user data at a high level for external analysis, but leave details encrypted locally, so that they can be unmasked by authorized security analysts during investigations? This is a difficult technical challenge, which is not supported yet by many security vendors. Regardless, even local processing of data elements such as IP address falls under the jurisdiction of GDPR.
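The masking approach described above can be sketched very simply. Below is a minimal, purely illustrative example (not any vendor's actual implementation) of keyed pseudonymization: an HMAC turns an IP address into a stable pseudonym that can still serve as a correlation key for external analysis, while the key stays on-premises so authorized analysts can re-identify records during investigations. The key value and truncation length are placeholders.

```python
import hmac
import hashlib

# Placeholder key: in practice this would be generated securely and
# held locally (e.g., in an HSM), never sent to the external analyzer.
SECRET_KEY = b"local-secret-held-by-security-team"

def pseudonymize_ip(ip_address: str) -> str:
    """Return a stable, keyed pseudonym for an IP address."""
    digest = hmac.new(SECRET_KEY, ip_address.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

# The same input always maps to the same pseudonym, so correlation
# across events still works even though the raw IP never leaves site.
assert pseudonymize_ip("192.0.2.10") == pseudonymize_ip("192.0.2.10")
assert pseudonymize_ip("192.0.2.10") != pseudonymize_ip("192.0.2.11")
```

Note that whether such keyed pseudonyms count as anonymized or merely pseudonymized under Recital 26 is a legal question to settle with your DPO, since the key holder can still re-identify the data.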
In summary, don’t forget your security solutions when running DPIAs. Check with vendors about what information they collect and how it is treated. Work closely with your DPOs and legal counsel to plan the best course of action if you find that remediation or some re-design is needed.
The Equifax data breach saga continues to unfold. In late 2017, the company admitted it had suffered significant data loss starting in March of that year. There were likely multiple data theft events over a number of months. At some point in May, they notified a small group of customers but kept mostly quiet. Months later the story went public, after Equifax contacted government officials at the US federal and state level. The numbers and locations of consumers affected by the breach keep growing. As of March 1, 2018, Equifax is reported to have lost control of personally identifiable information on roughly 147 million consumers. Though most of the victims are in the US, Equifax had and lost data on consumers from Argentina to the UK.
Perpetrators made off with data such as names, addresses, Social Security numbers, and in some cases, driver’s license numbers, credit card numbers, and credit dispute files. Much of this is considered highly sensitive PII.
The breach and its effects on consumers is only part of the story. Equifax faces 240 class action lawsuits and legal action from every US state. However, the US Consumer Financial Protection Bureau is not investigating, has issued no subpoenas, and has essentially “put the brakes” on any punitive actions. The US Federal Trade Commission (FTC) can investigate, but its ability to levy fines is limited. On March 14, 2018, the US Securities and Exchange Commission (SEC) brought insider trading charges against one of Equifax’s executives, who exercised his share options and then sold before news of the breach was made public.
Given that Equifax is still profiting, and the stock price seems to have suffered no lasting effects (some financial analysts are predicting the stock price will reach pre-breach levels in a few months), fines are one of the few means of incentivizing good cybersecurity and privacy practices. Aiming for regulatory compliance is considered by most in the field to be the bare minimum that enterprises should strive for with regard to security. A failure to strictly enforce consumer data protection laws, as in the Equifax case so far, may set a precedent, and may allow other custodians of consumers’ personal data to believe that they won’t be prosecuted if they cut corners on cybersecurity and privacy. Weak security and increasing fraud are not good for business in general.
At the end of May 2018, the General Data Protection Regulation (GDPR) comes into effect in the EU. GDPR requires 72-hour breach notification and gives officials the ability to fine companies which fail to protect EU person data up to 4% of annual global revenue or €20M, whichever is greater, per instance. If an Equifax-like data breach happens in the EU after GDPR takes hold, the results will likely be very different.
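For a sense of scale, the fine ceiling works out to a simple maximum. The sketch below uses a purely hypothetical revenue figure for illustration:

```python
# Illustrative sketch of the GDPR maximum-fine ceiling (Article 83(5)):
# up to EUR 20M or 4% of annual global turnover, whichever is greater.
def max_gdpr_fine(annual_global_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_global_revenue_eur)

# A hypothetical company with EUR 3.1B in annual revenue would face
# a ceiling of EUR 124M per instance, far above the EUR 20M floor.
print(max_gdpr_fine(3_100_000_000))  # 124000000.0

# For smaller firms, the flat EUR 20M floor dominates.
print(max_gdpr_fine(100_000_000))  # 20000000.0
```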
Regulators in all jurisdictions must enforce the rules on the books for the good of consumers.
The Facebook data privacy story continues to be in the headlines this week. For many of us in IT, this event is not really a surprise. The sharing of data from social media is not a data breach, it’s a business model. Social media developers make apps (often as quizzes and games) that harvest data in alignment with social networks’ terms of service. By default, these apps can get profile information about the app users and their friends/contacts. There are no granular consent options for users. What gives this story its outrage factor is the onward sharing of Facebook user data from one organization to another, and the political purposes for which the data was used. Facebook now admits that the data of up to 87 million users was used by Cambridge Analytica. If you are a US-based Facebook user, and are curious about how they have categorized your politics, go to Settings | Ads | Your Information | Your Categories | US Politics.
But data made available through unsecured APIs, usually exported in unprotected file formats without fine-grained access controls or DRM, cannot be assumed to be secure in any way. Moreover, the Facebook - Cambridge Analytica incident is probably just the first of many that are as yet unreported. There are thousands of apps and hundreds of thousands of app developers that have had similar access to Facebook and other social media platforms for years.
CNBC reports that Facebook was attempting to acquire health record data from hospitals, but that those plans are on “hiatus” for the moment. Though the story says the data would be anonymized, there is no doubt that unmasked health care records plus social media profile information would be incredibly lucrative for Facebook, health care service providers, pharmaceutical companies, and insurance companies. But again, according to this report, there was no notion of user consent considered.
It is clear that Facebook users across the globe are dissatisfied with the paucity of privacy controls. In many cases, users are opting out by deleting their accounts, since that seems to be the only way at present to limit data sharing. However, the data sharing without user consent problem is endemic to most social networks, telecommunications networks, ISPs, smartphone OSes and app developers, free email providers, online retailers, and consumer-facing identity providers. They collect information on users and sell it. This is how these “free” services pay for themselves and make a profit. The details of such arrangements are hidden in plain sight in the incomprehensible click-through terms of service and privacy policies that everyone must agree to in order to use the services.
This is certainly not meant to blame the victim. At present, users of most of these services have few if any controls over how their data is used. Even deleting one’s account doesn’t work entirely, as a Belgian court found when it ruled against Facebook for collecting information on Belgian citizens who were not even Facebook users.
The rapidly approaching May 25th GDPR effective date will certainly necessitate changes in the data sharing models of social media and all organizations hosting and processing consumer data for EU persons. Many have wondered if GDPR will be aggressively enforced. As a result of this Facebook – Cambridge Analytica incident, EU Justice Commissioner Vera Jourova said “I will take all possible legal measures including the stricter #dataProtection rules and stronger enforcement granted by #GDPR. I expect the companies to take more responsibility when handling our personal data.” We now have the answer to the “Will the EU enforce GDPR?” question.
It is important to note that GDPR does not aim to put a damper on commerce. It only aims to empower consumers by giving them control over what data they share and how it can be used. GDPR requires explicit consent per purpose (with some exceptions for other legitimate processing of personal data). This consent per purpose stipulation will require processors of personal data to clearly ask and get permission from users.
Other countries are looking to the GDPR model for revamping their own consumer privacy regulations. We predict that in many jurisdictions, similar laws will come into effect, forcing social networks and consumer-facing companies to change how they do business in more locations.
Even before the Cambridge Analytica story broke, Facebook, Google, and Twitter were under fire for allowing their networks to spread “fake news” in the run-up to the US election cycle. Disengagement was growing, with some outlets reporting 18-24% less time spent on site per user. Users are quickly losing trust in social media platforms for multiple reasons. This impacts commerce as well, in that many businesses such as online retailers rely on “social logins” such as Facebook, Twitter, Google, etc.
To counter their growing trust problems, social network providers must build in better privacy notifications and consent mechanisms. They must increase the integrity of content without compromising free speech.
Facebook and other social media outlets must also communicate these intentions to improve privacy controls and content integrity monitoring to their users. In the Facebook case, it is absolutely paramount to winning back trust. CEO Mark Zuckerberg announced that Facebook is working on GDPR compliance but provided no details. Furthermore, he has agreed to testify before the US Congress, but his unwillingness to personally appear in the UK strengthens a perception that complying with EU data protection regulations is not a top priority for Facebook.
If social network operators cannot adapt in time, they will almost certainly face large fines under GDPR. It is quite possible that the social media industry may be disrupted by new privacy-protecting alternatives, funded by paid subscriptions rather than advertising. The current business model of collecting and selling user data without explicit consent will not last. Time is running out for Facebook and other social network providers to make needed changes.
Just when you thought we had enough variations of IAM, along comes FIAM. Fake digital identities are not new, but they are getting a lot of attention in the press these days. Some fake accounts are very sophisticated and are difficult for automated methods to recognize. Some are built using real photos and stolen identifiers, such as Social Security Numbers or driver’s license numbers. Many of these accounts look like they belong to real people, making it difficult for social media security analysts to flag them for investigation and remove them. With millions of user credentials, passwords, and other PII available on the dark web as a result of the hundreds of publicly acknowledged data breaches, it’s easy for bad actors to create new email addresses, digital identities, and social media profiles.
As we might guess, fake identities are commonly used for fraud and other types of cybercrime. There are many different types of fraudulent use cases, ranging from building impostor identities and attaching to legitimate user assets, to impersonating users to spread disinformation, and for defamation, extortion, catfishing, stalking, trolling, etc. Fake social media accounts were used by St. Petersburg-based Internet Research Agency to disseminate election-influencing propaganda. Individuals associated with these events have been indicted by the US, but won’t face extradition.
Are there legitimate uses for fake accounts? In many cases, social network sites and digital identity providers have policies and terms of service that prohibit the creation of fake accounts. In the US, violating websites’ terms of service also violates the 1984 Computer Fraud and Abuse Act. Technically then, in certain jurisdictions, creating and using fake accounts is illegal. It is hard to enforce, and sometimes gets in the way of legitimate activities, such as academic research.
However, it is well-known that law enforcement authorities routinely and extensively use fake digital identities to look for criminals. Police have great success with these methods, but also scoop up data on innocent online bystanders as well. National security and intelligence operatives also employ fake accounts to monitor the activities of individuals and groups they suspect might do something illegal and/or harmful. It’s unlikely that cops and spies have to worry much about being prosecuted for using fake accounts.
A common approach, documented in Frederick Forsyth’s 1971 novel “The Day of the Jackal”, is to use the names and details of dead children. This creates a persona that is very difficult to identify as fraudulent. The technique is reportedly still in use, and when discovered it causes immense distress to the relatives.
In the private sector, employees of asset repossession companies also use fake accounts to get close to their targets to make it easier for them to repo their cars and other possessions. Wells Fargo has had an ongoing fake account creation scandal, where up to 3.5 million fake accounts were created so that the bank could charge customers more fees. The former case is sneaky and technically illegal, while the latter case is clearly illegal. What were the consequences for Wells Fargo? They may have suffered a temporary stock price setback and credit downgrade, but their CEO got a raise.
FIAM may sound like a joke, but it is a real thing, complete with technical solutions (using above-board IDaaS and social networks), as well as laws and regulations sort of prohibiting the use of fake accounts. FIAM is at once a regular means of doing business, a means for spying, and an essential technique for executing fraud and other illegal activities. It is a growing concern for those who suffer loss, particularly in the financial sector. It is also now a serious threat to social networks, whose analysts must remove fake accounts as quickly as they pop up, lest they be used to promote disinformation.
At KuppingerCole, cybersecurity and identity management product/service analysis are two of our specialties. As one might assume, one of the main functional areas in vendor products we examine in the course of our research is administrative security. There are many components that make up admin security, but here I want to address weak authentication for management utilities.
Most on-premises and IaaS/PaaS/SaaS security and identity tools allow username and password for administrative authentication. Forget an admin password? Recover it with KBA (Knowledge-based authentication).
Many programs accept other, stronger forms of authentication, and these should be the default. Here are some better alternatives:
- Web console protected by existing Web Access Management solution utilizing strong authentication methods
- SAML for SaaS
- Mobile apps (if keys are secured in Secure Enclave, Secure Element, and app runs as Trusted App in Trusted Execution Environment [TEE])
- FIDO UAF Mobile apps
- USB Tokens
- FIDO U2F devices
- Smart Cards
Even OATH TOTP and Mobile Push apps, while having some security issues, are still better than username/passwords.
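OATH TOTP itself is a simple construction: an HMAC over a time-based counter, as specified in RFC 6238. Here is a minimal sketch in Python; the secret shown is the RFC's published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time step."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238's published test secret ("12345678901234567890", base32-
# encoded); at T=59s the 6-digit code matches the tail of the RFC's
# 8-digit SHA-1 test vector 94287082.
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59) == "287082"
```

The weakness, of course, is that the shared secret sits on both the device and the server, and codes can be phished in real time; hence the preference for the public-key methods listed above.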
Why? Let’s do some threat modeling.
Scenario #1: Suppose you’re an admin for Acme Corporation, and Acme just uses a SaaS CIAM solution to host consumer data. Your CIAM solution is collecting names, email addresses, physical addresses for shipping, purchase history, search history, etc. Your CIAM service is adding value by turning this consumer data into targeted marketing, yielding higher revenues. Until one day a competitor comes along, guesses your admin password, and steals all that business intelligence. Corporate espionage is real - the “Outsider Threat” still exists.
Scenario # 2: Same CIAM SaaS background as #1, but let’s say you have many EU customers. You’ve implemented a top-of-the-line CIAM solution to collect informed consent to comply with GDPR. If a hacker steals customer information and publishes it without user consent, will Acme be subject to GDPR fines? Can deploying username/password authentication be considered doing due diligence?
Scenario # 3: Acme uses a cloud-based management console for endpoint security. This SaaS platform doesn’t support 2FA, only username/password authentication. A malicious actor uses KBA to reset your admin password. Now he or she is able to turn off software updates, edit application whitelists, remove entries from URL blacklists, or uninstall/de-provision endpoint agents from your company’s machines. To cover their tracks, they edit the logs. This would make targeted attacks so much easier.
Upgrading to MFA or risk-adaptive authentication would decrease the likelihood of these attacks succeeding, though better authentication is not a panacea; there is more to cybersecurity than authentication. The problem is that many security vendors still allow password-based authentication to their management consoles. In some cases it is not only the default but the only method available. Products or services purporting to enhance security or manage identities should require strong authentication.
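Risk-adaptive authentication, as mentioned above, weighs contextual signals and demands step-up authentication when risk is elevated. The following is a deliberately simplified sketch; the signal names, weights, and thresholds are invented for illustration, and real products use far richer models:

```python
def risk_score(signals: dict) -> int:
    """Sum the weights of the risk signals that are present (weights invented)."""
    weights = {
        "new_device": 30,      # login from an unrecognized device
        "unusual_geo": 25,     # location far from the user's usual pattern
        "off_hours": 10,       # access outside typical working hours
        "admin_console": 20,   # target is a privileged management console
    }
    return sum(weights[s] for s in signals if s in weights and signals[s])

def required_auth(signals: dict) -> str:
    """Map the score to an authentication decision (thresholds invented)."""
    score = risk_score(signals)
    if score >= 50:
        return "deny"          # too risky: block and alert
    if score >= 20:
        return "mfa_step_up"   # require a second factor
    return "password_ok"

print(required_auth({"off_hours": True}))                       # password_ok
print(required_auth({"new_device": True, "admin_console": True}))  # deny
```

The point of the sketch is the shape of the decision, not the numbers: an unremarkable login proceeds, a suspicious one triggers step-up, and a clearly anomalous one is blocked.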
The EU’s General Data Protection Regulation (GDPR) will force many changes in technology and processes when it comes into effect in May 2018. We have heard extensively about how companies and other organizations will have to provide capabilities to:
- Collect explicit consent for the use of PII per purpose
- Allow users to revoke previously given consent
- Allow users to export their data
- Comply with users’ requests to delete the data you are storing about them
- Provide an audit trail of consent actions
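The capabilities listed above can be modeled as operations on per-user, per-purpose consent records backed by an append-only audit trail. A minimal sketch follows; the data model is hypothetical and is not a compliance recipe:

```python
import time

class ConsentStore:
    """Track per-user, per-purpose consent with a simple audit trail."""

    def __init__(self):
        self.consents = {}   # (user, purpose) -> bool
        self.audit = []      # append-only record of consent actions

    def _record(self, user, purpose, action):
        self.audit.append({"user": user, "purpose": purpose,
                           "action": action, "ts": time.time()})

    def grant(self, user, purpose):
        """Collect explicit consent for one purpose."""
        self.consents[(user, purpose)] = True
        self._record(user, purpose, "granted")

    def revoke(self, user, purpose):
        """Allow the user to withdraw previously given consent."""
        self.consents[(user, purpose)] = False
        self._record(user, purpose, "revoked")

    def has_consent(self, user, purpose):
        return self.consents.get((user, purpose), False)

    def export_user(self, user):
        """Data-portability view: all consent actions for one user."""
        return [a for a in self.audit if a["user"] == user]

    def erase_user(self, user):
        """Right to erasure: drop the user's consent records."""
        self.consents = {k: v for k, v in self.consents.items() if k[0] != user}

store = ConsentStore()
store.grant("alice", "marketing")
print(store.has_consent("alice", "marketing"))  # True
store.revoke("alice", "marketing")
print(store.has_consent("alice", "marketing"))  # False
print(len(store.export_user("alice")))          # 2
```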
Software vendors are preparing, particularly those providing solutions for IAM, CIAM, ERP, CRM, PoS, etc., by building in these features where they are not already available. These are necessary precursors for GDPR compliance. However, end-user organizations have other steps to take, and they should begin now.
GDPR mandates that the responsible custodian, in many cases the organization’s Data Protection Officer (DPO), notify the Supervisory Authority (SA) within 72 hours of discovering a data breach. If EU persons’ data is found to have been exfiltrated, those users should also be notified. Organizations must begin preparing now for how to execute notifications: define responsible personnel, draft the notifications, and plan for remediation.
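Since the 72-hour clock starts at discovery, incident-response tooling should compute and track the deadline automatically rather than leaving it to ad hoc judgment. A trivial sketch:

```python
from datetime import datetime, timedelta

NOTIFY_WINDOW = timedelta(hours=72)  # GDPR breach-notification window

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest time the Supervisory Authority must be notified."""
    return discovered_at + NOTIFY_WINDOW

def is_overdue(discovered_at: datetime, now: datetime) -> bool:
    return now > notification_deadline(discovered_at)

discovered = datetime(2018, 5, 25, 9, 0)
print(notification_deadline(discovered))                    # 2018-05-28 09:00:00
print(is_overdue(discovered, datetime(2018, 5, 27, 9, 0)))  # False
print(is_overdue(discovered, datetime(2018, 5, 29, 9, 0)))  # True
```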
Consider some recent estimated notification intervals for major data breaches in the US:
- Equifax: 6 weeks to up to 4-5 months
- Deloitte: perhaps 6 months
- SEC: up to 1 year
- Yahoo: the latest revelations after the Verizon acquisition indicate up to 4 years for complete disclosure
The reasons data custodians need to be quick about breach notifications are very clear and very simple:
- The sooner victims are notified, the sooner they can begin to remediate risks. For example, Deloitte’s customers could have begun to assess which of their intellectual property assets were at risk and how to respond earlier.
- Other affected entities can begin to react. In the SEC case, the malefactors had plenty of time to misuse the information and manipulate stock prices and markets.
- Cleanup costs will be lower for the data custodian. Moreover, selling stock after a breach is discovered but prior to notification may be illegal in many jurisdictions.
- It will be better for the data custodian’s reputation in the long run if they quickly disclose and fix the problems. The erosion of Yahoo’s share price prior to purchase is clear evidence here.
Understandably, executives can be reluctant in these matters. But delays create the impression of apathy, incompetence, or even malicious intent, as if executives were attempting to hide or cover up such events. Though GDPR is an EU regulation, it applies directly to companies and organizations elsewhere that host data on EU member nations’ citizens. Even for those organizations not subject to GDPR, fast notification of data breaches is highly recommended.