KuppingerCole Blog

Beyond simplistic: Achieving compliance through standards and interoperability

"There is always an easy solution to every problem - neat, plausible, and wrong.
 (
H.L. Mencken)

Finally, it's beginning: GDPR is gaining more and more visibility.

Do you also get more and more GDPR-related marketing communication from IAM and security vendors, consulting firms and, ehm, analyst companies? They all offer some advice for starting your individual GDPR project/program/initiative. And of course, they want you to register your personal data (name, company, position, company size, country, phone, mail, etc.) before they send that ultimate info package over to you. And obviously, they want to acquire new customers and provide you and everyone else with marketing material.

It usually turns out that the content of these packages is OK, but not really overwhelming: a summary of the main requirements of the GDPR, plus, in the best cases, some templates that can be helpful, if you can find them among the marketing material included in the "GDPR resource kit". But the true irony lies in the fact that the GDPR does not allow making consent mandatory for data that is not needed for the service being offered (remember? Name, company, position, company size, country, phone, mail, etc.).

The truth is that GDPR compliance does not come easily, and the promise of an easy shortcut via any GDPR readiness kit won't work out. Instead, processes for storing and processing personal and sensitive data, both newly designed and already implemented ones, will have to undergo profound changes.

Don't get me wrong: Having a template for a data protection impact assessment, a pre-canned template for breach notification, a decision tree for deciding whether you need a DPO or not, and some training material for your staff are all surely important. But they are only a small part of the actual solution.

So in the meantime, while others promise to give you simple solutions, the Kantara Initiative is working on several fronts to provide processes and standards for adequate and, especially, GDPR-compliant management of Personally Identifiable Information. These initiatives include UMA (User-Managed Access), Consent and Information Sharing, OTTO (Open Trust Taxonomy for Federation Operators), and IRM (Identity Relationship Management).

Apart from several other objectives and goals, one main task is to be well-prepared for the requirements of the GDPR (and, e.g., eIDAS). The UMA standard is now reaching a mature 2.0 status. Just a few days ago, two closely interrelated documents were made available for public review that make the cross-application implementation of consent-based access possible. "UMA 2.0 Grant for OAuth 2.0 Authorization" enables asynchronous party-to-party authorization (between the requesting party's client and the resource owner) based on rules and policies. "Federated Authorization for User-Managed Access (UMA) 2.0" defines server-side authorization methods that interoperate across trust domains. This, in turn, allows the resource owner to define her/his rules and policies for access to protected resources in one single place.
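To make this concrete, here is a minimal sketch in Python of how a client might use the UMA 2.0 grant. The endpoints, credentials, and header parsing are purely illustrative assumptions; only the grant type URI and the general ticket-for-RPT exchange come from the specification.

```python
import requests

# 1. The client tries to access a protected resource and receives a
#    permission ticket from the resource server (HTTP 401 + WWW-Authenticate).
resp = requests.get("https://resource.example.com/photos/123")
ticket = resp.headers["WWW-Authenticate"].split('ticket="')[1].rstrip('"')

# 2. The client exchanges the ticket (plus optional claims about the
#    requesting party) for a Requesting Party Token (RPT) at the
#    authorization server's token endpoint.
token_resp = requests.post(
    "https://as.example.com/token",  # hypothetical token endpoint
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": ticket,
        "claim_token": "eyJhbGciOi...",  # placeholder, e.g. an ID token
        "claim_token_format": "http://openid.net/specs/openid-connect-core-1_0.html#IDToken",
    },
    auth=("client_id", "client_secret"),
)
rpt = token_resp.json()["access_token"]

# 3. Retry the resource request with the RPT; the resource server checks it
#    against the policies the resource owner defined at the authorization server.
resp = requests.get(
    "https://resource.example.com/photos/123",
    headers={"Authorization": f"Bearer {rpt}"},
)
```

The key point of the design is visible in step 3: the resource owner's policies live at the authorization server, so any number of resource servers across trust domains can rely on one central place for consent decisions.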

These methods and technologies serve two major purposes: they enable the resource owner (you and me) to define consent securely and conveniently and to implement and enforce it through technology. And they enable requesting parties (companies, governments, and people, again you and me) to gain reliable and well-defined access in highly distributed environments.

So, they need to be evaluated as to whether they are adequate methods for getting to GDPR compliance and far beyond: by empowering the individual, enabling compliant business models, providing shared infrastructure, and designing means for implementing reliable and user-centric technologies. Following these principles can help achieve compliance. "Beyond" means: take the opportunity of becoming and being a trusted and respected business partner that is known for proactively valuing customer privacy and security. Which is surely much better than only preparing for the first/next data breach.

This surely is not an easy approach, but it goes to the core of the actual challenge. Suggested procedures, standards, guidelines and first implementations are available. They are provided to support organizations in moving towards security and privacy from the ground up. The UMA specifications including the ones described above are important building blocks for those who want to go beyond the simple (and insufficient) toolkit approach.

Why I Sometimes Wanna Cry About the Irresponsibility and Shortsightedness of C-Level Executives When It Comes to IT Security

WannaCry counts, without any doubt, amongst the most widely publicized cyber-attacks of the year, although this notoriety may not necessarily be fully justified. Still, it has affected hospitals, public transport, and car manufacturing, to name just a few of the examples that became public. In an earlier blog post, I looked at the role government agencies play. Here I look at businesses.

Let’s look at the facts: The exploit has been known for a while. A patch for the current Windows systems has been out for months, and I’ve seen multiple warnings in the press indicating the urgent need to apply the appropriate patches.

Unfortunately, that didn’t help, as the number of affected organizations around the world has demonstrated. Were those warnings ignored? Or had security teams missed them? Was it ignorance? A lack of established processes? If they had older Windows versions in place in sensitive areas, why hadn’t they replaced them earlier? I could ask many more of these questions. Unfortunately, there is only one answer to them: human failure. There is no excuse.

Somewhere in the organizations affected, someone – most likely several people – have failed. Either they’ve failed by not doing IT security right (and we are not talking about the most sophisticated attacks, but simply about having procedures in place to react to security alerts) or by lacking adequate risk management. Or by a lack of sufficient budgets for addressing the most severe security risks. Unfortunately, most organizations still tend to ignore or belittle the risks we are facing.

Yes, there is no 100% security. But we are all supposed to know how to strengthen our cyber-attack resilience by having the right people, processes, and tools in place. The right people to deal with alerts and incidents. The right processes for both preparing for and reacting to breaches. And the right tools to detect, protect, respond, and recover. Yes, we have a massive skills gap that is not easy to close. But at least to a certain extent, MSSPs (Managed Security Service Providers) are addressing this problem.

Unfortunately, most organizations don’t have enterprise-wide GRC programs covering all risks including IT security risks, and most organizations don’t have the processes defined for an adequate handling of alerts and incidents – to say nothing about having a fully operational CDC (Cyber Defense Center). Having one is a must for large organizations and organizations in critical industries. Others should work with partners or at least have adequate procedures to recover quickly.

Many organizations still rely on a few isolated, old-fashioned IT security tools. Yes, modern tools cost money. But that is not even where the problem starts. It starts with understanding which tools really help mitigate which risks; with selecting the best combination of tools; with having a plan. Alas, I have seen way too few well-thought-out security blueprints so far. Creating such blueprints is not rocket science. It does not require a lot of time. Why are so many organizations lacking these? Having them would allow for targeted investments in security technology that helps, and also for understanding the organizational consequences. Just think about the intersection of IT security and patch management.

To react to security incidents quickly and efficiently, organizations need a CDC staffed with the right people, with defined processes in place for breach and incident response, and well integrated into the overall risk management processes.

Such planning not only includes a formal structure for a CDC, but also plans for handling emergencies, ensuring business continuity, and communicating in case of breaches. As there is no 100% security, there will always be remaining risks. No problem with that. But these must be known, and there must be a plan in place to react in case of an incident.

Attacks like WannaCry pose a massive risk for organizations and their customers - or, in the case of healthcare, patients. It is the duty of the C-level – the CISOs, the CIOs, the CFOs, and the CEOs – to finally take responsibility and start planning for the next such attack in advance.

Why I Sometimes Wanna Cry About the Irresponsibility and Shortsightedness of Government Agencies

Just a few days ago, in my opening keynote at our European Identity & Cloud Conference, I talked about the strong urge to move to more advanced security technologies, particularly cognitive security, to close the skills gap we observe in information security, but also to strengthen our resilience towards cyberattacks. The Friday after that keynote, as I was travelling back from the conference, reports about the massive attack caused by the “WannaCry” malware hit the news. A couple of days later, now that the dust has settled, it is time for a few thoughts about the consequences. In this post, I look at the role government agencies play in increasing cyber-risks; I’ll look at the mistakes enterprises have made in a separate post.

Let me start with a model I used in one of my slides. When looking at how attacks are executed, we can distinguish five phases, ranging from a critical “red hot” state to a less critical “green” state. At the beginning, the attack is created. Then it starts spreading out and remains undetected for a while – sometimes only briefly, sometimes for years. This is the most critical phase, because the vulnerabilities used by the malware exist but aren’t sufficiently protected. During that phase, the exploit is called a “zero-day exploit”, a somewhat misleading term, because the exploit might already have been attacking for many days before day zero – the day the vulnerability is detected and countermeasures can start. In earlier years, there was a belief that no attacks start before a vulnerability is discovered – a naïve belief.

Then phase 3 begins, with the detection of the exploit, analysis, and the creation of countermeasures – most commonly hot fixes (tested only a little and usually requiring manual installation) and patches (better tested and cleared for automated deployment). From there on, during phase 4, patches are distributed and applied.

Ideally, there would be no phase 5, but as we all know, many systems are not patched automatically or, for legacy operating systems, no patches are provided at all. This leaves a significant number of systems unpatched, as in the case of WannaCry. Notably, there were alerts back in March that warned about that specific vulnerability and strongly recommended patching potentially affected systems immediately.

In fact, for the first two phases we must deal with unknown attack patterns and assume that these exist even though we don’t know about them yet. This is a valid assumption, given that new exploits across all operating systems (yes, also for Linux, MacOS or Android) are detected regularly. Afterwards, the patterns are known and can be addressed.

In that phase, we can search for indicators of known attack patterns. Before we know these, we can only look for anomalies in behavior. But that is a separate topic, which has been hot at this year’s EIC.

So, why do I sometimes wanna cry about the irresponsibility and shortsightedness of government agencies? Because of what they do in phases 1 and 2. The NSA has been accused of having been aware of the exploit for quite a while, without notifying Microsoft and thus without allowing them to create a patch. Government organizations from all over the world know a lot about exploits without informing vendors about them. They even create backdoors to systems and software, which potentially can be used by attackers as well. While there are reasons for that (cyber-defense, running their own nation-state attacks, counter-terrorism, etc.), there are also reasons against it. I don’t want to judge their behavior; however, it seems that many government agencies are not sufficiently aware of the consequences of creating their own malware for their purposes, not notifying vendors about exploits, and mandating backdoors in security products. I doubt that the agencies that do so can sufficiently balance their own specific interests against the cyber-risks they cause for the economies of their own and other countries.

There are some obvious risks. The first one is that a lack of notification extends phase 2, and attacks stay undetected longer. It would be naïve to assume that only one government agency knows about an exploit. It might well be detected by other agencies, friends or enemies. It might also have been detected by cyber-criminals. This gets even worse when governments buy information about exploits from hackers to use it for their own purposes. It would also be naïve to believe that only that hacker has found or will find that exploit, or that he will sell that information only once.

As a consequence, the entire economy is put at risk. People might die in hospitals because of such attacks, public transport might break down and so on. Critical infrastructures become more vulnerable.

Creating their own malware and requesting backdoors bear the same risks. Malware will be detected sooner or later, and backdoors also might be opened by the bad guys, whoever they are. The former results in new malware that is created based on the initial one, with some modifications. The latter leads to additional vulnerabilities. The challenge is simply that, in contrast with physical attacks, little is needed to create new malware based on an existing attack pattern. Once detected, the knowledge about it is freely available, and it just takes a decent software developer to create a new strain. In other words, by creating their own malware, government agencies create and publicize blueprints for the next attacks – and sooner or later someone will discover such a blueprint and use it. Cyber-risk for all organizations is thus increased.

This is not a new finding. Many people, including myself, have been pointing out this dilemma for a long time in publications and presentations. Being a dilemma, it is not easy to solve, but we at least need an open debate on it, and we need the government agencies that work in this field to at least understand the consequences of what they are doing and balance them against the overall public interest. Not easy to do, but we need to get away from government agencies acting irresponsibly and shortsightedly.

When Are We Finally Going to Do Something About Ransomware?

Just as we returned from our annual European Identity and Cloud Conference, where we spent four days talking about cybersecurity, identity management and privacy protection with top experts from around the world, we faced the news from Great Britain, where the latest large-scale ransomware attack had nearly shut down IT systems in at least 16 hospitals. Medical workers were completely locked out of their computers. Patient records, test results, blood banks were no longer available. Critical patients were rushed to other hospitals for emergency surgeries, while doctors had to switch back to pen and paper to carry on their duties.

How could all this even happen? Sure, the media often present ransomware as some kind of diabolically complex work of elite hacker groups, but in reality this is one of the least technologically advanced kinds of malware, barely more sophisticated than the proverbial Albanian virus. Typically, ransomware is spread via massive phishing campaigns, luring unsuspecting users into clicking an attachment and letting the malware exploit a known vulnerability to infect their computers. Finally, ransomware holds the victim’s computer hostage by encrypting their important files or locking access to the whole system, demanding a payment to restore it.

This kind of malware is nothing new – the first prototype was developed over 20 years ago – but only recently, as the number of computers connected to the Internet has grown exponentially along with the availability of online payment services, has it become a profitable business for cybercriminals. After all, there is no need to spend weeks planning a covert targeted attack or developing evasion technologies – one can easily utilize readily available spam networks and vulnerability exploits to start collecting bitcoins or even iTunes gift cards from poor home users mourning the loss of their vacation photos.

In the last couple of years, we’ve learned about several major ransomware types like CryptoLocker or CryptoWall, which managed to collect millions of dollars in ransom before they were finally taken down by the authorities. Unfortunately, new strains constantly appear to evade antivirus detection and to target various groups of victims around the world. The WannaCry ransomware that affected the hospitals in Britain wasn’t in fact targeting the NHS specifically – within just a few hours after being initially identified, it had already spread around the world, affecting targets in nearly 100 countries, including large telecommunications companies in Spain and government agencies in Russia.

Personally, I find it hard to believe that this was the original intention of the people behind this malware campaign. Rather, it looks like “a job done too well”, which led to the uncontrolled spread far beyond what was initially planned. A notable fact about this ransomware strain, however, is that it uses a particular vulnerability in Microsoft Windows that has been weaponized by the NSA and which became public in April after a leak by the Shadow Brokers group.

Although this exploit had been patched by Microsoft even before the leak, a huge number of computers around the world had not yet been updated. This, of course, includes the British hospitals, which still largely rely on extremely outdated computers running Windows XP. Without the budgets needed to upgrade and maintain their IT systems, without properly staffed IT departments and, last but not least, without properly educated users, the whole IT infrastructure of the NHS was basically a huge ticking bomb, which finally went off today.

So, what can we do to avoid being hit by a ransomware like this? It is worth stressing again that resilience against ransomware attacks is a matter of the most basic “cybersecurity hygiene” practices. My colleague John Tolbert has outlined them in one of his blog posts a month ago. We are planning to publish additional reports on this topic in the near future, including a Leadership Compass on antimalware and endpoint security solutions, so watch this space for new announcements.

There is really nothing complicated about maintaining proper backups and not clicking on attachments in phishing mails, so if an organization was affected by ransomware, this is a strong indicator that its problems lie beyond the realm of technology. For several years, we’ve been talking about a similar divide in approaches towards cybersecurity between IT and OT. However, where OT experts at least have their reasons for neglecting IT security in favor of safety and process continuity, the glaring disregard for the most basic security best practices in many public-sector institutions can only be attributed to insufficient funding and thus a massive lack of qualified personnel – personnel needed not just to operate and secure IT infrastructures, but to continuously educate users about the latest types of cyberthreats. Unfortunately, the recent cuts in NHS funding do not promise any positive changes for British hospitals.

There is a legal aspect to the problem as well. Whereas oil rigs, nuclear power plants or water supplies are rightfully classified as critical infrastructures, with special government programs created to protect them, hospitals are somehow not yet seen as critical, although many lives obviously depend on them. If an attack on a power plant can rightfully be considered an act of terrorism, why isn’t disrupting critical medical services considered one as well?

Quite frankly, I very much hope that, regardless of the motives of the people behind this ransomware, cybersecurity experts and international law enforcement agencies will team up to find them as quickly as possible and come down on them like a ton of bricks, if only for the sake of sending a clear warning to other cybercriminals. Because if they don’t, we can only brace ourselves for more catastrophes in the future.

The New Role of Privilege Management

Privilege Management or PxM, also referred to by some vendors as Privileged Account Management, Privileged User Management, Privileged Identity Management, or a number of other terms, is changing rapidly, in two areas:

  1. Privilege Management is not only an IAM (Identity & Access Management) topic anymore, but also a part of Cyber Defense.
  2. The focus of Privilege Management is shifting from session access to session runtime control.

Thus, the requirements for vendors, as well as the starting point for product selection, are at least getting broader, and sometimes even changing drastically. While password vaults have been at the center of attention for many years, session management capabilities such as monitoring, recording, and real-time threat analytics are now considered the highest priority.

Regarding the first change, you might argue that Privilege Management has always been not only an IAM topic, but even more an IT Security issue. This is partially true, particularly for the early days, when the focus was on securing administrative access to shared administrative accounts. These initiatives, which existed way before the term "IAM" came up, were driven by IT Security people. However, Privilege Management (protecting accounts and access) over time became an essential element of IAM.

Nowadays, with ever-increasing cyber-attacks, Privilege Management is becoming an increasingly important element of Cyber Defense. While back in the old days internal fraud was the main risk addressed by Privilege Management, it is now about hijacked accounts. The main goal of targeted external attacks is gaining control of privileged accounts. Privilege Management helps in protecting these accounts, analyzing their usage, and detecting anomalies. Thus, Privilege Management is no longer just a part of the IAM domain (where it remains important), but also a vital element of every Cyber Defense strategy and Cyber Defense Center (CDC). While this might be a challenge when it comes to defining organizational responsibility, it is also an opportunity: Cyber Defense budgets tend to be significantly bigger than IAM budgets.

The second area of change is tightly related to the first one. It is no longer sufficient to just limit access to shared privileged accounts. There are also individual highly privileged accounts – not only at the IT administrator and operator level, but also business accounts. Thus, it is no surprise to see the adoption of session management tools in call centers (to protect PII) and other business areas. Furthermore, identifying anomalies and detecting attacks cannot be done only during authentication to a privileged account, but must happen during runtime.
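To illustrate what runtime session analytics means in practice, here is a deliberately simplified Python sketch. The baseline, the features, and the working-hours threshold are invented for illustration; real Privilege Management products use far richer behavioral models.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SessionEvent:
    user: str
    command: str
    timestamp: datetime

# Hypothetical baseline: commands each privileged user normally runs.
BASELINE = {
    "dba_01": {"SELECT", "UPDATE", "EXPLAIN"},
}

def is_anomalous(event: SessionEvent) -> bool:
    """Flag session activity that deviates from the user's normal pattern."""
    usual_commands = BASELINE.get(event.user, set())
    outside_hours = not (8 <= event.timestamp.hour < 18)  # naive working-hours check
    unusual_command = event.command.split()[0].upper() not in usual_commands
    return outside_hours or unusual_command

# A privileged database session event at 02:14 with an unusual command
# would be flagged while the session is still running, not at login time.
evt = SessionEvent("dba_01", "DROP TABLE customers", datetime(2017, 5, 20, 2, 14))
if is_anomalous(evt):
    print(f"ALERT: anomalous privileged activity by {evt.user}: {evt.command}")
```

The point of the sketch is the shift in timing: the check runs on every event within an already-authenticated session, which is exactly what distinguishes session runtime control from session access control.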

That does not mean that Shared Account Password Management is no longer relevant. But it is only one of the essential building blocks, with the entire area of session monitoring and anomaly detection massively gaining momentum. Privilege Management strategies and the tool choice decisions must take this change into account.

OpenC2 – Standards for Faster Response to Security Incidents

Recently, I came across a rather new and interesting standardization initiative, driven by the NSA (U.S. National Security Agency) and several industry organizations, both Cyber Defense software vendors and system integrators. OpenC2 calls itself “a forum to promote global development and adoption of command and control” and has the following vision:

The OpenC2 Forum defines a language at a level of abstraction that will enable unambiguous command and control of cyber defense technologies. OpenC2 is broad enough to provide flexibility in the implementations of devices and accommodate future products and will have the precision necessary to achieve the desired effect.

The reasoning behind it is that effective prevention, detection, and immediate response to cyber-attacks require not isolated systems, but a network of systems of various types. These functional blocks must be integrated and coordinated to act upon attacks in a synchronized manner and in real time. Communication between these systems requires standards – and that is what OpenC2 is working on.

This topic aligns well with Real-Time Security Intelligence, an evolving area of software solutions and managed services that KuppingerCole has been analyzing for a few years already. The main software and service offerings in that area are Security Intelligence Platforms (SIPs) and Threat Intelligence Services. SIPs provide advanced analytical capabilities for identifying anomalies and attacks, while Threat Intelligence Services deliver information about newly detected incidents and attack vectors.

In moving from prevention (e.g. traditional firewalls) to detection (e.g. SIPs) to response, OpenC2 can play an important role, because it allows standardized actions to be taken based on a standardized language. This allows, for example, a SIP system to coordinate with firewalls to change firewall rules, with SDNs (Software-Defined Networks) to isolate systems targeted by attacks, or with other analytical systems for a deeper level of analysis.
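As an illustration, the following Python snippet constructs the kind of JSON command OpenC2 envisions, using the action/target/args structure from the draft language. The concrete field values, the duration argument, and the delivery mechanism are assumptions for the sake of the example, since OpenC2 deliberately leaves transport details open.

```python
import json

# Hypothetical sketch of an OpenC2 command a security intelligence platform
# might send to a firewall to block traffic from a suspicious address.
command = {
    "action": "deny",                    # what to do
    "target": {
        "ipv4_connection": {             # what to act on
            "src_addr": "198.51.100.17",
            "protocol": "tcp",
            "dst_port": 445,
        }
    },
    "args": {
        "response_requested": "ack",     # ask the actuator to confirm
        "duration": 3600000,             # illustrative: block for one hour (ms)
    },
}

print(json.dumps(command, indent=2))
# The JSON document would then be delivered to the firewall over whatever
# transport the deployment uses (e.g. HTTPS), and the firewall's OpenC2
# consumer would translate it into a vendor-specific rule change.
```

The value of such a standardized vocabulary is that the SIP does not need to know each firewall vendor's proprietary API; it emits one "deny" command, and each actuator maps it onto its own configuration.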

OpenC2 thus is a highly interesting initiative that can become an important building block in strengthening cyber defense. I strongly recommend looking at the initiative and, if your organization is among the players that might benefit from such a language, actively engaging in it.

Follow-Up on “Managing the User's Consent Life Cycle: Challenges, GDPR Compliance and (Business) Rewards.”

The GDPR continues to be a hot topic for many organizations, especially for those that store and process customer data. A core requirement for GDPR compliance is the concept of “consent,” which is fairly new for most data controllers. Under the GDPR, parties processing personally identifiable information need to ask the user for his/her consent to do so and must let the user revoke that consent at any time, as easily as it was given.

During the KuppingerCole webinar held on April 4th, 2017 and supported by iWelcome, several questions from attendees were left unanswered due to the huge number of questions and a lack of time to answer them all.

Several questions centered on the term “Purpose,” which is key for data processing, but many more interesting questions came up that we think are worth following up on here. Corne van Rooij answers some of the questions that couldn’t be answered live during the webinar.

Q: Is purpose related to your business, or to more generic things like marketing, user experience management, research, etc.?

Corne van Rooij: Purpose refers to “the purpose of the processing” and should be specific, explicit and legitimate. “Marketing” (or any other generic term) is not specific enough; it should state what kind of marketing actions are meant, such as profiling or specifically tailored offerings.

Q: Is it true that data collection purely for the fulfillment of contractual obligations and selling a product doesn't require consent?

Corne van Rooij: Yes, that is true. However, keep in mind that data minimisation requires you to collect only the data you will actually need for the fulfillment of the contract. The collection of extra data, or the ‘future use’ of data that is not mandatory to fulfill the contract, does not fall under this and needs additional consent or another legal basis (Article 6), like “compliance with a legal obligation.“

Q: It appears consent is changing from a static to a dynamic concept.  How can a company manage numerous consent request programs and ensure the right consent is requested at the right time and in the right context?

Corne van Rooij: A very good question and remark. Consent needs its own life cycle management, as it will change over time unless your business is very static itself. The application (e.g. the eBusiness portal) should check whether the proper consent is in place and trigger a consent request if not, or trigger an update (of consent or scope) if needed. If the consent status ‘travels’ with the user when he accesses the application/service, let’s say in an assertion, then the application/service can easily check it and trigger (or itself ask) for consent or a scope change, and register the consent back in the central place that sent the assertion in the first place, closing the loop. Otherwise, the application needs to check the consent (via an API call) before it can act, ask for consent if needed, and write it back (via an API call).
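As a rough illustration of that check-ask-write-back loop, here is a Python sketch against a hypothetical central consent service; the endpoints, payloads, and the ensure_consent/prompt_user_for_consent helpers are all invented for illustration.

```python
import requests

CONSENT_API = "https://consent.example.com/api/v1"  # hypothetical central consent service

def prompt_user_for_consent(user_id: str, purpose: str) -> bool:
    # Placeholder for the real UI flow (e.g. a consent dialog in the portal).
    return True

def ensure_consent(user_id: str, purpose: str) -> bool:
    """Check centrally stored consent before processing; ask and write back if missing."""
    # 1. Check: does a valid consent record for this purpose exist?
    r = requests.get(f"{CONSENT_API}/consents/{user_id}", params={"purpose": purpose})
    if r.json().get("status") == "granted":
        return True

    # 2. Ask: prompt the user for consent for this specific purpose.
    if not prompt_user_for_consent(user_id, purpose):
        return False

    # 3. Write back: register the new consent in the central store,
    #    so the next application sees the updated status.
    requests.post(
        f"{CONSENT_API}/consents/{user_id}",
        json={"purpose": purpose, "status": "granted"},
    )
    return True

# Usage: gate every processing step on the specific purpose it serves.
if ensure_consent("user-42", "tailored-offers"):
    pass  # proceed with processing for this purpose only
```

Note that consent is keyed by purpose, not just by user: revoking one purpose must not silently disable another, which is why a central store with per-purpose records makes the life cycle manageable.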

Q: How does the new ePR, published on 10 January 2017, impact consent?

Corne van Rooij: The document published on 10th January 2017 is a proposal for a new E-Privacy Regulation. If it comes into force, it will not impact the implications of the GDPR on ‘consent’; it covers other, complementary topics that can require consent. The proposal updates E-Privacy rules in line with market developments and the GDPR and covers topics like cookies and unsolicited communication. It is the update of an already existing EU Directive that dates back to 2009.

Q: If I understand it correctly, I can't collect any data about a first-time visitor to an eCommerce website, and I first have to give him the possibility to identify himself in order to get into the consent flow?

Corne van Rooij: No, this is actually not true. You can collect data, e.g. based on certain types of cookies for which permission is not required (following ePR rules), and that data could be outside the GDPR if you can’t trace it back to an actual individual. If you let him/her register, e.g. for a newsletter, and you ask for personal information, then it falls under the GDPR. However, you might be able to avoid asking for consent if you can use another legal basis stated in Article 6 for lawful processing. Let’s say a person wants to subscribe to an online magazine: then you only need the email address, and as such, that is enough to fulfill “the contract.” If you ask for more, e.g. name, telephone number, etc., which you don’t actually need, then you need to use consent and have to specify a legitimate purpose.

Q: For existing user (customer) accounts, is there a requirement in GDPR to cover proof of previously given consent?

Corne van Rooij: You will have to justify that the processing of the personal data you keep is based on one of the grounds of Article 6, “Lawfulness of processing.“ If your legal basis is consent, you will need proof of this consent, and if consent was given in several steps, proof of all these consents has to be in place.

Q: Please give more detailed information on how to handle all already acquired data from customers and users.

Corne van Rooij: In short: companies need to check what personal data they process and have in their possession. They must then delete or destroy the data when the legal basis for processing is no longer there, or when the purpose for which the data was obtained or created has been fulfilled.

If the legal basis or the purpose has changed, the data subject needs to be informed, and new consent might be necessary.  Also, when proof of earlier given consent is not available, the data subject has to be asked for consent again.

Q: So there is no need to erase already acquired user/customer/consumer data, as long as it is not actively used – e.g. for already provisioned customer data, especially where the use of personal data had already been agreed to by accepting agreements before? Is there a need to renew the request for data use when the GDPR goes live?

Corne van Rooij: There is a difference between “not actively used” and “no legal basis or allowed purpose for using it.” If it’s the latter, you need to remove the data or take action to meet the GDPR requirements. Processing that is necessary for the performance of a contract could comply with Article 6, as the GDPR is, for most of these things, not new. There was already a lot of national legislation in place, based on the EU Directive, which also covers the lawfulness of processing.

Q: How long are you required to keep the consent information, after the customer has withdrawn all consents and probably isn't even your customer anymore?

Corne van Rooij: We advise that you keep proof of consent for as long as you keep the personal data. This is often long after consent is withdrawn, as companies have legal obligations to keep data under, for instance, business and tax laws.

Don’t Fall Victim to Ransomware (Links to Free Tools)

Ransomware attacks have increased in popularity, and many outlets predict that it will be a $1 billion business this year.  Ransomware is a form of malware that either locks users’ screens or encrypts users’ data, demanding that ransom be paid for the return of control or for decryption keys.  Needless to say, paying the ransom only emboldens the perpetrators and perpetuates the ransomware problem. 

Ransomware is not just a home user problem; in fact, many businesses and government agencies have been hit.  Healthcare facilities have been victims.  Even police departments have been attacked and lost valuable data.  As one might expect, protecting against ransomware has become a top priority for CIOs and CISOs in both the public and private sectors.

Much of the cybersecurity industry has, in recent years, shifted focus to detection and response rather than prevention.  However, in the case of ransomware, detection is pretty easy because the malware announces its presence as soon as it has compromised a device.  That leaves the user to deal with the aftermath.  Once infected, the choices are to:

  1. pay the ransom and hope that malefactors return control or send decryption keys (not recommended, and it doesn’t always work that way)
  2. wipe the machine and restore data from backup

Restoration is sometimes problematic if users or organizations haven’t been keeping up with backups. Even if backups are readily available, time will be lost in cleaning up the compromised computer and restoring the data.  Thus, preventing ransomware infections is preferred.  However, no anti-malware product is 100% effective at prevention.  It is still necessary to have good, tested backup/restore processes for cases where anti-malware fails.

Most ransomware attacks arrive as weaponized Office documents via phishing campaigns.  Disabling macros can help, but this is not universally effective, since many users need legitimate macros.  Less commonly, ransomware can also arrive via drive-by downloads and malvertising. 
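For example, on a single Windows machine, Word macro behavior can be controlled via the VBAWarnings registry value; a minimal Python sketch follows. The Office version in the path ("16.0") is an assumption, and in an enterprise this would normally be enforced centrally via Group Policy rather than per-machine scripting.

```python
import winreg  # Windows-only standard library module

# Minimal sketch: disable Word VBA macros for the current user.
# VBAWarnings = 4 means "disable all macros without notification".
# The Office version in the path ("16.0" = Office 2016) is an assumption;
# adjust it to the installed version.
key = winreg.CreateKey(
    winreg.HKEY_CURRENT_USER,
    r"Software\Microsoft\Office\16.0\Word\Security",
)
winreg.SetValueEx(key, "VBAWarnings", 0, winreg.REG_DWORD, 4)
winreg.CloseKey(key)
print("Word macros disabled for the current user.")
```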

Most endpoint security products have anti-malware capabilities, and many of these can detect and block ransomware payloads before they execute.  All end-user computers should have anti-malware endpoint security clients installed, preferably with up-to-date subscriptions.  Servers and virtual desktops should be protected as well.  Windows platforms are still the most vulnerable, though there are increasing amounts of ransomware for Android.  It is important to remember that Apple’s iOS and Mac devices are not immune from ransomware, or malware in general.

If you or your organization do not have anti-malware packages installed, there are some no-cost anti-ransomware specialty products available.  They do not appear to be limited-time trial versions, but are instead fully functional.  Always check with your organization’s IT management staff and procedures before downloading and installing software.  All the products below are designed for Windows desktops:

Avast: C-Ransomware

Cybereason Ransomfree

Kaspersky Anti-Ransomware Tool

Windows Defender

The links above, in alphabetical order by company name, are provided as resources for the readers’ consideration rather than as recommendations. 

Ransomware hygiene encompasses the following short-list of best practices:

  1. Perform data backups
  2. Disable Office macros by default if feasible
  3. Deliver user training to avoid phishing schemes
  4. Use anti-malware
  5. Develop breach response procedures
  6. Don’t pay ransom

KuppingerCole will be publishing Leadership Compass reports on anti-malware and endpoint security solutions in the weeks ahead.

Cognitive Technologies: The Next Big Thing for IAM and Cybersecurity

The ongoing Digital Transformation has already made a profound impact not just on enterprises, but on our whole society. By adopting technologies such as cloud computing, mobile devices or the Internet of Things, enterprises strive to unlock new business models, open up new communication channels with their partners and customers and, of course, save on their capital investments.

For more and more companies, digital information is no longer just another means of improving business efficiency, but in fact their core competence and intellectual property.

Unfortunately, the Digital Transformation does not only enable a whole range of business prospects, it also exposes the company's most valuable assets to new security risks. Since those digital assets are nowadays often located somewhere in the cloud, with an increasing number of people and devices accessing them anywhere at any time, the traditional notion of a security perimeter ceases to exist, and traditional security tools cannot keep up with new, sophisticated cyberattack methods.

In recent years, the IT industry has been busy developing solutions to this massive challenge; however, each new generation of security tools, be it Next-Generation Firewalls (NGFW), Security Information and Event Management (SIEM) or Real-Time Security Intelligence (RTSI) solutions, has never entirely lived up to expectations. Although they do offer significantly improved threat detection or automation capabilities, their “intelligence level” is still not even close to that of a human security analyst, who still has to operate these tools to perform forensic analysis and to make informed decisions quickly and reliably.

All this has led to a massive lack of skilled workforce to man all those battle stations that comprise a modern enterprise’s cyber defense center. There are simply not nearly enough humans to cope with the vast amounts of security-related information generated daily. The fact that the majority of this information is unstructured, and thus not available for automated analysis by computers, makes the problem even more complicated.

Well, the next big breakthrough promising to overcome this seemingly unsolvable problem comes from the realm of science fiction. Most people are familiar with so-called cognitive technologies from books or movies, where they are usually referred to as “Artificial Intelligence”. Although true “strong AI” comparable to a human brain may remain purely theoretical for quite some time, various cognitive technologies (like speech recognition, natural language processing, computer vision or machine learning) have already found practical uses in many fields, from Siri and Alexa to market analysis and law enforcement.

More relevant for us at KuppingerCole (and hopefully for you as well) are potential applications for identity management and cybersecurity.

A cognitive security solution can utilize natural language processing to analyze both structured and unstructured security information the way human analysts currently do. This won’t be limited to pattern or anomaly recognition, but will extend to proper semantic interpretation and logical reasoning based on evidence. Potentially, this may save not days but months of work for an analyst, who would ideally only need to confirm the machine’s decision with a mouse click. Similarly, continuous learning, reasoning and interaction can significantly improve existing dynamic, policy-based access management solutions. Taking into account not just simple factors like geolocation and time of day, but complex, business-relevant cognitive decisions will increase operational efficiency, provide better resilience against cyber-threats and, last but not least, improve compliance.

Applications of cognitive technologies for Cybersecurity and IAM will be a significant part of this year’s European Identity & Cloud Conference. We hope to see you in Munich on May 9-12, 2017!

GDPR as an Opportunity to Build Trusted Relationships with Consumers

During the KuppingerCole webinar held on March 16th, 2017, and supported by ForgeRock, several questions from attendees were left unanswered due to the huge number of questions and a lack of time to cover them all. Here are answers to the questions that couldn’t be answered live during the webinar.

Q: How does two-factor authentication play into the GDPR regulations?

Karsten Kinast: Two-factor authentication does not play into the GDPR at all.

Martin Kuppinger: While two-factor authentication is not a topic of the GDPR, it plays a major role in another upcoming EU regulation, the PSD2 (revised Payment Services Directive), which applies to electronic payments.

Q: How do you see North American companies adhering to GDPR regulations? Do you think it will take a fine before they start incorporating the regulations into their privacy and security policies?

Eve Maler: As I noted on the webinar itself, from my conversations, these companies are even slower than European companies (granting Martin's point that European companies are far from full awareness yet) to "wake up". It seems like a Y2K phenomenon for our times. We at ForgeRock spend a lot of time working with digital transformation teams, and we find they have much lower awareness than risk teams. So, we encourage joint stakeholder conversations, so that those experienced in the legal situation and those experienced in A/B testing of user experience flows can get together and do better at building trusted digital relationships!

Karsten Kinast: My experience is that North American companies are adhering better and preparing more intensely for the upcoming GDPR than companies elsewhere. So, I don’t think it will take fines, because they have already started preparing.

Q: Sometimes, there seems to be a conflict between the “right to be forgotten” and practical requirements, e.g. for clinical trial data. Can consent override the right to be forgotten?

Karsten Kinast: While there might be consent, that consent can be revoked at any time. Thus, using consent to override the right to be forgotten will not work in practice.

Q: The fines for violating the GDPR can be massive, up to €20 million or 4% of the annual group revenue, whichever is higher. Can the fines be paid over a period of time, or compensated e.g. by trainings?

Karsten Kinast: If the fine is imposed, it commonly will be in cash and in one payment.

Q: Where can one learn more about consent life cycle management?

Eve Maler: Here are some resources that may be helpful:

  • My recent talk at RSA on designing a new consent strategy for digital transformation, including a proposal for a new classification system for types of permission
  • Information on the emerging Consent Receipts standard
  • Recent ForgeRock webinar on the general topic of data privacy, sharing more details about our Identity Platform and its capabilities

Martin Kuppinger: From our perspective, this is both an interesting and a challenging area. Organizations must find ways to gain consent without losing their customers. This will only work when the value of the service is demonstrated to the customers and consumers. On the other hand, this also offers the opportunity to differentiate from others by demonstrating a good balance between the data collected and the value provided.

Q: Who is actually responsible for trusted digital relationships in the enterprise? Is this an IAM function?

Eve Maler: Many stakeholders in an organization have a role to play in delivering on this goal. IAM has a huge role to play, and I see consumer- and customer-facing identity projects more frequently sitting in digital transformation teams. It's my hope that the relatively new role of Chief Trust Officer will grow out of "just" privacy compliance and external evangelism to add more internal advocacy for transparency and user control.

Martin Kuppinger: It depends on the role of the IAM team in the organization. If it has more of the traditional, administration- and security-focused role, this most commonly will not be an IAM function. However, the more IAM moves towards being an entity that understands itself as a business enabler, closely working with other units such as marketing, the better IAM is positioned to take on such a central role.

Q: How big a role does consent play in solving privacy challenges overall?

Eve Maler: One way to look at it, GDPR-wise, is that it's just one of six legal bases for processing personal data, so it's a tiny part – but we know better, if we remember that we're human beings first and ask what we'd like done if it were us in the user's chair! Another way to look at it is that asking for consent is something of an alternative to one of the other legal bases, "legitimate interests". Trust-destroying mischief could be perpetrated here. With the right consent technology and a comprehensive approach, it should be possible for an enterprise to ask for consent – to offer data sharing opportunities – and enable consent withdrawal more freely, proving its trustworthiness more easily.
