KuppingerCole Blog

A Great Day for Information Security: Adobe Announces End-of-Life for Flash

Today, Adobe announced that Flash will go end-of-life. Without any doubt, this is great news from an Information Security perspective. Adobe Flash accounted for a significant portion of the most severe exploits, as F-Secure and others have analyzed. I also wrote about this topic back in 2012 in this blog.

From my perspective, and as stated in my post from 2012, the biggest challenge hasn’t been the number of vulnerabilities as such, but the combination of vulnerabilities with the inability to fix them quickly and the lack of a well-working patch management approach.

With the shift to standards such as HTML5, today’s announcement finally moves Adobe Flash into the state of a “living zombie” – and with vendors such as Apple and Microsoft either not supporting it or limiting its use, we are ready to switch to better alternatives. Notably, the effective end-of-life date is the end of 2020, and it will still be in use after that. But there will be an end.

Clearly, there are and will be other vulnerabilities in other operating systems, browsers, applications, and so on. They will not go away. But one of the worst tools ever from a security perspective is finally reaching its demise. That is good, and it makes today a great day for Information Security.

The Return of Authorization

Authorization is one of the key concepts and processes involved in security, both in the real world as well as the digital world.  Many formulations of the definition for authorization exist, and some are context dependent.  For IT security purposes, we’ll say authorization is the act of evaluating whether a person, process, or device is allowed to operate on or possess a specific resource, such as data, a program, a computing device, or a cyberphysical object (e.g., a door, a gate, etc.).

The concept of authorization has evolved considerably over the last two decades.  No longer must users be directly assigned entitlements to particular resources. Security administrators can provision groups of users or select attributes of users (e.g. employee, contractor of XYZ Corp, etc.) as determinants for access. 

For some of the most advanced authorization and access control needs, the OASIS eXtensible Access Control Markup Language (XACML) standard can be utilized. Created in the mid-2000s,  XACML is an example of an Attribute-Based Access Control (ABAC) methodology.  XACML is an XML policy language, reference architecture, and request/response protocol. ABAC systems allow administrators to combine specific subject, resource, environmental, and action attributes for access control evaluation.  XACML solutions facilitate run-time processing of dynamic and complex authorization scenarios.  XACML can be somewhat difficult to deploy, given the complexity of some architectural components and the policy language.  Within the last few years, JSON and REST profiles of XACML have been created to make it easier to integrate into modern line-of-business applications.
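To make the ABAC model more tangible, here is a minimal sketch of an authorization request in the spirit of the XACML JSON profile; the attribute identifiers and values are invented for illustration and are not taken from any particular product or from the spec's normative vocabulary.

```python
import json

# Illustrative ABAC request in the style of the XACML JSON profile: a Policy
# Decision Point evaluates subject, resource, action, and environment
# attributes against its policies and returns Permit or Deny.
request = {
    "Request": {
        "AccessSubject": {"Attribute": [
            {"AttributeId": "subject:role", "Value": "contractor"},
            {"AttributeId": "subject:company", "Value": "XYZ Corp"},
        ]},
        "Resource": {"Attribute": [
            {"AttributeId": "resource:type", "Value": "design-document"},
        ]},
        "Action": {"Attribute": [
            {"AttributeId": "action:id", "Value": "read"},
        ]},
        "Environment": {"Attribute": [
            {"AttributeId": "environment:network-zone", "Value": "internal"},
        ]},
    }
}

# A REST-enabled PDP would typically accept this as the body of an HTTP POST.
print(json.dumps(request, indent=2))
```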

Just prior to the development of XACML, OASIS debuted the Security Assertion Markup Language (SAML).  Numerous profiles of SAML exist, but the most common usage is identity federation.  SAML assertions serve as proof of authentication at the domain of origin, which can be trusted by other domains.  SAML can also facilitate authorization, in that other attributes about the subject can be added to the signed assertion. SAML is widely used for federated authentication and limited authorization purposes.
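To illustrate how attributes travel inside an assertion, the fragment below sketches roughly what a SAML 2.0 attribute statement looks like; the attribute names and values are invented for this example.

```python
# Rough shape of a SAML 2.0 AttributeStatement carried inside a signed
# assertion; attribute names and values are purely illustrative.
attribute_statement = """
<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="department">
    <saml:AttributeValue>Finance</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="entitlement">
    <saml:AttributeValue>invoice-approver</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
"""

# After validating the assertion's signature, a relying party can base
# coarse-grained authorization decisions on such attributes.
print(attribute_statement.strip())
```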

OAuth 2.0 is a lighter-weight IETF standard. It takes the access token approach, passing tokens on behalf of authenticated and authorized users, processes, and now even devices.  OAuth 2.0 now serves as a framework upon which additional standards are defined, such as OpenID Connect (OIDC) and User-Managed Access (UMA).  OAuth has become a widely used standard across the web.  For example, “social logins”, i.e. using a social network provider for authentication, generally pass OAuth tokens between authorization servers and relying party sites to authorize the subject user.  OAuth is a simpler alternative to XACML and SAML, but is also usually considered less secure.
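As a rough sketch of the token approach, the snippet below exchanges an authorization code for an access token and then calls a protected API with it. The endpoints, client identifier, and secret are placeholders, not references to any real service.

```python
import requests

# Hypothetical authorization server and resource server endpoints.
TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"
API_ENDPOINT = "https://api.example.com/v1/profile"

def exchange_code_for_token(code: str) -> str:
    """Exchange an authorization code for an access token (authorization code grant)."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://client.example.com/callback",
            "client_id": "example-client",
            "client_secret": "example-secret",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def call_protected_api(access_token: str) -> dict:
    """Present the bearer token; the resource server authorizes the call based on it."""
    response = requests.get(
        API_ENDPOINT,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```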

From an identity management perspective, authentication has received the lion’s share of attention over the last several years.  The reasons for this are two-fold: 

  • the weakness of username/password authentication, which has led to many costly data breaches
  • proliferation of new authenticators, including 2-factor (2FA), multi-factor (MFA), risk-adaptive techniques, and mobile biometrics

However, in 2017 we have noticed an uptick in industry interest in dynamic authorization technologies that can help meet complicated business and regulatory requirements. As authentication technologies improve and become more commonplace, we predict that more organizations with fine-grained access control needs will begin to look at dedicated authorization solutions.  For an in-depth look at dynamic authorization, including guidelines and best practices for the different approaches, see the Advisory Note: Unifying RBAC and ABAC in a Dynamic Authorization Framework.

Organizations that operate in strictly regulated environments find that both MFA / risk-adaptive authentication and dynamic authorization are necessary to achieve compliance.  Regulations often mandate 2FA / MFA, e.g. US HSPD-12, NIST 800-63-3, EU PSD2, etc.  Regulations occasionally stipulate that certain access subject or business conditions, expressed as attributes, be met as a precursor to granting permission.  For example, in export regulations these attributes are commonly the access subject's nationality or licensed company.

Authorization becomes extremely important at the API level.  Consider PSD2: it will require banks and other financial institutions to expose APIs for 3rd party financial processors to utilize.  These APIs will have tiered and firewalled access into core banking functions.  Banks will of course require authentication from trusted 3rd party financial processors.  Moreover, banks will no doubt enforce granular authorization on the use of each API call, per API consumer, and per account.  The stakes are high with PSD2, as banks will need to compete more efficiently and protect themselves from a much greater risk of fraud.
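In much simplified form, a bank-side authorization layer in front of such APIs might look like the sketch below; the data model, identifiers, and function names are invented for illustration and are not part of PSD2 itself.

```python
# Much simplified sketch of per-TPP, per-call, per-account authorization as a
# bank might enforce it in front of its PSD2 APIs. All names are illustrative.

# Which API operations each registered TPP is allowed to call.
TPP_PERMISSIONS = {
    "tpp-aisp-001": {"read_balances", "read_transactions"},
    "tpp-pisp-002": {"initiate_payment"},
}

# Which accounts each customer has consented to expose to which TPP.
ACCOUNT_CONSENTS = {
    ("tpp-aisp-001", "DE89370400440532013000"): True,
}

def authorize_api_call(tpp_id: str, operation: str, account_iban: str) -> bool:
    """Return True only if this TPP may perform this operation on this account."""
    if operation not in TPP_PERMISSIONS.get(tpp_id, set()):
        return False  # the TPP's role does not cover this API call
    return ACCOUNT_CONSENTS.get((tpp_id, account_iban), False)

# Example: an AISP reading transactions on a consented account is allowed,
# initiating a payment from the same account is not.
assert authorize_api_call("tpp-aisp-001", "read_transactions", "DE89370400440532013000")
assert not authorize_api_call("tpp-aisp-001", "initiate_payment", "DE89370400440532013000")
```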

For more information on authentication and authorization technologies, as well as guidance on preparing for PSD2, please visit the Focus Areas section of our website.

GDPR vs. PSD2: Why the European Commission Must Eliminate Screen Scraping

The General Data Protection Regulation (GDPR) and Revised Payment Service Directive (PSD2) are two of the most important and most talked about technical legislative actions to arise in recent years.  Both emanate from the European Commission, and both are aimed at consumer protection.

GDPR will bolster personal privacy for EU residents in a number of ways.  The GDPR definition of personally identifiable information (PII) includes attributes that were not previously construed as PII, such as account names and email addresses.  GDPR will require that data processors obtain clear, unambiguous consent from each user for each use of user data. In the case of PSD2, this means banks and Third-Party Providers (TPPs).  TPPs comprise Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs).  For more information, please see https://www.kuppingercole.com/report/lb72612

Screen scraping has been in practice for many years, though it is widely known that this method is inherently insecure.  In this context, screen scraping is used by TPPs to get access to customer data.  Some FinTechs harvest usernames, email addresses, passwords, and account numbers to act on behalf of the users when interacting with banks and other FinTechs.  This technique exposes users to additional risks, in that their credentials are more likely to be misused and/or stored in more locations.

PSD2 will mandate the implementation of APIs by banks, for a more regular and safer way for TPPs to get account information and initiate payments.  This is a significant step forward in scalability and security.  However, the PSD2 Regulatory Technical Standards (RTS) published earlier this year left a screen scraping loophole for financial organizations who have not yet modernized their computing infrastructure to allow more secure access via APIs.  The European Banking Authority (EBA) now rejects the presence of this insecure loophole:  https://www.finextra.com/newsarticle/30772/eba-rejects-commission-amendments-on-screen-scraping-under-psd2.   

KuppingerCole believes that the persistence of the screen scraping exception is bad for security, and therefore ultimately bad for business.  The proliferation of TPPs expected after PSD2 along with the attention drawn to this glaring weakness almost ensures that it will be exploited, and perhaps frequently. 

Furthermore, screen scraping implies that customer PII is being collected and used by TPPs.  This insecure practice, then, by definition goes against the spirit of consumer protection embodied in GDPR and PSD2.  GDPR also calls for the principle of Security by Design, and a screen scraping exemption would contravene that.  TPPs can obtain consent for the use of consumer PII, or have it covered contractually, but such a workaround is unnecessary if TPPs utilize PSD2 open banking APIs.  An exemption in a directive should not lead to potential violations of a regulation.

PSD2 – the EBA’s Wise Decision to Reject Commission Amendments on Screen Scraping

In a response to the European Commission, the EBA (European Banking Authority) rejected amendments on screen scraping in PSD2 (the Revised Payment Services Directive) that had been pushed by several FinTechs. While it is still the Commission’s place to make the final decision, the statement of the EBA is clear. I fully support the position of the EBA: screen scraping should be banned in the future.

In a “manifesto”, 72 FinTechs had responded to the PSD2 RTS (Regulatory Technical Standards), focusing on the ban of screen scraping or, as they named it, “direct access”. In other comments from that FinTech lobby, we can find statements such as “… sharing login details … is perfectly secure”. Nothing could be more wrong. Sharing login details with anyone is never perfectly secure.

Screen scraping involves sharing credentials and giving the FinTechs that use these techniques full access to financial services such as online banking. This concept is not new. It is widely used in such FinTech services because there has been a gap in APIs until now. PSD2 will change that, even though we might not end up with a standardized API as quickly as we should.

But what is the reasoning of the FinTechs in insisting on screen scraping? The main arguments are that screen scraping is well-established and works well – and that it is secure. The latter obviously is wrong – neither sharing credentials nor injecting credentials into websites can earnestly be considered a secure approach. The other argument, screen scraping being something that works well, also is fundamentally wrong. Screen scraping relies on the target website or application to always have the same structure. Once it changes, the applications (in that case FinTech) accessing these services and websites must be changed as well. Such changes on the target systems might happen without prior notice.

I see two other arguments that the FinTech lobby does not raise. One is about liability. If a customer gives his credentials to someone else, this is a fundamentally different situation regarding liability than structured access via APIs. Just read the terms and conditions of your bank regarding online banking.

The other argument is about limitations. PSD2 requires providing APIs for AISPs (Account Information Service Providers) and PISPs (Payment Initiation Service Providers) – but only for these services. Thus, APIs might be more restrictive than screen scraping.

However, the EBA has very good arguments in favor of getting rid of screen scraping. One of the main targets of PSD2 is a better protection of customers in using online services. That is best achieved by a well-thought-out combination of SCA (Strong Customer Authentication) and defined, limited interfaces for TPPs (Third Party Providers) such as the FinTechs.

Clearly, this means a change both for the technical implementations of FinTech services that rely on screen scraping and, potentially, for the business models and the capabilities provided by these services. When looking at technical implementations, even though there is not yet an established standard API supported by all players, working with APIs is straightforward and far simpler than screen scraping ever can be. If there is no standard API, work with a layered approach which maps your own, FinTech-internal API layer to the various bank-specific variants out there, as sketched below. There will not be that many variants, because the AISP and PISP services are defined.
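One way to read that layered approach is an internal adapter layer: the FinTech codes against its own stable interface and maps it to each bank's API variant behind the scenes. The sketch below illustrates the idea with invented class and bank names; the actual calls are left as placeholders.

```python
from abc import ABC, abstractmethod

class AccountInformationAdapter(ABC):
    """FinTech-internal interface for AISP-style account information access."""

    @abstractmethod
    def get_balance(self, iban: str) -> float:
        """Return the current balance for the given account."""

class BankAAdapter(AccountInformationAdapter):
    """Adapter for one bank's API variant; endpoint and payload mapping are placeholders."""

    def get_balance(self, iban: str) -> float:
        # Call bank A's flavour of the account information API here and
        # normalize its response to the internal model.
        return 0.0  # placeholder

class BankBAdapter(AccountInformationAdapter):
    """Adapter for another bank's variant of the same service."""

    def get_balance(self, iban: str) -> float:
        return 0.0  # placeholder

ADAPTERS = {"BANKA": BankAAdapter, "BANKB": BankBAdapter}

def account_service_for(bank_code: str) -> AccountInformationAdapter:
    """The rest of the FinTech code talks only to the internal interface."""
    return ADAPTERS[bank_code]()
```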

Authentication and authorization can be done far better and more conveniently for the customer if they are implemented from a customer perspective – I just recently wrote about this.

Yes, that means changes and even restrictions for the FinTechs. But there are good reasons for doing so. The EBA is right in its position on screen scraping, and hopefully the European Commission will finally share the EBA’s view.

At the Intersection of Identity and Marketing

Digital Transformation is driving a diverse set of business initiatives today, including advanced marketing techniques, creating new consumer services, acquiring better customer information, and even deploying new identity management solutions.  As organizations discover new and efficient methods for engaging customers, they often realize new and more profitable revenue streams.

At the intersection of identity and marketing, we find Consumer Identity and Access Management (CIAM) systems.  CIAM is a relatively new but fast-growing area within the overall IAM market.  As the name implies, Consumer IAM focuses on the consumer.  This means that CIAM solutions feature:

  • Self-registration, with options to use social network credentials
  • Progressive profiling:  collecting information from customers over a period of time through various interactions, rather than asking for a lot of information up front
  • White-labeling for seamless branding
  • Flexible authentication:  username, mobile devices, social logins, and often 2FA or MFA methods
  • Consent management:  easy-to-use and understand opt-ins for data collection
  • Identity and marketing analytics: data about consumers and their activities that can be transformed into business intelligence.

Many CIAM solutions were designed from the ground up to make the customer experience more pleasant.  Other CIAM solutions have evolved from the traditional IAM systems we’ve used in businesses and governments for decades.  Most CIAM solutions can be run from the cloud, either as a turn-key SaaS or as a solution your teams can administer inside IaaS. 

The data generated from CIAM systems is inherently useful for marketing. There are two very different approaches for harvesting and using CIAM data: native tools or exporting to third-party programs. 

The most feature-rich CIAM solutions build identity and marketing analytics capabilities into their platforms.  Examples of reports that are possible in these types of solutions include:

  • demographics such as gender, age, location, nationality;
  • segmentation analysis such as generation, age range, income bracket;
  • events including logins, registrations, social providers used;
  • “likes” such as favorite TV shows, sports teams, books, music;
  • social engagement including top commenters and time spent on site.

Most CIAM vendors also permit programmatic access via REST APIs to integrate with a wide range of 3rd-party market analysis tools, e.g. Google Analytics and Tableau.  For enterprise or organizational customers, the data is there, but the choice of how to obtain and analyze it depends on your organizational capabilities and preferences.

Much of the information produced by CIAM systems can be beneficial; however, with the EU General Data Protection Regulation (GDPR) on the horizon, the ability to collect informed consent from consumers about the use of their data becomes paramount.  Among its many provisions, the regulation will require organizations that collect information about users to obtain clear and unambiguous consent for per-purpose processing.  Fortunately, many CIAM vendors have proactively designed their user interfaces to facilitate GDPR compliance to some degree.  In addition to collecting consent and allowing users to change their preferences, data processors will also need to be able to log consent, export or delete user data upon request, and notify users when terms change or when data breaches happen.
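In practice, per-purpose consent tends to boil down to a simple, auditable record per user and purpose. The sketch below shows one possible shape of such a record and an append-only log; the field names are invented and do not reflect any specific CIAM product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One auditable consent decision: per user, per purpose, with timestamp and terms version."""
    user_id: str
    purpose: str           # e.g. "newsletter", "profiling", "third-party-sharing"
    granted: bool
    terms_version: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentLog:
    """Append-only log so consent (and its withdrawal) can be demonstrated later."""

    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def record(self, entry: ConsentRecord) -> None:
        self._records.append(entry)

    def current_consent(self, user_id: str, purpose: str) -> bool:
        """The latest decision for this user and purpose wins."""
        for entry in reversed(self._records):
            if entry.user_id == user_id and entry.purpose == purpose:
                return entry.granted
        return False  # no recorded consent means no processing for that purpose

log = ConsentLog()
log.record(ConsentRecord("user-42", "newsletter", True, "v3"))
log.record(ConsentRecord("user-42", "newsletter", False, "v3"))  # user withdraws consent
assert log.current_consent("user-42", "newsletter") is False
```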

In conclusion, well-constructed and well-configured CIAM solutions can help organizations acquire valuable information on their consumers that, in concert with advanced techniques such as marketing automation, can lead to higher revenues and better consumer satisfaction.  Information gleaned at the intersection of identity and marketing is subject to privacy and other regulations and, as such, needs to be protected appropriately.

PSD2: Strong Customer Authentication Done Right

The Revised Payment Services Directive (PSD2), an upcoming piece of EU legislation, will have a massive impact on the Finance Industry. While the changes to the business are primarily driven by the newly introduced TPPs (Third Party Providers), which can initiate payments and request access to account information, the rules for strong customer authentication (SCA) are also tightened. The target is better protection for customers of financial online services.

Aside from a couple of exemptions, such as small transactions below 30 € and the use of unattended payment terminals, e.g. in parking lots, the basic rule is that 2FA (Two-Factor Authentication) becomes mandatory. Under certain circumstances, 1FA combined with RBA (Risk-Based Authentication) will continue to be allowed. I have explained the various terms in an earlier post.
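Reduced to its bare bones, the decision logic described above might look like the sketch below. It covers only the two exemptions mentioned here and deliberately ignores the further conditions, cumulative limits, and the RBA option defined in the actual RTS.

```python
def sca_required(amount_eur: float, unattended_terminal: bool) -> bool:
    """Grossly simplified SCA check covering only the two exemptions named in the text.

    The real PSD2 RTS contain further exemptions and cumulative limits that are
    intentionally left out of this sketch.
    """
    if unattended_terminal:
        return False      # e.g. unattended parking or toll terminals
    if amount_eur < 30:
        return False      # small-transaction exemption
    return True           # otherwise two-factor authentication is mandatory

assert sca_required(250.0, unattended_terminal=False) is True
assert sca_required(15.0, unattended_terminal=False) is False
```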

For the scenarios where 2FA is required, the obvious question is how best to do that. When looking at how banks and other services implemented 2FA (and 1FA) up until now, there is plenty of room for improvement. While many services, such as PayPal, still only mandate 1FA,  there generally is little choice in which 2FA approach to use. Most banks mandate the use of one specific form of 2FA, e.g. relying on out-of-band SMS or a certain type of token.

However, PSD2 will change the game for financial institutions. It will open up the fight for the customer: Who will provide the interface to the customer? Who will directly interact with the customer? To win that fight between traditional and new players, customer convenience is a key success factor. And customer convenience starts with registration as a one-time action and continues with authentication, which customers must go through every time they access the service.

Until now, strong (and not so strong) authentication to financial services seems to have been driven by an inside-out way of thinking. The institutions think about what works best for them: what fits into their infrastructure; what is the cheapest yet compliant approach? For customers, this means that they must use what their service provider offers to them.

But the world is changing. Many users have their devices of choice, many of them with some form of built-in strong authentication. They have their preferred ways of interacting with services. They also want to use a convenient method for authentication. And in the upcoming world of TPPs that can form the new interface, there will be competition.

Thus, it is about time to think about SCA outside-in, from the customer perspective. The obvious solution is to move to Adaptive Authentication, which allows the use of all (PSD2-compliant) forms of 2FA and leaves the choice of method to the customer. There must be flexibility for the customer. The technology is available, with platforms that support many different types of authenticators and their combinations for 2FA, but also with standards such as those of the FIDO Alliance that provide interoperability with the ever-growing and ever-changing range of consumer devices in use.
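A minimal sketch of that outside-in idea: the service maintains the set of PSD2-eligible second factors, the customer registers the ones they prefer, and the platform picks from the intersection at login time. All names and categories below are illustrative.

```python
# Which methods count as a valid second factor is configured by the service...
COMPLIANT_SECOND_FACTORS = {"fido2_authenticator", "mobile_push", "otp_app", "sms_otp"}

# ...but which of them is actually used is the customer's choice.
CUSTOMER_PREFERENCES = {
    "customer-1": ["fido2_authenticator", "mobile_push"],
    "customer-2": ["otp_app"],
}

def pick_second_factor(customer_id: str) -> str:
    """Return the customer's most preferred authenticator that the service accepts."""
    for method in CUSTOMER_PREFERENCES.get(customer_id, []):
        if method in COMPLIANT_SECOND_FACTORS:
            return method
    # Fall back to an enrollment flow if no registered method qualifies.
    return "enrollment_required"

print(pick_second_factor("customer-1"))  # fido2_authenticator
```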

There is room for being both compliant with the SCA requirements of PSD2 and convenient for the customer. But that requires a move to outside-in thinking, starting with what customers want – and many customers do not want a single prescribed option, they want a real choice. Adaptive Authentication thus is a key success factor for doing SCA right in the days of PSD2.

There Is No Such Thing as GDPR-Compliant Software or SaaS Solution

Recently, I stumbled upon the first marketing campaigns of vendors claiming that they have a “GDPR-compliant” application or SaaS offering. GDPR stands for General Data Protection Regulation, the upcoming EU regulation in that field, which also has an extraterritorial effect because it applies to every organization doing business with EU residents. Unfortunately, neither SaaS services nor software can be GDPR compliant.

GDPR is a regulation that governs how organizations must protect the individual’s PII (Personally Identifiable Information), which includes all data that could potentially be used to identify an individual. Thus, organizations must enforce GDPR compliance, which includes, e.g., implementing the new principles for user consent, such as informed and unambiguous consent per purpose; the right to be forgotten; and many other requirements. GDPR also states that software which is used to handle PII must follow the principles of Security by Design (SbD) and Privacy by Design (PbD). Both are rather fuzzy principles that have not been formally defined yet.

Thus, a software vendor or SaaS provider could state that he believes he is following the SbD and PbD principles. But that does not make him GDPR compliant. It just builds the foundation for a customer, enabling that organization to become GDPR compliant. To put it clearly: an organization dealing with PII can be GDPR compliant. A service provider that acts as a “data processor” in the context of GDPR can be GDPR compliant (for its part of the system). But a software application or a SaaS service can only provide the foundation for others to become GDPR compliant. There just is no such thing as GDPR-compliant software.

Vendor marketing departments would be well advised to use such terms carefully, because claiming to provide a GDPR compliant offering might make their customers think that they just need to install certain software or turn the key of a turnkey SaaS solution and they are done. Nothing could be more misleading. There is so much more to do for an organization to become GDPR compliant, starting from processes and contracts to using the software or SaaS service the right way. Understanding what GDPR really means to an organization is the first step. KuppingerCole has plenty of information on GDPR.

Don’t hesitate to contact KuppingerCole via sales@kuppingercole.com for our brand-new offering of a GDPR Readiness Assessment, which is a lean approach in understanding where your organization is in your journey towards GDPR compliance and which steps you have to take – beyond just choosing a tool.

Beyond Simplistic: Achieving Compliance Through Standards and Interoperability

"There is always an easy solution to every problem - neat, plausible, and wrong.
 (
H.L. Mencken)

Finally, it's beginning: GDPR gains more and more visibility.

Do you also get more and more GDPR-related marketing communication from IAM and security vendors, consulting firms and, ehm, analyst companies? They all offer some advice for starting your individual GDPR project/program/initiative. And of course, they want you to register your personal data (name, company, position, company size, country, phone, mail, etc.) so that they can send that ultimate info package over to you. And obviously, they want to acquire new customers and provide you and all the others with marketing material.

It usually turns out that the content of these packages is OK, but not really overwhelming: a summary of the main requirements of the GDPR plus, in the best cases, some templates that can be helpful, if you can find them among the marketing material included in the "GDPR resource kit". But the true irony lies in the fact that, according to the GDPR, it is not allowed to make consent to the processing of data that is not needed for the service a mandatory condition of offering that service (remember?… name, company, position, company size, country, phone, mail, etc.).

The truth is that GDPR compliance does not come easily, and the promise of an easy shortcut via any GDPR readiness kit won't work out. Instead, the processes by which personal and sensitive data is stored and processed – newly designed as well as already implemented ones – will have to undergo profound changes.

Don't get me wrong: having a template for a data protection impact analysis, a pre-canned template for breach notification, a decision tree for deciding whether you need a DPO or not, and some training material for your staff are all surely important. But they are only a small part of the actual solution.

So in the meantime, while others promise simple solutions, the Kantara Initiative is working on various aspects of providing processes and standards for adequate and, especially, GDPR-compliant management of Personally Identifiable Information. These initiatives include UMA (User-Managed Access), Consent and Information Sharing, OTTO (Open Trust Taxonomy for Federation Operators), and IRM (Identity Relationship Management).

Apart from several other objectives and goals, one main task is to be well prepared for the requirements of GDPR (and, e.g., eIDAS). The UMA standard is now reaching a mature 2.0 status. Just a few days ago, two closely interrelated documents were made available for public review that make the cross-application implementation of consent-based access possible. "UMA 2.0 Grant for OAuth 2.0 Authorization" enables asynchronous party-to-party authorization (between the requesting party's client and the resource owner) based on rules and policies. "Federated Authorization for User-Managed Access (UMA) 2.0" defines authorization methods on the server side that interoperate across trust domains. This, in turn, allows the resource owner to define her/his rules and policies for access to protected resources in one single place.
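In very rough strokes, the UMA 2.0 grant works as sketched below: the client obtains a permission ticket from the resource server and exchanges it, together with any required claims, at the authorization server's token endpoint for a requesting party token (RPT). The endpoints, client credentials, and the claim token format shown are placeholders for illustration.

```python
from typing import Optional

import requests

# Placeholder endpoints for an UMA 2.0 authorization server and resource server.
TOKEN_ENDPOINT = "https://as.example.com/token"
RESOURCE_ENDPOINT = "https://rs.example.com/records/123"

def request_rpt(permission_ticket: str, claim_token: Optional[str] = None) -> str:
    """Exchange a permission ticket for a requesting party token (RPT) via the UMA grant."""
    data = {
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": permission_ticket,
        "client_id": "example-client",
        "client_secret": "example-secret",
    }
    if claim_token:
        # Pushed claims about the requesting party; the format identifier used
        # here (an OIDC ID Token) is just one illustrative option.
        data["claim_token"] = claim_token
        data["claim_token_format"] = "http://openid.net/specs/openid-connect-core-1_0.html#IDToken"
    response = requests.post(TOKEN_ENDPOINT, data=data, timeout=10)
    response.raise_for_status()
    return response.json()["access_token"]

def access_resource(rpt: str) -> requests.Response:
    """Present the RPT; the resource server enforces the owner's policies via the authorization server."""
    return requests.get(RESOURCE_ENDPOINT, headers={"Authorization": f"Bearer {rpt}"}, timeout=10)
```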

These methods and technologies serve two major purposes: they enable the resource owner (you and me) to define consent securely and conveniently and to implement and enforce it through technology. And they enable requesting parties (companies, governments, and people, again you and me) to have reliable and well-defined access in highly distributed environments.

So, they need to be evaluated as to whether they are adequate methods for getting to GDPR compliance and far beyond: by empowering the individual, enabling compliant business models, providing shared infrastructure, and designing means for implementing reliable and user-centric technologies. Following these principles can help achieve compliance. "Beyond" means: take the opportunity of becoming and being a trusted and respected business partner that is known for proactively valuing customer privacy and security. That is surely much better than only preparing for the first/next data breach.

This surely is not an easy approach, but it goes to the core of the actual challenge. Suggested procedures, standards, guidelines, and first implementations are available. They are provided to support organizations in moving towards security and privacy from the ground up. The UMA specifications, including the ones described above, are important building blocks for those who want to go beyond the simple (and insufficient) toolkit approach.

Why I Sometimes Wanna Cry About the Irresponsibility and Shortsightedness of C-Level Executives When It Comes to IT Security

WannaCry counts, without any doubt, amongst the most widely publicized cyber-attacks of the year, although this notoriety may not necessarily be fully justified. Still, it has affected hospitals, public transport, and car manufacturing, to name just a few of the examples that became public. In an earlier blog post, I was looking at the role government agencies play. Here I look at businesses.

Let’s look at the facts: The exploit has been known for a while. A patch for the current Windows systems has been out for months, and I’ve seen multiple warnings in the press indicating the urgent need to apply the appropriate patches.

Unfortunately, that didn’t help, as the number of affected organizations around the world has demonstrated. Were those warnings ignored? Or had security teams missed them? Was that ignorance? Lack of established processes? If they had older Windows versions in place in sensitive areas, why haven’t they replaced them earlier? I could ask many more of these questions. Unfortunately, there is only one answer to them: human failure. There is no excuse.

Somewhere in the organizations affected, someone – most likely several people – have failed. Either they’ve failed by not doing IT security right (and we are not talking about the most sophisticated attacks, but simply about having procedures in place to react to security alerts) or by lacking adequate risk management. Or by a lack of sufficient budgets for addressing the most severe security risks. Unfortunately, most organizations still tend to ignore or belittle the risks we are facing.

Yes, there is no 100% security. But we are all supposed to know how to strengthen our cyber-attack resilience by having the right people, the right processes, and the right tools in place. The right people to deal with alerts and incidents. The right processes for both preparing for and reacting to breaches. And the right tools to detect, protect, respond, and recover. Yes, we have a massive skills gap that is not easy to close. But at least to a certain extent, MSSPs (Managed Security Service Providers) are addressing this problem.

Unfortunately, most organizations don’t have enterprise-wide GRC programs covering all risks including IT security risks, and most organizations don’t have the processes defined for an adequate handling of alerts and incidents – to say nothing about having a fully operational CDC (Cyber Defense Center). Having one is a must for large organizations and organizations in critical industries. Others should work with partners or at least have adequate procedures to recover quickly.

Many organizations still rely on a few isolated, old-fashioned IT security tools. Yes, modern tools cost money. But that is not even where the problem starts. It starts with understanding which tools really help mitigate which risks; with selecting the best combination of tools; with having a plan. Alas, I have seen way too few well-thought-out security blueprints so far. Creating such blueprints is not rocket science. It does not require a lot of time. Why are so many organizations lacking these? Having them would allow for targeted investments in security technology that helps, and also for understanding the organizational consequences. Just think about the intersection of IT security and patch management.

To react to security incidents quickly and efficiently, organizations need a CDC staffed with people, with defined processes in place for breach and incident response, and well integrated into the overall Risk Management processes.

Such planning not only includes a formal structure of a CDC, but plans for handling emergencies, ensuring business continuity, and communication in cases of breaches. As there is no 100% security, there always will be remaining risks. No problem with that. But these must be known and there must be a plan in place to react in case of an incident.

Attacks like WannaCry pose a massive risk for organizations and their customers - or, in the case of healthcare, patients. It is the duty of the C-level – the CISOs, the CIOs, the CFOs, and the CEOs – to finally take responsibility and start planning for the next such attack in advance.

Why I Sometimes Wanna Cry About the Irresponsibility and Shortsightedness of Government Agencies

Just a few days ago, in my opening keynote at our European Identity & Cloud Conference I talked about the strong urge to move to more advanced security technologies, particularly cognitive security, to close the skill gap we observe in information security, but also to strengthen our resilience towards cyberattacks. The Friday after that keynote, as I was travelling back from the conference, reports about the massive attack caused by the “WannaCry” malware hit the news. A couple of days later, after the dust has settled, it is time for a few thoughts about the consequences. In this post, I look at the role government agencies play in increasing cyber-risks, while I’ll be looking at the mistakes enterprises have made in a separate post.

Let me start with a figure I used in one of my slides. When looking at how attacks are executed, we can distinguish between five phases – ranging from a critical “red hot” state to a less critical “green” state. At the beginning, the attack is created. Then it starts spreading out and remains undetected for a while – sometimes only briefly, sometimes for years. This is the most critical phase, because the vulnerabilities used by the malware exist but aren’t sufficiently protected. During that phase, the exploit is called a “zero-day exploit”, a somewhat misleading term, because many days may pass – with the exploit already attacking – before day zero arrives. The term refers to the fact that attacks occur from day zero, the day when the vulnerability is detected and countermeasures can start. In earlier years, there was a belief that no attacks start before a vulnerability is discovered – a naïve belief.

With the detection of the exploit, phase 3 begins: analysis and the creation of countermeasures, most commonly hotfixes (which have been tested only a little and usually must be installed manually) and patches (better tested and cleared for automated deployment). From there on, during phase 4, patches are distributed and applied.

Ideally, there would be no phase 5, but as we all know, many systems are not patched automatically or, for legacy operating systems, no patches are provided at all. This leaves a significant number of systems unpatched, such as in the case of WannaCry. Notably, there were alerts back in March that warned about that specific vulnerability and strongly recommended to patch potentially affected systems immediately.

In fact, for the first two phases we must deal with unknown attack patterns and assume that these exist, but we don’t know about them yet. This is a valid assumption, given that new exploits across all operating systems (yes, also for Linux, MacOS or Android) are detected regularly. Afterwards, the patterns are known and can be addressed.

From that point on, we can search for indicators of known attack patterns. Before we know these, we can only look for anomalies in behavior. But that is a separate topic, which was a hot one at this year’s EIC.

So, why do I sometimes wanna cry about the irresponsibility and shortsightedness of government agencies? Because of what they do in phases 1 and 2. The NSA has been accused of having been aware of the exploit for quite a while, without notifying Microsoft and thus without allowing them to create a patch. Government organizations from all over the world know a lot about exploits without informing vendors about them. They even create backdoors to systems and software, which potentially can be used by attackers as well. While there are reasons for that (cyber-defense, running own nation-state attacks, counter-terrorism, etc.), there are also reasons against it. I don’t want to judge their behavior; however, it seems that many government agencies are not sufficiently aware of the consequences of creating their own malware for their purposes, not notifying vendors about exploits, and mandating backdoors in security products. I doubt that the agencies that do so can sufficiently balance their own specific interests with the cyber-risks they cause for the economies of their own and other countries.

There are some obvious risks. The first one is that a lack of notification extends phase 2, and attacks stay undetected longer. It would be naïve to assume that only one government agency knows about an exploit. It might well be detected by other agencies, friends or enemies. It might also have been detected by cyber-criminals. This gets even worse when governments buy information about exploits from hackers to use it for their own purposes. It would also be naïve to believe that only that hacker has found or will find that exploit, or that he will sell that information only once.

As a consequence, the entire economy is put at risk. People might die in hospitals because of such attacks, public transport might break down and so on. Critical infrastructures become more vulnerable.

Creating their own malware and requesting backdoors bear the same risks. Malware will be detected sooner or later, and backdoors might also be opened by the bad guys, whoever they are. The former results in new malware that is created based on the initial one, with some modifications. The latter leads to additional vulnerabilities. The challenge is simply that, in contrast with physical attacks, little is needed to create new malware based on an existing attack pattern. Once detected, the knowledge about it is freely available, and it just takes a decent software developer to create a new strain. In other words, by creating their own malware, government agencies create and publicize blueprints for the next attacks – and sooner or later someone will discover such a blueprint and use it. Cyber-risk for all organizations is thus increased.

This is not a new finding. Many people, including myself, have been pointing out this dilemma for a long time in publications and presentations. While, being a dilemma, it is not easy to solve, we at least need an open debate on it, and we need the government agencies that work in this field to understand the consequences of what they are doing and balance them against the overall public interest. Not easy to do, but we do need to get away from government agencies acting irresponsibly and shortsightedly.
