Blog posts by Matthias Reinwarth

KuppingerCole Analyst Chat - Our New Regular Podcast

Today we're officially launching KuppingerCole Analyst Chat - our new soon-to-be-regular audio podcast.

In the pilot episode, Martin Kuppinger and I discuss the Identity & Access Management challenges that so many are facing now that they have to work from home.

At the moment, you can subscribe to our podcast on Spotify or watch new episodes on our YouTube channel. Other platforms will follow soon.

Stay tuned for more regular content from the KuppingerCole analyst team!

Home Office in the Times of Pandemic – a Blessing or a Curse?

One of the most interesting office work developments of the last 20-30 years, the home office has radically gained new relevance amid the developing coronavirus pandemic. With the goal of limiting the spread of the virus, many companies and employees must suddenly resort to the option of working entirely from home. This is not only sensible but urgently necessary, and at the same time it will help many companies survive.

Home office as an immediate pandemic quarantine measure

The advantages are clear: social contacts in real life are reduced to a minimum, while a large number, if not all, of the necessary activities, especially in the digital sector, can continue. The tremendously important goal propagated via social media as #flattenthecurve, i.e. preventing further infections while the outbreak is still at an early stage, can thus be combined with business continuity for a multitude of organizations. But in practice, companies also face very specific technological challenges, because experience with working from home is not equally distributed.

Different levels of experience

On the one hand, there are companies that have already geared their processes strongly towards roaming users. As "cloud-first" or even "cloud-only" organizations, they may already be consuming digital corporate services as SaaS or offering secure access to the company's IT systems, even the critical ones, from outside (if such an "outside" still exists at all). Their employees are familiar with new processes, trustworthy handling of sensitive data, and the proper use of endpoint devices (computers, tablets, and smartphones).

On the other hand, a large group of companies that have not yet taken these steps will unfortunately be severely challenged by the pandemic. They are facing major operational changes that must be implemented in a matter of days, which almost inevitably means that security will be their second priority at best.

A cultural change in only a few days

This surely shows the negative effects of the reluctance of more traditionally structured companies to adopt more recent, decentralized, agile and alternative working models. But considering the underlying causes is now of minor importance. Companies must enable their employees and their IT as quickly as possible by means of necessary processes and access to relevant systems so that the continued operation of their business is guaranteed even in times of crisis.

However, the crisis does not free the companies from their responsibilities regarding compliance, governance, the protection of personal data or critical company intellectual property. What operators of critical infrastructure have continuously prepared themselves for over the past few years is now necessary for virtually every company wishing to continue operating in a meaningful way.

Of course, it is essential to avoid the concrete physical dangers of the disease for individuals. But it is equally vital to carry out a quick, operative and yet sustainable risk assessment of the necessary systems, access routes and end devices of their users as the foundation for the protection of the company, its services, processes, and data.

Preventing the crisis after the crisis

It does not serve anyone's interests if, as a result of this change in the work model, an organization is exposed to a growing number of unmanaged security risks. These risks largely have to be addressed individually, but they can nevertheless be grouped into a number of complex issues that must be considered: device protection (many users will have to resort to private equipment due to the lack of corporate devices), secured communications, and secure authentication and authorization are increasingly important, particularly in such an exceptional situation.

Understanding the modified attack surface

When moving towards home office work as an undisputedly beneficial, alternative way of contributing to corporate processes, one insight is indispensable: this changes the attack surface of a company dramatically. All at once (without protective measures), a multitude of previously personal network access points and home networks become a vulnerable part of the enterprise network. Information and credentials stored there are under threat and can presumably be exploited with little effort as a doorway into a corporate network or into digital services provided as Software-as-a-Service.

The loss or theft of an unprotected or inadequately protected access device with local data or credentials can be an immediate threat to a company, an NGO or a public authority, not just today, but also later, when the current crisis is hopefully just a vague dark memory.

Taking the first appropriate steps

First of all, of course, all the fundamentally important technical measures are still necessary: local hard disk encryption, patching and monitoring of the clients used, securing home networks, scanners for viruses and other malware on the endpoints, secure access paths with multi-factor authentication and appropriate authorization systems, privilege management for securing critical systems, and a multitude of other technologies that we as analysts for cybersecurity and Identity and Access Management (IAM) deal with on a daily basis.

However, adequate instruction and training of employees who now access critical company systems from their home environment, potentially from private devices, should also be included. Knowledge about malware, viruses, and phishing that is communicated swiftly and efficiently should help prevent negligent handling of these threats, which in a private environment are somewhere between annoying and costly, but which can threaten a company's very existence.

Work from home but work in the cloud

Knowing that the measures described above often cannot be implemented quickly and at scale, it may be useful to consider other approaches. An important alternative to the traditional remote use of corporate resources can be a temporary or permanent switch to collaboration and business services provided from the cloud as a service. In this case, data and processes remain in managed systems, and the risks of working remotely are noticeably reduced.

Some providers are already offering such platforms as an emergency measure (somewhere between practical solidarity and clever marketing), temporarily at significantly reduced cost or even free of charge. The use of such systems might be a mitigating measure to secure our abrupt change to the home office. But “just because” it’s urgent, such a step into the cloud still needs to be well defined, aligned with a corporate cloud strategy and based on a risk assessment (compliance, governance and security).

A current and continuing challenge

The switch to working from a home office is a life-saving step for the individual and an important measure for containing the current pandemic. Enterprises are providing considerable support in this respect.

At the same time, however, they must consider and implement appropriate protective measures for today and beyond. KuppingerCole Analysts will continue to cover these topics in our research and in our blog as trusted advisors, aiming to provide actionable and valuable insights into practitioners’ current challenges.

The C5:2020 - A Valuable Resource in Securing the Provider-Customer Relationship for Cloud Services

KuppingerCole has accompanied the unprecedented rise of the cloud as a new infrastructure and alternative platform for a multitude of previously unimaginable services – and done this constructively and with the necessary critical distance right from the early beginnings (blog post from 2008). Cybersecurity, governance and compliance have always been indispensable aspects of this.

When moving to the use of cloud services, it is most important to take a risk-based approach. There is nothing like “just the cloud”. It is not a single model but covers a wide and constantly growing spectrum of applications, services and virtualized infrastructure.

The “wild west phase” of early cloud deployments, based on quick decisions and individual, departmental “credit card”-based cloud subscriptions without corporate oversight, should lie behind us. An organization adopting a cloud service needs to ensure that it remains in compliance with laws and industry regulations. There are many aspects to look at, including but not limited to compliance, service location, data security, availability, identity and access management, insider abuse of privilege, virtualization, isolation, cybersecurity threats, monitoring and logging.

Moving to the cloud done right

While many people think mainly of the large platform providers like AWS or Microsoft Azure, there is a growing number of companies providing services in and from the cloud, all summed up as cloud service providers. To ensure the security of their customers’ data, the provider of a cloud service should comply with best practice for the provision of the services they offer.

Moving services into the cloud or creating new services within the cloud substantially changes the traditional picture of typical responsibilities for an application/infrastructure and introduces the Cloud Service Provider (CSP) as a new stakeholder to the network of functional roles already established. Depending on the actual decision of which parts of the services are provided by the CSP on behalf of the customer and which parts are implemented by the tenant on top of the provided service layers, the responsibilities are assigned to either the CSP or the tenant.

Shared responsibilities between the provider and the tenant are a key characteristic of every deployment scenario of cloud services. For every real-life cloud service model scenario, all responsibilities identified have to be clearly and individually assigned to the appropriate stakeholder. This assignment might be drastically different in scenarios where only infrastructure is provided, for example plain storage or computing services, compared to scenarios where complete "Software as a Service" (SaaS, e.g. Office 365) is provided. Therefore, the prerequisite for an appropriate service contract between the provider and the tenant is a comprehensive identification of all responsibilities and an agreement on which contract partner each responsibility is assigned to.
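As a sketch, the assignment exercise described above can be captured in a simple matrix. The assignments below are simplified, illustrative assumptions, not a normative mapping; the actual split must always be agreed in the individual service contract:

```python
# Illustrative shared-responsibility matrix for cloud service models.
# The example assignments are assumptions for this sketch only.

RESPONSIBILITIES = {
    #  responsibility          IaaS        PaaS        SaaS
    "physical_security":     ("provider", "provider", "provider"),
    "os_patching":           ("tenant",   "provider", "provider"),
    "application_security":  ("tenant",   "tenant",   "provider"),
    "identity_and_access":   ("tenant",   "tenant",   "tenant"),
    "data_classification":   ("tenant",   "tenant",   "tenant"),
}

MODELS = ("iaas", "paas", "saas")

def owner(responsibility: str, model: str) -> str:
    """Return the stakeholder a responsibility is assigned to for a model."""
    return RESPONSIBILITIES[responsibility][MODELS.index(model)]

def tenant_duties(model: str) -> list:
    """List everything that stays with the tenant under a given model."""
    return [r for r in RESPONSIBILITIES if owner(r, model) == "tenant"]
```

Even in a full SaaS scenario, responsibilities such as identity and access management and data classification typically remain with the tenant, which is exactly why a contract review has to enumerate every responsibility explicitly.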

However, the process involved is often manual and time consuming, and there is a multitude of aspects to consider. From the start it was important to us to support organizations in understanding the risks that come with the adoption of cloud services and in assessing the risks around their use of cloud services in a rapid and repeatable manner.

Best practices as a baseline

There are several best-practice frameworks, including ITIL, COBIT and ISO/IEC 270xx, as well as industry-specific guidance from the Cloud Security Alliance (CSA). For a primarily German audience (but de facto far beyond that), the BSI (the German Federal Office for Information Security) created the Cloud Computing Compliance Criteria Catalogue (BSI C5 for short) several years ago as a guideline for everyone involved in the process of evaluating cloud services (users, vendors, auditors, security providers, service providers and many more).

It is available free of charge to anyone interested. And many should be interested: readers benefit from a well-curated, proofread and current catalogue of criteria that is updated regularly and openly available for anyone to learn from and use.

These criteria can be used by cloud service users to evaluate the services on offer. Conversely, service providers can integrate these criteria as early as the conceptual phase of their services and thus ensure "compliance by design" in technology and processes.

C5 reloaded – the 2020 version

BSI has just published a major update of the C5, entitled C5:2020. Many areas have been thoroughly revised to cover current trends and developments such as DevOps. Two areas have been added:

  • “Product security” focuses on the security of the cloud service itself so that the requirements of the EU Cybersecurity Act are included in the questionnaire.
  • Especially with regard to US authorities, dealing with “Investigation requests from government agencies” regularly raises questions for European customers. For this reason, the second new block of questions is designed to ensure appropriate handling of these requests, including legal review.

The C5:2020 is clearly an up-to-date and valuable resource for securing the shared responsibility between cloud customer and cloud service provider.

Applying best practices to real-life scenarios

The process of implementing and securing the resulting technical concepts and necessary mitigating measures requires an individual consideration of the specific requirements of a customer company. This includes a risk-oriented approach to identify the criticality of data, services and processes, and to develop a deep understanding of the effectiveness and impact of the implemented measures.

KuppingerCole Research can provide essential information as a valuable foundation for technologies and strategies. KuppingerCole Advisory Services support our clients strategically in the definition and implementation of necessary conceptual and actionable measures. This is particularly true when it comes to finding out how to efficiently close gaps once they have been identified. This includes mitigating measures, accompanying organizational and technical activities, and the efficient selection of the appropriate and optimal portfolio of tools. Finally, the KuppingerCole Academy with its upcoming master classes for Incident Response Management and Privileged Access Management supports companies and employees in creating knowledge and awareness.

Proper Patch Management Is Risk-Oriented

With regard to cybersecurity, the year 2020 kicks off with considerable upheavals. A few days ago, my colleague Warwick wrote about the security problems that arise with some of Citrix's products and that can potentially affect any company, from start-ups and SMEs to large corporations and critical infrastructure operators.

Just a few hours later, the NSA and many others reported a vulnerability in the current Windows 10, Windows Server 2016 and Windows Server 2019 operating systems that causes them to fail to properly validate certificates that use Elliptic Curve Cryptography (ECC). As a result, an attacker can spoof the authenticity of certificate chains. The effects that can be concealed behind the fabrication of supposedly valid signatures are many and varied: for example, they can make unwanted code appear valid, or corrupt trustworthy communication based on ECC-based X.509 certificates. More information is now available from Microsoft.
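One mitigating measure that does not rely solely on the platform's chain validation is certificate pinning: comparing the fingerprint of the presented certificate against a known-good value recorded out-of-band. The sketch below is a minimal, hypothetical illustration using only the Python standard library; the certificate bytes are placeholders, not real DER data:

```python
import hashlib
import hmac

def sha256_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def matches_pin(cert_der: bytes, pinned_hex: str) -> bool:
    """Constant-time comparison of a certificate against a pinned fingerprint.

    A forged certificate with a spoofed chain still cannot match the pin,
    because the fingerprint covers the entire certificate, including its
    public key parameters.
    """
    return hmac.compare_digest(sha256_fingerprint(cert_der), pinned_hex)

# Placeholder bytes standing in for a real DER-encoded server certificate:
example_cert = b"placeholder for DER-encoded certificate bytes"
pin = sha256_fingerprint(example_cert)  # recorded out-of-band beforehand

assert matches_pin(example_cert, pin)
assert not matches_pin(b"forged certificate", pin)
```

Pinning is an additional, compensating control, not a replacement for patching: it only helps for the specific peers whose fingerprints have been pinned.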

Immediate Patching as the default recommendation

What both of these news items have in common is the default recommendation: patch immediately once a patch is available, and implement mitigating measures until then. And you can't really argue with that either. However, it must be executed properly.

If you take a step back from the current, specific events, the patching process becomes evident as a pivotal challenge for cybersecurity management. First and foremost, a comprehensive approach to patch management must exist in the first place, ideally integrated into a comprehensive release management system. The high number of long-term unpatched systems, for example during the ‘Heartbleed’ vulnerability, shows that this is far from being a comprehensively solved problem.

Criticality and number of affected systems as the key parameters

Security patches have a high criticality. Therefore, they usually have to be implemented on all affected systems as quickly as possible. This inevitably leads to a conflict of objectives between the speed of reaction (and thus the elimination of a vulnerability) and the necessary validation of the patch for actual problem resolution and possible side effects. A patch that changes mission-critical systems from the status "vulnerable" to the status "unusable" is the "worst case scenario" for business continuity and resilience.

The greater the number of affected systems, the greater the risk of installing patches automatically. If patching has to be carried out manually (e.g. on servers) and within maintenance windows, questions about a strategy regarding sequence and criticality arise as the number of affected systems increases. Patches can deeply affect existing functionalities and processes, so criticalities and dependencies must be taken into account.

Modern DevOps scenarios require the patching of systems also in repositories and tool chains, so that newly generated systems meet the current security requirements and existing ones can be patched or replaced appropriately.

Automated patches are indispensable

It is essential that software vendors provide automated (and well-tested and actually working) patches. There are huge differences when it comes to speed, timeliness and potential problems encountered, no matter how big the vendor. Automated patching is certainly a blessing in many situations in today's security landscape.

The trade-off between the risk of an automated patch and the security risk of an unpatched system on an increasingly hostile Internet has been shifting from 2010 to today (2020). In many cases, the break-even point reached somewhere in this period can be used, with a reasonably clear conscience, as justification for automated patching and some basic confidence in the quality of the patches provided.

But simply patching everything automatically and unmonitored can be a fatal default policy. This is especially true for OT systems (operational technology), e.g. on the factory floor: the risk of automated patches going wrong in such a mission-critical environment might be considered much higher, increasing the desire to control the patching process manually. And even a scheduled update can be a challenge, as maintenance windows require downtimes, which must be coordinated with complex production processes.

Individual risk assessments and smart policies within patch management

It's obvious there's no one-size-fits-all approach here. But it is also clear that every company and every organization must develop and implement a comprehensive and thorough strategy for the timely and risk-oriented handling of vulnerabilities through patch management as part of cybersecurity and business continuity.

This includes policies for the immediate risk assessment of vulnerabilities and their subsequent remediation. It also includes the definition and implementation of mitigating measures as long as no patch is available, up to and including the temporary shutdown of a system. Decisions as to whether patches should be installed automatically and largely immediately in the field, which systems require special (i.e. manual) attention, and which patch requires special quality assurance depend to a large extent on operational, well-defined risk management. Either way, processes with minimal time delays (hours or a few days, certainly not months) and with accompanying "compensating controls" of an organizational or technical nature are required.
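Such a policy can be made explicit and machine-checkable. The following sketch encodes the decision of whether a patch is rolled out automatically, staged manually, or whether mitigation (up to shutdown) is needed; the categories and thresholds are illustrative assumptions, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    exploit_likely: bool     # e.g. vendor rates exploitation "more likely"
    patch_available: bool
    system_criticality: int  # 1 (low) .. 5 (mission-critical), assumed scale
    is_ot_system: bool       # OT/factory-floor systems need manual control

def patch_decision(v: Vulnerability) -> str:
    """Risk-oriented patch decision; thresholds here are illustrative."""
    if not v.patch_available:
        # Mitigate; a highly exposed critical system may need to go offline.
        if v.exploit_likely and v.system_criticality >= 4:
            return "mitigate-or-shutdown"
        return "mitigate-and-monitor"
    if v.is_ot_system or v.system_criticality >= 4:
        # Manual, scheduled rollout within a defined maintenance window.
        return "manual-staged-rollout"
    return "automatic-rollout"
```

A policy like this only works if it is embedded in a process with minimal time delays and accompanying compensating controls, as argued above.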

Once the dust has settled around the current security challenges, some organizations might do well to put a comprehensive review of their patch management policies on their cybersecurity agenda. And it should be kept in mind that a risk assessment is far from being a mere IT exercise, because IT risks are always business risks.

Assessing and managing IT risks as business risks integrated into an overall risk management exercise is a challenging task and requires changes in operations and often the organization itself. This is even more true when it comes to using risk assessments as the foundation for actionable decision in the daily patching process. The benefits of a reduced overall risk posture and potentially less downtime however make this approach worthwhile.

KuppingerCole Analysts provide research and advisory in this and many other areas of cybersecurity and operational resilience. Check out, for example, our “Leadership Brief: Responding to Cyber Incidents – 80209” or, for the bigger picture, the “Advisory Note: GRC Reference Architecture – 72582”. Find out where we can support you in maturing your processes. Don’t hesitate to get in touch with us.

And yes: you should patch your affected systems *very* soon, as Microsoft rates the exploitability of the vulnerability described above as “1 - Exploitation More Likely”. How to apply this patch effectively? Well, assess your specific risks...

Renovate Your IAM-House While You Continue to Live in It

Do you belong to the group of people who would like to completely retire all obsolete solutions and replace existing ones with new ones in a big bang? Do you do the same with company infrastructures? Then you don't need to read any further here. Please tell us later how things worked out for you.

Or, at the other extreme, do you belong to those companies in which infrastructures are only developed further in response to current challenges, audit findings, or particularly prestigious projects that come with a budget?

Then you should read on, because we want to give you arguments to back a more comprehensive approach.

Identity infrastructure is the basis of enterprise security

In previous articles we have introduced the Identity Fabric, a concept that serves as a viable foundation for enterprise architectures for Identity and Access Management (IAM) in the digital age.

This concept, which KuppingerCole places at the center of its definition of an IAM blueprint, expressly starts from a central assumption: practically every company today operates an identity infrastructure. This infrastructure virtually always forms the central basis of enterprise security, ensures basic compliance and governance, and helps with requesting authorizations and perhaps even with their withdrawal when they are no longer needed.

As a result, existing infrastructures already meet basic requirements today, but these requirements were often defined in earlier phases, for the company as it existed then.

The demand for new ways of managing identities

Yes, we too cannot avoid the buzzword "digitalization" here, because requirements that cannot be adequately covered by traditional systems arise precisely from this context. And just adding some additional components (a little CIAM here, some MFA there, or the Azure AD that came with the Office 365 installation anyway) won't help. The way we communicate has changed, and companies are advancing their operations with entirely new business models and processes. New ways of working together and communicating demand new ways of managing identities, satisfying regulatory requirements and delivering secure processes, not least to protect customers and indeed your very own business.

What should you do if your own IAM (only) follows a classic enterprise focus, i.e. fulfills the following tasks very well?

  • Traditional Lifecycles
  • Traditional Provisioning
  • Traditional Access Governance
  • Traditional Authentication (Username/Password, Hardware Tokens, VPNs)
  • Traditional Authorization (Roles, Roles, Roles)
  • Consumer Identities (somehow)

And what should you do if the business wants you, as the system owner of an IAM, to meet the following requirements?

  • High Flexibility
  • High Delivery Speed for New Digital Services
  • Software as a Service
  • Container and Orchestration
  • Identity API and API Security
  • Security and Zero Trust

The development of parallel infrastructures has been largely recognized as a wrong approach.
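The gap between the two lists above can be illustrated with authorization: a traditional, purely role-based check versus a policy-based check that also evaluates request context, as a Zero Trust approach requires. Both functions below are simplified sketches with hypothetical attribute names, not a reference implementation:

```python
# Traditional model: authorization is a static role lookup.
def rbac_allows(user_roles: set, required_role: str) -> bool:
    return required_role in user_roles

# Policy-based model: the decision also evaluates context attributes
# (device state, authentication strength, resource risk), as required
# by Zero Trust ("never trust, always verify"). The attributes and
# rules here are illustrative assumptions.
def policy_allows(user_roles: set, required_role: str,
                  device_managed: bool, mfa_passed: bool,
                  resource_risk: str) -> bool:
    if required_role not in user_roles:
        return False
    if resource_risk == "high":
        # High-risk resources additionally require a managed device and MFA.
        return device_managed and mfa_passed
    return mfa_passed or device_managed
```

The same role grant can thus lead to different access decisions depending on device and authentication context, which is exactly what static, role-only models cannot express.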

Convert existing architectures during operation

Therefore, it is necessary to gently convert existing architectures so that the ongoing operation is permanently guaranteed. Ideally, this process also optimizes the architecture in terms of efficiency and costs, while at the same time adding missing functionalities in a scalable and comprehensive manner.

Figuratively speaking, you have to renovate your house while you continue to live in it. Nobody who has fully understood digitalization will deny that the proper management of all relevant identities, from customers to employees and partners to devices, is one of the central basic technologies for this, if not the central one. But on the way there, everything already achieved must continue to be maintained, quick wins must provide proof that the company is on the right track, and an understanding of the big picture (the "blueprint") must not be lost.

Research forecast

If you want to find out more, read the "Leadership Brief: Identity Fabrics - Connecting Anyone to Every Service - 80204" as a first introduction to this comprehensive and promising concept. The KuppingerCole "Architecture Blueprint Identity and Access Management - 72550" has just been published and aims to provide you with the conceptual foundation for sustainably transforming existing IAM infrastructures into a future-proof basic technology for the 2020s and beyond.

In addition, leading-edge whitepapers currently being prepared and soon to be published (watch this space; we will provide links in one of the upcoming issues of our “Analysts’ View IAM”) will provide essential recommendations for initializing and implementing such a comprehensive transformation program.

KuppingerCole has supported successful projects over the course of the past months in which existing, powerful but functionally insufficient IAM architectures were put on the road to a sustained transformation into a powerful future infrastructure. The underlying concepts can be found in the documents above, but if you would like us to guide you along this path, please feel free to talk to us about possible support.

Feel free to browse our Focus Area: The Future of Identity & Access Management for more related content.

As You Make Your KRITIS so You Must Audit It

Organizations of major importance to the German state whose failure or disruption would result in sustained supply shortages, significant public safety disruptions, or other dramatic consequences are categorized as critical infrastructure (KRITIS).

Nine sectors and 29 industries currently fall under this umbrella, including healthcare, energy, transport and financial services. Hospitals as part of the health care system are also included if they meet defined criteria.

For hospitals, the implementation instructions of the German Hospital Association (DKG) have proven to be authoritative. The number of fully inpatient hospital treatments in the reference period (i.e. the previous year) was defined as the measurement criterion. At 30,000 fully inpatient treatment cases, the threshold value for identification as critical infrastructure is reached, which concerns considerably more than 100 hospitals. These are obliged to fulfil clearly defined requirements, which are derived from the IT-SiG - the "Gesetz zur Erhöhung der Sicherheit informationstechnischer Systeme (IT-Sicherheitsgesetz)", Germany's law on the security of IT systems and digital infrastructures, including critical infrastructures - and the BSI-KritisV ("BSI-Kritisverordnung"). The above-mentioned implementation instructions of the DKG thus also define proposed measures for ensuring adequate security, in particular with regard to the IT used.

Affected companies had until June 30th of this year to meet the requirements and to commission a suitable, trustworthy third party for auditing and certification.

But according to a report in Tagesspiegel Background, this has proven challenging: industry associations have been pointing out for some time that there are not enough suitable auditing firms. This is not least due to the fact that auditors must have a double qualification, which in addition to IT also includes knowledge of the industry, in this case the healthcare system in hospitals. Here, as in many other areas, the infamous skills gap strikes, i.e. the lack of suitably qualified employees in companies or on the job market.

This has led to the companies capable of performing the audits being overloaded, and thus to varying quality and availability of audits and the resulting audit reports. According to the press report, these certificates suffer the same fate when they are submitted to the BSI, which evaluates them: here, too, a shortage of skilled workers leads to a backlog of work. A comprehensive evaluation was not available at the time of publication. Even the implementation instructions of the German Hospital Association, on the basis of which many implementations were carried out in the affected hospitals, have not yet been confirmed by the BSI.

Does this place KRITIS on the list of toothless guidelines (such as PSD2 with its large number of national individual regulations) that have not been adequately implemented, at least in this area? Not necessarily. The obligation to comply has not been suspended; the lack of personnel and skills on the labour market merely prevents consistent, comprehensive auditing by suitable bodies such as TÜV, Dekra or specialised auditing firms. However, when such an audit does take place, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the spirit of continuous implementation and improvement.

Even hospitals that currently fall slightly short of this threshold are encouraged to prepare now for adjusted requirements or increasing patient numbers. This means that even without the necessity of a formal attestation, the appropriate basic conditions, such as the establishment of an information security management system (ISMS) in accordance with ISO 27001, can be created to serve as a foundation.

In addition, the availability of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialist institutes) a resilient basis for creating appropriate conditions that correspond to the current state of requirements and technology. This also applies if they are not, and will not in the foreseeable future be, KRITIS-relevant, but want to offer their patients a comparably high degree of security and the resulting trustworthiness.

KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.

Stell Dir vor, es ist KRITIS und keiner geht hin

Critical infrastructures (KRITIS) are "organizations and facilities of major importance for the state and its community whose failure or impairment would result in lasting supply shortages, significant disruptions to public security, or other dramatic consequences".

Nine sectors and 29 industries currently count as critical infrastructures, including healthcare, energy supply, transport and financial services. Hospitals, as part of the healthcare sector, also fall into the "critical infrastructure" category if they meet defined criteria.

For hospitals, the implementation guidance of the German Hospital Association (Deutsche Krankenhausgesellschaft, DKG) has proven to be authoritative. The assessment criterion is the number of full inpatient hospital treatments in the reference period (the previous year). At 30,000 full inpatient treatment cases, the threshold for identification as critical infrastructure is reached, which affects well over 100 hospitals. These are obliged to fulfil clearly defined requirements derived from the IT-SiG, the "Act to Increase the Security of Information Technology Systems (IT Security Act)" for securing IT systems and digital infrastructures, including critical infrastructures, in Germany, and from the BSI-KritisV, the "BSI Ordinance on Critical Infrastructures". The DKG implementation guidance in turn defines proposed measures for demonstrating adequate security, particularly with regard to the IT in use.

June 30 of this year was the defined deadline for the affected organizations to fulfil the requirements and to commission a suitable, trustworthy third party for auditing and attestation.

According to a report by Tagesspiegel Background, this is precisely where a challenge currently lies: industry associations have long pointed out that there are not enough suitable auditing bodies. This is not least because auditors must demonstrate a dual qualification covering not only IT but also knowledge of the industry, in this case hospital healthcare. Here, as in many other areas, the notorious skills gap strikes: the lack of suitable, qualified employees in companies or on the labour market.

This has led to an overload of the companies capable of performing the audits, and thus to varying quality and availability of audits and the resulting audit reports. According to the press report, the attestations suffer the same fate when they are submitted to the BSI, which evaluates these reports. Here, too, a shortage of skilled workers leads to a backlog of work. No evaluation was available at the time of publication. Even the implementation guidance of the German Hospital Association, on the basis of which many implementations in the affected hospitals were carried out, has not yet been confirmed by the BSI.

Does this place KRITIS, at least in this area, in the list of toothless guidelines that have not been adequately implemented (such as PSD2 with its large number of national individual regulations)? From today's perspective, probably not. The obligation to comply has not been suspended; the shortage of personnel and skills on the labour market merely prevents consistent, comprehensive auditing by suitable bodies such as TÜV, Dekra or specialised auditing firms. If such an audit does take place, however, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the spirit of continuous implementation and improvement.

Even hospitals that today fall just short of this threshold are well advised to prepare for adjustments to the requirements or for rising patient numbers. And that means creating the sensible groundwork, such as establishing an information security management system (ISMS) in accordance with ISO 27001, even without the need for a formal attestation.

In addition, the existence of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialised institutes) a resilient basis for creating appropriate conditions that correspond to the current state of requirements and technology. This also applies if they are not, and foreseeably will not become, KRITIS-relevant, but want to offer their patients a comparably high standard of security and the resulting trustworthiness.

KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.

Cognitive! - Entering a New Era of Business Models Between Converging Technologies and Data

Digitalization, or more precisely the "digital transformation", has led us to the "digital enterprise". It strives to deliver on its promise to leverage previously unused data, and the information it contains, for the benefit of the enterprise and its business. And although both terms can certainly be described as buzzwords, they have found their way into our thinking and into all kinds of publications, so they will probably be with us for some time to come.

Thought leaders, analysts, software and service providers, and finally practically everyone in between have been proclaiming the "cognitive enterprise" for several months now. This concept, and the mindset associated with it, promises to use the information of the already digital company to achieve productivity, profitability and a high level of innovation. And it aims at creating and evolving next-generation business models between converging technologies and data.

So what is special about this "cognitive enterprise"? Defining it usually starts with the idea of applying cognitive concepts and technologies to data in practically all relevant areas of a corporation. Data here includes open data, public data, subscribed data, enterprise-proprietary data, pre-processed data, structured and unstructured data, or simply Big Data. And the technologies involved include the likes of Artificial Intelligence (AI), more specifically Machine Learning (ML), Blockchain, Virtual Reality (VR), Augmented Reality (AR), the Internet of Things (IoT), ubiquitous communication with 5G, and individualized 3D printing.

As of now, it is mainly concepts from AI and machine learning that are grouped together as "cognitive", although a uniform understanding of the underlying concepts is often still lacking. They have already proven capable of doing the "heavy lifting", either on behalf of humans or autonomously. They increasingly understand, reason and interact, e.g. by engaging in meaningful conversations and thus delivering genuine value without human intervention.

Automation, analytics and decision-making, customer support and communication are key target areas, because many tasks in today's organizations are in fact repetitive, time-consuming, dull and inefficient. The focus (ideally) lies on relieving and empowering the workforce wherever a task can be executed by bots or through Robotic Process Automation. Presumably every organization agrees that its staff is better than bots and can perform far more meaningful tasks. So these measures are intended to benefit both the employee and the company.

But this is only the starting point. A cognitive enterprise will be interactive in many ways, not only by interacting with its customers, but also with other systems, processes, devices, cloud services and peer organizations. As a result, it will be adaptive, as it is designed to learn from data, even in an unattended manner. The key goal is to foster agility and continuous innovation through cognitive technologies by embracing and institutionalizing a culture that perpetually changes the way an organization works and creates value.

Beyond the fact that journalists, marketing departments and even analysts tend to outdo each other in the creation and propagation of hype terms, where exactly is the difference between a cognitive and a digital enterprise?  Do we need yet another term, notably for the use of machine learning as an apparently digital technology?  

I don't think so. We are witnessing the evolution, advancement and ultimately the application of exactly those digital technologies that lay the foundation of a comprehensive digital transformation. The added value of the label "cognitive", however, is negligible.

But regardless of what you, I or the buzzword industry finally decide to call it, much more relevant are the implications and challenges of this consistent implementation of digital transformation. In my opinion, two aspects must not be underestimated:

First, this transformation is either approached in its entirety or not at all; there is nothing in between. If you embark on it, it is not enough to quickly look for a few candidates for a bit of Robotic Process Automation. There will be no successful, "slightly cognitive" companies: that would waste the actual potential of a comprehensive redesign of corporate processes and be worth little more than a placebo. Rather, it is necessary to model internal knowledge and to acquire and interconnect data. Jobs and tasks will change, become obsolete and be replaced by new and more demanding ones (otherwise they could in turn be executed by a bot).

Second, the importance of managing constant organizational change and restructuring is often overlooked. After all, the transformation to a digital/cognitive enterprise is by far not only about AI, Robotic Process Automation or technology. Rather, the focus has to be on the individual as well, i.e. each member of the entire workforce (both internal and external). Established processes have to be managed, adjusted or even re-engineered, and this also applies to processes affecting partners, suppliers and thus any kind of cooperation or interaction.

One of the most important departments in this future will be human resources, and specifically talent management. Getting people on board and retaining them sustainably will be a key challenge. In particular, this means providing them with ongoing training and enabling them to perform demanding tasks in a highly volatile environment. And it is precisely such a highly responsible task that will certainly not be automated, even in the long term.

PSD2 in a Europe of Small Principalities

Europe’s consumers have been promised for some years now that strong customer authentication (SCA) was on its way. And the rules as to when this should be applied in e-commerce are being tightened. The aim is to better protect the customers of e-commerce services.  

This sounds like a good development for us all, since we are all regular customers of online merchants or providers of online services. And if you look at the details of SCA, this impression is further reinforced. Logins with only username and password are theoretically a thing of the past, and the risk of fraud based on compromised credentials is potentially reduced considerably.

The Payment Services Directive (PSD2) requires multi-factor authentication (MFA) as the implementation of SCA for remote electronic payments above €30. MFA covers all approaches involving more than one factor; the most common variant is Two-Factor Authentication (2FA), i.e. the use of two factors. There are three classes of factors: knowledge, possession and biometrics, or "what you know", "what you have", "what you are". For each factor, there may be various "means", e.g. username and password for knowledge, a hard token or a phone for possession, a fingerprint or iris scan for biometrics.
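As a rough illustration of combining two independent factors (a sketch only, not any payment provider's actual implementation; all names are invented), a login could pair a salted password check (knowledge) with a time-based one-time password generated by the user's phone, as standardized in RFC 6238 (possession):

```python
# Sketch of 2FA: knowledge factor (salted password hash) plus
# possession factor (RFC 6238 TOTP, as used by authenticator apps).
import hashlib
import hmac
import struct
import time


def verify_password(password, salt, stored_hash):
    """Knowledge factor: compare a salted PBKDF2 hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)


def totp(secret, timestep=30, digits=6, now=None):
    """Possession factor: time-based one-time password per RFC 6238."""
    counter = int((now if now is not None else time.time()) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def authenticate(password, otp, salt, stored_hash, totp_secret):
    """SCA-style login: both independent factors must succeed."""
    return (verify_password(password, salt, stored_hash)
            and hmac.compare_digest(otp, totp(totp_secret)))
```

The two factors are independent by design: compromising the password database does not reveal the TOTP secret, and intercepting one OTP does not help beyond its 30-second window.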

Using these factors results in improved protection for virtually all parties involved: e-commerce sites, payment processors and customers can be more confident that transactions are legitimate and trustworthy.

A short look at the history: on November 16, 2015, the Council of the European Union adopted PSD2 and gave Member States two years to transpose the Directive into their national laws and regulations. One might have expected the broad and comprehensive implementation of SCA as part of PSD2 to be achieved in a timely manner, as the benefits are obvious. Of course, purchasing processes become a little more complex, because card data and account number, or username and password for payment services, are no longer enough at checkout. A second, different factor, such as a fingerprint or an SMS code sent to your registered smartphone, becomes necessary to increase security.

But shouldn’t we value this significantly increased security and the trust that goes with it? On the contrary: retailers, in Germany for example, are far from positive about stricter security standards. Every change, and especially every increase in the complexity of the purchasing process, is regarded as an obstacle, a potential drop-out point in the customer journey.

And yet the development now emerging was not unexpected. As early as July 2019, the European Banking Authority (EBA) stated that some players were not sufficiently prepared for the PSD2, SCA and thus the required protection of consumers.  

As a measure, the member states were offered an extension of the deadline. This was used extensively first and foremost by the UK, but also by some other countries. In Germany, the new regulations for cashless payments were to enter into force on 14 September 2019, almost four years after the European directive PSD2 was approved. From then on, only payment services that implement SCA and are therefore PSD2-compliant could be used for online purchases with credit cards.

And, you guessed it, just recently BaFin (Germany’s financial watchdog) announced in a press release that “As a temporary measure, payment service providers domiciled in Germany will still be allowed to execute credit card payments online without strong customer authentication after 14 September 2019”.  

This does not only mean an immense delay of unclear duration; the otherwise rather homogeneous European market is now being chopped up into a multitude of different regulations and exceptions. The direct opposite of what was planned has been achieved, since it is unclear when and where which requirements will apply, in the European Union and on a global Internet. The obvious losers, at least in the short to medium term, are the customers, online security, and trust in reliable online purchases.

Forward-looking organizations that value their customers and their security and trust can implement SCA now, even without BaFin checks. Companies that benefit from a short delay in meeting PSD2 requirements should quickly seize this opportunity and join that group. But those companies that, ever since PSD2 and its requirements were published, have preferred to complain about more complex payment processes and lament EU regulation should reconsider their relationship to security and customer satisfaction (and thus to their customers), and rapidly set out on a straight path to comprehensive PSD2 compliance. Because temporary measures and extended deadlines are exactly that: temporary, and deadlines.

To meet them successfully and in time, KuppingerCole Analysts can support organizations by providing expertise through our research in the areas of PSD2, SCA and MFA. And our Advisory Services are here to support you in identifying and working towards your individual requirements while maintaining user experience, meeting business requirements and achieving compliance. And our upcoming Digital Finance World event in Frankfurt next week is the place to be to learn from experts and exchange your thoughts with peers.
