Blog posts by Matthias Reinwarth
KuppingerCole has accompanied the unprecedented rise of the cloud as a new infrastructure and alternative platform for a multitude of previously unimaginable services, and has done so constructively and with the necessary critical distance from the very beginning (blog post from 2008). Cybersecurity, governance and compliance have always been indispensable aspects of this.
When moving to the use of cloud services, it is most important to take a risk-based approach. There is no such thing as "just the cloud": it is not a single model but covers a wide and constantly growing spectrum of applications, services and virtualized infrastructure.
The "wild west phase" of early cloud deployments, based on quick decisions and individual, departmental "credit card" cloud subscriptions without corporate oversight, should lie behind us. An organization adopting a cloud service needs to ensure that it remains in compliance with laws and industry regulations. There are many aspects to look at, including but not limited to compliance, service location, data security, availability, identity and access management, insider abuse of privilege, virtualization, isolation, cybersecurity threats, and monitoring and logging.
Moving to the cloud done right
When moving to the use of cloud services, it is most important to take a risk-based approach. There is no such thing as one single version of "the cloud": it is not a single model but covers a wide and constantly growing spectrum of applications, services and virtualized infrastructure, delivered by an equally wide range of cloud service providers. While many people think mainly of the large platform providers like AWS or Microsoft Azure, there is a growing number of companies providing services in and from the cloud. To ensure the security of their customers' data, providers of cloud services should comply with best practices for the provision of the services they offer.
Moving services into the cloud, or creating new services within it, substantially changes the traditional picture of typical responsibilities for an application or infrastructure and introduces the Cloud Service Provider (CSP) as a new stakeholder in the network of established functional roles. Depending on which parts of the service are provided by the CSP on behalf of the customer and which parts are implemented by the tenant on top of the provided service layers, each responsibility is assigned to either the CSP or the tenant.
Shared responsibilities between the provider and the tenant are a key characteristic of every deployment scenario of cloud services. For every real-life cloud service model, each identified responsibility has to be clearly assigned to the appropriate stakeholder. This assignment can differ drastically between scenarios where only infrastructure is provided, for example plain storage or computing services, and scenarios where complete "Software as a Service" (SaaS, e.g. Office 365) is provided. The prerequisite for an appropriate service contract between provider and tenant is therefore a comprehensive identification of all responsibilities and an agreement on which contract partner each of them is assigned to.
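To make the idea of a per-model responsibility split tangible, here is a minimal sketch in code. The layer names and per-model assignments are simplified assumptions for illustration only; a real contract must enumerate and assign responsibilities explicitly for the specific service at hand.

```python
# Illustrative sketch only: layer names and ownership assignments are
# simplified assumptions, not a definitive mapping for any specific provider.

LAYERS = [
    "physical_infrastructure",
    "virtualization",
    "operating_system",
    "middleware",
    "application",
    "data",
    "identity_and_access",
]

# Per service model, which layers the CSP typically operates; everything
# else remains with the tenant. Real contracts must spell this out.
CSP_MANAGED = {
    "IaaS": {"physical_infrastructure", "virtualization"},
    "PaaS": {"physical_infrastructure", "virtualization",
             "operating_system", "middleware"},
    "SaaS": {"physical_infrastructure", "virtualization",
             "operating_system", "middleware", "application"},
}

def responsibility_matrix(model: str) -> dict:
    """Return an explicit owner ('csp' or 'tenant') for every layer."""
    csp_layers = CSP_MANAGED[model]
    return {layer: ("csp" if layer in csp_layers else "tenant")
            for layer in LAYERS}

for model in ("IaaS", "PaaS", "SaaS"):
    print(model, responsibility_matrix(model))
```

Note that even in the SaaS row, data and identity management remain with the tenant in this sketch, which mirrors the point above: there is no scenario in which all responsibilities move to the provider.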
However, the process involved is often manual and time consuming, and there is a multitude of aspects to consider. From the start it was important to us to support organizations in understanding the risks that come with the adoption of cloud services and in assessing the risks around their use of cloud services in a rapid and repeatable manner.
Best practices as a baseline
There are several best-practice frameworks, including ITIL, COBIT and the ISO/IEC 270xx series, as well as industry-specific guidance from the Cloud Security Alliance (CSA). For a primarily German audience (but de facto far beyond that), the BSI (the German Federal Office for Information Security) created the Cloud Computing Compliance Criteria Catalogue (BSI C5 for short) several years ago as a guideline for all those involved in the process of evaluating cloud services (users, vendors, auditors, security providers, service providers and many more).
It is available free of charge to anyone interested, and many should be interested: readers benefit from a well-curated, proofread and regularly updated catalogue of criteria that is openly available for anyone to learn from and use.
These criteria can be used by cloud service users to evaluate the services offered. Conversely, service providers can integrate these criteria as early as the conceptual phase of their services and thus ensure "compliance by design" in technology and processes.
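To illustrate how such a criteria catalogue can drive an evaluation, here is a minimal gap-assessment sketch. The criterion IDs and descriptions are hypothetical, heavily abridged placeholders, not actual C5 criteria; a real C5 assessment covers far more criteria and requires qualified auditors.

```python
# Hypothetical, heavily abridged criteria catalogue; IDs are loosely modeled
# on area abbreviations but are NOT the real C5 criteria.
CRITERIA = {
    "OPS-01": "Documented capacity management",
    "IDM-01": "Central identity and access management",
    "CRY-01": "Encryption of data in transit",
    "PSS-01": "Product security of the cloud service itself",
}

def assess(evidence: dict) -> dict:
    """Map each criterion to 'met' or 'gap' based on supplied evidence."""
    return {cid: ("met" if evidence.get(cid) else "gap") for cid in CRITERIA}

# Example: a provider supplies evidence for two of the four criteria.
report = assess({"OPS-01": True, "CRY-01": True})
gaps = [cid for cid, state in report.items() if state == "gap"]
print("Open gaps:", gaps)
```

The same structure works in both directions described above: users can run it against a provider's evidence, and providers can run it against their own design artifacts early in the conceptual phase.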
C5 reloaded – the 2020 version
The 2020 revision, C5:2020, adds two new blocks of criteria:
- "Product security" focuses on the security of the cloud service itself, so that the requirements of the EU Cybersecurity Act are included in the questionnaire.
- Especially with regard to US authorities, dealing with "investigation requests from government agencies" regularly raises questions for European customers. For this reason, the second block of criteria was designed to ensure appropriate handling of these requests with regard to legal review.
The C5:2020 is clearly an up-to-date and valuable resource for securing the shared responsibility between cloud customer and cloud service provider.
Applying best practices to real-life scenarios
The process of implementing and securing the resulting technical concepts and necessary mitigating measures requires individual consideration of a customer company's specific requirements. This includes a risk-oriented approach to identify the criticality of data, services and processes, and to develop a deep understanding of the effectiveness and impact of implemented measures.
KuppingerCole Research can provide essential information as a valuable foundation for technologies and strategies. KuppingerCole Advisory Services support our clients strategically in the definition and implementation of necessary conceptual and actionable measures. This is particularly true when it comes to finding out how to efficiently close gaps once they have been identified. This includes mitigating measures, accompanying organizational and technical activities, and the efficient selection of the appropriate and optimal portfolio of tools. Finally, the KuppingerCole Academy with its upcoming master classes for Incident Response Management and Privileged Access Management supports companies and employees in creating knowledge and awareness.
With regard to cybersecurity, the year 2020 kicks off with considerable upheavals. A few days ago, my colleague Warwick wrote about the security problems that arise with some of Citrix's products and that can potentially affect any company, from start-ups and SMEs to large corporations and critical infrastructure operators.
Just a few hours later, the NSA and many others reported a vulnerability in the current Windows 10, Windows Server 2016 and Windows Server 2019 operating systems that causes them to fail to properly validate certificates that use Elliptic Curve Cryptography (ECC). As a result, an attacker can spoof the authenticity of certificate chains. The possible effects of fabricating supposedly valid signatures are many and varied: an attacker can, for example, make unwanted code appear valid, or corrupt trustworthy communication based on ECC-based X.509 certificates. More information is now available from Microsoft.
Immediate Patching as the default recommendation
What both of these news items have in common is the typical default recommendation: patch immediately once a patch is available, and implement mitigating measures until then. And you can't really argue with that either. However, this must be executed properly.
If you take a step back from the current, specific events, the patching process becomes evident as a pivotal challenge for cybersecurity management. First and foremost, a comprehensive approach to patch management must exist at all, ideally integrated into a comprehensive release management system. The high number of systems left unpatched over the long term, for example during the 'Heartbleed' vulnerability, shows that this is far from being a comprehensively solved problem.
Criticality and number of affected systems as the key parameters
Security patches have a high criticality. Therefore, they usually have to be implemented on all affected systems as quickly as possible. This inevitably leads to a conflict of objectives between the speed of reaction (and thus the elimination of a vulnerability) and the necessary validation of the patch for actual problem resolution and possible side effects. A patch that changes mission-critical systems from the status "vulnerable" to the status "unusable" is the "worst case scenario" for business continuity and resilience.
The greater the number of affected systems, the greater the risk of automatically installing patches. If patching has to be carried out manually (e.g. on servers) and in the context of maintenance windows, questions about a strategy regarding the sequence and criticality of affected systems arise as the number of affected systems increases. Patches affect existing functionalities and processes deeply, so criticalities and dependencies must be taken into account.
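The trade-off between criticality and rollout sequence can be made tangible with a small prioritization sketch. The scoring fields, scales and weights below are assumptions for illustration; a real patch management strategy would derive them from the organization's own risk model and dependency mapping.

```python
# Hypothetical risk-scoring sketch: field names, scales and weights are
# illustrative assumptions, not a standard. It orders systems so that the
# most critical, most exposed ones are patched first.

from dataclasses import dataclass

@dataclass
class SystemRecord:
    name: str
    business_criticality: int   # 1 (low) .. 5 (mission-critical)
    exposure: int               # 1 (isolated) .. 5 (internet-facing)
    has_maintenance_window: bool

def patch_priority(s: SystemRecord) -> float:
    """Higher score = patch sooner."""
    score = s.business_criticality * s.exposure
    # Systems bound to a formal maintenance window are slightly deprioritized,
    # since they cannot be patched immediately anyway.
    if s.has_maintenance_window:
        score *= 0.8
    return score

fleet = [
    SystemRecord("erp-db", 5, 2, True),
    SystemRecord("public-web", 4, 5, False),
    SystemRecord("test-vm", 1, 1, False),
]

for s in sorted(fleet, key=patch_priority, reverse=True):
    print(s.name, patch_priority(s))
```

In this sketch the internet-facing web server outranks the more business-critical but isolated database, which is exactly the kind of explicit, arguable ordering a patching strategy should produce instead of ad-hoc decisions.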
Modern DevOps scenarios also require patching of systems in repositories and tool chains, so that newly generated systems meet current security requirements and existing ones can be patched or replaced appropriately.
Automated patches are indispensable
It is essential that software vendors provide automated (and well-tested and actually working) patches. There are huge differences when it comes to speed, timeliness and potential problems encountered, no matter how big the vendor. Automated patching is certainly a blessing in many situations in today's security landscape.
The trade-off between the risk of an automated patch and the security risk of an unpatched system in an increasingly hostile Internet has been shifting from 2010 to today (2020). In many cases, the break-even point reached somewhere in this period can be used with a clear conscience as justification for automated patching, along with some basic confidence in the quality of the patches provided.
But simply patching everything automatically and unmonitored can be a fatal default policy. This is especially true for OT systems (operational technology), e.g. on the factory floor: the risk of automated patches going wrong in such a mission-critical environment might be considered much higher, increasing the desire to control the patching process manually. And even a scheduled update can be a challenge, as maintenance windows require downtimes, which must be coordinated within complex production processes.
Individual risk assessments and smart policies within patch management
It's obvious there's no one-size-fits-all approach here. But it is also clear that every company and every organization must develop and implement a comprehensive and thorough strategy for the timely and risk-oriented handling of vulnerabilities through patch management as part of cybersecurity and business continuity.
This includes policies for the immediate risk assessment of vulnerabilities and their subsequent remediation. It also includes the definition and implementation of mitigating measures as long as no patch is available, up to and including the potential temporary shutdown of a system. Decision processes, i.e. whether patches should be installed automatically and largely immediately, which systems require special (manual) attention, and which patches require special quality assurance, depend to a large extent on operational and well-defined risk management. In any case, processes with minimal time delays (hours or a few days, certainly not months) and with accompanying "compensatory controls" of an organizational or technical nature are required.
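Such a decision process can be sketched as a small policy gate. The severities, system classes and rollout modes here are purely illustrative assumptions; a real policy would be driven by the organization's own risk management and compensating controls.

```python
# Sketch of a policy gate deciding how a patch is rolled out. Category
# names, severities and rules are illustrative assumptions, not a standard.

def patch_decision(severity: str, system_class: str,
                   patch_available: bool) -> str:
    """Return a rollout mode for a vulnerability on a given system class."""
    if not patch_available:
        # No patch yet: apply compensating controls, potentially up to a
        # temporary shutdown of the affected system.
        return "mitigate"
    if system_class == "ot":
        # OT / factory-floor systems: never auto-patch; coordinate a
        # maintenance window instead.
        return "manual-window"
    if severity in ("critical", "high"):
        # High urgency: automatic rollout after basic smoke tests.
        return "auto"
    # Lower severities: staged rollout with manual quality assurance.
    return "staged"

print(patch_decision("critical", "server", True))
```

The point of encoding the policy, even at this level of simplicity, is that every rollout decision becomes explicit, repeatable and auditable rather than improvised under pressure.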
Once the dust has settled around the current security challenges, some organizations might do well to put a comprehensive review of their patch management policies on their cybersecurity agenda. And it should be kept in mind that a risk assessment is far from being a mere IT exercise, because IT risks are always business risks.
Assessing and managing IT risks as business risks, integrated into an overall risk management exercise, is a challenging task and requires changes in operations and often in the organization itself. This is even more true when it comes to using risk assessments as the foundation for actionable decisions in the daily patching process. The benefits of a reduced overall risk posture, and potentially less downtime, however make this approach worthwhile.
KuppingerCole Analysts provide research and advisory in this and many other areas of cybersecurity and operational resilience. Check out e.g. our "Leadership Brief: Responding to Cyber Incidents – 80209" or, for the bigger picture, the "Advisory Note: GRC Reference Architecture – 72582". Find out where we can support you in maturing your processes. Don't hesitate to get in touch with us for a first contact.
And yes: you should *very* soon patch your affected systems, as Microsoft provides an exploitability assessment of "1 - Exploitation More Likely" for the vulnerability described above. How to effectively apply this patch? Well, assess your specific risks...
Do you belong to the group of people who would like to completely retire all obsolete solutions and replace existing solutions with new ones in a Big Bang? Do you do the same with company infrastructures? Then you don't need to read any further here. Please tell us later, how things worked out for you.
Or do you belong, at the other extreme, to those companies in which infrastructures can only be developed further in response to current challenges, audit findings, or particularly prestigious projects that come with a budget?
You, however, should read on, because we want to give you argumentative backing for a more comprehensive approach.
Identity infrastructure is the basis of enterprise security
In previous articles we have introduced the Identity Fabric, a concept that serves as a viable foundation for enterprise architectures for Identity and Access Management (IAM) in the digital age.
This concept, which KuppingerCole places at the center of its definition of an IAM blueprint, expressly starts from a central assumption: practically every company today operates an identity infrastructure. This infrastructure virtually always forms the central basis of enterprise security, ensures basic compliance and governance, and helps with requesting authorizations and perhaps even with withdrawing them when no longer needed.
As a result, existing infrastructures already meet basic requirements today, but these requirements were often defined in earlier phases, for the companies as they existed then.
The demand for new ways of managing identities
Yes, we too cannot avoid the buzzword "digitalization" here, as this is precisely the context from which requirements arise that cannot be adequately covered by traditional systems. And just adding some additional components (a little CIAM here, some MFA there, or the Azure AD that came with the Office 365 installation anyway) won't help. The way we communicate has changed, and companies are advancing their operations with entirely new business models and processes. New ways of working together and communicating demand new ways of managing identities, satisfying regulatory requirements and delivering secure processes, not least to protect customers and indeed your very own business.
What to do if your own IAM (only) follows a classic enterprise focus, i.e. fulfills the following tasks very well?
- Traditional Lifecycles
- Traditional Provisioning
- Traditional Access Governance
- Traditional Authentication (Username/Password, Hardware Tokens, VPNs)
- Traditional Authorization (Roles, Roles, Roles)
- Consumer Identities (somehow)
And what to do if the business wants you, as the system owner of an IAM, to meet the following requirements?
- High Flexibility
- High Delivery Speed for New Digital Services
- Software as a Service
- Container and Orchestration
- Identity API and API Security
- Security and Zero Trust
The development of parallel infrastructures has been widely recognized as the wrong approach.
Convert existing architectures during operation
Therefore, it is necessary to gently convert existing architectures so that the ongoing operation is permanently guaranteed. Ideally, this process also optimizes the architecture in terms of efficiency and costs, while at the same time adding missing functionalities in a scalable and comprehensive manner.
Figuratively speaking, you have to renovate your house while you continue to live in it. Nobody who has fully understood digitalization will deny that the proper management of all relevant identities, from customers to employees and partners to devices, is one of the central enabling technologies for this, if not the central one. But on the way there, everything already achieved must continue to be maintained, quick wins must provide proof that the company is on the right track, and an understanding of the big picture (the "blueprint") must not be lost.
If you want to find out more, read the "Leadership Brief: Identity Fabrics - Connecting Anyone to Every Service - 80204" as a first introduction to this comprehensive and promising concept. The KuppingerCole "Architecture Blueprint Identity and Access Management - 72550" has just been published and aims to provide you with the conceptual foundation for sustainably transforming existing IAM infrastructures into a future-proof basic technology for the 2020s and beyond.
In addition, leading-edge whitepapers currently being prepared and soon to be published (watch this space, we will provide links in one of the upcoming issues of our "Analysts' View IAM") will provide essential recommendations for the initialization and implementation of such a comprehensive transformation program.
KuppingerCole has supported successful projects over the course of the past months in which existing, powerful but functionally insufficient IAM architectures were put on the road to a sustained transformation into a powerful future infrastructure. The underlying concepts can be found in the documents above, but if you would like us to guide you along this path, please feel free to talk to us about possible support.
Feel free to browse our Focus Area: The Future of Identity & Access Management for more related content.
Organizations of major importance to the German state whose failure or disruption would result in sustained supply shortages, significant public safety disruptions, or other dramatic consequences are categorized as critical infrastructure (KRITIS).
Nine sectors and 29 industries currently fall under this umbrella, including healthcare, energy, transport and financial services. Hospitals as part of the health care system are also included if they meet defined criteria.
For hospitals, the implementation instructions of the German Hospital Association (DKG) have proven to be authoritative. The number of fully inpatient hospital treatments in the reference period (i.e. the previous year) was defined as the measurement criterion. At 30,000 fully inpatient treatment cases, the threshold value for the identification of critical infrastructures is reached, which concerns considerably more than 100 hospitals. These are obliged to fulfil clearly defined requirements derived from the IT-SiG - "Gesetz zur Erhöhung der Sicherheit informationstechnischer Systeme (IT-Sicherheitsgesetz)", the law for the security of IT systems and digital infrastructures, including critical infrastructures, in Germany - and the BSI-KritisV ("BSI-Kritisverordnung"). The above-mentioned implementation instructions of the DKG thus also define proposed measures for ensuring adequate security, in particular with regard to the IT used.
Companies had until June 30th this year to meet the requirements and to commission a suitable, trustworthy third party for testing and certification.
But according to a report in Tagesspiegel Background, this has been challenging: industry associations have been pointing out for some time that there are not enough suitable auditing firms. This is not least because auditors must have a double qualification: in addition to IT, they need knowledge of the industry, in this case the healthcare system in hospitals. Here, as in many other areas, the infamous skills gap strikes, i.e. the lack of suitable, qualified employees in companies or on the job market.
This led to the companies capable of performing the audits being overloaded and thus to a varying quality and availability of audits and resulting audit reports. According to the press report, these certificates suffer the same fate when they are submitted to the BSI, which evaluates these reports. Here, too, a shortage of skilled workers leads to a backlog of work. A comprehensive evaluation was not available at the time of publication. Even the implementation instructions of the German Hospital Association, on the basis of which many implementations were carried out in the affected hospitals, have not yet been confirmed by the BSI.
Does this place KRITIS, at least in this area, on the list of toothless guidelines (such as PSD2 with its large number of national individual regulations) that have not been adequately implemented? Not necessarily. The obligation to comply has not been suspended; the lack of personnel and skills on the labour market merely prevents consistent, comprehensive auditing by suitable bodies such as TÜV, Dekra or specialised auditing firms. However, if such an audit does take place, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the spirit of continuous implementation and improvement.
Even hospitals that currently fall just short of this threshold are encouraged to prepare now for adjusted requirements or increasing patient numbers. This means that, even without the necessity of a formal attestation, the appropriate basic conditions, such as the establishment of an information security management system (ISMS) in accordance with ISO 27001, can already be created to serve as a foundation.
In addition, the availability of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialist institutes) a resilient basis for creating appropriate framework conditions that correspond to the current state of requirements and technology. This also applies if they are not or will not be KRITIS-relevant in the foreseeable future, but want to offer their patients a comparably good degree of security and resulting trustworthiness.
KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.
Digitalization or more precisely the "digital transformation" has led us to the "digital enterprise". It strives to deliver on its promise to leverage previously unused data and the information it contains for the benefit of the enterprise and its business. And although these two terms can certainly be described as buzzwords, they have found their way into our way of thinking and into all kinds of publications, so that they will probably continue to exist in the future.
Thought leaders, analysts, software and service providers, and finally practically everyone in between have been proclaiming the "cognitive enterprise" for several months now. This concept, and the mindset associated with it, promises to use the information of the already digital company to achieve productivity, profitability and a high degree of innovation. It also aims at creating and evolving next-generation business models at the intersection of converging technologies and data.
So what is special about this "cognitive enterprise"? Defining it usually starts with the idea of applying cognitive concepts and technologies to data in practically all relevant areas of a corporation. Data here includes open data, public data, subscribed data, enterprise-proprietary data, pre-processed data, structured and unstructured data, or simply Big Data. The technologies involved include the likes of Artificial Intelligence (AI), more specifically Machine Learning (ML), Blockchain, Virtual Reality (VR), Augmented Reality (AR), the Internet of Things (IoT), ubiquitous communication with 5G, and individualized 3D printing.
As of now, mainly concepts from AI and machine learning are grouped together as "cognitive", although a uniform understanding of the underlying concepts is often still lacking. They have already proven capable of doing the "heavy lifting", either on behalf of humans or autonomously. These systems increasingly understand, reason and interact, e.g. by engaging in meaningful conversations, and thus deliver genuine value without human intervention.
Automation, analytics and decision-making, customer support and communication are key target areas, because many tasks in today's organizations are in fact repetitive, time-consuming, dull and inefficient. The focus (ideally) lies on relieving and empowering the workforce wherever a task can be executed by, e.g., bots or Robotic Process Automation. Every organization would presumably agree that their staff is better than bots and can perform much more meaningful tasks. So these measures are intended to benefit both the employee and the company.
But this is only the starting point. A cognitive enterprise will be interactive in many ways, not only by interacting with its customers, but also with other systems, processes, devices, cloud services and peer organizations. As one result it will be adaptive, as it is designed to be learning from data, even in an unattended manner. The key goal is to foster agility and continuous innovation through cognitive technologies by embracing and institutionalizing a culture that perpetually changes the way an organization works and creates value.
Beyond the fact that journalists, marketing departments and even analysts tend to outdo each other in the creation and propagation of hype terms, where exactly is the difference between a cognitive and a digital enterprise? Do we need yet another term, notably for the use of machine learning as an apparently digital technology?
I don't think so. We are witnessing the evolution, advancement, and ultimately the application of exactly these very digital technologies that lay the foundation of a comprehensive digital transformation. However, the added value of the label "cognitive" is negligible.
But regardless of what you, I or the buzzword industry finally decide to call it, the implications and challenges of this consistent implementation of digital transformation are far more relevant. In my opinion, two aspects must not be underestimated:
First, this transformation is either approached in its entirety or better not at all; there is nothing in between. If you start down this path, it is not enough to quickly look for a few candidates for a bit of Robotic Process Automation. There will be no successful, "slightly cognitive” companies. That would waste the actual potential of a comprehensive redesign of corporate processes and be worth little more than a placebo. Rather, it is necessary to model internal knowledge and to acquire and interconnect data. Jobs and tasks will change, become obsolete and be replaced by new and more demanding ones (otherwise they could, again, be executed by a bot).
Second: The importance of managing constant organizational change and restructuring is often overlooked. After all, the transformation to a Digital/Cognitive Enterprise is far from being only about AI, Robotic Process Automation or technology. Rather, focus has to be put on the individual as well, i.e. each member of the entire workforce (both internal and external). Established processes have to be managed, adjusted or even reengineered, and this also applies to processes affecting partners, suppliers and thus any kind of cooperation or interaction.
One of the most important departments in this future will be the human resources department and specifically talent management. Getting people on board and retaining them sustainably will be a key challenge. In particular, this means providing them with ongoing training and enabling them to perform qualitatively demanding tasks in a highly volatile environment. And it is precisely such an extremely responsible task that will certainly not be automated even in the long term...
Europe’s consumers have been promised for some years now that strong customer authentication (SCA) was on its way. And the rules as to when this should be applied in e-commerce are being tightened. The aim is to better protect the customers of e-commerce services.
This sounds like a good development for us all, since we are all regular customers of online merchants or providers of online services. And if you look at the details of SCA, this impression is further reinforced. Logins with only username and password are theoretically a thing of the past, and the risk of fraud on the basis of compromised credentials is potentially reduced considerably.
The revised Payment Services Directive (PSD2) requires strong customer authentication for remote electronic payments, implemented as multi-factor authentication (MFA), i.e. any approach involving more than one factor (only low-value payments below €30 are exempt). The most common variant is Two-Factor Authentication (2FA), i.e. the use of exactly two factors. There are three classes of factors: Knowledge, Possession and Biometrics – or “what you know”, “what you have”, “what you are”. For each factor, there may be various “means”, e.g. username and password for knowledge, a hard token or a phone for possession, a fingerprint or iris for biometrics.
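A common technical building block for the “possession” factor is the time-based one-time password (TOTP, RFC 6238), as generated by authenticator apps on a registered phone. The following minimal sketch, using only the Python standard library, shows how such a code is derived from a shared secret and the current time; the secret here is the RFC 6238 test key, and all parameter choices are illustrative.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890" (base32-encoded);
# at t = 59 s the expected 8-digit code is "94287082".
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # -> 94287082
```

The server performs the same computation with its stored copy of the secret and compares the results, typically accepting one adjacent time step to tolerate clock drift.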
The use of SCA results in improved protection for virtually all parties involved: e-commerce sites, payment processors and customers can be more confident that transactions are legitimate and trustworthy.
A short look at the history: On November 16, 2015, the Council of the European Union adopted PSD2 and gave Member States two years to transpose the Directive into their national laws and regulations. One would expect the broad and comprehensive implementation of SCA as part of PSD2 to be achieved in a timely manner, as the benefits are obvious. Of course, purchasing processes become a little more complex, because card data and account number or username and password for payment services are no longer enough for checkout. A second, different feature such as a fingerprint or an SMS to your own registered smartphone becomes necessary to increase security.
But shouldn’t we value this significantly increased security and the trust that goes with it? Apparently not everyone does: retailers, for example in Germany, are far from positive about stricter security standards. Every change, and especially every increase in the complexity of the purchasing process, is regarded as an obstacle, a potential drop-out point in the customer journey.
And yet the development now emerging was not unexpected. As early as July 2019, the European Banking Authority (EBA) stated that some players were not sufficiently prepared for the PSD2, SCA and thus the required protection of consumers.
As a measure, the member states were offered an extension of the deadline. First and foremost, this was used extensively by the UK, but also by some other countries. In Germany, the new regulations for cashless payments will enter into force on 14 September 2019, almost four years after the European Directive PSD2 was approved. This means that only payment services that implement SCA and are therefore PSD2 compliant can be used for online purchases using credit cards.
And, you guessed it, just recently BaFin (Germany’s financial watchdog) announced in a press release that “As a temporary measure, payment service providers domiciled in Germany will still be allowed to execute credit card payments online without strong customer authentication after 14 September 2019”.
This does not only mean an immense delay of unclear duration; the otherwise rather homogeneous European market is now being chopped up into a multitude of different regulations and exceptions. The direct opposite of what was planned has been achieved, since it is unclear when and where which requirements will apply, in the European Union and in a global Internet. The obvious losers are the customers and online security and trust in reliable online purchases, at least for the short to mid-term.
Forward-looking organizations that value their customers and their security and trust are now able to implement security through SCA, even without BaFin checks. Those companies that benefit from the short delay should quickly seize this opportunity to meet the PSD2 requirements soon and join that first group. But those companies that, since the release of PSD2 and its requirements, have preferred to complain about more complex payment processes and lament EU regulations should reconsider their relationship to security and customer satisfaction (and thus to their customers). And they should rapidly start on a straight path to comprehensive PSD2 compliance. Because temporary measures and extended deadlines are exactly that: they are temporary, and they are deadlines.
To meet them successfully and in time, KuppingerCole Analysts can support organizations by providing expertise through our research in the areas of PSD2, SCA and MFA. And our Advisory Services are here to support you in identifying and working towards your individual requirements while maintaining user experience, meeting business requirements and achieving compliance. And our upcoming Digital Finance World event in Frankfurt next week is the place to be to learn from experts and exchange your thoughts with peers.
The other day I found a notebook on a train. It was in a compartment on the seat of a first-class car. The compartment was empty, no more passengers to see, no luggage, nothing.
And no, it wasn't a laptop or tablet, it was a *notebook*. One made of paper, very pretty, with the name of a big consulting company printed on it. So, it was either a promotional gift or one that employees use. Two thirds of it had been used, which could be seen from the edge of the paper.
Everyone knows these notebooks, from simple A4 college pads with cheap ballpoint pens to expensive, leather-bound prestige models combined with an equally expensive writing device such as a fountain pen.
They serve as brain extensions in meetings, for planning and conducting conversations. They contain details about the owner. And they contain sketches, meeting minutes, information about contact persons (--> GDPR), your business, the business of your partners. You can find sales figures, business plans, product developments, vulnerability analyses and architectural plans. The private mobile phone number of the important point of contact, the passwords to company infrastructure along with computer addresses. Confidential and critical data is thoughtlessly recorded on paper and then elaborated on the way home on the train, at home on the couch or the next day in the office.
Everyone worries about the loss of their computer or of the still ubiquitous, unencrypted USB stick. Rightly so. And today you also have to think about the cloud, because it bears a multitude of risks, which you have to address consistently, comprehensively and correctly (and yes, we can help you with that, but that's not the point here).
However, leakage of sensitive data does not necessarily require a nation state hacker or a violation of the confidentiality of credentials. Clumsiness, haste and forgetfulness can sometimes be enough. And that's why you should be particularly concerned about your paper notes.
You can encrypt a USB stick (yes, you can). You can encrypt whole computers, too. Your corporate laptop should be, anyway, and the encryption of your private computers and data carriers is your own personal responsibility. Most mobile phones and tablets today come with biometrics and also with potential encryption.
But this notebook is still beautiful and has so many blank pages, so on to the next meeting? Let me ask you: What is written in your current notebook? Would you have wanted me to read all of that on the train? Feeling a guilty conscience now? Rightly so.
Paper cannot be encrypted. So there are only two main approaches to mitigate these risks: data avoidance and data deletion. Give the next promotional notebook to a child for drawing (--> avoidance). Destroy all the notebooks you still have (and possibly still use) with your home or office shredder (--> deletion). Anything still important can be scanned beforehand and stored safely and, of course, encrypted.
I did not open this notebook and instead handed it over to the conductor and thus to the Deutsche Bahn "lost and found" service. But we can't expect everyone to handle it that way.
As a recommendation: For the future, for all notes that go beyond your private poems (and perhaps for your own self-protection include those as well), use mechanisms that meet your company's security requirements. Notebooks for sure don’t.
Requirements for - and context of - the future Identity Fabric.
We call it Digital Transformation for lack of a better term, but it consists of much more than this buzzword is able to convey. Digital technologies are influencing and changing all areas of a company, and this is fundamentally reshaping the way communication takes place, how people work together and how customers are delivered value.
IT architectures, in turn, are undergoing profound structural transformations to enable and accelerate this creeping paradigm shift. This evolution reflects the changing challenges facing companies, government agencies and educational institutions. These challenges, which virtually every organization worldwide has faced for a long time, change processes and systems just as they affect the underlying architectures.
In order to survive in this highly competitive environment, companies are striving to be as agile as possible by adapting and modifying business models and, last but not least, opening up new communication channels with their partners and customers. Thanks to the rapidly growing spread of cloud and mobile computing, companies are becoming increasingly networked with each other. The very idea of an outer boundary of a company, the concept of a security perimeter, has practically ceased to exist.
And with that, the idea that different identities can be treated in fundamental isolation within one enterprise has also come to an end.
Figure: The Road to Integrated, Hybrid and Heterogeneous IAM Architectures
Managing identities and access in digital transformation is the key to security, governance and audit, but also to usability and user satisfaction. The challenges for a future-proof IAM are complex, diverse and sometimes even contradictory:
- We need to integrate consumers into the system, but they often want to retain control over their identity by bringing their own identity (BYOID).
- We want our employees (internal and external) to be able to use the end-user devices they prefer to use to gain secure access to their work environment, wherever they are.
- We need to link identities and model real-life dependencies within teams, companies, families or our partner organizations.
- Maybe we even want to trust identities that are maintained in other organizations and reliably integrate them and authorize them in our IAM.
- We need to integrate identity, payment and trade.
- We need to comply with laws and regulations and yet eliminate annoying KYC processes that make site visitors leave without completing registration.
- We want to use existing data to enable artificial intelligence for ongoing business transformation, while ensuring compliance, consent and customer security.
- We need to extend identities beyond people and integrate devices, services and networks into our next-generation IAM infrastructure.
Workforce mobility, rapidly changing business models and business partnerships are all contributing to a trend where companies need to be able to seamlessly provide access for everyone and to any digital service. These services can be in a public cloud, they can be web applications with or without support of federation standards, they can solely be backend services accessed through APIs, or even legacy applications accessible only through some kind of middleware. However, the agility of the digital journey requires IT to provide seamless access to all these services while maintaining control and enforcing security.
In a nutshell: We need to reconsider IAM as a whole and transform it, step by step, into a set of services that allows us to connect everything via an overarching architecture, making our services available to everyone, everywhere, without losing control.
KuppingerCole Analysts strongly support the concept of Identity Fabrics as a logical infrastructure that enables access for all, from anywhere to any service while integrating advanced approaches such as support for adaptive authentication, auditing capabilities, comprehensive federation services, and dynamic authorization capabilities. In this context, it is of no importance where and in which deployment model IT services are provided. These can be legacy services encapsulated in APIs, current standard services either “as a service” or in your own data center and future digital offerings. Identity fabrics are designed to integrate these services regardless of where they are provided, i.e. anywhere between on-premises, hybrid, public or private clouds, managed by MSPs or in outsourcing data centers, or completely serverless.
We expect Identity Fabrics to be an integral part of current and future architectures for many organizations and their IT. Future issues of KuppingerCole Analysts' View on IAM will look at this topic from multiple perspectives, with particular emphasis on architectural, technical and process-related aspects. KuppingerCole Analysts research will explore this concept of "One IAM for the Digital Age" in detail and KuppingerCole Advisory clients will be among the first to benefit from sophisticated identity fabric architectures. Watch this space, especially our blogs and our research for more to come on all things “Identity Fabric”. And remember: You’ve heard it here at KC first.
Acronyms are an ever-growing species. Technologies, standards and concepts come with their share of new acronyms to know and to consider. In recent years we had to learn and understand what GDPR or PSD2 stand for. And we have learned that IT security, compliance and data protection are key requirements for virtually any enterprise. The following acronyms and more importantly the concepts behind them can teach us about what forward-looking organizations and their leaders should be thinking of.
MTPD stands for "Maximum Tolerable Period of Disruption". Its value determines the longest possible amount of time an organization can endure until the impact of an incident leading to a partial or complete disruption of service becomes unacceptable or a recovery becomes more or less useless. Determining this period is an exercise every reader of this text might want to do just now. It might be surprisingly low.
MBCO, closely related to the MTPD, is short for "Minimum Business Continuity Objective". It describes the baseline of services that are necessary for an organization to survive during a disruption. Another important aspect for all of us to think of. MTDL describes the “Maximum Tolerable Data Loss”. It is usually defined as the largest possible amount of data in IT systems (or analog media, like files and binders) an organization can accept to lose and still be able to recover successful operations afterwards. These terms (and many more related and relevant concepts) stem originally from the area of Business Continuity Planning, but they become increasingly important also to management and staff of IT security departments.
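These three thresholds can be made operational as acceptance criteria for a recovery plan. The sketch below, a minimal illustration with hypothetical names and units (hours, fraction of normal service capacity, gigabytes), checks whether a planned recovery stays inside all three limits at once:

```python
from dataclasses import dataclass


@dataclass
class ContinuityTargets:
    """Business-continuity thresholds (illustrative units)."""
    mtpd_hours: float       # Maximum Tolerable Period of Disruption
    mbco_capacity: float    # Minimum Business Continuity Objective (0..1 of normal service)
    mtdl_gb: float          # Maximum Tolerable Data Loss


def plan_is_acceptable(targets, recovery_time_hours, fallback_capacity, expected_data_loss_gb):
    """A recovery plan is acceptable only if it stays inside all three limits."""
    return (recovery_time_hours <= targets.mtpd_hours
            and fallback_capacity >= targets.mbco_capacity
            and expected_data_loss_gb <= targets.mtdl_gb)


targets = ContinuityTargets(mtpd_hours=48, mbco_capacity=0.4, mtdl_gb=10)
# A plan that restores 60% of service within 24 hours, losing at most 2 GB:
print(plan_is_acceptable(targets, recovery_time_hours=24,
                         fallback_capacity=0.6, expected_data_loss_gb=2))  # -> True
```

The point is not the code but the discipline: once an organization has committed to concrete numbers for MTPD, MBCO and MTDL, every continuity plan can be tested against them rather than discussed in the abstract.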
One reason for that is yet another acronym, namely “KRITIS”, which is an abbreviation of „KRITische InfraStrukturen“ (“critical infrastructure”). Critical infrastructure is defined as organizations or institutions of major importance to the state community whose failure or degradation would result in sustained supply shortages, significant public safety disruptions or other dramatic consequences.
Originating from an EU Directive of 2008, the term is closely linked to the Federal Republic of Germany, its legislation and its efforts to reduce potential vulnerabilities of critical infrastructure. The concept aims at improving protection and resilience in response to the increasing pervasiveness of, and the dependence of almost all areas of life on, critical infrastructure. A German law (“IT-SiG”) and a regulation, the “BSI-Kritisverordnung” (“Kritis regulation”), issued in 2015/2016, form the foundation for the specification and enforcement of this significant set of requirements.
Many countries are already looking at regulating and securing critical infrastructure as well, including the US (Department of Homeland Security), so this is far from being just yet another German or European issue. But taking Germany as an example, the overall picture of critical infrastructure includes Energy, Information Technology and Telecommunications, Nutrition and Water, Healthcare, Finance and Insurance, Transport and Traffic. The actual scope of organizations affected can be looked up online. The core legislation is the same for every critical infrastructure sector; the challenge for individual industries is that sector-specific requirements need to be identified individually. The definition of industry-specific requirements is the responsibility of the individual industries, their industry associations and key corporations as exemplary representatives of their sector. However, these documents need to be government-approved.
Implementing these requirements forces organizations to think in more than just IT security terms. While the industry-specific requirement documents often have an IT security bias (usually starting with implementing an ISO 27xxx ISMS), organizations also need to consider the acronyms at the beginning of this text. This “paradigm shift” that critical infrastructure has to deal with now (and obviously had to deal with before) is an important step for any organization. Extending security towards resilience and business continuity will be essential for almost any organization in a world of increasing challenges, including but not limited to cyber threats.
To make systems, processes and organizations future-proof, it is highly recommended to consider security, safety and business continuity more holistically. Why not use related KRITIS-requirements as a benchmark that could help you to increase your organizational maturity? Just because you are not obliged to comply does not mean that going beyond your individual, mandatory requirements cannot improve your overall security posture and business continuity approach.
The definitions and requirements concerning critical infrastructure as they exist at a European and, in particular, German level can be regarded as exemplary in many respects. Even if they have direct relevance primarily for operators of critical infrastructure in Germany, they can serve as a basis for the design, operation and documentation of resilient architectures in Europe and beyond, due to their degree of detail and their comprehensive coverage of a multitude of sectors and industries.
And as a heads up for German readers, the update of the IT-SiG (“IT-Sicherheitsgesetz 2.0”) could be yet another game changer, so they should be prepared for more major changes in systems, processes and organization.
AI for the Future of your Business: Effective, Safe, Secure & Ethical
Everything we admire, love, need to survive, and that brings us further in creating a better future with a human face is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential of leading us into a new era of prosperity like we have not seen before, if we succeed in keeping AI safe, secure and ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that it becomes accessible for [...]