Organizations and facilities of major importance to the German state and society, whose failure or disruption would result in sustained supply shortages, significant disruptions to public safety, or other dramatic consequences, are categorized as critical infrastructure (KRITIS).
Nine sectors and 29 industries currently fall under this umbrella, including healthcare, energy, transport and financial services. Hospitals, as part of the healthcare system, are also included if they meet defined criteria.
For hospitals, the implementation guidance of the German Hospital Association (DKG) has proven decisive. The number of fully inpatient hospital treatments in the reference period (the previous year) was defined as the measurement criterion: at 30,000 fully inpatient treatment cases, the threshold for identification as critical infrastructure is reached, which affects considerably more than 100 hospitals. These hospitals are obliged to fulfil clearly defined requirements derived from the IT-SiG ("Gesetz zur Erhöhung der Sicherheit informationstechnischer Systeme", the German IT Security Act covering the security of IT systems and digital infrastructures, including critical infrastructures) and the BSI-KritisV ("BSI-Kritisverordnung", the BSI regulation on critical infrastructures). The DKG's implementation guidance accordingly also proposes measures for demonstrating adequate security, in particular with regard to the IT in use.
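The threshold criterion described above is simple enough to sketch in a few lines. The 30,000-case figure comes from the regulation as reported here; the function name and structure are purely illustrative:

```python
# Illustrative sketch of the KRITIS threshold test for hospitals.
# The 30,000-case threshold is taken from the text above; the names
# and structure of this example are hypothetical.

KRITIS_THRESHOLD = 30_000  # fully inpatient treatment cases per reference year

def is_kritis_hospital(inpatient_cases_previous_year: int) -> bool:
    """Return True if the hospital meets the KRITIS criterion, based on
    fully inpatient treatment cases in the reference period (previous year)."""
    return inpatient_cases_previous_year >= KRITIS_THRESHOLD

print(is_kritis_hospital(31_200))  # True: obliged to meet the requirements
print(is_kritis_hospital(29_500))  # False: below the threshold (for now)
```

Note that the criterion is evaluated against the previous year's figures, so a hospital can cross the threshold simply through rising patient numbers.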
The affected organizations had until June 30th of this year to meet the requirements and to commission a suitable, trustworthy third party for auditing and attestation.
But according to a report in Tagesspiegel Background, this is exactly where the challenge currently lies: industry associations have been pointing out for some time that there are not enough suitable auditing bodies. This is not least because auditors must hold a dual qualification: in addition to IT expertise, they need knowledge of the industry, in this case the hospital sector of the healthcare system. Here, as in many other areas, the infamous skills gap strikes, i.e. the lack of suitably qualified employees within companies and on the job market.
This has overloaded the firms capable of performing the audits, leading to varying quality and availability of the audits and the resulting audit reports. According to the press report, the attestations suffer the same fate when they are submitted to the BSI, which evaluates them: here, too, a shortage of skilled workers leads to a processing backlog. A comprehensive evaluation was not available at the time of publication. Even the implementation guidance of the German Hospital Association, on the basis of which many of the affected hospitals carried out their implementations, has not yet been confirmed by the BSI.
Does this place KRITIS, at least in this area, on the list of toothless guidelines that have not been adequately implemented (such as PSD2 with its large number of national individual regulations)? Not necessarily. The obligation to comply has not been suspended; the shortage of personnel and skills on the labour market merely prevents consistent, comprehensive auditing by suitable bodies such as TÜV, Dekra or specialised auditing firms. Where such an audit does take place, however, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the spirit of continuous implementation and improvement.
Even hospitals that currently fall just short of this threshold are well advised to prepare for adjusted requirements or rising patient numbers. This means that even without the need for a formal attestation, the appropriate groundwork, such as the establishment of an information security management system (ISMS) in accordance with ISO 27001, can already be laid as a foundation.
In addition, the existence of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialist institutes) a solid basis for creating appropriate conditions that correspond to the current state of requirements and technology. This also applies if they are not, and will not foreseeably become, KRITIS-relevant, but want to offer their patients a comparably high degree of security and the resulting trustworthiness.
KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.
Well, if you ask me, the short answer is: why not? After all, companies around the world have a long history of employing people with weird titles ranging from “Chief Happiness Officer” to “Galactic Viceroy of Research Excellence”. A more reasonable response, however, would need to take one important thing into consideration: what would a CAIO’s job in your organization actually be?
There is no doubt that “Artificial Intelligence” has already become an integral part of our daily lives, both at home and at work. In just a few years, machine learning and other technologies that power various AI applications evolved from highly complicated and prohibitively expensive research prototypes to a variety of specialized solutions available as a service. From image recognition and language processing to predictive analytics and intelligent automation - a broad range of useful AI-powered tools is now available to everyone.
Just like the cloud a decade ago (and Big Data even earlier), AI is universally perceived as a major competitive advantage, a solution for numerous business challenges and even as an enabler of new revenue streams. However, does it really imply that every organization needs an “AI strategy” along with a dedicated executive to implement it?
Sure, there are companies around the world that have made AI central to what they do. For cloud service providers, business intelligence vendors or large manufacturing and logistics companies, AI is a core business competence or even a revenue-generating product. For the rest of us, however, AI is just another toolkit, powerful and convenient, for addressing specific business challenges.
Whether your goal is to improve the efficiency of your marketing campaigns, to optimize your equipment maintenance cycle or to make your IT infrastructure more resilient against cyberattacks, a sensible strategy for achieving such a goal never starts with picking a single tool. Hiring a highly motivated AI specialist to tackle these challenges would have exactly the opposite effect: armed with a hammer, a person is inevitably going to treat any problem as if it were a nail.
This, of course, by no means implies that companies should not hire AI specialists. However, just as AI itself was never intended to replace humans, “embracing the AI” should not overshadow the real business goals. We only need to look at Blockchain for a similar story: just a couple of years ago, adding a Blockchain to any project seemed like a sensible goal regardless of any potential practical gains. Today, the technology has already passed the peak of inflated expectations, and it finally seems that the fad is transitioning into a productive phase, at least in those usage scenarios where the lack of reliable methods for establishing distributed trust was indeed a business challenge.
Another aspect to consider is the sheer breadth of the AI frontier, both from the AI expert’s perspective and from the point of view of a potential user. Even within such a specialized application area as cybersecurity, the choice of available tools and strategies can be quite bewildering. Looking at the current AI landscape as a whole, one cannot but realize that it encompasses many complex and quite unrelated technologies and problem domains. Last but not least, consider the new problems that AI itself is creating: many of those lie very much outside of the technology scope and come with social, ethical or legal implications.
In this regard, coming up with a single strategy that is supposed to incorporate so many disparate factors and can potentially influence every aspect of a company’s core business goals and processes seems like a leap of faith that not many organizations are ready to make just yet. Maybe a more rational approach towards AI is the same as with the cloud or any other new technology before it: identify the most important challenges your business is facing, set reasonable goals, find the experts who can help identify the most appropriate tools for achieving them, and work together on delivering tangible results. Even better if you can collaborate with (probably different) experts on outlining a long-term AI adoption strategy that ensures your individual projects and investments align with each other, avoiding wasted time and resources. In other words: Think Big, Start Small, Learn Fast.
If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.
Car buyers gathering at the Frankfurt Motor Show last month will have witnessed the usual glitz as car makers went into overdrive launching new models, including of course many new electric vehicles reflecting the big changes underway in the industry. Behind the glamour of the show, the world’s biggest car makers are heavily investing in new technologies to remain competitive, including Artificial Intelligence (AI) and Machine Learning. While perfecting algorithms for self-driving cars is a longer-term goal and grabs the headlines, much is being done with AI to improve the design, manufacture and marketing of cars.
In an industry characterized by high costs and low margins, car makers (OEMs) are turning to AI to improve efficiencies, improve quality control and understand their markets and buyers better. Five years ago, Volkswagen opened its Data:Lab in Munich. It is now the company’s main research base for AI, with around 80 IT specialists, data scientists, programmers, physicists, and mathematicians researching and developing applications in machine learning and AI. Volkswagen goes as far as to say that AI will fundamentally change the company’s value chain, as it will now begin, not end, with the production of the vehicle.
An area of focus is applying AI to market research and marketing to pre-empt changes in demand and consumer choice outside of OEMs’ traditional 7-year model cycle. Any manufacturer that can be ahead of the curve in marketing will have a significant advantage. Volkswagen is using AI to create precise market forecasts containing a multitude of variables, including economic development, household income, customer preferences, model availability and price.
With this kind of insight, it is possible that the company could configure model choice (specs, optional extras, engine sizes etc) and order production to meet buyer preferences on a smaller regional or even hyper local level. For example, a Golf special edition that appeals to specific buyers in London or an Amarok truck configured for the needs of farmers in the Rhineland.
Volkswagen’s German rivals are also scaling investment in AI technologies and are keen to be seen doing so with positive statements on their websites, and active recruitment drives to get the best developer talent. All three of Germany’s OEMs are aware that they need to be technological leaders in IT as much as engineering as cars become more connected and software driven.
At its factory in Stuttgart, Daimler has created a knowledge base that stores all the existing vehicle designs at the company, which any new engineer can tap into. More than this, the algorithm has been trained to suggest that a new engineer contacts a more experienced colleague for human advice in certain circumstances. This is a good example of how AI can be trained to interact with human workers.
At the final inspection area at BMW’s Dingolfing plant, an AI application compares the vehicle order data with a live image of the model designation of the newly produced car. If the live image and order data don’t correspond, for example if a designation is missing, the final inspection team receives a notification. This frees up human employees to work elsewhere. Algorithms are also being taught to tell the difference between a hairline crack in sheet metal and simple dust particles, something that is beyond the scope of human eyesight. Meanwhile in paint shops, AI and analytics applications offer the potential to detect sources of error at much earlier stages of the process. If no dust attaches to the car body before painting in the first place, none needs to be polished off later.
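The order-versus-image comparison described above boils down to a simple decision rule once the vision model has extracted a designation from the live image. A hypothetical sketch, with the recognition step simulated and all names invented for illustration:

```python
# Hypothetical sketch of the final-inspection check: compare the
# designation expected from the order data with the one recognized in
# the live image (by a vision model, simulated here as an input string),
# and flag mismatches or missing badges for the inspection team.
from typing import Optional

def check_designation(order_designation: str, recognized: Optional[str]) -> str:
    """Return 'OK' or a notification for the final inspection team."""
    if recognized is None:
        # The vision model found no designation on the car at all
        return "NOTIFY: designation missing"
    if recognized.strip().lower() != order_designation.strip().lower():
        return "NOTIFY: expected {!r}, found {!r}".format(
            order_designation, recognized)
    return "OK"

print(check_designation("320d xDrive", "320d xDrive"))  # OK
print(check_designation("320d xDrive", None))           # NOTIFY: designation missing
```

The hard part in practice is of course the image recognition itself; the point here is that the AI only raises a notification on mismatch, freeing the human team for other work.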
While these examples of AI applications may lack the sci-fi appeal of self-driving cars, they are presently more important to the future survival of the car industry, not just in Germany but across the globe. AI is being used effectively to meet the three fundamental challenges to the industry’s survival: improving quality, reducing cost and waste, and meeting customer demands.
Digitalization, or more precisely the "digital transformation", has led us to the "digital enterprise", which strives to deliver on its promise to leverage previously unused data and the information it contains for the benefit of the enterprise and its business. And although these two terms can certainly be described as buzzwords, they have found their way into our thinking and into all kinds of publications, so they will probably persist in the future.
Thought leaders, analysts, software and service providers, and practically everyone in between have been proclaiming the "cognitive enterprise" for several months now. This concept, and the mindset associated with it, promises to use the information of the already digital company to achieve productivity, profitability and a high level of innovation. And it aims at creating and evolving next-generation business models at the intersection of converging technologies and data.
So what is special about this “cognitive enterprise“? Defining it usually starts with the idea of applying cognitive concepts and technologies to data in practically all relevant areas of a corporation. This data includes open data, public data, subscribed data, enterprise-proprietary data, pre-processed data, structured and unstructured data, or simply Big Data. And the technologies involved include the likes of Artificial Intelligence (AI), more specifically Machine Learning (ML), Blockchain, Virtual Reality (VR), Augmented Reality (AR), the Internet of Things (IoT), ubiquitous communication with 5G, and individualized 3D printing.
As of now, it is mainly concepts from AI and machine learning that are grouped together as "cognitive", although a uniform understanding of the underlying concepts is often still lacking. These technologies have already proven capable of doing the “heavy lifting”, either on behalf of humans or autonomously. They increasingly understand, reason and interact, e.g. by engaging in meaningful conversations and thus delivering genuine value without human intervention.
Automation, analytics and decision-making, customer support and communication are key target areas, because many tasks in today’s organizations are in fact repetitive, time-consuming, dull and inefficient. The focus (ideally) lies on relieving and empowering the workforce wherever a task can be executed by bots or through Robotic Process Automation. Presumably every organization agrees that its staff is better than bots and can perform much more meaningful tasks. These measures are thus intended to benefit both the employee and the company.
But this is only the starting point. A cognitive enterprise will be interactive in many ways, not only with its customers, but also with other systems, processes, devices, cloud services and peer organizations. As a result, it will be adaptive, since it is designed to learn from data, even in an unattended manner. The key goal is to foster agility and continuous innovation through cognitive technologies by embracing and institutionalizing a culture that perpetually changes the way an organization works and creates value.
Beyond the fact that journalists, marketing departments and even analysts tend to outdo each other in the creation and propagation of hype terms, where exactly is the difference between a cognitive and a digital enterprise? Do we need yet another term, notably for the use of machine learning as an apparently digital technology?
I don't think so. We are witnessing the evolution, advancement, and ultimately the application of exactly these very digital technologies that lay the foundation of a comprehensive digital transformation. However, the added value of the label "cognitive" is negligible.
But regardless of what you, I or the buzzword industry finally decide to call it, much more relevant are the implications and challenges of this consistent implementation of digital transformation. In my opinion, two aspects must not be underestimated.
First, this transformation is either approached in its entirety or better not at all; there is nothing in between. If you start down this path, it is not enough to quickly look for a few candidates for a bit of Robotic Process Automation. There will be no successful, "slightly cognitive” companies: such half-measures waste the actual potential of a comprehensive redesign of corporate processes and are worth little more than a placebo. Rather, it is necessary to model internal knowledge and to acquire and interconnect data. Jobs and tasks will change, become obsolete and be replaced by new and more demanding ones (otherwise they, too, could be executed by a bot).
Second, the importance of managing constant organizational change and restructuring is often overlooked. After all, the transformation to a digital or cognitive enterprise is far from being only about AI, Robotic Process Automation or technology. Rather, the focus must also be on the individual, i.e. each member of the entire workforce (both internal and external). Established processes have to be managed, adjusted or even reengineered, and this also applies to processes affecting partners, suppliers and thus any kind of cooperation or interaction.
One of the most important departments in this future will be the human resources department and specifically talent management. Getting people on board and retaining them sustainably will be a key challenge. In particular, this means providing them with ongoing training and enabling them to perform qualitatively demanding tasks in a highly volatile environment. And it is precisely such an extremely responsible task that will certainly not be automated even in the long term...
The days when having just an on-premises Identity and Access Management (IAM) system was enough are long gone. With organizations moving to hybrid on-premises, cloud, and even multi-cloud environments, the number of cyber-attacks is growing. The types and sophistication of these attacks are continually changing to get around any new security controls put in place. In fact, it is much easier for a cyber attacker to change tactics than it is for organizations to bring in new solutions to mitigate current attack vulnerabilities.
Organizations must realize that they will never be 100% secure, and there will always be attacks on their systems. Don't get me wrong. I'm not saying to give up on continually assessing and updating an organization's security controls to block the latest and most significant attack vectors. But instead, take the next step and plan for the worst. Organizations should integrate their Business Continuity Management (BCM) with their cybersecurity initiatives. This means being able to detect, respond to, recover from, and improve after any attack that could potentially bring down their business.
Microsoft recently announced the global availability of Windows Virtual Desktop (WVD) on Azure. WVD not only provides the ability to deploy and scale Azure-based virtualization of Windows 10 multi-session, Windows Server, and Windows 7 desktops, but it also provides something that is sometimes overlooked: it gives the enterprise the ability to recover from being compromised, at least from the desktop endpoint perspective. Through Microsoft's acquisition of FSLogix and its solutions, WVD takes advantage of virtualization and containerization technologies. Using these technologies, Microsoft ensures that its Windows desktops and servers can be powered up or restarted in a consistent and safe state with respect to user profiles and applications, adding to the BCM and “recover from an attack” capabilities businesses must implement today. FSLogix does this by bringing both profile and Office containers to the table.
So, when reviewing cybersecurity and BCM strategies, organizations shouldn’t take the view of “if”, but “when” their systems will be compromised, and their data breached. Then ask themselves how they will recover.
KuppingerCole Principal Analyst Martin Kuppinger emphasized the changing role of the CISO recently in a blog and also covered that topic in a webinar on cybersecurity budgeting which you can watch below. To get a more hands-on approach, see below for our Incident Response Boot Camp at Cybersecurity Leadership Summit 2019.
A visit to HP Labs offices in central Bristol, about 120 miles west of London, was a chance to catch up with the hardware part of the former Hewlett Packard conglomerate, which split in two four years ago. The split also meant that there are now two HP Labs, one for the HP business and the other for Hewlett Packard Enterprise.
Perhaps to position itself as a serious B2B vendor, HP told us it is an “endpoint infrastructure company”, which kind of works, but its US, Chinese and Taiwanese competitors could conceivably claim the same.
To counter this, HP is tapping into the shared legacy of the research and development focus on which Mr. Hewlett and Mr. Packard founded the original company in that famous garage in Palo Alto; hence the trip to HP Labs. A single floor of an office block in Bristol lacks some of the wow factor of the campus feel of the old, bigger, joined-up HP Labs; on the other hand, the ideas that came out of those Labs did not always see practical application.
The focus then was on HP’s security credentials for innovations that have found their way into products. In a series of demonstrations of its Sure Suite technologies, HP made a case for why its line of laptops and PCs are better equipped to withstand attacks on the endpoint.
Sure Start protects the BIOS from attack each time the PC or laptop is booted on a network or standalone and automatically validates the integrity of the BIOS code. Once the PC is operational, runtime intrusion detection monitors memory. In the case of an attack, the PC can self-heal using an isolated “golden copy” of the BIOS. The live demo on the day showed a laptop that had been locked by ransomware being brought back to operable life. Sure Recover is a tool squarely aimed at the SMB market allowing end users to recover their operating system even after it has been wiped out by an attack, without recourse to IT. It uses HP’s chip-based Endpoint Security Controller (ESC) to image the latest OS using a wired network connection.
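At its core, the self-heal pattern described above is an integrity check against a known-good digest, with a fallback to a protected golden copy. A minimal sketch, with the caveat that the real mechanism runs in a dedicated hardware controller below the OS and all names and data here are illustrative:

```python
# Minimal sketch of the integrity-check-and-self-heal pattern that
# HP Sure Start implements in hardware: hash the active firmware image,
# compare it against a known-good digest, and fall back to an isolated
# "golden copy" on mismatch. Purely illustrative; the real check runs
# in a dedicated security controller, not in Python on the host.
import hashlib

def digest(image: bytes) -> str:
    """SHA-256 digest of a firmware image."""
    return hashlib.sha256(image).hexdigest()

def boot_check(active: bytes, golden: bytes, golden_digest: str) -> bytes:
    """Return the firmware image to boot: the active one if its hash
    matches the known-good digest, otherwise the golden copy."""
    if digest(active) == golden_digest:
        return active
    # Integrity violation detected: self-heal from the golden copy
    return golden

golden = b"BIOS v1.42 (known good)"
good_digest = digest(golden)

# An intact image boots as-is; a tampered one is replaced transparently
print(boot_check(golden, golden, good_digest) == golden)                     # True
print(boot_check(b"BIOS v1.42 (tampered)", golden, good_digest) == golden)   # True
```

The essential design choice is that the golden copy and its digest live in storage the attacker cannot reach from the running OS; without that isolation, ransomware could overwrite both.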
A new announcement on the day was HP Sure Admin, which extends the automation of secure endpoint management into the corporate domain and builds on the user-friendly technologies of Sure Start and Sure Recover to reduce the attack surface created by remote management tools. Traditionally, BIOS updates on endpoint PCs have been administered through passwords, which are at risk of theft or interception. Sure Admin uses public/private key cryptography to authorise remote BIOS changes. For local access, Sure Admin runs as an app on a smartphone secured by a private key, which then generates a one-time PIN for an admin to access an endpoint that needs maintenance or recovery. Also demonstrated was HP Sure Sense, which uses AI to recognise unknown malware to mitigate zero-day attacks, with a detection time of less than 20 milliseconds claimed by HP.
Any kind of demo must be viewed objectively, and these technologies will only prove their mettle in the wild. The other issue is how well any of these endpoints so equipped would embed into an existing corporate environment. Sure Admin needs a serious examination of how it can be integrated into the wider enterprise IT, access management and security portfolio.
This is not to disparage the progress HP has made, and my feeling at the end of the day was that HP is using its Labs for real-world security applications, although these are currently more efficient iterations of existing technologies than great leaps forward. However, endpoint protection is essential for business environments that are more open, extended and connected than before. HP's recent acquisition of endpoint security start-up Bromium will no doubt influence HP's plans to improve these technologies further.
Almost one and a half years after the introduction of the EU General Data Protection Regulation (GDPR), some companies still struggle to implement appropriate measures for handling Personally Identifiable Information (PII) in a compliant fashion. Last week, Berlin's Commissioner for Data Protection and Freedom of Information, Maja Smoltczyk, imposed a 195,000 euro fine on the German food delivery service provider Delivery Hero after a series of data protection law violations at its subsidiaries Foodora, Lieferheld and Pizza.de. It is Germany’s highest GDPR-related fine to date.
According to the press release by the Commissioner for Data Protection and Freedom of Information, the majority of the privacy breaches displayed disregard for the rights of the affected parties. In ten cases, the delivery provider had not deleted the personal data of former customers, even though they had been inactive on the platform for several years. Among other things, this led to marketing mails being sent out without the recipients' consent. In a statement to the privacy officer, Delivery Hero argued that some violations could be traced back to technical glitches and employee accidents, but “due to the high number of repeated violations a general, structural organizational problem was assumed.” Delivery Hero was acquired by the Dutch company Takeaway.com at the end of last year and states that all violations happened prior to the takeover.
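The retention failure at the heart of those ten cases can be expressed as a very small rule: personal data of customers inactive beyond a defined retention period must be deleted. A hedged sketch, noting that the two-year period and the record layout are assumptions for illustration (GDPR itself does not fix a single retention period; it requires that data not be kept longer than necessary):

```python
# Illustrative retention check. The two-year period and the customer
# record layout are assumptions for this example, not taken from GDPR
# or from the Delivery Hero case.
from datetime import date, timedelta

RETENTION = timedelta(days=2 * 365)  # assumed retention period

def purge_inactive(customers, today):
    """Keep only customers whose last activity falls within the
    retention period; everything else is due for deletion."""
    return [c for c in customers if today - c["last_active"] <= RETENTION]

customers = [
    {"email": "a@example.com", "last_active": date(2019, 6, 1)},
    {"email": "b@example.com", "last_active": date(2015, 3, 1)},  # long inactive
]
remaining = purge_inactive(customers, today=date(2019, 10, 1))
print([c["email"] for c in remaining])  # ['a@example.com']
```

The point of the case is precisely that such a rule must run reliably and automatically; relying on individual employees to remember it is what the Commissioner classified as a structural organizational problem.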
Having understood early how crucial it is for a company to be GDPR-compliant, KuppingerCole Analysts already published a Leadership Brief in May 2017 in preparation for GDPR in which Senior Analyst Mike Small identified six key actions that IT needs to take to prepare for compliance. He stressed that the Data Controller or Data Processor must ensure that Personally Identifiable Information (PII) is “only accessed in accordance with the consent given by the data subject”. This was obviously not the case when – as stated above – in most breaches the rights of data subjects were disregarded.
Another point of emphasis in the Leadership Brief is that “organizations must have processes and technology to track the consent lifecycle for each data subject”. By admitting technical glitches, employee accidents and a lack of adequate structure and organization behind the data lifecycle process, Delivery Hero essentially made a confession of grave data negligence.
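To make the idea of tracking the consent lifecycle concrete, here is a minimal sketch in Python. The names (`ConsentRecord`, `may_send_marketing`) and the structure are illustrative assumptions, not any vendor’s API; a real consent-management platform would additionally keep versioned consent texts, proof of consent, and propagate withdrawals to all processors.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Tracks the consent lifecycle for one data subject and one purpose."""
    subject_id: str
    purpose: str                            # e.g. "marketing_email"
    granted_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def grant(self) -> None:
        self.granted_at = datetime.now(timezone.utc)
        self.withdrawn_at = None            # a new grant supersedes a withdrawal

    def withdraw(self) -> None:
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def is_active(self) -> bool:
        # Consent counts only if it was given and has not since been withdrawn.
        return self.granted_at is not None and self.withdrawn_at is None

def may_send_marketing(record: ConsentRecord) -> bool:
    """Marketing mail is permitted only while consent for that purpose is active."""
    return record.purpose == "marketing_email" and record.is_active
```

The point of the sketch is that every send decision is derived from the current lifecycle state rather than from a one-time flag, so a withdrawal immediately stops further processing for that purpose.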
Since Delivery Hero was not in comprehensive control of its internal processes, employees and technologies, it can be assumed that the company was, and perhaps still is, insufficiently prepared for a potential data breach and would be unable to react to an incident without undue delay.
Other companies can only take this case as a learning opportunity and – in order to comply with regulations such as GDPR – implement reliable processes and technologies that do not depend on the diligence of single employees.
Nevertheless, individual diligence should not be ignored altogether: all employees should be trained on the GDPR-relevant questions that arise in their specific work tasks.
KuppingerCole offers a wide variety of research, blog posts and recorded webinars covering many different aspects of GDPR that can support you and your company in achieving and maintaining compliance. For example, there are several technical solutions for locating and classifying structured and unstructured data. These can assist companies in determining where PII and other regulatory information is located. KuppingerCole constantly investigates these markets and provides guidance.
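As a deliberately simplified illustration of what such data-discovery tooling does, the sketch below scans free text for two PII categories. The names (`PII_PATTERNS`, `find_pii`) and the patterns are assumptions for this example only; commercial tools rely on far richer detectors (dictionaries, checksum validation, ML classifiers) and also crawl databases and file shares rather than single strings.

```python
import re

# Hypothetical, minimal detectors: an e-mail pattern and a German IBAN
# (the literal "DE" followed by 20 digits). Real products ship hundreds
# of validated detectors per jurisdiction.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "iban_de": re.compile(r"\bDE\d{20}\b"),
}

def find_pii(text: str) -> dict:
    """Return all matches per PII category found in a block of text."""
    return {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()}
```

Locating PII this way is only the first step; the findings then feed classification, retention and deletion workflows.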
If you have any specific questions, please do not hesitate to get in touch with us. KuppingerCole Advisory Services can efficiently support you in establishing appropriate processes and their technical implementation, strengthened by long-term practical experience and comprehensive market knowledge.
Cyberattack resilience requires way more than just protective and defensive security tools and training. Resilience is about being able to recover rapidly and thus must include BCM (Business Continuity Management) activities. It is time to redefine the role of CISOs. I made this point in yesterday’s webinar on cybersecurity budgeting. If you missed it, you can watch the webcast here.
Prevention is key to limiting cyberattacks. The Chief Information Security Officer is responsible for prevention; employees following best practices are responsible for prevention. From the top down, the conversation around cybersecurity has always been about how to prevent an attack. And yet, despite all this prevention, cyberattacks occur more frequently than ever before, and with greater intensity.
Attacks will not only continue; they are continually evolving to exploit new vectors with new tools. Don’t assume that no one will attack you: attacks are happening constantly. So, is prevention enough?
What are the crown jewels? What would happen to your business if they were attacked? How would you get them up and running again? And how do you prepare your C level for crisis communication?
A far more realistic ambition is to be able to react so that business can return to normal as quickly as possible. Detect, respond, recover, and improve. How can a business react to an attack while still planning for its future? By not segregating preventative action and BCM. Do not fall prey to the blame game, in which the BCM team blames the CISO for failed prevention. A fusion of both teams’ expertise will mitigate an attack and streamline the recovery.
My suggestion for every CISO, CIO, SOC and CDC: Extend the scope of what you’re doing. It’s more than just traditional cybersecurity. Business continuity is part of the picture. Even more so, BCM is key to cybersecurity. Take a step back and reflect about your cybersecurity portfolio. You can’t manage a portfolio that is too complex.
This will definitely be a hot topic at our cybersecurity events in Washington, D.C. and Berlin. If you want to take your cybersecurity portfolio under scrutiny, you should check out our Portfolio Compass service which is explained in our Advisory Services flyer. We have a lot of current research on cybersecurity issues on our new content platform KC PLUS.
Regulation has the uncomfortable task of limiting untapped potential. I was surprised when I recently received the advice to think of life like a box. “The walls of this box are all the rules you should follow. But inside the box, you have perfect freedom.” Stunned as I was at the irony of having complete freedom to think inside the box, those at the forefront of AI development and implementation are faced with the irony of limiting projects with undefined potential.
Although Artificial General Intelligence – the ability of a machine to intuitively react to situations that it has not been trained to handle in an intelligent, human way – is still unrealized, narrow AI that enables applications to independently complete a specified task is becoming a more accepted addition to a business’ digital toolkit. Regulations that address AI are built on preexisting principles, primarily data privacy and protection against discrimination. They deal with the known risks that come with AI development. In 2018, biometric data was added to the European GDPR framework to require extra protection. In both the US and Europe, proposals are currently being discussed to monitor AI systems for algorithmic bias and govern facial recognition use by public and private actors. Before implementing any AI tool, companies should be familiar with the national laws for the region in which they operate.
These regulations have a limited scope, and in order to address the future unknown risks that AI development will pose, a handful of policy groups have published guidelines that attempt to set a model for responsible AI development.
The major bodies of work include:
- The Montreal Declaration for Responsible Development of AI from the University of Montreal and Fonds de Recherche du Quebec (published December 2018)
- Guidelines on Artificial Intelligence and Data Protection from the Council of Europe (published January 2019)
- Ethics Guidelines on Trustworthy AI from The EU Commission (published April 2019)
- The OECD Principles on AI from the OECD (published May 2019)
The principles developed by each body are largely similar. The main principles addressed in all of the guidelines are the need for developers and AI implementers to protect human autonomy, obey the rule of law, prevent harm, promote inclusive growth, maintain fairness, develop robust, prudent and secure technology, and ensure transparency.
The single outstanding feature is that only one document provides measurable and immediately implementable action. The EU Commission included an assessment for developers and corporate AI implementers to conduct in order to ensure that AI applications become and remain trustworthy. The assessment is currently in a pilot phase and will be updated in January 2020 to reflect comments from businesses and developers. The other guidelines offer compatible principles but are general enough to allow any of the public, private, or individual stakeholders interacting with AI to deflect responsibility.
These guidelines from the international community are not legally binding restrictions but porous barriers that allow sufficiently cautious and responsible innovation to grow and expand as the trustworthiness of AI increases. The challenge in creating regulations for an intensely innovative industry is to build in flexibility and the ability to mitigate unknown risks without compromising artistic license. The guidelines attempt to set an ethical example to follow, but it is essential to use tools like the EU Commission’s assessment, which establishes appropriate responsibility regardless of one’s status as developer, implementer, or user.
Alongside the caution from governing bodies comes a clear recognition that AI development can bring significant economic, social, and environmental growth. The US issued an executive order in February 2019 to prioritize AI R&D projects, while the EU takes a more cautiously optimistic approach: recognizing the opportunities but prioritizing the building and maintenance of a uniform EU strategy for AI adoption.
If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.