KuppingerCole Blog

Privileged Access Management Can Take on AI-Powered Malware to Protect Identity-Based Computing

Much is written about the growth of AI in the enterprise and how, as part of digital transformation, it will enable companies to create value and innovate faster. At the same time, cybersecurity researchers are increasingly looking to AI to enhance security solutions to better protect organizations against attackers and malware. What is often overlooked is that criminals are just as determined to use AI in their efforts to undermine organizations through persistent malware attacks.

The success of most malware directed at organizations depends on an opportunistic model: it is sent out by bots in the hope that it infects as many organizations as possible and then executes its payload. In business terms, while relatively cheap, it represents a poor return on investment and is easier for conventional anti-malware solutions to block. On the other hand, malware that is targeted and guided by human controllers at a command and control (C2) point may well result in a bigger payoff if it manages to penetrate privileged accounts, but it is expensive and time consuming for criminal gangs to operate.

Imagine if automated malware attacks were to benefit from embedded algorithms that have learned how to navigate to where they can do the most damage; this would deliver scale and greater profitability to the criminal gangs. Organizations are facing malware that learns how to hide and perform non-suspicious actions while silently exfiltrating critical data without human control.

AI-powered malware will change tactics once inside an organization. It could, for example, automatically switch to lateral movement if it finds its path blocked. The malware could also sit undetected, learn from regular data flows what is normal, and emulate this pattern accordingly. It could learn which devices an infected machine communicates with, over which ports and protocols, and which user accounts access it. All this without the current need for communication back to C2 servers – thus further protecting the malware from discovery.

It is access to user accounts that should worry organizations – particularly privileged accounts. Digital transformation has led to an increase in the number of privileged accounts in companies, and attackers are targeting those directly. The use of intelligent agents will make it easier for them to discover privileged accounts such as those accessed via a corporate endpoint. At the same time, malware will learn the best times and situations in which to upload stolen data to C2 servers by blending into legitimate high-bandwidth operations such as videoconferencing or legitimate file uploads. This may not be happening yet, but all of this is feasible given the technical resources that state-sponsored cyber attackers and cash-rich criminal gangs have access to.

To prove what’s possible, IBM research scientists created a proof-of-concept AI-powered malware called DeepLocker. The malware contained hidden code to generate keys that could unlock malicious payloads if certain conditions were met. It was demonstrated at a Las Vegas technology conference last year, using a genuine webcam application with embedded code to deploy ransomware when the right person looked at the laptop webcam. The code was encrypted to conceal its payload and to prevent reverse engineering by traditional anti-malware applications.

IBM also said in its presentation that current defences are obsolete and new defences are needed. This may not be true. AI is not yet magic. As in the corporate world, much AI-assisted software benefits from the learning capabilities of its algorithms, which automate tasks that humans have previously performed. In the criminal ecosystem this includes directing malware towards privileged accounts. Therefore, it makes sense that if Privileged Access Management (PAM) does a good job of defeating human-led attempts to hijack accounts, then it should do the same when confronted with the same techniques orchestrated by algorithms. Already the best PAM solutions are smart enough to monitor M2M communications and DevOps processes that need access to resources on the fly.

But we must not stop there. Future IAM and PAM solutions must be able to detect hijacked accounts or erroneous data flows in real time and shut them down so that even AI cannot do its work. Despite the sophistication that AI will bring to malware, its target will remain the same in many attacks: business-critical data that is accessed by privileged account users, which will include third parties and machines. It is one more way in which Identity – of people, data and machines – is taking centre stage in securing the digital organizations of the future. For more on KuppingerCole’s research into Identity and the digital enterprise please see our most recent reports.
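Real-time detection of erroneous data flows can start from something as simple as a statistical baseline. The sketch below is a minimal, hypothetical illustration (not any vendor's actual feature): it keeps a rolling window of per-account transfer volumes and flags any upload whose z-score exceeds a threshold – the kind of signal a PAM solution could use to terminate a session automatically.

```python
from collections import deque
from statistics import mean, stdev

class FlowMonitor:
    """Rolling baseline of per-account transfer volumes; flags outliers."""

    def __init__(self, window=50, threshold=3.0):
        self.window = window        # number of recent samples kept per account
        self.threshold = threshold  # z-score above which a flow is suspect
        self.history = {}           # account -> deque of recent byte counts

    def observe(self, account, bytes_sent):
        """Record a transfer; return True if it deviates from the baseline."""
        hist = self.history.setdefault(account, deque(maxlen=self.window))
        suspicious = False
        if len(hist) >= 10:  # require a minimal baseline before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and (bytes_sent - mu) / sigma > self.threshold:
                suspicious = True  # e.g. trigger session termination here
        hist.append(bytes_sent)
        return suspicious

monitor = FlowMonitor()
for i in range(30):
    monitor.observe("svc-backup", 1_000 + i)       # normal baseline traffic
print(monitor.observe("svc-backup", 500_000))      # sudden bulk upload → True
```

Real products combine many such signals (time of day, destination, protocol), but the principle – learn what is normal per identity, then act on deviations in real time – is the same.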

Leading IDaaS Supplier OneLogin Aiming for the Top

OneLogin is among the leading vendors in the overall, product, innovation and market leadership ratings in KuppingerCole’s latest Leadership Compass Report on IDaaS Access Management, but is aiming to move even further up the ranks.

In a media and analyst briefing, OneLogin representatives talked through key recent product features and capabilities in an ongoing effort to improve the completeness of its products.

Innovation is a key capability in IT market segments, and unsurprisingly this is an important area for OneLogin.

The most recent innovations include Vigilance AI, the new artificial intelligence and machine learning (AI/ML) risk engine, and SmartFactor Authentication, a context-aware authentication methodology to help organizations move beyond text-based passwords.

Both these capabilities are in line with the trend towards using AI in the context of Identity and Access Management (IAM) and are aimed at supporting OneLogin’s mission to enable enterprises to move beyond password-based authentication and improve their overall cyber defense capabilities, in light of the massive uptick in cyber attacks targeting credentials, including brute force and breach replay attacks.

OneLogin’s Vigilance AI is designed to use AI and ML to ingest and analyze data from multiple third-party sources to identify anomalies and communicate risk across OneLogin services.

Vigilance AI also applies User and Entity Behavior Analytics (UEBA) capabilities to build a profile of typical user behavior and identify anomalies in real time, improving threat defense.
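At its simplest, UEBA means learning a per-user profile from past events and scoring new events against it. The toy sketch below is an assumption-laden illustration of that idea (it is not OneLogin's implementation): it learns each user's typical login hours and flags logins at hours that account for almost none of the user's history.

```python
from collections import Counter

class LoginProfiler:
    """Learns each user's typical login hours; flags unusual ones."""

    def __init__(self, min_events=20):
        self.min_events = min_events  # history needed before judging
        self.profiles = {}            # user -> Counter of login hours

    def record(self, user, hour):
        """Add one observed login (hour in 0-23) to the user's profile."""
        self.profiles.setdefault(user, Counter())[hour] += 1

    def is_anomalous(self, user, hour):
        """True if this hour covers under 2% of the user's past logins."""
        profile = self.profiles.get(user, Counter())
        total = sum(profile.values())
        if total < self.min_events:
            return False  # not enough history to judge
        return profile[hour] / total < 0.02

p = LoginProfiler()
for _ in range(40):
    p.record("alice", 9)              # alice normally logs in around 09:00
print(p.is_anomalous("alice", 3))     # 03:00 login → True
print(p.is_anomalous("alice", 9))     # typical hour → False
```

Production UEBA engines profile many more dimensions (device, location, network, resource access) and weigh them into a composite risk score, but the profile-then-score pattern is the core.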

Other recent product innovations include:

  • Adaptive login flows functionality that uses Vigilance AI to restructure authentication flow automatically based on risk to include Multifactor Authentication (MFA) where appropriate;
  • Compromised credential check functionality to prevent users from using credentials that have been breached and posted on the dark web; and
  • Risk-aware access and adaptive deny functionality to block access to systems and applications when extreme risk is detected.
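In greatly simplified terms, a risk-adaptive login flow of the kind listed above can be sketched as follows. The thresholds and factor names here are purely illustrative assumptions, not OneLogin's actual logic:

```python
def authentication_steps(risk_score, credential_breached):
    """Return the ordered factors to require for a login attempt.

    risk_score is assumed to be in [0, 1]; the 0.3 and 0.8 cut-offs
    are hypothetical values chosen for illustration only.
    """
    if credential_breached:
        return ["deny"]              # compromised credential check: block outright
    if risk_score >= 0.8:
        return ["deny"]              # adaptive deny: extreme risk, block access
    steps = ["password"]
    if risk_score >= 0.3:
        steps.append("mfa_push")     # restructure the flow: step up to MFA
    return steps

print(authentication_steps(0.1, False))  # ['password']
print(authentication_steps(0.5, False))  # ['password', 'mfa_push']
print(authentication_steps(0.9, False))  # ['deny']
print(authentication_steps(0.1, True))   # ['deny']
```

The point of the sketch is the shape of the decision, not the numbers: the risk engine supplies the score, and the flow inserts, removes, or short-circuits factors accordingly.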

In these ways, OneLogin is striving to address its leadership challenges by increasing the range of authentication factors, increasing collaboration with third-party threat intelligence services, working towards providing support for IoT, and planning to enable more complex reporting capabilities.

The use of AI in the identity solutions market is likely to increase, with a growing number of vendors incorporating AI-driven capabilities, including OneLogin, SailPoint with its AI-driven Predictive Identity cloud identity platform, and others.

If you liked this text, feel free to browse our IAM focus area for more related content.

As You Make Your KRITIS so You Must Audit It

Organizations of major importance to the German state whose failure or disruption would result in sustained supply shortages, significant public safety disruptions, or other dramatic consequences are categorized as critical infrastructure (KRITIS).

Nine sectors and 29 industries currently fall under this umbrella, including healthcare, energy, transport and financial services. Hospitals, as part of the healthcare system, are also included if they meet defined criteria.

For hospitals, the implementation instructions of the German Hospital Association (DKG) have proven to be important. The number of fully inpatient hospital treatments in the reference period (i.e. the previous year) was defined as the measurement criterion. At 30,000 fully inpatient treatment cases, the threshold value for the identification of critical infrastructures is reached, which concerns considerably more than 100 hospitals. These are obliged to fulfil clearly defined requirements derived from the IT-SiG – "Gesetz zur Erhöhung der Sicherheit informationstechnischer Systeme (IT-Sicherheitsgesetz)" – for the security of IT systems and digital infrastructures, including critical infrastructures in Germany, and the BSI-KritisV ("BSI-Kritisverordnung"). The above-mentioned implementation instructions of the DKG thus also define proposed measures for the assurance of adequate security, in particular with regard to the IT used.

Companies had until June 30th this year to meet the requirements and to commission a suitable, trustworthy third party for testing and certification.

But according to a report in Tagesspiegel Background, this has been challenging: industry associations have been pointing out for some time that there are not enough suitable auditing firms. This is not least due to the fact that auditors must have a double qualification, which in addition to IT also includes knowledge of the industry, in this case the healthcare system in hospitals. Here, as in many other areas, the infamous skill gap strikes, i.e. the lack of suitable, qualified employees in companies or on the job market.

This led to the companies capable of performing the audits being overloaded and thus to a varying quality and availability of audits and resulting audit reports. According to the press report, these certificates suffer the same fate when they are submitted to the BSI, which evaluates these reports. Here, too, a shortage of skilled workers leads to a backlog of work. A comprehensive evaluation was not available at the time of publication. Even the implementation instructions of the German Hospital Association, on the basis of which many implementations were carried out in the affected hospitals, have not yet been confirmed by the BSI.

Does this place KRITIS in the list of toothless guidelines (such as PSD2 with its large number of national individual regulations) that have not been adequately implemented, at least in this area? Not necessarily. The obligation to comply has not been suspended; the lack of personnel and skills on the labour market merely prevents consistent, comprehensive testing by suitable bodies such as TÜV, Dekra or specialised auditing firms. However, if such an audit does take place, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the name of continuous implementation and improvement.

Even hospitals that currently fall just short of this threshold are encouraged to prepare for adjustments to requirements or increasing patient numbers. This means that even without the necessity of a formal attestation, the appropriate basic conditions, such as the establishment of an information security management system (ISMS) in accordance with ISO 27001, can be created to serve as a foundation.

In addition, the availability of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialist institutes) a resilient basis for creating appropriate framework conditions that correspond to the current state of requirements and technology. This also applies if they are not or will not be KRITIS-relevant in the foreseeable future, but want to offer their patients a comparably good degree of security and resulting trustworthiness.

KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.

Do You Need a Chief Artificial Intelligence Officer?

Well, if you ask me, the short answer is – why not? After all, companies around the world have a long history of employing people with weird titles ranging from “Chief Happiness Officer” to “Galactic Viceroy of Research Excellence”. A more reasonable response, however, would need to take one important thing into consideration: what would a CAIO’s job in your organization actually be?

There is no doubt that “Artificial Intelligence” has already become an integral part of our daily lives, both at home and at work. In just a few years, machine learning and other technologies that power various AI applications evolved from highly complicated and prohibitively expensive research prototypes to a variety of specialized solutions available as a service. From image recognition and language processing to predictive analytics and intelligent automation - a broad range of useful AI-powered tools is now available to everyone.

Just like the cloud a decade ago (and Big Data even earlier), AI is universally perceived as a major competitive advantage, a solution for numerous business challenges and even as an enabler of new revenue streams. However, does it really imply that every organization needs an “AI strategy” along with a dedicated executive to implement it?

Sure, there are companies around the world that have made AI a major part of their core business. Cloud service providers, business intelligence vendors or large manufacturing and logistics companies – for them, AI is a major part of the core business expertise or even a revenue-generating product. For the rest of us, however, AI is just another toolkit, powerful and convenient, to address specific business challenges.

Whether your goal is to improve the efficiency of your marketing campaign, to optimize equipment maintenance cycle or to make your IT infrastructure more resilient against cyberattacks – a sensible strategy to achieve such a goal never starts with picking up a single tool. Hiring a highly motivated AI specialist to tackle these challenges would have exactly the opposite effect: armed with a hammer, a person is inevitably going to treat any problem as if it were a nail.

This, of course, by no means implies that companies should not hire AI specialists. However, just as AI itself was never intended to replace humans, “embracing AI” should not overshadow the real business goals. We only need to look at Blockchain for a similar story: just a couple of years ago, adding a Blockchain to any project seemed like a sensible goal regardless of any potential practical gains. Today, the technology has already passed the peak of inflated expectations, and it finally seems that the fad is transitioning to the productive phase, at least in those usage scenarios where a lack of reliable methods of establishing distributed trust was indeed a business challenge.

Another aspect to consider is the sheer breadth of the AI frontier, both from the AI expert’s perspective and from the point of view of a potential user. Even within such a specialized application area as cybersecurity, the choice of available tools and strategies can be quite bewildering. Looking at the current AI landscape as a whole, one cannot help but realize that it encompasses many complex and quite unrelated technologies and problem domains. Last but not least, consider the new problems that AI itself is creating: many of those lie very much outside of the technology scope and come with social, ethical or legal implications.

In this regard, coming up with a single strategy that is supposed to incorporate so many disparate factors and can potentially influence every aspect of a company’s core business goals and processes seems like a leap of faith that not many organizations are ready to make just yet. Maybe a more rational approach towards AI is the same as with the cloud or any other new technology before it: identify the most important challenges your business is facing, set reasonable goals, find the experts that can help identify the most appropriate tools for achieving them, and work together on delivering tangible results. Even better if you can collaborate with (possibly different) experts on outlining a long-term AI adoption strategy that ensures your individual projects and investments align with each other, avoiding wasted time and resources. In other words: Think Big, Start Small, Learn Fast.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

AI in the Auto Industry Is About More Than Self-Driving Cars

Car buyers gathering at the Frankfurt Motor Show last month will have witnessed the usual glitz as car makers went into overdrive launching new models, including, of course, many new electric vehicles reflecting big changes in the industry. Behind the glamour of the show, the world’s biggest car makers are heavily investing in new technologies to remain competitive, including Artificial Intelligence (AI) and Machine Learning. While perfecting algorithms for self-driving cars is a longer-term goal and grabs the headlines, much is being done with AI to improve the design, manufacture and marketing of cars.

In an industry characterized by high costs and low margins, car makers (OEMs) are turning to AI to improve efficiencies, improve quality control and understand their markets and buyers better. Five years ago, Volkswagen opened its Data:Lab in Munich. It is now the company’s main research base for AI, with around 80 IT specialists, data scientists, programmers, physicists, and mathematicians researching and developing applications in machine learning and AI. Volkswagen goes as far as to say that AI will fundamentally change the company’s value chain, as it will now begin, not end, with the production of the vehicle.

An area of focus is applying AI to market research and marketing to pre-empt changes in demand and consumer choice outside of OEMs’ traditional seven-year model cycle. Any manufacturer that can be ahead of the curve in marketing will have a significant advantage. Volkswagen is using AI to create precise market forecasts containing a multitude of variables including economic development, household income, customer preferences, model availability and price.

With this kind of insight, it is possible that the company could configure model choice (specs, optional extras, engine sizes, etc.) and order production to meet buyer preferences at a smaller regional or even hyper-local level. For example, a Golf special edition that appeals to specific buyers in London, or an Amarok truck configured for the needs of farmers in the Rhineland.

Volkswagen’s German rivals are also scaling investment in AI technologies and are keen to be seen doing so with positive statements on their websites, and active recruitment drives to get the best developer talent. All three of Germany’s OEMs are aware that they need to be technological leaders in IT as much as engineering as cars become more connected and software driven.

At its factory in Stuttgart, Daimler has created a knowledge base that stores all the existing vehicle designs at the company, which any new engineer can tap into. More than this, the algorithm has been trained to suggest that a new engineer contact a more experienced colleague for human advice in certain circumstances – a good example of how AI can be trained to interact with human workers.

At the final inspection area at BMW’s Dingolfing plant, an AI application compares the vehicle order data with a live image of the model designation of the newly produced car. If the live image and order data don’t correspond, for example if a designation is missing, the final inspection team receives a notification. This frees up human employees to work elsewhere. Algorithms are also being taught to tell the difference between a hairline crack in sheet metal and simple dust particles, something that is beyond the scope of human eyesight. Meanwhile in paint shops, AI and analytics applications offer the potential to detect sources of error at much earlier stages of the process. If no dust attaches to the car body before painting in the first place, none needs to be polished off later.
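Stripped of the computer vision, the order-versus-live-image check described above reduces to a set comparison. The sketch below is purely hypothetical (BMW’s actual system is not public) and assumes an upstream image-recognition step has already extracted the visible model badges as strings:

```python
def check_designation(order_data, detected_badges):
    """Flag cars whose detected model badges don't match the order.

    detected_badges would come from an image-recognition model in a
    real system; here it is simply a list of strings for illustration.
    """
    expected = set(order_data["badges"])
    missing = expected - set(detected_badges)
    # a non-empty 'missing' set would notify the final inspection team
    return {"ok": not missing, "missing": sorted(missing)}

order = {"vin": "WBA123", "badges": ["330d", "xDrive"]}
print(check_designation(order, ["330d"]))
# → {'ok': False, 'missing': ['xDrive']}
```

The hard part, of course, is the recognition step that produces `detected_badges` reliably under factory lighting; the comparison and notification logic itself is deliberately simple.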

While these examples of AI applications may lack the sci-fi appeal of self-driving cars, they are presently more important to the future survival of the car industry, not just in Germany but across the globe. AI is being used effectively to meet the three fundamental challenges of the industry’s survival: improved quality, cost and waste reduction, and customer demands.  

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

Cognitive! - Entering a New Era of Business Models Between Converging Technologies and Data

Digitalization or more precisely the "digital transformation" has led us to the "digital enterprise". It strives to deliver on its promise to leverage previously unused data and the information it contains for the benefit of the enterprise and its business. And although these two terms can certainly be described as buzzwords, they have found their way into our way of thinking and into all kinds of publications, so that they will probably continue to exist in the future. 

Thought leaders, analysts, software and service providers – and practically everyone in between – have been proclaiming the "cognitive enterprise" for several months now. This concept, and the mindset associated with it, promises to use the information of the already digital company to achieve productivity, profitability and a high rate of innovation, and it aims at creating and evolving next-generation business models between converging technologies and data.

So what is special about this "cognitive enterprise"? Defining it usually starts with the idea of applying cognitive concepts and technologies to data in practically all relevant areas of a corporation. Data here includes open data, public data, subscribed data, enterprise-proprietary data, pre-processed data, structured and unstructured data – or simply Big Data. And the technologies involved include the likes of Artificial Intelligence (AI), more specifically Machine Learning (ML), Blockchain, Virtual Reality (VR), Augmented Reality (AR), the Internet of Things (IoT), ubiquitous communication with 5G, and individualized 3D printing.

As of now, mainly concepts from AI and machine learning are grouped together as "cognitive", although a uniform understanding of the underlying concepts is often still lacking. They have already proven able to do the “heavy lifting”, either on behalf of humans or autonomously. They increasingly understand, reason, and interact, e.g. by engaging in meaningful conversations and thus delivering genuine value without human intervention.

Automation, analytics and decision-making, customer support and communication are key target areas, because many tasks in today’s organizations are in fact repetitive, time-consuming, dull and inefficient. The focus (ideally) lies on relieving and empowering the workforce where a task can be executed by, for example, bots or Robotic Process Automation. Most organizations would agree that their staff are better than bots and can perform much more meaningful tasks. So, these measures are intended to benefit both the employee and the company.

But this is only the starting point. A cognitive enterprise will be interactive in many ways, not only by interacting with its customers, but also with other systems, processes, devices, cloud services and peer organizations. As one result it will be adaptive, as it is designed to learn from data, even in an unattended manner. The key goal is to foster agility and continuous innovation through cognitive technologies by embracing and institutionalizing a culture that perpetually changes the way an organization works and creates value.

Beyond the fact that journalists, marketing departments and even analysts tend to outdo each other in the creation and propagation of hype terms, where exactly is the difference between a cognitive and a digital enterprise?  Do we need yet another term, notably for the use of machine learning as an apparently digital technology?  

I don't think so. We are witnessing the evolution, advancement, and ultimately the application of exactly those digital technologies that lay the foundation of a comprehensive digital transformation. The added value of the label "cognitive", however, is negligible.

But regardless of what you, I, or the buzzword industry decide to call it in the end, much more relevant are the implications and challenges of this consistent implementation of digital transformation. In my opinion, two aspects must not be underestimated:

First, this transformation is either approached in its entirety, or it is better not to do it at all; there is nothing in between. It is not enough to quickly look for a few candidates for a bit of Robotic Process Automation. There will be no successful, "slightly cognitive" companies – that would waste the actual potential of a comprehensive redesign of corporate processes and be worth little more than a placebo. Rather, it is necessary to model internal knowledge and to gain and interconnect data. Jobs and tasks will change, become obsolete and be replaced by new and more demanding ones (otherwise they could be executed by a bot again).

Second: The importance of managing constant organizational change and restructuring is often overlooked. After all, the transformation to a Digital/Cognitive Enterprise is by far not entirely about AI, Robotic Process Automation or technology. Rather, focus has to be put on the individual as well, i.e. each member of the entire workforce (both internal and external). Established processes have to be managed, adjusted or even reengineered and this also applies to processes affecting partners, suppliers and thus any kind of cooperation or interaction.  

One of the most important departments in this future will be the human resources department and specifically talent management. Getting people on board and retaining them sustainably will be a key challenge. In particular, this means providing them with ongoing training and enabling them to perform qualitatively demanding tasks in a highly volatile environment. And it is precisely such an extremely responsible task that will certainly not be automated even in the long term...

When Cyber "Defense" is no Longer Enough

The days when simply having an on-premises Identity and Access Management (IAM) system was enough are long gone. With organizations moving to hybrid on-premises, cloud, and even multi-cloud environments, the number of cyber-attacks is growing. The types and sophistication of these attacks are continually changing to get around any new security controls put in place. In fact, it is much easier for a cyber attacker to change tactics than it is for organizations to bring in new solutions to mitigate current attack vulnerabilities.

Organizations must realize that they will never be 100% secure and that there will always be attacks on their systems. Don't get me wrong: I am not saying to give up on continually assessing and updating an organization's security controls to block the latest and most significant attack vectors. But instead, take the next step and plan for the worst. Organizations should integrate their Business Continuity Management (BCM) with their cybersecurity initiatives. This means being able to detect, respond to, recover from, and improve after any attack that could potentially bring down their business.

Recently, Microsoft announced the global availability of Windows Virtual Desktop (WVD) on Azure. WVD not only provides the ability to deploy and scale Azure-based virtualization of Windows 10 multi-session, Windows Server, and Windows 7 desktops, but it also provides something that is sometimes overlooked: the ability for the enterprise to recover from compromise, at least from the desktop endpoint perspective. Through Microsoft's acquisition of FSLogix and its solutions, WVD takes advantage of virtualization and containerization technologies. Using these technologies, Microsoft ensures that its Windows desktops and servers can be powered up or restarted in a consistent and safe state with respect to user profiles and applications, adding to the BCM and "recover from an attack" capabilities businesses must implement today. FSLogix does this by bringing both profile and Office containers to the table.

So, when reviewing cybersecurity and BCM strategies, organizations shouldn’t take the view of “if”, but “when” their systems will be compromised, and their data breached. Then ask themselves how they will recover.

KuppingerCole Principal Analyst Martin Kuppinger emphasized the changing role of the CISO recently in a blog and also covered that topic in a webinar on cybersecurity budgeting which you can watch below. To get a more hands-on approach, see below for our Incident Response Boot Camp at Cybersecurity Leadership Summit 2019.

HP Labs Renewed Focus on Endpoint Security Is Worth Watching

A visit to HP Labs offices in central Bristol, about 120 miles west of London, was a chance to catch up with the hardware part of the former Hewlett Packard conglomerate, which split in two four years ago. The split also meant that there are now two HP Labs, one for the HP business and the other for Hewlett Packard Enterprise.

We were told that HP is an "endpoint infrastructure company", perhaps to position itself as a serious B2B vendor. The label works well enough, but its US, Chinese and Taiwanese competitors could conceivably claim the same.

To counter this, HP is tapping into the shared legacy of the research and development focus that the original Mr. Hewlett and Mr. Packard founded in that famous garage in Palo Alto – hence the trip to HP Labs. A single floor of an office block in Bristol lacks some of the wow factor of the campus feel of the old, bigger, joined-up HP Labs; on the other hand, the ideas that came out of those Labs did not always see practical application.

The focus then was on HP’s security credentials for innovations that have found their way into products. In a series of demonstrations of its Sure Suite technologies, HP made a case for why its line of laptops and PCs are better equipped to withstand attacks on the endpoint.

Sure Start protects the BIOS from attack each time the PC or laptop is booted, whether on a network or standalone, and automatically validates the integrity of the BIOS code. Once the PC is operational, runtime intrusion detection monitors memory. In the case of an attack, the PC can self-heal using an isolated "golden copy" of the BIOS. The live demo on the day showed a laptop that had been locked by ransomware being brought back to operable life. Sure Recover is a tool aimed squarely at the SMB market, allowing end users to recover their operating system even after it has been wiped out by an attack, without recourse to IT. It uses HP's chip-based Endpoint Security Controller (ESC) to image the latest OS over a wired network connection.
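The self-healing idea can be illustrated with a small sketch. This is not HP's implementation; the golden image, hash comparison, and restore logic below are purely hypothetical stand-ins for what a hardware-backed controller does at boot:

```python
import hashlib

# Hypothetical sketch of hash-based firmware verification with self-healing:
# compare the running BIOS image against a known-good digest and, on a
# mismatch, restore from an isolated "golden copy".

GOLDEN_COPY = b"\x55\xaa" + b"\x00" * 30      # stand-in for a pristine BIOS image
GOLDEN_HASH = hashlib.sha256(GOLDEN_COPY).hexdigest()

def verify_and_heal(current_image: bytes) -> bytes:
    """Return a trusted image: the current one if intact, else the golden copy."""
    if hashlib.sha256(current_image).hexdigest() == GOLDEN_HASH:
        return current_image                   # integrity check passed
    return GOLDEN_COPY                         # self-heal from the golden copy

tampered = GOLDEN_COPY[:-1] + b"\xff"          # simulate firmware corruption
assert verify_and_heal(tampered) == GOLDEN_COPY
assert verify_and_heal(GOLDEN_COPY) == GOLDEN_COPY
```

The key design point mirrored here is that the golden copy and its digest live in storage the running operating system cannot write to, so malware that corrupts the active image cannot also corrupt the recovery path.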

A new announcement on the day was HP Sure Admin, which extends the automation of secure endpoint management into the corporate domain and builds on the user-friendly technologies of Sure Start and Sure Recover to reduce the attack surface created by remote management tools. Traditionally, BIOS updates on endpoint PCs have been administered through passwords, which are at risk of theft or interception. Sure Admin instead uses public/private key cryptography to authorise remote BIOS changes. For local access, Sure Admin runs as an app on a smartphone secured by a private key, which generates a one-time PIN for an admin to access an endpoint that needs maintenance or recovery. Also demonstrated was HP Sure Sense, which uses AI to recognise unknown malware and so mitigate zero-day attacks, with HP claiming detection in under 20 milliseconds.
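The one-time-PIN part of that flow can be sketched along the lines of RFC 4226-style one-time codes: admin and endpoint derive a short-lived PIN from shared key material and the current time window, so no static BIOS password ever exists to steal. This is an illustration only, not HP's protocol, and for brevity a shared secret stands in for the public/private key pair described above:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def one_time_pin(secret: bytes, interval: int = 30, digits: int = 6,
                 now: Optional[float] = None) -> str:
    """Derive a short-lived numeric PIN from a secret and the current time."""
    counter = int((time.time() if now is None else now) // interval)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation, RFC 4226 style
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"endpoint-provisioned-secret"            # hypothetical provisioning secret
pin = one_time_pin(secret, now=1_700_000_000)
assert pin == one_time_pin(secret, now=1_700_000_005)  # same 30 s window, same PIN
assert len(pin) == 6 and pin.isdigit()
```

Because the PIN changes every interval, a stolen or shoulder-surfed code is useless once the window closes, which is precisely the property a static BIOS password lacks.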

Any kind of demo must be viewed objectively, and these technologies will only prove their mettle in the wild. The other issue is how well any of these endpoints so equipped would embed into an existing corporate environment. Sure Admin needs a serious examination of how it can be integrated into the wider enterprise IT, access management and security portfolio.

This is not to disparage the progress HP has made, and my feeling at the end of the day was that HP is using its Labs for real-world security applications. For now, these are more efficient iterations of existing technologies than great leaps forward. However, endpoint protection is essential for business environments that are more open, extended and connected than before, and HP's recent acquisition of endpoint security start-up Bromium will no doubt shape its plans to improve these technologies further.


GDP R U Compliant?

Almost one and a half years after the introduction of the EU General Data Protection Regulation (GDPR), some companies still struggle to implement appropriate measures for handling Personally Identifiable Information (PII) in a compliant fashion. Last week, Maja Smoltczyk, the Commissioner for Data Protection and Freedom of Information of the German city-state of Berlin, imposed a 195,000 euro fine on the German food delivery service provider Delivery Hero after it had committed a series of data protection law violations through its subsidiaries Foodora, Lieferheld and Pizza.de. It is Germany's highest GDPR-related fine to date.

According to the Commissioner's press release, the majority of the privacy breaches showed disregard for the rights of the affected parties. In ten cases, the delivery provider had not deleted the personal data of former customers, even though they had been inactive on the platform for several years. Among other things, this led to marketing emails being sent without the recipients' consent. In a statement to the privacy officer, Delivery Hero argued that some violations could be traced back to technical glitches and employee accidents, but "due to the high number of repeated violations a general, structural organizational problem was assumed." Delivery Hero was acquired by the Dutch company Takeaway.com at the end of last year and states that all violations occurred prior to the takeover.

Having understood early how crucial it is for a company to be GDPR-compliant, KuppingerCole Analysts already published a Leadership Brief in May 2017 in preparation for GDPR in which Senior Analyst Mike Small identified six key actions that IT needs to take to prepare for compliance. He stressed that the Data Controller or Data Processor must ensure that Personally Identifiable Information (PII) is “only accessed in accordance with the consent given by the data subject”. This was obviously not the case when – as stated above – in most breaches the rights of data subjects were disregarded.

Another point of emphasis in the Leadership Brief is that “organizations must have processes and technology to track the consent lifecycle for each data subject”. By admitting technical glitches, employee accidents and a lack of adequate structure and organization behind the data lifecycle process, Delivery Hero essentially made a confession of grave data negligence.
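What "tracking the consent lifecycle" can mean in practice is sketched below. The data model and class names are hypothetical illustrations, not taken from any particular product: each (data subject, purpose) pair carries a consent record, and processing such as marketing mail is only permitted while that record is active.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional, Tuple

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                                  # e.g. "marketing-email"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

class ConsentLedger:
    """Tracks, per data subject and purpose, whether valid consent exists."""

    def __init__(self) -> None:
        self._records: Dict[Tuple[str, str], ConsentRecord] = {}

    def grant(self, subject_id: str, purpose: str) -> None:
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, datetime.now(timezone.utc))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        record = self._records.get((subject_id, purpose))
        if record is not None:
            record.withdrawn_at = datetime.now(timezone.utc)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        record = self._records.get((subject_id, purpose))
        return record is not None and record.is_active()

ledger = ConsentLedger()
ledger.grant("subject-42", "marketing-email")
assert ledger.may_process("subject-42", "marketing-email")
ledger.withdraw("subject-42", "marketing-email")
assert not ledger.may_process("subject-42", "marketing-email")
```

The point of such a ledger is exactly the one the Leadership Brief makes: the decision to send a mail or retain a record is driven by a checked, timestamped consent state rather than by the diligence of whoever happens to run the campaign.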

Given this lack of comprehensive control over internal processes, employees and technologies, it can be assumed that Delivery Hero was, and perhaps still is, insufficiently prepared for a potential data breach and would be unable to react to an incident without undue delay.

Other companies should take this case as a learning opportunity and, in order to comply with regulations such as GDPR, implement reliable processes and technologies that do not depend on the diligence of individual employees.

Nevertheless, employee diligence should not be ignored altogether: all employees should be trained in the GDPR-relevant questions arising from their specific work tasks.

KuppingerCole offers a wide variety of research, blog posts and recorded webinars covering many different aspects of GDPR that can support you and your company in achieving and maintaining compliance. For example, there are several technical solutions for locating and classifying structured and unstructured data. These can assist companies in determining where PII and other regulatory information is located. KuppingerCole constantly investigates these markets and provides guidance.

If you have any specific questions, please do not hesitate to get in touch with us. KuppingerCole Advisory Services can efficiently support you in establishing appropriate processes and their technical implementation, strengthened by long-term practical experience and comprehensive market knowledge.
