KuppingerCole Blog

Microsoft Partnership Enables Security at Firmware Level

Microsoft has partnered with Windows PC makers to add another layer of protection for Windows 10 users, defending against threats that target firmware as well as the operating system.

The move is a response to attackers developing threats that specifically target firmware as the IT industry has built more protections into operating systems and connected devices. This trend appears to have been gaining momentum since security researchers at ESET found that the Russian espionage group APT28 – also known as Fancy Bear, Pawn Storm, Sofacy Group, Sednit, and Strontium – was exploiting firmware vulnerabilities to distribute the LoJax malware.

The LoJax malware, which targeted European government organizations, exploited a firmware vulnerability to effectively hide inside the computer's flash memory. As a result, the malware was difficult to detect and able to persist even after an operating system reinstall, because it re-executed every time the infected PC booted up.

In a bid to gain more control over the hardware on which its Windows operating system runs, much as Apple does, Microsoft has worked with PC and chip makers on an initiative dubbed “Secured-core PCs”. The initiative applies the security best practices of isolation and minimal trust to the firmware layer to protect Windows devices from attacks that exploit the fact that firmware has a higher level of access and higher privileges than the Windows kernel, which means attackers can undermine protections such as Secure Boot and other defenses implemented by the hypervisor or operating system.

The initiative appears to be aimed at industries that handle highly-sensitive data, including personal, financial and intellectual property data, such as financial services, government and healthcare rather than the consumer market. However, consumers using new high-end hardware like the Surface Pro X and HP's Dragonfly laptops will benefit from an extra layer of security that isolates encryption keys and identity material from Windows 10.

According to Microsoft, Secured-core PCs combine identity, virtualization, operating system, hardware and firmware protection to add another layer of security underneath the operating system. To prevent firmware attacks, they use new hardware Dynamic Root of Trust for Measurement (DRTM) capabilities from AMD, Intel and Qualcomm to implement Microsoft’s System Guard Secure Launch as part of Windows Defender in Windows 10.

This effectively removes trust from the firmware. Although Microsoft introduced Secure Boot in Windows 8 to mitigate the risk posed by malicious bootloaders and rootkits that relied on Unified Extensible Firmware Interface (UEFI) firmware, Secure Boot relies on the firmware itself to verify the bootloaders. On its own, Secure Boot therefore does not protect against threats that exploit vulnerabilities in that trusted firmware.

The DRTM capability also helps to protect the integrity of the virtualization-based security (VBS) functionality implemented by the hypervisor from firmware compromise. VBS relies on the hypervisor to isolate sensitive functionality from the rest of the OS, which, according to Microsoft, helps to protect that functionality from malware that may have infected the normal OS, even with elevated privileges. Microsoft adds that protecting VBS is critical because it is used as a building block for important operating system security capabilities such as Windows Defender Credential Guard, which protects against malware misusing OS credentials, and Hypervisor-protected Code Integrity (HVCI), which ensures that a strict code integrity policy is enforced and that all kernel code is signed and verified.

It is worth noting that the Trusted Platform Module 2.0 (TPM) is one of the device requirements for Secured-core PCs: it is used to measure the components involved in the secure launch process, which Microsoft claims can help organizations enable zero-trust networks using System Guard runtime attestation.
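
To illustrate the general idea of measurement-based attestation, here is a minimal conceptual sketch in Python, not Microsoft's implementation; the component names and images are invented. Boot components are hashed, the measurements are compared with a known-good baseline recorded at provisioning time, and any mismatch fails attestation:

    import hashlib

    def measure(image: bytes) -> str:
        """Hash a boot component, analogous to extending a measurement into a TPM PCR."""
        return hashlib.sha256(image).hexdigest()

    def record_baseline(components: dict[str, bytes]) -> dict[str, str]:
        """Record known-good measurements at provisioning time."""
        return {name: measure(image) for name, image in components.items()}

    def attest(components: dict[str, bytes], baseline: dict[str, str]) -> bool:
        """Fail attestation if any component's measurement differs from its baseline."""
        for name, image in components.items():
            if measure(image) != baseline.get(name):
                print(f"Attestation failed: '{name}' does not match its known-good measurement")
                return False
        return True

    # Example: a tampered bootloader is detected before the device is treated as trusted.
    good = {"firmware": b"vendor-firmware-v1", "bootloader": b"bootmgr-v1"}
    baseline = record_baseline(good)
    tampered = {**good, "bootloader": b"bootmgr-v1-with-implant"}
    print(attest(good, baseline))      # True
    print(attest(tampered, baseline))  # False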

ESET has responded to its researchers’ UEFI rootkit discovery by introducing a UEFI Scanner to detect malicious components in firmware, and some chip manufacturers are aiming to do something similar with dedicated security chips. Microsoft’s Secured-core PC initiative, by contrast, is aimed at blocking firmware attacks rather than merely detecting them, and it is cross-industry, involving a wide range of CPU architectures and Original Equipment Manufacturers (OEMs). This means the firmware defence will be available to all Windows 10 users regardless of the PC maker and form factor they choose.

It will be interesting to see what effect this initiative has on reducing the number of successful ransomware and other BIOS/UEFI or firmware-based cyber attacks on critical industries. If it proves successful, the technology is likely to be commoditized and become available to all PC users in all industries.

Can Your Antivirus Be Too Intelligent Sometimes?

Current and future applications of artificial intelligence (or should we rather stick to a more appropriate term “Machine Learning”?) in cybersecurity have been one of the hottest discussion topics in recent years. Some experts, especially those employed by anti-malware vendors, see ML-powered malware detection as the ultimate solution to replace all previous-generation security tools. Others are more cautious, seeing great potential in such products, but warning about the inherent challenges of current ML algorithms.

One particularly egregious example of “AI security gone wrong” was covered in an earlier post by my colleague John Tolbert. In short, to reduce the number of false positives produced by an AI-based malware detection engine, developers have added another engine that whitelisted popular software and games. Unfortunately, the second engine worked a bit too well, allowing hackers to mask any malware as innocent code just by appending some strings copied from a whitelisted application.

Such cases, where bold marketing claims contradict not just common sense but reality itself and force engineers to patch their ML models’ shortcomings with clumsy workarounds, are hopefully not particularly common. However, every ML-based security product does face the same underlying challenge: whenever a particular file triggers a false positive, there is no way to simply tell the model to stop. Machine learning is not based on rules; you have to feed the model lots of training data to gradually guide it to the correct decision, and re-labeling just one sample is not enough.

This is exactly the problem the developers of the Dolphin Emulator have recently faced: for quite some time, every build of their application has been flagged by Windows Defender as malware based on Microsoft’s AI-powered behavior analysis. Every time the developers submitted a report to Microsoft, the build would be dutifully added to the application whitelist and the case would be closed – until the next build, with a different file hash, was released.
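
The mechanics behind this frustration are easy to illustrate. The sketch below is a simplified assumption about how a hash-based allowlist behaves, not a description of Defender's internals; it shows why whitelisting one build by its file hash does nothing for the next build:

    import hashlib

    def file_hash(build_bytes: bytes) -> str:
        """Identify a build by the SHA-256 digest of its binary."""
        return hashlib.sha256(build_bytes).hexdigest()

    allowlist: set[str] = set()

    def scan(build_bytes: bytes) -> str:
        """Toy verdict logic: allowlisted hashes pass, everything else gets the ML verdict."""
        if file_hash(build_bytes) in allowlist:
            return "clean (allowlisted)"
        return "suspicious (ML verdict)"   # stand-in for the behavioral model's false positive

    build_a = b"dolphin-emulator build N"
    build_b = b"dolphin-emulator build N+1"   # a new build produces an entirely new hash

    print(scan(build_a))               # suspicious (ML verdict)
    allowlist.add(file_hash(build_a))  # report filed, this specific hash gets whitelisted
    print(scan(build_a))               # clean (allowlisted)
    print(scan(build_b))               # suspicious again; the allowlist entry does not carry over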

Apparently, the way this cloud-based ML-powered detection engine is designed, there is simply no way to fix a false positive once and for all future builds. However, the company obviously does not want to make the same mistake as Cylance and inadvertently whitelist too much, creating potential false negatives. Thus, the developers and users of the Dolphin Emulator are left with only one option: submit more and more false-positive reports and hope that sooner or later the ML engine will “change its mind” on the issue.

Machine-learning-enhanced security tools are supposed to eliminate tedious manual labor for security analysts; however, this issue shows that sometimes just the opposite happens: antimalware vendors, application developers, and even users must do more work to overcome this interpretability problem. Does that really mean that incorporating machine learning into an antivirus was a mistake? Of course not, but giving too much authority to an ML engine which is, in a sense, incapable of explaining its decisions and does not react well to criticism probably was.

Potential solutions for these shortcomings do exist, the most obvious being the ongoing work on making machine learning models more explainable, giving insights into the ways they make decisions on particular data samples instead of presenting themselves to users as a kind of black box. However, we have yet to see commercial solutions based on this research. In the future, a broader approach towards the “artificial intelligence lifecycle” will surely be needed, covering not just developing and debugging models, but stretching from initial training data management all the way up to the ethical and legal implications of AI.

By the way, we’re going to discuss the latest developments and challenges of AI in cybersecurity at our upcoming Cybersecurity Leadership Summit in Berlin. Looking forward to meeting you there! If you want to read up on Artificial Intelligence and Machine Learning, be sure to browse our KC+ research platform.

Privileged Access Management Can Take on AI-Powered Malware to Protect Identity-Based Computing

Much is written about the growth of AI in the enterprise and how, as part of digital transformation, it will enable companies to create value and innovate faster. At the same time, cybersecurity researchers are increasingly looking to AI to enhance security solutions to better protect organizations against attackers and malware. What is often overlooked is that criminals are just as determined to use AI to assist them in their efforts to undermine organizations through persistent malware attacks.

The success of most malware directed at organizations depends on an opportunistic model: it is sent out by bots in the hope that it infects as many organizations as possible and then executes its payload. In business terms, while relatively cheap, it represents a poor return on investment and is easier for conventional anti-malware solutions to block. On the other hand, malware that is targeted and guided by human controllers from a command and control (C2) point may well result in a bigger payoff if it manages to penetrate privileged accounts, but it is expensive and time-consuming for criminal gangs to operate.

Imagine if automated malware attacks were to benefit from embedded algorithms that have learned how to navigate to where they can do the most damage; this would deliver scale and greater profitability to the criminal gangs. Organizations would then face malware that learns how to hide and perform non-suspicious actions while silently exfiltrating critical data without human control.

AI-powered malware will change tactics once inside an organization. It could, for example, automatically switch to lateral movement if it finds its path blocked. The malware could also sit undetected, learn from regular data flows what is normal, and emulate this pattern accordingly. It could learn which devices the infected machines communicate with, over which ports and protocols, and which user accounts access them. All of this could be done without the need for communication back to C2 servers that exists today – further protecting the malware from discovery.

It is access to user accounts that should worry organizations – particularly privileged accounts. Digital transformation has led to an increase in the number of privileged accounts in companies, and attackers are targeting those directly. The use of intelligent agents will make it easier for them to discover privileged accounts, such as those accessed via a corporate endpoint. At the same time, malware will learn the best times and situations in which to upload stolen data to C2 servers by blending into legitimate high-bandwidth operations such as videoconferencing or legitimate file uploads. This may not be happening yet, but all of it is feasible given the technical resources that state-sponsored cyber attackers and cash-rich criminal gangs have access to.

To prove what’s possible, IBM research scientists created a proof-of-concept AI-powered malware called DeepLocker. The malware contained hidden code to generate keys that could unlock malicious payloads if certain conditions were met. It was demonstrated at a Las Vegas technology conference last year using a genuine webcam application with embedded code that deployed ransomware when the right person looked at the laptop’s webcam. The payload was encrypted to conceal it and to resist reverse engineering by traditional anti-malware applications.

IBM also said in its presentation that current defences are obsolete and new defences are needed. This may not be true; AI is not yet magic. As in the corporate world, much AI-assisted software benefits from the learning capabilities of its algorithms, which automate tasks that humans previously performed. In the criminal ecosystem this includes directing malware towards privileged accounts. It therefore makes sense that if Privileged Access Management (PAM) does a good job of detecting human-led attempts to hijack accounts, it should do the same when confronted with the same techniques orchestrated by algorithms. Already the best PAM solutions are smart enough to monitor M2M communications and DevOps processes that need access to resources on the fly.

But we must not stop there. Future IAM and PAM solutions must be able to detect hijacked accounts or erroneous data flows in real time and shut them down so that even AI cannot do its work. Despite the sophistication that AI will bring to malware, its target will remain the same in many attacks: business-critical data that is accessed by privileged account users, which will include third parties and machines. It is one more way in which Identity – of people, data and machines – is taking centre stage in securing the digital organizations of the future. For more on KuppingerCole’s research into Identity and the digital enterprise please see our most recent reports.
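
As a rough illustration of the kind of real-time check such a solution might perform (a minimal sketch with invented numbers and thresholds, not a description of any vendor's product), the following Python snippet flags a privileged session whose outbound data volume deviates sharply from that account's historical baseline:

    from statistics import mean, stdev

    def is_anomalous(history_mb: list[float], current_mb: float, threshold: float = 3.0) -> bool:
        """Flag a session whose outbound volume deviates more than `threshold`
        standard deviations from the account's historical baseline (a simple z-score test)."""
        if len(history_mb) < 2:
            return False  # not enough history to form a baseline
        mu, sigma = mean(history_mb), stdev(history_mb)
        if sigma == 0:
            return current_mb != mu
        return abs(current_mb - mu) / sigma > threshold

    # Hypothetical daily outbound volumes (MB) for a privileged service account.
    baseline = [120.0, 135.0, 110.0, 128.0, 140.0, 125.0, 131.0]

    print(is_anomalous(baseline, 133.0))   # False: within normal variation
    print(is_anomalous(baseline, 2400.0))  # True: likely exfiltration; shut the session down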

Leading IDaaS Supplier OneLogin Aiming for the Top

OneLogin is among the leading vendors in the overall, product, innovation and market leadership ratings in KuppingerCole’s latest Leadership Compass Report on IDaaS Access Management, but is aiming to move even further up the ranks.

In a media and analyst briefing, OneLogin representatives talked through key and recent product features and capabilities in an ongoing effort to improve the completeness of its products.

Innovation is a key capability in IT market segments, and unsurprisingly this is an important area for OneLogin.

The most recent innovations include Vigilance AI, the new artificial intelligence and machine learning (AI/ML) risk engine, and SmartFactor Authentication, a context-aware authentication methodology to help organizations move beyond text-based passwords.

Both these capabilities are in line with the trend towards using AI in the context of Identity and Access Management (IAM) and are aimed at supporting OneLogin’s mission to enable enterprises to move beyond password-based authentication and improve their overall cyber defense capabilities in the light of the massive uptick in cyber attacks targeting credentials, including brute force and breach replay attacks.

OneLogin’s Vigilance AI is designed to use AI and ML to ingest and analyze data from multiple third-party sources to identify anomalies and communicate risk across OneLogin services.

Vigilance AI also applies User and Entity Behavior Analytics (UEBA) capabilities to build a profile of typical user behavior and identify anomalies in real time, improving threat defense.
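
Conceptually, a UEBA-driven risk engine boils down to comparing the current login context with the user's learned profile and converting the deviation into a risk score that drives the authentication decision. The sketch below is a deliberately simplified illustration of that pattern; the features, weights and thresholds are invented for the example and are not OneLogin's:

    def risk_score(profile: dict, login: dict) -> float:
        """Sum weighted penalties for each attribute of the login that deviates from the profile."""
        score = 0.0
        if login["country"] not in profile["usual_countries"]:
            score += 0.4
        if login["device_id"] not in profile["known_devices"]:
            score += 0.3
        if login["hour"] not in profile["usual_hours"]:
            score += 0.2
        if login.get("credential_breached", False):   # e.g. credential found in a dark-web dump
            score += 0.5
        return score

    def decide(score: float) -> str:
        """Map the risk score to an action: allow, step up to MFA, or deny."""
        if score < 0.3:
            return "allow"
        if score < 0.7:
            return "require MFA"
        return "deny"

    profile = {"usual_countries": {"DE"}, "known_devices": {"laptop-42"},
               "usual_hours": set(range(7, 19))}
    login = {"country": "BR", "device_id": "unknown-7", "hour": 3}
    print(decide(risk_score(profile, login)))   # "deny": unfamiliar country, device and time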

Other recent product innovations include:

  • Adaptive login flows functionality that uses Vigilance AI to restructure the authentication flow automatically based on risk and include Multifactor Authentication (MFA) where appropriate;
  • Compromised credential check functionality to prevent users from using credentials that have been breached and posted on the dark web; and
  • Risk-aware access and adaptive deny functionality to block access to systems and applications when extreme risk is detected.

In these ways, OneLogin is striving to address its leadership challenges by increasing the range of authentication factors, increasing collaboration with third-party threat intelligence services, working towards providing support for IoT, and planning to enable more complex reporting capabilities.

The use of AI in the identity solutions market is likely to increase, with a growing number of vendors – such as OneLogin, SailPoint with its AI-driven Predictive Identity cloud identity platform, and others – incorporating AI-driven capabilities.

If you liked this text, feel free to browse our IAM focus area for more related content.

As You Make Your KRITIS so You Must Audit It

Organizations of major importance to the German state whose failure or disruption would result in sustained supply shortages, significant public safety disruptions, or other dramatic consequences are categorized as critical infrastructure (KRITIS).

Nine sectors and 29 industries currently fall under this umbrella, including healthcare, energy, transport and financial services. Hospitals as part of the health care system are also included if they meet defined criteria.

For hospitals, the implementation instructions of the German Hospital Association (DKG) have proven to be the decisive guideline. The measurement criterion is the number of fully inpatient hospital treatments in the reference period (i.e. the previous year). At 30,000 fully inpatient treatment cases, the threshold for identification as critical infrastructure is reached, which affects considerably more than 100 hospitals. These are obliged to fulfil clearly defined requirements derived from the IT-SiG – the "Gesetz zur Erhöhung der Sicherheit informationstechnischer Systeme (IT-Sicherheitsgesetz)", which governs the security of IT systems and digital infrastructures, including critical infrastructures, in Germany – and from the BSI-KritisV, the "BSI-Kritisverordnung". The above-mentioned implementation instructions of the DKG thus also propose measures for the assurance of adequate security, in particular with regard to the IT in use.

Companies had until June 30th this year to meet the requirements and to commission a suitable, trustworthy third party for testing and certification.

But according to a report in Tagesspiegel Background, this has proven challenging: industry associations have been pointing out for some time that there are not enough suitable auditing firms. This is not least due to the fact that auditors must have a double qualification, covering not only IT but also knowledge of the industry – in this case, healthcare as practised in hospitals. Here, as in many other areas, the infamous skills gap strikes, i.e. the lack of suitably qualified employees within companies or on the job market.

This has overloaded the companies capable of performing the audits, resulting in varying quality and availability of the audits and the resulting audit reports. According to the press report, the certificates suffer the same fate when they are submitted to the BSI, which evaluates them: here, too, a shortage of skilled workers leads to a backlog of work. A comprehensive evaluation was not available at the time of publication. Even the implementation instructions of the German Hospital Association, on the basis of which many of the affected hospitals carried out their implementations, have not yet been confirmed by the BSI.

Does this place KRITIS in the list of toothless guidelines (such as PSD2 with its large number of national individual regulations) that have not been adequately implemented, at least in this area? Not necessarily. The obligation to comply has not been suspended; the lack of personnel and skills on the labour market merely prevents consistent, comprehensive testing by suitable bodies such as TÜV, Dekra or specialised auditing firms. However, if such an audit does take place, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the spirit of continuous implementation and improvement.

Even hospitals that currently fall just short of this threshold would be wise to prepare for adjusted requirements or rising patient numbers. This means that even without the need for a formal attestation, the appropriate foundations can be laid, such as the establishment of an information security management system (ISMS) in accordance with ISO 27001.

In addition, the existence of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialist institutes) a resilient basis for creating appropriate conditions that correspond to the current state of requirements and technology. This also applies if they are not, and will not foreseeably become, KRITIS-relevant but want to offer their patients a comparably high degree of security and the resulting trustworthiness.

KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.

Do You Need a Chief Artificial Intelligence Officer?

Well, if you ask me, the short answer is – why not? After all, companies around the world have a long history of employing people with weird titles ranging from “Chief Happiness Officer” to “Galactic Viceroy of Research Excellence”. A more reasonable response, however, would need to take one important thing into consideration: what would a CAIO’s job in your organization actually be?

There is no doubt that “Artificial Intelligence” has already become an integral part of our daily lives, both at home and at work. In just a few years, machine learning and other technologies that power various AI applications evolved from highly complicated and prohibitively expensive research prototypes to a variety of specialized solutions available as a service. From image recognition and language processing to predictive analytics and intelligent automation - a broad range of useful AI-powered tools is now available to everyone.

Just like the cloud a decade ago (and Big Data even earlier), AI is universally perceived as a major competitive advantage, a solution for numerous business challenges and even as an enabler of new revenue streams. However, does it really imply that every organization needs an “AI strategy” along with a dedicated executive to implement it?

Sure, there are companies around the world that have made AI a major part of their core business. Cloud service providers, business intelligence vendors or large manufacturing and logistics companies – for them, AI is a major part of the core business expertise or even a revenue-generating product. For the rest of us, however, AI is just another toolkit, powerful and convenient, to address specific business challenges.

Whether your goal is to improve the efficiency of your marketing campaign, to optimize equipment maintenance cycle or to make your IT infrastructure more resilient against cyberattacks – a sensible strategy to achieve such a goal never starts with picking up a single tool. Hiring a highly motivated AI specialist to tackle these challenges would have exactly the opposite effect: armed with a hammer, a person is inevitably going to treat any problem as if it were a nail.

This, of course, by no means implies that companies should not hire AI specialists. However, just like AI itself was never intended to replace humans, “embracing the AI” should not overshadow the real business goals. We only need to look at Blockchain for a similar story: just a couple of years ago, adding a Blockchain to any project seemed like a sensible goal regardless of any potential practical gains. Today, the technology has already passed the peak of inflated expectations, and it finally seems that the fad is transitioning to the productive phase, at least in those usage scenarios where the lack of reliable methods of establishing distributed trust was indeed a business challenge.

Another aspect to consider is the sheer breadth of the AI frontier, both from the AI expert’s perspective and from the point of view of a potential user. Even within such a specialized application area as cybersecurity, the choice of available tools and strategies can be quite bewildering. Looking at the current AI landscape as a whole, one cannot help but realize that it encompasses many complex and quite unrelated technologies and problem domains. Last but not least, consider the new problems that AI itself is creating: many of them lie well outside the technology scope and come with social, ethical or legal implications.

In this regard, coming up with a single strategy that is supposed to incorporate so many disparate factors and can potentially influence every aspect of a company’s core business goals and processes seems like a leap of faith that not many organizations are ready to make just yet. Maybe a more rational approach towards AI is the same as with the cloud or any other new technology before it: identify the most important challenges your business is facing, set reasonable goals, find the experts who can help identify the most appropriate tools for achieving them, and work together on delivering tangible results. Even better if you can collaborate with (probably different) experts on outlining a long-term AI adoption strategy that ensures your individual projects and investments align with each other, avoiding wasted time and resources. In other words: Think Big, Start Small, Learn Fast.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

AI in the Auto Industry Is About More Than Self-Driving Cars

Car buyers gathering at the Frankfurt Motor Show last month will have witnessed the usual glitz as car makers went into overdrive launching new models, including of course many new electric vehicles reflecting the big changes in the industry. Behind the glamour of the show, the world’s biggest car makers are investing heavily in new technologies to remain competitive, including Artificial Intelligence (AI) and Machine Learning. While perfecting algorithms for self-driving cars is a longer-term goal and grabs the headlines, much is being done with AI to improve the design, manufacture and marketing of cars.

In an industry characterized by high costs and low margins, car makers (OEMs) are turning to AI to improve efficiency, improve quality control and understand their markets and buyers better. Five years ago, Volkswagen opened its Data:Lab in Munich. It is now the company’s main research base for AI, with around 80 IT specialists, data scientists, programmers, physicists, and mathematicians researching and developing applications in machine learning and AI. Volkswagen goes so far as to say that AI will fundamentally change the company’s value chain, as it will now begin, not end, with the production of the vehicle.

An area of focus is applying AI to market research and marketing to pre-empt changes in demand and consumer choice outside of OEMs’ traditional 7-year model cycle. Any manufacturer that can be ahead of the curve in marketing will have a significant advantage. Volkswagen is using AI to create precise market forecasts containing a multitude of variables, including economic development, household income, customer preferences, model availability and price.

With this kind of insight, it is possible that the company could configure model choice (specs, optional extras, engine sizes etc) and order production to meet buyer preferences on a smaller regional or even hyper local level. For example, a Golf special edition that appeals to specific buyers in London or an Amarok truck configured for the needs of farmers in the Rhineland. 

Volkswagen’s German rivals are also scaling investment in AI technologies and are keen to be seen doing so with positive statements on their websites, and active recruitment drives to get the best developer talent. All three of Germany’s OEMs are aware that they need to be technological leaders in IT as much as engineering as cars become more connected and software driven.

At its factory in Stuttgart, Daimler has created a knowledge base that stores all the existing vehicle designs at the company, which any new engineer can tap into. More than this, the algorithm has been trained to suggest, in certain circumstances, that a new engineer contact a more experienced colleague for human advice. A good example of how AI can be trained to interact with human workers.

At the final inspection area at BMW’s Dingolfing plant, an AI application compares the vehicle order data with a live image of the model designation of the newly produced car. If the live image and order data don’t correspond, for example if a designation is missing, the final inspection team receives a notification. This frees up human employees to work elsewhere. Algorithms are also being taught to tell the difference between a hairline crack in sheet metal and simple dust particles, something that is beyond the scope of human eyesight. Meanwhile, in paint shops, AI and analytics applications offer the potential to detect sources of error at much earlier stages of the process. If no dust attaches to the car body before painting in the first place, none needs to be polished off later.
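
The final-inspection check described at Dingolfing is, at its core, a comparison between what was ordered and what the vision system actually detects on the finished car. The sketch below illustrates that logic in Python with invented field names; the image-recognition step is assumed to have already produced a set of detected badges:

    def inspect_designations(order: dict, detected: set[str]) -> list[str]:
        """Compare the ordered designations/badges against what the camera detected
        and return human-readable discrepancies for the final inspection team."""
        expected = set(order["designations"])
        issues = [f"missing badge: {badge}" for badge in expected - detected]
        issues += [f"unexpected badge: {badge}" for badge in detected - expected]
        return issues

    # Hypothetical order data and live-image recognition result.
    order = {"vin": "WBA0000000X000000", "designations": {"320d", "xDrive", "M Sport"}}
    detected = {"320d", "M Sport"}   # the vision system did not find the "xDrive" badge

    for issue in inspect_designations(order, detected):
        print(f"Notify inspection team ({order['vin']}): {issue}")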

While these examples of AI applications may lack the sci-fi appeal of self-driving cars, they are presently more important to the future survival of the car industry, not just in Germany but across the globe. AI is being used effectively to meet the three fundamental challenges of the industry’s survival: improving quality, reducing cost and waste, and meeting customer demands.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

Cognitive! - Entering a New Era of Business Models Between Converging Technologies and Data

Digitalization, or more precisely the "digital transformation", has led us to the "digital enterprise", which strives to deliver on its promise to leverage previously unused data, and the information it contains, for the benefit of the enterprise and its business. And although both terms can certainly be described as buzzwords, they have found their way into our thinking and into all kinds of publications, so they will probably be with us for some time to come.

Thought leaders, analysts, software and service providers and practically everyone in between have been proclaiming the "cognitive enterprise" for several months now. This concept – and the mindset associated with it – promises to use the information of the already digital company to achieve productivity, profitability and a high level of innovation, and it aims at creating and evolving next-generation business models at the intersection of converging technologies and data.

So what is special about this "cognitive enterprise"? Defining it usually starts with the idea of applying cognitive concepts and technologies to data in practically all relevant areas of a corporation. Data here includes open data, public data, subscribed data, enterprise-proprietary data, pre-processed data, structured and unstructured data – or simply Big Data. The technologies involved include the likes of Artificial Intelligence (AI), more specifically Machine Learning (ML), Blockchain, Virtual Reality (VR), Augmented Reality (AR), the Internet of Things (IoT), ubiquitous communication with 5G, and individualized 3D printing.

As of now, mainly concepts from AI and machine learning are grouped together as "cognitive", although a uniform understanding of the underlying concepts is often still lacking. These technologies have already proven that they can do the "heavy lifting", either on behalf of humans or autonomously. They increasingly understand, reason and interact, e.g. by engaging in meaningful conversations, and thus deliver genuine value without human intervention.

Automation, analytics and decision-making, customer support and communication are key target areas, because many tasks in today’s organizations are in fact repetitive, time-consuming, dull and inefficient. The focus (ideally) lies on relieving and empowering the workforce wherever a task can be executed by, for example, bots or Robotic Process Automation. Presumably every organization would agree that its staff are better than bots and can perform far more meaningful tasks, so these measures are intended to benefit both the employee and the company.

But this is only the starting point. A cognitive enterprise will be interactive in many ways, interacting not only with its customers but also with other systems, processes, devices, cloud services and peer organizations. As a result it will also be adaptive, as it is designed to learn from data, even in an unattended manner. The key goal is to foster agility and continuous innovation through cognitive technologies by embracing and institutionalizing a culture that perpetually changes the way an organization works and creates value.

Beyond the fact that journalists, marketing departments and even analysts tend to outdo each other in the creation and propagation of hype terms, where exactly is the difference between a cognitive and a digital enterprise? Do we need yet another term, particularly for the use of machine learning as an evidently digital technology?

I don't think so. We are witnessing the evolution, advancement, and ultimately the application of exactly these very digital technologies that lay the foundation of a comprehensive digital transformation. However, the added value of the label "cognitive" is negligible.   

But regardless of what you, I or the buzzword industry ultimately decide to call it, far more relevant are the implications and challenges of this consistent implementation of digital transformation. In my opinion, two aspects must not be underestimated:

First, this transformation is either approached in its entirety or is better not attempted at all; there is nothing in between. If you start down this path, it is not enough to quickly look for a few candidates for a bit of Robotic Process Automation. There will be no successful, "slightly cognitive" companies: that would waste the real potential of a comprehensive redesign of corporate processes and be worth little more than a placebo. Rather, it is necessary to model internal knowledge and to gather and interconnect data. Jobs and tasks will change, become obsolete and be replaced by new and more demanding ones (otherwise they could, again, be executed by a bot).

Second, the importance of managing constant organizational change and restructuring is often overlooked. After all, the transformation to a Digital/Cognitive Enterprise is far from being only about AI, Robotic Process Automation or technology. Rather, the focus has to be put on the individual as well, i.e. each member of the entire workforce (both internal and external). Established processes have to be managed, adjusted or even re-engineered, and this also applies to processes affecting partners, suppliers and thus any kind of cooperation or interaction.

One of the most important departments in this future will be the human resources department and specifically talent management. Getting people on board and retaining them sustainably will be a key challenge. In particular, this means providing them with ongoing training and enabling them to perform qualitatively demanding tasks in a highly volatile environment. And it is precisely such an extremely responsible task that will certainly not be automated even in the long term...

When Cyber "Defense" is no Longer Enough

The days when having just an on-premises Identity and Access Management (IAM) system was enough are long gone. With organizations moving to hybrid on-premises, cloud, and even multi-cloud environments, the number of cyber-attacks is growing. The types and sophistication of these attacks are continually changing to get around any new security controls put in place. In fact, it is much easier for the cyber attacker to change tactics than it is for organizations to bring in new solutions to mitigate current attack vulnerabilities.

Organizations must realize that they will never be 100% secure, and there will always be attacks on their systems. Don't get me wrong: I'm not saying to give up on continually assessing and updating an organization's security controls to block the latest and most significant attack vectors. But instead, take the next step and plan for the worst. Organizations should integrate their Business Continuity Management (BCM) with their cybersecurity initiatives. This means being able to detect, respond, recover, and improve from any attack that potentially brings down their business.

Recently, Microsoft announced the global availability of Windows Virtual Desktop (WVD) on Azure. WVD not only provides the ability to deploy and scale Azure-based virtualization of Windows 10 multi-session, Windows Server, and Windows 7 desktops, but it also provides something that is sometimes overlooked: it gives the enterprise the ability to recover from being compromised when attacked, at least from the desktop endpoint perspective. Through Microsoft's acquisition of FSLogix and its solutions, WVD takes advantage of virtualization and containerization technologies. Using these technologies, Microsoft ensures that its Windows desktops and servers can be powered up or restarted in a consistent and safe state with respect to user profiles and applications, adding to the BCM and “recover from an attack” capabilities businesses must implement today. FSLogix does this by bringing both profile and Office containers to the table.
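
To make the recovery idea concrete, here is a deliberately abstract Python sketch of the pattern described above: discard the compromised session host, reprovision it from a known-good image, and reattach the user's profile container so that state survives. The helper functions are hypothetical placeholders, not Azure or FSLogix APIs:

    # Conceptual recovery flow only; decommission(), provision_from_image() and
    # attach_profile_container() are hypothetical placeholders, not real Azure/FSLogix calls.

    def decommission(host_id: str) -> None:
        print(f"Quarantining and deleting session host {host_id}")

    def provision_from_image(image: str) -> str:
        print(f"Provisioning new session host from image '{image}'")
        return "host-replacement-01"

    def attach_profile_container(host: str, share: str) -> None:
        print(f"Attaching profile container share {share} to {host}")

    def recover_desktop(host_id: str, golden_image: str, profile_share: str) -> str:
        """Replace a compromised session host with a clean one and reattach user state."""
        decommission(host_id)                              # isolate and remove the compromised host
        new_host = provision_from_image(golden_image)      # rebuild from a known-good image
        attach_profile_container(new_host, profile_share)  # user profiles live outside the OS disk
        return new_host

    recover_desktop("host-compromised-17", "win10-multisession-golden", r"\\fileserver\profiles")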

So, when reviewing cybersecurity and BCM strategies, organizations shouldn’t take the view of “if”, but “when” their systems will be compromised, and their data breached. Then ask themselves how they will recover.

KuppingerCole Principal Analyst Martin Kuppinger emphasized the changing role of the CISO recently in a blog and also covered that topic in a webinar on cybersecurity budgeting which you can watch below. To get a more hands-on approach, see below for our Incident Response Boot Camp at Cybersecurity Leadership Summit 2019.
