KuppingerCole Blog

Cyber-Attacks: Why Preparing to Fail Is the Best You Can Do

Nowadays, it seems that no month goes by without a large cyber-attack on a company becoming public. These attacks usually affect not only the revenue of the attacked company but its reputation as well. Nevertheless, the topic is still completely underestimated in some companies. In the United Kingdom, 43% of businesses experienced a cybersecurity breach in the past twelve months, according to the 2018 UK Cyber Security Breaches Survey. On the other hand, 74% say that cybersecurity is a high priority for them. So where is the gap, and why does it exist? It lies between the decision to prioritize cybersecurity and the reality of handling cyber incidents: only 27% of UK businesses have a formal cyber incident management process. It is critical to have a well-prepared plan, because cyber incidents will happen to you. Does your company have one?

How do cyber-attacks affect your business?

To understand the need for a formal process and the potential threats, a company must be aware of the impact an incident could have. It could lead to damage, the loss of customers or, in the worst case, the insolvency of the whole company. In many publicly known data breaches, like those at Facebook or the PlayStation Network, the companies needed significant time to recover; some would say they still haven't. The loss of brand image, reputation and trust can be enormous. To protect your company from such critical issues and to be able to handle incidents in a reasonable way, a good cyber incident plan must be implemented.

The characteristics of a good plan for cyber incidents

Such a plan should describe the processes, actions and steps that must be taken after a cyber incident. The first step is categorization, which is essential to handle an incident in a well-defined way. When an incident is identified, it must be clear who will be contacted to react to it. This person or team is then responsible for categorizing the incident and estimating its impact on the company.

The next step is to identify in detail which data has been compromised and what immediate actions can be taken to limit the damage. Subsequently, the plan must describe how to contact the staff needed and what they must do to prevent further harm and to recover. Responsibilities have to be allocated clearly to prevent a duplication of efforts when time is short. In a recent webinar, KuppingerCole Principal Analyst Martin Kuppinger made the point that IT teams responsible for cybersecurity should shift their focus from protection to recovery. While a lot of cybersecurity investment still goes into protection, this is no longer enough. “You need to be able to restart your business and critical services rapidly,” Martin explained.
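The categorization and escalation steps described above can be sketched as a simple data structure. This is purely an illustrative outline, not a prescribed standard; the severity levels, roles and escalation mappings below are assumptions that every company must define for itself.

```python
# Illustrative incident triage sketch; severity levels, roles and
# escalation rules here are assumptions, not a formal standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Who gets contacted at each severity level (hypothetical roles).
SEVERITY_CONTACTS = {
    "low": ["it-helpdesk"],
    "medium": ["security-team"],
    "high": ["security-team", "ciso", "cco"],      # communications informed early
    "critical": ["security-team", "ciso", "cco", "legal", "board"],
}

@dataclass
class Incident:
    description: str
    severity: str  # one of the SEVERITY_CONTACTS keys
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def escalation_list(self):
        """Return the roles to notify for this incident's severity."""
        return SEVERITY_CONTACTS[self.severity]

incident = Incident("Customer database accessed from unknown host", "high")
```

The point of such a structure is that the contact chain is decided in advance, not improvised under pressure.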

Cyber-attacks are not an IT-only job

Apart from the necessary actions described above, which will be executed by IT and cybersecurity professionals, a process must be defined that lays out how corporate communications deals with an attack. In big companies there is an explicit top-down information chain. If a grave cyber-attack occurs, the Chief Communications Officer (CCO) has to be informed. Imagine the CCO being called by a journalist in the morning without knowing anything about the incident. This puts the company in a weak position, where it loses control over crisis communication. Depending on the severity of the incident, a press release must be sent out and customers must be informed. It is always better when companies are proactive and show the public that they care, instead of waiting until public pressure forces them to act.

Can companies deal with cybercrime all by themselves?

When it comes to personal user data being compromised, cyber-attacks can have legal consequences. In that case, it is wise to consult internal or external lawyers. External support from dedicated experts for specific cyber incidents is usually part of an action plan, too. To react as quickly as possible, a list of external experts, categorized by topic, should be maintained, including contact persons and their availability.

Since cyber-attacks can never be entirely prevented, it is of utmost importance to have a plan and to know exactly how to react. This can prevent many of the mistakes that are often made after an incident has already been identified. In the end, it can prevent the company from losing customer confidence and revenue.

To understand and learn this process, to build necessary awareness and know how to deal with cybercrime in detail, you can attend our Incident Response Boot Camp on November 12 in Berlin.

Authentication and Education High on CISO Agenda

Multifactor authentication and end-user education emerged as the most common themes at a CISO forum with analysts held under the Chatham House Rule in London.

Chief information security officers across a wide range of industry sectors agree on the importance of multifactor authentication (MFA) to extending desktop-level security controls to an increasingly mobile workforce, with several indicating that MFA is among their key projects for 2020 to protect against credential stuffing attacks.

In highly-targeted industry sectors, CISOs said two-factor authentication (2FA) was mandated at the very least for mobile access to corporate resources, with special focus on privileged account access, especially to key databases.

Asked what IT suppliers could do to make life easier for security leaders, CISOs said providing MFA with everything was top of the list, along with full single sign on (SSO) capability, with some security leaders implementing or considering MFA for customer/consumer access to accounts and services.

The pursuit of improved user experience along with more secure access appears to have led some security leaders to standardise on Microsoft products and services that enable collaboration, MFA and SSO, reducing reliance on username/password combinations alone for access control.

Training end users

End user security education and training is another key area of attention for security leaders to increase the likelihood that any gaps in security controls will be bridged by well-informed users.

However, there is also a clear understanding that end users cannot be held responsible as a front line of defense. There needs to be a zero-blame policy to encourage end users to engage and participate in security, and end users need to be supported by appropriate security controls and effective incident detection and response processes. Communication is also essential to ensure end users understand the cyber threats they face at home and at work, as well as the importance of each security control.

Supporting end users

CISOs are helping to protect end users by implementing browser protections and URL filtering to prevent access to malicious sites, improving email defenses to protect users from spoofing, phishing and spam, introducing tools that make it easy to report suspected phishing, and conducting regular phishing simulation exercises to keep end users vigilant.

The Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol, designed to help verify the authenticity of the sender's identity, is also being used by some CISOs to drive user awareness by highlighting emails from external sources.
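For context, a DMARC policy is published as a DNS TXT record on the `_dmarc` subdomain of the sending domain. The record below is a generic illustration; the domain, policy choice (`p=quarantine`) and report address are placeholders, not a recommendation for any specific organization.

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Receiving mail servers that support DMARC use this record to decide how to treat messages that fail SPF or DKIM alignment checks and where to send aggregate reports.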

Some security leaders believe there should be a special focus on board members and other senior executives in terms of anti-phishing training and awareness, because while this group is likely to be targeted most often by phishing and spear phishing attacks, they are less likely to be attuned to the dangers and the warning signs.

Some CISOs have also provided password managers to help end users choose and maintain strong, unique passwords, reducing the number of passwords that each person is required to remember.

Encouraging trends

It is encouraging that security leaders are focusing on better authentication by moving to MFA and that they understand the need to support end users, not only with security awareness and education, but the necessary security controls, processes and capabilities, including effective email and web filtering, network monitoring, incident detection and response, and patch management.

If you want to deep dive into this topic, be sure to read our Leadership Compass Consumer Authentication. For unlimited access to all our research, buy your KC PLUS subscription here.

Nok Nok Labs Extends FIDO-Based Authentication

Nok Nok Labs has made FIDO certified multi-factor authentication – which seeks to eliminate dependence on password-based security - available across all digital channels by adding a software development kit (SDK) for smart watches to the latest version of its digital authentication platform, the Nok Nok S3 Authentication Suite.

In truth, the SDK is only for Apple watchOS, but it is the first - and currently only - SDK that does all the heavy lifting for developers seeking to enable FIDO-certified authentication via smart watches that do not natively support FIDO. It is a logical starting point given Apple's strong position in the smart watch market (just over 50%), with SDKs for other smart watch operating systems expected to follow.

This means that business to consumer organizations can now use the Nok Nok S3 Authentication Suite to enable strong, FIDO-based authentication and access policy controls for Apple Watch apps as well as mobile apps, mobile web and desktop web applications.

The new SDK, like its companion SDKs from Nok Nok, provides a comprehensive set of libraries and application program interfaces (APIs) for software developers to enable FIDO certified multi-factor authentication that uses public and private key pairs, making it resistant to man-in-the-middle attacks because the private key never leaves the authenticator, or in this case, the smart watch.
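The public/private key principle described above can be sketched with a toy challenge-response signature scheme. To be clear, this is not FIDO's actual protocol (FIDO uses standard curves such as P-256 plus attestation and metadata); the tiny parameters and Schnorr-style scheme below are assumptions chosen purely to illustrate why the private key never has to leave the authenticator.

```python
# Toy Schnorr-style signature over a tiny prime-order group.
# Illustrative only: real FIDO authenticators use standardized
# elliptic curves and hardware-backed key storage.
import hashlib
import secrets

P, Q, G = 179, 89, 4  # demo parameters: p = 2q + 1, g generates the order-q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private key: stays on the "authenticator"
    return x, pow(G, x, P)             # public key: registered with the server

def _hash(r: int, challenge: bytes) -> int:
    return int.from_bytes(
        hashlib.sha256(r.to_bytes(2, "big") + challenge).digest(), "big") % Q

def sign(x, challenge: bytes):
    """Authenticator signs the server-issued challenge with its private key."""
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = _hash(r, challenge)
    return e, (k + x * e) % Q

def verify(y, challenge: bytes, sig) -> bool:
    """Server checks the signature using only the public key."""
    e, s = sig
    r = (pow(G, s, P) * pow(y, -e, P)) % P   # g^s * y^(-e) == g^k
    return _hash(r, challenge) == e

priv, pub = keygen()
challenge = secrets.token_bytes(16)           # nonce issued by the server
assert verify(pub, challenge, sign(priv, challenge))
```

Because only the signed response crosses the network, a man-in-the-middle who captures it cannot reuse it for a different challenge, and the private key is never exposed.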

As global smart watch sales continue to grow, the devices are becoming an increasingly important channel for digital engagement, particularly with 24 to 35-year-olds. At the same time, smart watch usage has grown beyond fitness applications to include banking, productivity apps such as Slack, ecommerce such as Apple Pay, and home security such as NEST.

A further driver for the use of smart watch applications is the fact that consumers often find it easier to access information on a watch without the need for passwords or one-time passcodes, especially on smart watches like the Apple Watch that do not rely on having a smartphone nearby.

The move is a strategic one for Nok Nok because it not only satisfies customer requirements, but also fulfils one of the key goals for Nok Nok as a company and the FIDO Alliance as a whole.

From the point of view of S3 Authentication Suite end-user organizations, the new SDK will make it easier to make applications available to consumers on smart watches as a client platform in its own right, and to meet the security and privacy requirements of both smart watch users and global, regional and industry-specific regulations, especially in highly regulated industries such as telecommunications and financial services.

In addition, the SDK for smart watches offers end-user organisations an opportunity to simplify their backend infrastructure by having a single authentication method for all digital channels, enabled by a unified backend authentication infrastructure, thereby reducing cost through lower complexity and operational overhead.

From a Nok Nok point of view, the SDK delivers greater value to existing customers and is likely to win new customers as organisations, particularly in the financial services sector, seek to engage consumers across all available digital channels.

Enabling the same strong FIDO-backed authentication across all digital channels is also a key goal of Nok Nok, both as a company and as a founder member of the FIDO (Fast IDentity Online) Alliance.

The FIDO Alliance is a non-profit consortium of technology industry partners – including Amazon, Facebook, Google, Microsoft and Intel – working to establish standards for strong authentication to address the lack of interoperability among strong authentication devices as well as the problems users face with creating and remembering multiple usernames and passwords.

The FIDO Alliance plans to change the nature of authentication by developing specifications that define an open, scalable, interoperable set of mechanisms that supplant reliance on passwords to securely authenticate users of online services via FIDO-enabled devices.

The new S3 SDK from Nok Nok for Apple watchOS offers a stronger authentication alternative to solutions that typically store OAuth tokens or other bearer tokens in their smart watch applications. These tokens provide relatively weak authentication and need to be renewed frequently because they can be stolen.

In contrast, FIDO-based authenticators provide strong device binding for credentials, providing greater ease of use as well as additional assurance that applications are being accessed only by the smart watch owner (authorized user).

While commercially a strategic move for Nok Nok to be the first mover in enabling strong FIDO-based authentication via its S3 Authentication Suite, the real significance of the new SDK for Apple Watches is that it moves forward the IT industry’s goal of achieving stronger authentication and reducing reliance on password-based security.

AI for Governance and Governance of AI

Artificial Intelligence is a hot topic, and many organizations are now starting to exploit these technologies; at the same time, there are many concerns about the impact this will have on society. Governance sets the framework within which organizations conduct their business in a way that manages risk and compliance and ensures an ethical approach. AI has the potential to improve governance and reduce costs, but it also creates challenges that themselves need to be governed.

The concept of AI is not new, but cloud computing has provided the access to data and the computing power needed to turn it into a practical reality. However, while there are some legitimate concerns, the current state of AI is still a long way from the science fiction portrayal of a threat to humanity. Machine Learning technologies provide significantly improved capabilities to analyze large amounts of data in a wide range of forms. While this poses a threat of “Big Other”, it also makes them especially suitable for spotting patterns and anomalies, and hence potentially useful for detecting fraudulent activity, security breaches and non-compliance.

AI covers a range of capabilities, including ML (Machine Learning), RPA (Robotic Process Automation) and NLP (Natural Language Processing), amongst others. But AI tools are simply mathematical processes; they come in a wide variety of forms, each with relative strengths and weaknesses.

ML (Machine Learning) is based on artificial neural networks, inspired by the way in which animal brains work. These networks of learning algorithms can be trained to perform tasks using data as examples, without needing any preprogrammed rules.
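The idea of learning from examples rather than rules can be shown with the simplest possible artificial neuron, a perceptron, here trained on the logical OR function. This is a minimal sketch for illustration; the learning rate and epoch count are arbitrary choices, and real neural networks stack many such units in layers.

```python
# A single perceptron learning the logical OR function from examples.
# No rules are programmed: the weights are adjusted from labeled data.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data: the four input/output examples of OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces OR, even though no rule for OR was ever written down; the behavior was induced from the examples alone.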

For ML training to be effective it needs large amounts of data and acquiring this data can be problematic. The data may need to be obtained from third parties and this can raise issues around privacy and transparency. The data may contain unexpected biases and, in any case, needs to be tagged or classified with the expected results which can take significant effort.

One major vendor successfully applied this approach to detect and protect against identity-led attacks. This was not a trivial project and took 12 people over 4 years to complete. However, the results were worth the cost, since the system is now much more effective than the hand-crafted rules that were previously used. It is also capable of automatically adapting to new threats as they emerge.

So how can this technology be applied to governance? Organizations are faced with a tidal wave of regulation and need to cope with the vast amount of data that is now regularly collected for compliance. The current state of AI technologies makes them very suitable to meet these challenges. ML can be used to identify abnormal patterns in event data and detect cyber threats while in progress. The same approach can help to analyze the large volumes of data collected to determine the effectiveness of compliance controls. Its ability to process textual data makes it practical to process regulatory texts to extract the obligations and compare these with the current controls. It can also process textbooks, manuals, social media and threat sharing sources to relate event data to threats.
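The anomaly-detection idea mentioned above can be illustrated with a deliberately simple statistical sketch. This is not any vendor's product; the event stream, threshold and the use of a robust modified z-score (based on the median absolute deviation) are all illustrative assumptions, and production systems use far richer models.

```python
# Illustrative only: flag abnormal spikes in an event stream
# (here, failed logins per hour) with a robust modified z-score.
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    med = median(counts)
    mad = median([abs(c - med) for c in counts])  # median absolute deviation
    if mad == 0:
        return []
    # 0.6745 scales the MAD so the score is comparable to a standard z-score.
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

hourly_failed_logins = [12, 9, 11, 10, 13, 8, 240, 11, 10, 12]
print(flag_anomalies(hourly_failed_logins))  # the spike at hour 6 stands out
```

A median-based score is used here because a single large spike would inflate a plain mean and standard deviation enough to mask itself.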

However, the system needs to be trained by regulatory professionals to recognize the obligations in regulatory texts and to extract these into a common form that can be compared with the existing obligations documented in internal systems to identify where there is a match. It also needs training to discover existing internal controls that may be relevant or, where there are no controls, to advise on what is needed.

Linked with a conventional GRC system, this can augment existing capabilities and help to consolidate new and existing regulatory requirements into a central repository used to classify complex regulations and help stakeholders across the organization process large volumes of regulatory data. It can help to map regulatory requirements to internal taxonomies, business structures and basic GRC data, thus connecting regulatory data to key risks, controls and policies, and linking that data to the overall business strategy.

Governance also needs to address the ethical challenges that come with the use of AI technologies. These include unintentional bias, the need for explanation, avoiding misuse of personal data and protecting personal privacy as well as vulnerabilities that could be exploited to attack the system.

Bias is a very current issue, with bias related to gender and race among the top concerns. Training depends upon the data used, and many datasets contain an inherent, if unintentional, bias. For example, see the 2018 paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. There are also subtle differences between human cultures, and it is very difficult for humans to develop AI systems that are culturally neutral. Great care is needed in this area.

Explanation – In many applications, it may be very important to provide an explanation for conclusions reached and actions taken. Rule-based systems can provide this to some extent but ML systems, in general, are poor at this. Where explanation is important some form of human oversight is needed.

One of the driving factors in the development of ML is the vast amount of data that is now available, and organizations would like to get maximum value from this. Conventional analysis techniques are very labor-intensive, and ML provides a potential solution to get more from the data with less effort. However, organizations need to beware of breaching public trust by using personal data that may have been legitimately collected in ways for which they have not obtained informed consent. Indeed, this is part of the wider issue of surveillance capitalism - Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.

ML systems, unlike humans, do not use understanding; they simply match patterns. This makes them open to attacks using inputs that are invisible to humans. Recent examples of reported vulnerabilities include one where the autopilot of a Tesla car was tricked into changing lanes into oncoming traffic by stickers placed on the road. A wider review of this challenge can be found in A survey of practical adversarial example attacks.

In conclusion, AI technologies, and ML in particular, provide the potential to assist governance by reducing the costs associated with onboarding new regulations, managing controls and processing compliance data. The exploitation of AI within organizations needs to be well governed to ensure that it is applied ethically and to avoid unintentional bias and the misuse of personal data. The ideal areas for the application of ML are those with a limited scope where explanation is not important.

For more information attend KuppingerCole’s AImpact Summit 2019.

If you liked this text, feel free to browse our Focus Area: AI for the Future of Your Business for more related content.

Akamai to Block Magecart-Style Attacks

Credit card data thieves, commonly known as Magecart groups, typically use JavaScript code injected into compromised third-party components of e-commerce websites to harvest data from shoppers to commit fraud.

A classic example was a Magecart group's compromise of Inbenta Technologies' natural language processing software, used by UK-based ticketing website Ticketmaster to answer user questions.

The Magecart group inserted malicious JavaScript into the Inbenta JavaScript code, enabling the cyber criminals to harvest all the customer credit card data submitted to the Ticketmaster website.  

As a result, Ticketmaster is facing a £5m lawsuit on behalf of Ticketmaster customers targeted by fraud as well as a potential GDPR fine by the Information Commissioner’s Office, which is yet to publish the findings of its investigation.

A data breach at British Airways linked to similar tactics, potentially by a Magecart group, resulted in the Information Commissioner's Office announcing in July 2019 that it is considering a fine of more than €200m for the company.

According to security researchers, the breach of Ticketmaster customer data was part of a larger campaign that targeted at least 800 websites.

This is a major problem for retailers, with an Akamai tool called Request Map showing that more than 90% of content on most websites comes from third-party sources, over which website owners have little or no control.

These scripts effectively give attackers direct access to website users, and once they are loaded in the browser, they can link to other malicious content without the knowledge of website operators.

Current web security offerings are unable to address and manage this problem, and a Content Security Policy (CSP) alone is inadequate to deal with potentially thousands of scripts running on a website. Akamai is therefore bringing a new product to market dedicated to helping retailers reduce the risk posed by the third-party links and elements of their websites used for things like advertising, customer support and performance management.
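For context, a CSP is delivered as an HTTP response header that allow-lists the origins from which a browser may load scripts. The header below is a generic illustration with placeholder origins; the difficulty in practice is keeping such a list accurate for the dozens of third-party origins a typical retail site loads, and a CSP cannot tell a legitimate allow-listed script from the same script after it has been tampered with.

```
Content-Security-Policy: script-src 'self' https://cdn.example-shop.com https://tags.example-partner.com
```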

The new service dubbed Page Integrity Manager has completed initial testing and is now entering the beta testing phase with up to 25 volunteer customers with a range of different data types.

The aim of Akamai Page Integrity Manager is to enable website operators to detect and stop third-party breaches before their users are impacted. The service is designed to discover and assess the risk of new or modified JavaScript, control third-party access to sensitive forms or data fields using machine learning to identify relevant information, enable automated mitigation using policy-based controls, and block bad actors using Akamai threat intelligence to improve accuracy.

The service works by inserting a JavaScript snippet into customer web pages to analyze all content received by the browser from the host organization and third parties. This identifies and blocks any scripts trying to access and exfiltrate financial or other personal data (form-jacking), and notifies the website operator.
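One small part of that idea, comparing the script origins actually observed in a page against the site's known allow list, can be sketched as follows. This is a heavily simplified illustration, not Akamai's implementation, which also performs behavioral analysis, machine learning and threat-intelligence lookups; the origins below are placeholders.

```python
# Illustrative sketch: report script URLs whose origin is not on the
# site's allow list. Real page-integrity tools do far more than this.
from urllib.parse import urlsplit

# Hypothetical allow list of hosts the site operator expects to serve scripts.
ALLOWED_HOSTS = {"www.example-shop.com", "cdn.example-shop.com"}

def unexpected_scripts(script_urls):
    """Return the script URLs served from hosts outside the allow list."""
    return [u for u in script_urls if urlsplit(u).netloc not in ALLOWED_HOSTS]

observed = [
    "https://www.example-shop.com/app.js",
    "https://cdn.example-shop.com/checkout.js",
    "https://evil.example-attacker.net/skim.js",   # injected skimmer
]
print(unexpected_scripts(observed))
```

The hard part in reality is not this comparison but discovering what the legitimate set of origins is in the first place, and detecting when an allow-listed script starts behaving maliciously.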

Third-party JavaScript massively increases the attack surface and ramps up the risk for website operators and visitors alike, with no practical and effective way for website operators to detect the threat and mitigate the risk. That is set to change with the commercial availability of Akamai's Page Integrity Manager, expected in early 2020.

Microsoft Partnership Enables Security at Firmware Level

Microsoft has partnered with Windows PC makers to add another level of cyber attack protection for users of Windows 10 to defend against threats targeting firmware and the operating system.

The move is in response to attackers developing threats that specifically target firmware, as the IT industry has built more protections into operating systems and connected devices. This trend appears to have been gaining popularity since security researchers at ESET found that Russian espionage group APT28, also known as Fancy Bear, Pawn Storm, Sofacy Group, Sednit, and Strontium, was exploiting vulnerabilities in firmware to distribute the LoJax malware.

The LoJax malware, which targeted European government organizations, exploited a firmware vulnerability to effectively hide inside the computer's flash memory. As a result, the malware was difficult to detect and able to persist even after an operating system reinstall, because whenever the infected PC booted up, the malware would re-execute.

In a bid to gain more Apple-like control over the hardware on which its Windows operating system runs, Microsoft has worked with PC and chip makers on an initiative dubbed “Secured-core PCs” to apply the security best practices of isolation and minimal trust to the firmware layer. The aim is to protect Windows devices from attacks that exploit the fact that firmware has a higher level of access and higher privileges than the Windows kernel, which means attackers can undermine protections such as secure boot and other defenses implemented by the hypervisor or operating system.

The initiative appears to be aimed at industries that handle highly-sensitive data, including personal, financial and intellectual property data, such as financial services, government and healthcare rather than the consumer market. However, consumers using new high-end hardware like the Surface Pro X and HP's Dragonfly laptops will benefit from an extra layer of security that isolates encryption keys and identity material from Windows 10.

According to Microsoft, Secured-core PCs combine identity, virtualization, operating system, hardware and firmware protection to add another layer of security underneath the operating system. To prevent firmware attacks, they use new hardware Dynamic Root of Trust for Measurement (DRTM) capabilities from AMD, Intel and Qualcomm to implement Microsoft's System Guard Secure Launch as part of Windows Defender in Windows 10.

This effectively removes trust from the firmware. Although Microsoft introduced Secure Boot in Windows 8 to mitigate the risk posed by malicious bootloaders and rootkits that relied on Unified Extensible Firmware Interface (UEFI) firmware, the firmware itself is trusted to verify the bootloaders, which means that Secure Boot on its own does not protect from threats that exploit vulnerabilities in that trusted firmware.

The DRTM capability also helps to protect the integrity of the virtualization-based security (VBS) functionality implemented by the hypervisor from firmware compromise. VBS then relies on the hypervisor to isolate sensitive functionality from the rest of the OS, which helps to protect the VBS functionality from malware that may have infected the normal OS, even with elevated privileges. According to Microsoft, protecting VBS is critical because it is used as a building block for important operating system security capabilities such as Windows Defender Credential Guard, which protects against malware maliciously using OS credentials, and Hypervisor-protected Code Integrity (HVCI), which ensures that a strict code integrity policy is enforced and that all kernel code is signed and verified.

It is worth noting that the Trusted Platform Module 2.0 (TPM) has been implemented as one of the device requirements for Secured-core PCs to measure the components that are used during the secure launch process, which Microsoft claims can help organisations enable zero-trust networks using System Guard runtime attestation.

ESET has responded to its researchers' UEFI rootkit discovery by introducing a UEFI Scanner to detect malicious components in firmware, and some chip manufacturers are aiming to do something similar with specific security chips. Microsoft's Secured-core PC initiative, however, is aimed at blocking firmware attacks rather than just detecting them, and it is cross-industry, involving a wide range of CPU architectures and Original Equipment Manufacturers (OEMs). This means that the firmware defence will be available to all Windows 10 users, regardless of the PC maker and form factor they choose.

It will be interesting to see what effect this initiative has in reducing the number of successful ransomware and other BIOS/UEFI or firmware-based cyber-attacks on critical industries. A high success rate is likely to lead to commoditization of the technology and its availability for all PC users in all industries.

Can Your Antivirus Be Too Intelligent Sometimes?

Current and future applications of artificial intelligence (or should we rather stick to a more appropriate term “Machine Learning”?) in cybersecurity have been one of the hottest discussion topics in recent years. Some experts, especially those employed by anti-malware vendors, see ML-powered malware detection as the ultimate solution to replace all previous-generation security tools. Others are more cautious, seeing great potential in such products, but warning about the inherent challenges of current ML algorithms.

One particularly egregious example of “AI security gone wrong” was covered in an earlier post by my colleague John Tolbert. In short, to reduce the number of false positives produced by an AI-based malware detection engine, developers have added another engine that whitelisted popular software and games. Unfortunately, the second engine worked a bit too well, allowing hackers to mask any malware as innocent code just by appending some strings copied from a whitelisted application.
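The bypass described above can be shown with a deliberately naive toy detector. This is a hypothetical sketch, not Cylance's actual engine; the marker strings and the rule that the whitelist check overrides the verdict are assumptions made purely to illustrate why string-based whitelisting is fragile.

```python
# Hypothetical toy detector illustrating the whitelist-bypass flaw.
# The marker strings below are made up for the example.
WHITELIST_MARKERS = [b"PopularGameEngine v2.1", b"Copyright Trusted Corp"]
MALICIOUS_MARKERS = [b"CreateRemoteThread", b"cmd.exe /c"]

def naive_verdict(binary: bytes) -> str:
    # Flaw: the whitelist check runs first and overrides everything else.
    if any(m in binary for m in WHITELIST_MARKERS):
        return "benign"
    if any(m in binary for m in MALICIOUS_MARKERS):
        return "malicious"
    return "benign"

malware = b"...payload..." + b"CreateRemoteThread"
assert naive_verdict(malware) == "malicious"

# The bypass: append strings copied from a whitelisted application.
evasive = malware + b"PopularGameEngine v2.1"
assert naive_verdict(evasive) == "benign"   # malware now classified as benign
```

Because appending bytes does not change what the payload does when executed, the attacker keeps full functionality while flipping the verdict.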

However, such cases, where bold marketing claims contradict not just common sense but reality itself and thus force engineers to fix the shortcomings of their ML model with clumsy workarounds, are hopefully not particularly common. Still, every ML-based security product faces the same challenge: whenever a particular file triggers a false positive, there is no way to tell the model to just stop. After all, machine learning is not based on rules; you have to feed the model lots of training data to gradually guide it to the correct decision, and re-labeling just one sample is not enough.

This is exactly the problem the developers of the Dolphin Emulator have recently faced: for quite some time, every build of their application has been recognized as malware by Windows Defender, based on Microsoft's AI-powered behavior analysis. Every time the developers submitted a report to Microsoft, the build would be dutifully added to the application whitelist and the case would be closed. Until the next build, with a different file hash, was released.

Apparently, the way this cloud-based ML-powered detection engine is designed, there is simply no way to fix a false positive once and for all future builds. However, Microsoft obviously does not want to make the same mistake as Cylance and inadvertently whitelist too much, creating potential false negatives. Thus, the developers and users of the Dolphin Emulator are left with only one option: submit more and more false-positive reports and hope that sooner or later the ML engine will “change its mind” on the issue.

Machine-learning-enhanced security tools are supposed to eliminate tedious manual labor for security analysts; however, this issue shows that sometimes just the opposite happens. Antimalware vendors, application developers, and even users must do more work to overcome this ML interpretation problem. Yet, does it really mean that incorporating machine learning into an antivirus was a mistake? Of course not. But giving too much authority to an ML engine which is, in a sense, incapable of explaining its decisions and does not react well to criticism, probably was.

Potential solutions for these shortcomings do exist, the most obvious being the ongoing work on making machine learning models more explainable – giving insights into the ways they make decisions on particular data samples, instead of presenting themselves to users as a kind of black box. However, we have yet to see commercial solutions based on this research. In the future, a broader approach towards the “artificial intelligence lifecycle” will surely be needed, covering not just developing and debugging models, but stretching from initial training data management all the way up to the ethical and legal implications of AI.

By the way, we’re going to discuss the latest developments and challenges of AI in cybersecurity at our upcoming Cybersecurity Leadership Summit in Berlin. Looking forward to meeting you there! If you want to read up on Artificial Intelligence and Machine Learning, be sure to browse our KC+ research platform.

Privileged Access Management Can Take on AI-Powered Malware to Protect Identity-Based Computing

Much is written about the growth of AI in the enterprise and how, as part of digital transformation, it will enable companies to create value and innovate faster. At the same time, cybersecurity researchers are increasingly looking to AI to enhance security solutions to better protect organizations against attackers and malware. What is often overlooked is that criminals are equally determined to use AI to assist them in their efforts to undermine organizations through persistent malware attacks.

The success of most malware directed at organizations depends on an opportunistic model: it is sent out by bots in the hope of infecting as many organizations as possible and then executing its payload. In business terms, while relatively cheap, this represents a poor return on investment and is easier for conventional anti-malware solutions to block. On the other hand, malware that is targeted and guided by human controllers at a command and control (C2) point may well result in a bigger payoff if it manages to penetrate privileged accounts, but it is expensive and time-consuming for criminal gangs to operate.

Imagine if automated malware attacks were to benefit from embedded algorithms that have learned how to navigate to where they can do the most damage; this would deliver scale and greater profitability to the criminal gangs. Organizations are facing malware that learns how to hide and perform non-suspicious actions while silently exfiltrating critical data without human control.

AI-powered malware will change tactics once inside an organization. It could, for example, automatically switch to lateral movement if it finds its path blocked. The malware could also sit undetected, learn from regular data flows what normal looks like, and emulate this pattern accordingly. It could learn which devices the infected machine communicates with, their ports and protocols, and the user accounts that access them – all without the current need for communication back to C2 servers, thus further protecting the malware from discovery.

It is access to user accounts that should worry organizations – particularly privileged accounts. Digital transformation has led to an increase in the number of privileged accounts in companies, and attackers are targeting those directly. The use of intelligent agents will make it easier for them to discover privileged accounts such as those accessed via a corporate endpoint. At the same time, malware will learn the best times and situations in which to upload stolen data to C2 servers by blending into legitimate high-bandwidth operations such as videoconferencing or legitimate file uploads. This may not be happening yet, but all of it is feasible given the technical resources that state-sponsored cyber attackers and cash-rich criminal gangs have access to.

To prove what’s possible, IBM research scientists created a proof-of-concept AI-powered malware called DeepLocker. The malware contained hidden code to generate keys that could unlock malicious payloads if certain conditions were met. It was demonstrated at a Las Vegas technology conference last year, using a genuine webcam application with embedded code to deploy ransomware when the right person looked at the laptop webcam. The code was encrypted to conceal its payload and to prevent reverse engineering by traditional anti-malware applications.

IBM also said in its presentation that current defences are obsolete and new defences are needed. This may not be true. AI is not yet magic. As in the corporate world, much AI-assisted software benefits from the learning capabilities of its algorithms, which automate tasks that humans previously performed. In the criminal ecosystem this includes directing malware towards privileged accounts. Therefore, it makes sense that if Privileged Access Management (PAM) does a good job of defeating human-led attempts to hijack accounts, it should do the same when confronted with the same techniques orchestrated by algorithms. Already the best PAM solutions are smart enough to monitor M2M communications and DevOps processes that need access to resources on the fly.

But we must not stop there. Future IAM and PAM solutions must be able to detect hijacked accounts or erroneous data flows in real time and shut them down so that even AI cannot do its work. Despite the sophistication that AI will bring to malware, its target will remain the same in many attacks: business-critical data that is accessed by privileged account users, which will include third parties and machines. It is one more way in which identity – of people, data and machines – is taking centre stage in securing the digital organizations of the future. For more on KuppingerCole’s research into identity and the digital enterprise, please see our most recent reports.

Leading IDaaS Supplier OneLogin Aiming for the Top

OneLogin is among the leading vendors in the overall, product, innovation and market leadership ratings in KuppingerCole’s latest Leadership Compass Report on IDaaS Access Management, but is aiming to move even further up the ranks.

In a media and analyst briefing, OneLogin representatives talked through key recent product features and capabilities in an ongoing effort to improve the completeness of its products.

Innovation is a key capability in IT market segments, and unsurprisingly this is an important area for OneLogin.

The most recent innovations include Vigilance AI, the new artificial intelligence and machine learning (AI/ML) risk engine, and SmartFactor Authentication, a context-aware authentication methodology to help organizations move beyond text-based passwords.

Both these capabilities are in line with the trend towards using AI in the context of Identity and Access Management (IAM) and are aimed at supporting OneLogin’s mission to enable enterprises to move beyond password-based authentication and improve their overall cyber defense capabilities in light of the massive uptick in cyber attacks targeting credentials, including brute-force and breach-replay attacks.

OneLogin’s Vigilance AI is designed to use AI and ML to ingest and analyze data from multiple third-party sources to identify anomalies and communicate risk across OneLogin services.

Vigilance AI also applies User and Entity Behavior Analytics (UEBA) capabilities to build a profile of typical user behavior to identify anomalies in real-time to improve threat defense.
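The underlying UEBA idea – baseline a user's normal behavior, then flag strong deviations – can be sketched with a toy model. The feature (login hour), data, and threshold here are invented for illustration; real engines such as Vigilance AI use far richer behavioral signals:

```python
from statistics import mean, stdev

# Toy UEBA baseline: the hours of day at which a user has recently
# logged in (illustrative data, not from any real product).
baseline_login_hours = [9, 9, 10, 8, 9, 10, 9, 8]

def is_anomalous(hour: int, history: list, threshold: float = 3.0) -> bool:
    """Flag a login whose hour deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

print(is_anomalous(9, baseline_login_hours))  # typical working-hours login
print(is_anomalous(3, baseline_login_hours))  # 3 a.m. login deviates strongly
```

A production UEBA engine would combine many such features (device, location, access patterns) and weigh them into a continuous risk score rather than a binary flag.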

Other recent product innovations include:

  • Adaptive login flows functionality that uses Vigilance AI to restructure authentication flow automatically based on risk to include Multifactor Authentication (MFA) where appropriate;
  • Compromised credential check functionality to prevent users from using credentials that have been breached and posted on the dark web; and
  • Risk-aware access and adaptive deny functionality to block access to systems and applications when extreme risk is detected.
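Taken together, these features amount to a risk-scored login flow: a score derived from contextual signals decides whether a login is allowed outright, stepped up to MFA, or denied. A minimal sketch, with invented signals, weights, and thresholds that are in no way OneLogin's actual logic:

```python
# Hypothetical risk signals and weights, for illustration only.
def risk_score(new_device: bool, unusual_location: bool,
               credential_breached: bool) -> int:
    score = 0
    score += 30 if new_device else 0
    score += 30 if unusual_location else 0
    score += 40 if credential_breached else 0  # breached credential weighs most
    return score

def login_decision(score: int) -> str:
    """Map a risk score to an adaptive authentication outcome."""
    if score >= 70:      # extreme risk: adaptive deny
        return "deny"
    if score >= 30:      # elevated risk: restructure the flow to include MFA
        return "require_mfa"
    return "allow"       # low risk: passwordless or single-factor login

print(login_decision(risk_score(False, False, False)))  # allow
print(login_decision(risk_score(True, False, False)))   # require_mfa
print(login_decision(risk_score(True, True, True)))     # deny
```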

In these ways, OneLogin is striving to address its leadership challenges by increasing the range of authentication factors, increasing collaboration with third-party threat intelligence services, working towards providing support for IoT, and planning to enable more complex reporting capabilities.

The use of AI in the identity solutions market is likely to increase, with a growing number of vendors incorporating AI-driven capabilities – including OneLogin, SailPoint with its AI-driven Predictive Identity cloud identity platform, and others.

If you liked this text, feel free to browse our IAM focus area for more related content.

As You Make Your KRITIS so You Must Audit It

Organizations of major importance to the German state, whose failure or disruption would result in sustained supply shortages, significant public safety disruptions, or other dramatic consequences, are categorized as critical infrastructure (KRITIS).

Nine sectors and 29 industries currently fall under this umbrella, including healthcare, energy, transport and financial services. Hospitals, as part of the healthcare system, are also included if they meet defined criteria.

For hospitals, the implementation instructions of the German Hospital Association (DKG) have proven to be important. The number of fully inpatient hospital treatments in the reference period (i.e. the previous year) was defined as the measurement criterion. At 30,000 fully inpatient treatment cases, the threshold value for identification as critical infrastructure is reached, which affects considerably more than 100 hospitals. These are obliged to fulfil clearly defined requirements for the security of IT systems and digital infrastructures, including critical infrastructures in Germany, derived from the IT-SiG – "Gesetz zur Erhöhung der Sicherheit informationstechnischer Systeme (IT-Sicherheitsgesetz)" – and from the BSI-KritisV – "BSI-Kritisverordnung". The above-mentioned implementation instructions of the DKG thus also define proposed measures for ensuring adequate security, in particular regarding the IT used.

Companies had until June 30th this year to meet the requirements and to commission a suitable, trustworthy third party for testing and certification.

But according to a report in Tagesspiegel Background, this has been challenging: industry associations have been pointing out for some time that there are not enough suitable auditing firms. This is not least due to the fact that auditors must have a double qualification, which in addition to IT also includes knowledge of the industry, in this case the healthcare system in hospitals. Here, as in many other areas, the infamous skill gap strikes, i.e. the lack of suitable, qualified employees in companies or on the job market.

This led to the companies capable of performing the audits being overloaded and thus to a varying quality and availability of audits and resulting audit reports. According to the press report, these certificates suffer the same fate when they are submitted to the BSI, which evaluates these reports. Here, too, a shortage of skilled workers leads to a backlog of work. A comprehensive evaluation was not available at the time of publication. Even the implementation instructions of the German Hospital Association, on the basis of which many implementations were carried out in the affected hospitals, have not yet been confirmed by the BSI.

Does this place KRITIS on the list of toothless guidelines (such as PSD2 with its large number of national individual regulations) that have not been adequately implemented, at least in this area? Not necessarily. The obligation to comply has not been suspended; the lack of personnel and skills on the labour market merely prevents consistent, comprehensive auditing by suitable bodies such as TÜV, Dekra or specialised auditing firms. However, where such an audit does take place, the necessary guidelines are applied and any non-compliance is followed up in accordance with the audit reports. The hospitals concerned are therefore well advised to have fulfilled the requirements by the deadline and to continue working on them in the spirit of continuous implementation and improvement.

Even hospitals that currently fall slightly below this threshold are encouraged to prepare for adjustments to the requirements or for increasing patient numbers. This means that even without the necessity of a formal attestation, the appropriate foundations, such as the establishment of an information security management system (ISMS) in accordance with ISO 27001, can be put in place.

In addition, the availability of a general framework for the availability and security of IT in this and other industries gives other sector players (such as group practices or specialist institutes) a resilient basis for creating appropriate conditions that correspond to the current state of requirements and technology. This also applies if they are not, and will not be, KRITIS-relevant in the foreseeable future, but want to offer their patients a comparably high degree of security and the resulting trustworthiness.

KuppingerCole offers comprehensive support in the form of research and advisory for companies in all KRITIS-relevant areas and beyond. Talk to us to address your cybersecurity, access control and compliance challenges.
