KuppingerCole Blog

Renovate Your IAM-House While You Continue to Live in It

Do you belong to the group of people who would like to completely retire all obsolete solutions and replace existing solutions with new ones in a Big Bang? Do you do the same with company infrastructures? Then you don't need to read any further here. Please tell us later how things worked out for you.

Or do you belong to the other extreme: companies in which infrastructures are developed further only in response to acute challenges, audit findings, or particularly prestigious projects that happen to come with a budget?

You, however, should read on, because we want to give you arguments to back a more comprehensive approach.

Identity infrastructure is the basis of enterprise security

In previous articles we have introduced the Identity Fabric, a concept that serves as a viable foundation for enterprise architectures for Identity and Access Management (IAM) in the digital age.

This concept, which KuppingerCole places at the center of its definition of an IAM blueprint, expressly starts from a central assumption: practically every company today operates an identity infrastructure. This infrastructure virtually always forms the central basis of enterprise security, ensures basic compliance and governance, and supports the request for authorizations and perhaps even their withdrawal when they are no longer needed.

As a result, existing infrastructures already meet basic requirements today, but those requirements were often defined in earlier phases, for the company as it existed then.

The demand for new ways of managing identities

Yes, we too cannot avoid the buzzword "digitalization" here, as it is precisely this context that creates requirements that traditional systems cannot adequately cover. And just adding some extra components (a little CIAM here, some MFA there, or the Azure AD that came with the Office 365 installation anyway) won't help. The way we communicate has changed, and companies are advancing their operations with entirely new business models and processes. New ways of working together and communicating demand new ways of managing identities, satisfying regulatory requirements, and delivering secure processes, not least to protect customers and indeed your very own business.

What should you do if your own IAM (only) follows a classic enterprise focus, i.e., fulfills the following tasks very well?

  • Traditional Lifecycles
  • Traditional Provisioning
  • Traditional Access Governance
  • Traditional Authentication (Username/Password, Hardware Tokens, VPNs)
  • Traditional Authorization (Roles, Roles, Roles)
  • Consumer Identities (somehow)

And what to do if the business wants you, as the system owner of an IAM, to meet the following requirements?

  • High Flexibility
  • High Delivery Speed for New Digital Services
  • Software as a Service
  • Container and Orchestration
  • Identity API and API Security
  • Security and Zero Trust

Building parallel infrastructures alongside the existing ones has been widely recognized as the wrong approach.

Convert existing architectures during operation

It is therefore necessary to gradually convert existing architectures so that ongoing operation remains guaranteed throughout. Ideally, this process also optimizes the architecture in terms of efficiency and costs, while adding the missing functionality in a scalable and comprehensive manner.

Figuratively speaking, you have to renovate your house while you continue to live in it. Nobody who has fully understood digitization will deny that the proper management of all relevant identities, from customers to employees and partners to devices, is one of the central enabling technologies for it, if not the central one. But on the way there, everything already achieved must continue to be maintained, quick wins must provide proof that the company is on the right track, and an understanding of the big picture (the "blueprint") must not be lost.

Research forecast

If you want to find out more: read the "Leadership Brief: Identity Fabrics - Connecting Anyone to Every Service - 80204" as a first introduction to this comprehensive and promising concept. The KuppingerCole "Architecture Blueprint Identity and Access Management - 72550" has just been published and aims to provide you with the conceptual foundation for sustainably transforming existing IAM infrastructures into a future-proof basic technology for the 2020s and beyond.

In addition, leading-edge whitepapers currently being prepared and soon to be published (watch this space; we will provide links in one of the upcoming issues of our "Analysts' View IAM") will provide essential recommendations for initializing and implementing such a comprehensive transformation program.

KuppingerCole has supported successful projects over the past months in which existing, powerful but functionally insufficient IAM architectures were put on the road to sustained transformation into a powerful future infrastructure. The underlying concepts can be found in the documents above, but if you would like us to guide you along this path, please feel free to talk to us about possible support.

Feel free to browse our Focus Area: The Future of Identity & Access Management for more related content.

OVHCloud Bets on Shift Back to Private Cloud

There is more to the cloud than AWS, Azure, IBM and Google, according to OVHCloud - the new name for OVH as it celebrates its 20th anniversary. While the big four have carved up the public cloud between them, the French cloud specialist believes that business needs are changing, which gives it an opportunity in the enterprise market it is now targeting. In short, OVHCloud believes there is a small but discernible shift back to the private cloud, driven by security and compliance imperatives.

That does not mean that OVHCloud is abandoning the public cloud to the Americans. At October’s OVHCloud Summit in Paris, CEO Michel Paulin spoke forcefully of the need for Europe (for that, read France) to compete in this space. “We believe we can take on the US and Chinese hegemony. Europe has all the talents needed to build a digital channel that can rival all the other continents,” he said.

OVHCloud needs to shift focus and mature

The company is growing, with 2,200 employees and revenue estimated at around $500m. For comparison, AWS posted revenue of $9bn for its third quarter in 2019 – spot the difference. OVHCloud is doubling down on the Software as a Service (SaaS) market, with 100 SaaS products announced for a new dedicated marketplace. The company says the focus will be on collaboration and productivity tools, web presence and cloud communication. On the security front, OVHCloud is promising the following soon: Managed Private Registry, Public Cloud Object Storage Encryption and K8s private network.

If OVHCloud is to take even a chunk of the Big Four’s market, it needs to shift focus and mature. It believes it can, by moving from what it terms the “startup world” of digital-native companies into the traditional enterprise sector (without neglecting its cloud-native customers). Customers gained so far include insurance, aviation and big IT services companies, plus some finance and retail accounts. OVHCloud believes the enterprise market is lagging behind its traditional customers in digital innovation and transformation.

Security reasons and better data oversight bring customers back to private clouds

Crucially, the company thinks that enterprise customers are coming back to private clouds for security reasons and better oversight of data in the age of big compliance. At the same time, it maintains that the future of the cloud should remain open and multi-cloud, something I and others would agree with.

In terms of business strategy, OVHCloud is moving from a product approach to a solution approach along with the shift towards enterprise customers – this makes sense. OVHCloud makes much of its ability to build its own servers and cooling systems, sees this as a USP, and claims the industry’s lowest TCO for energy usage. Such an advantage depends on scale, however, and in an open multi-cloud, multi-vendor market the cost savings may make little difference to enterprise customers. But the green message may play well in today’s climate-conscious market, for some buyers in the startup crowd and potentially in the more digital parts of larger enterprises.

For more insight into the enterprise cloud market please read our reports or contact one of our analysts.

There Is No “One Stop Shop” for API Management and Security Yet

From what used to be a purely technical concept created to make developers’ lives easier, Application Programming Interfaces (APIs) have evolved into one of the foundations of modern digital business. Today, APIs can be found everywhere – at home and in mobile devices, in corporate networks and in the cloud, even in industrial environments, to say nothing of the Internet of Things.

When dealing with APIs, security should not be an afterthought

In a world where digital information is one of the “crown jewels” of many modern businesses (and even the primary source of revenue for some), APIs are now powering the logistics of delivering digital products to partners and customers. Almost every software product or cloud service now comes with a set of APIs for management, integration, monitoring or a multitude of other purposes.

As often happens in such scenarios, security quickly becomes an afterthought at best or, even worse, is seen as a nuisance and an obstacle on the road to success. The success of an API is measured by its adoption, and security mechanisms are seen as friction that limits this adoption. There are also several common misconceptions around the very notion of API security, notably the idea that existing security products like web application firewalls are perfectly capable of addressing API-related risks.

An integrated API security strategy is indispensable

Creating a well-planned strategy and reliable infrastructure for exposing business functionality securely to partners, customers, and developers is a significant challenge, and one that has to be addressed not just at the gateway level, but along the whole information chain, from backend systems to endpoint applications. It is therefore obvious that point solutions addressing only specific links in this chain are not viable in the long term.

Only by combining proactive application security measures for developers with continuous activity monitoring and deep API-specific threat analysis for operations teams, plus smart, risk-based, actionable automation for security analysts, can one ensure consistent management, governance and security of corporate APIs, and thus the continuity of the business processes that depend on them.
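
To make the gateway link of that chain concrete, here is a minimal sketch, in Python, of the kind of checks an API gateway or middleware layer typically performs before a request ever reaches a backend: authenticating the caller, enforcing a rate limit, and emitting an audit event for the monitoring pipeline. This is illustrative only; the key store, limits, and names are invented for the example, and real platforms add OAuth/OIDC token validation, schema checks, and threat analytics on top.

```python
# Minimal, illustrative sketch of gateway-style API checks; key store,
# limits, and names are hypothetical, not any particular vendor's product.
import time
from collections import defaultdict

API_KEYS = {"partner-key-123": "partner-app"}  # hypothetical key store
RATE_LIMIT = 100                               # requests per minute per client
_request_log = defaultdict(list)               # client -> request timestamps


def audit(event: str, subject: str, path: str) -> None:
    # In a real deployment this would feed the monitoring/analytics pipeline.
    print(f"{time.strftime('%H:%M:%S')} {event} subject={subject} path={path}")


def check_request(api_key: str, path: str) -> tuple[int, str]:
    """Authenticate, rate-limit, and audit a single API request."""
    client = API_KEYS.get(api_key)
    if client is None:
        audit("auth_failure", api_key, path)
        return 401, "unknown API key"

    now = time.time()
    recent = [t for t in _request_log[client] if now - t < 60]
    if len(recent) >= RATE_LIMIT:
        audit("rate_limited", client, path)
        return 429, "rate limit exceeded"

    recent.append(now)
    _request_log[client] = recent
    audit("allowed", client, path)
    return 200, "forwarded to backend"


if __name__ == "__main__":
    print(check_request("partner-key-123", "/v1/orders"))
    print(check_request("bad-key", "/v1/orders"))
```

The point of the sketch is that authentication, throttling and audit logging happen in one enforcement layer, whose events can then be consumed by exactly the monitoring and analytics functions described above.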

Security challenges often remain underestimated

We have long recognized the API Economy as one of the most important current IT trends. Rapidly growing demand for exposing and consuming APIs, which enables organizations to create new business models and connect with partners and customers, has tipped the industry towards adopting the lightweight RESTful APIs that are commonly used today.

Unfortunately, many organizations tend to underestimate the potential security challenges of opening up their APIs without a security strategy and infrastructure in place. Popular emerging technologies such as the Internet of Things and Software Defined Computing Infrastructure (SDCI), which rely significantly on API ecosystems, are also bringing new security challenges with them. New distributed application architectures, like those based on microservices, are introducing their own share of technical and business problems as well.

KuppingerCole’s analysis is primarily looking at integrated API management platforms, but with a strong focus on security features either embedded directly into these solutions or provided by specialized third party tools closely integrated with them.

The API market has changed dramatically within just a few years

When we started following the API security market over 5 years ago, the industry was still at a rather early, emerging stage: most large vendors focused primarily on operational capabilities, only very rudimentary threat protection functions were built into API management platforms, and dedicated API security solutions were almost non-existent. In just a few years, the market has changed dramatically.

On one hand, core API management capabilities are quickly becoming almost a commodity; every cloud service provider, for example, offers at least some basic API gateway functionality built into its cloud platform, utilizing its native identity management, monitoring, and analytics capabilities. Enterprise-focused API management vendors are therefore looking to expand the coverage of their solutions to address new business, security or compliance challenges. Some more future-minded vendors no longer even consider API management a separate discipline within IT and offer their existing tools as part of larger enterprise integration platforms.

On the other hand, the growing awareness of the general public about API security challenges has dramatically increased the demand for specialized tools for securing existing APIs. This has led to the emergence of numerous security-focused startups, offering their innovative solutions, usually within a single area of the API security discipline.

Despite consolidation, there is no “one stop shop” for API security yet

Unfortunately, the field of API security is very broad and complicated, and very few (if any) vendors are currently capable of delivering a comprehensive security solution that covers all required functional areas. Although the market is already showing signs of consolidation, with larger vendors acquiring these startups and incorporating their technologies into existing products, expecting to find a “one stop shop” for API security is still a bit premature.

Although the current state of the API management and security market is radically different from the situation just a few years ago, and the overall developments are extremely positive, indicating growing demand for more universal and convenient tools and increasing quality of available solutions, the market has yet to reach anything resembling maturity. It is thus all the more important for companies developing their API strategies to be aware of current developments and to look for solutions that implement the required capabilities and integrate well with other existing tools and processes.

Hybrid deployment model is the only flexible and future-proof security option

Since most API management solutions are expected to provide management and protection for APIs regardless of where they are deployed – on-premises, in any cloud or within containerized or serverless environments – the very notion of the delivery model becomes complicated.

Most API management platforms are designed to be loosely coupled, flexible, scalable and environment-agnostic, with a goal to provide consistent functional coverage for all types of APIs and other services. While the gateway-based deployment model remains the most widespread, with API gateways deployed either closer to existing backends or to API consumers, modern application architectures may require alternative deployment scenarios like service meshes for microservices.

Dedicated API security solutions that rely on real-time monitoring and analytics may be deployed in-line, intercepting API traffic, or rely on out-of-band communications with API management platforms. Management consoles, developer portals, analytics platforms and many other components, however, are usually deployed in the cloud to enable a single-pane-of-glass view across heterogeneous deployments. A growing number of additional capabilities are now being offered as Software-as-a-Service with consumption-based licensing.

In short, for a comprehensive API management and security architecture, a hybrid deployment model is the only flexible and future-proof option. Still, for highly sensitive or regulated environments, customers may opt for a fully on-premises deployment.

Required Capabilities

In our upcoming Leadership Compass on API Management and Security, we evaluate products according to multiple key functional areas of API management and security solutions. These include API Lifecycle Management core capabilities, flexibility of Deployment and Integration, developer engagement with Developer Portal and Tools, strength and flexibility of Identity and Access Control, API Vulnerability Management for proactive hardening of APIs, Real-time Security Intelligence for detecting ongoing attacks, Integrity and Threat Protection means for securing the data processed by APIs, and, last but not least, each solution’s Scalability and Performance.

The preliminary results of our comparison will be presented at our Cybersecurity Leadership Summit, which will take place next week in Berlin.

Cyber-Attacks: Why Preparing to Fail Is the Best You Can Do

Nowadays, it seems that no month goes by without a large cyber-attack on a company becoming public. Usually, these attacks affect not only the revenue of the attacked company but its reputation as well. Nevertheless, this is still a completely underestimated topic in some companies. In the United Kingdom, 43% of businesses experienced a cybersecurity breach in the past twelve months, according to the 2018 UK Cyber Security Breaches Survey. On the other hand, 74% say that cybersecurity is a high priority for them. So where is the gap, and why does it exist? The gap lies between the decision to prioritize cybersecurity and the reality of handling cyber incidents. It is critical to have a well-prepared plan, because cyber incidents will happen to you. Only 27% of UK businesses have a formal cyber incident management process. Does your company have one?

How do cyber-attacks affect your business?

To understand the need for a formal process and the potential threats, a company must be aware of the impact an incident could have. It could lead to damage or loss of customers or, in the worst case, to insolvency of the whole company. In many publicly known data breaches, such as those at Facebook or PlayStation Network, the companies needed significant time to recover. Some would say they still haven’t recovered. The loss of brand image, reputation and trust can be enormous. To protect your company from such critical issues and be able to handle incidents in a reasonable way, a good cyber incident plan must be implemented.

The characteristics of a good plan for cyber incidents

Such a plan should describe the processes, actions and steps that must be taken after a cyber-attack incident. The first step is categorization, which is essential for handling an incident in a well-defined way. If an incident is identified, it must be clear who will be contacted to react to it. This person or team is then responsible for categorizing the incident and estimating its impact on the company.
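
As a simple illustration of what such a categorization scheme might look like in practice (the severity levels, names and contacts below are hypothetical examples, not a prescribed standard):

```python
# Illustrative incident triage record; severity levels, mappings, and
# contacts are hypothetical examples, not a prescribed standard.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1       # single workstation affected, no data exposed
    MEDIUM = 2    # limited, contained data exposure
    HIGH = 3      # personal or customer data compromised
    CRITICAL = 4  # business-critical services down


# Hypothetical escalation table: who is alerted first per severity.
ESCALATION = {
    Severity.LOW: ["it-servicedesk@example.com"],
    Severity.MEDIUM: ["csirt@example.com"],
    Severity.HIGH: ["csirt@example.com", "dpo@example.com"],
    Severity.CRITICAL: ["csirt@example.com", "cco@example.com", "board"],
}


@dataclass
class Incident:
    summary: str
    affected_systems: list[str]
    data_compromised: bool
    severity: Severity

    def contacts(self) -> list[str]:
        return ESCALATION[self.severity]


incident = Incident(
    summary="Phishing-led account takeover",
    affected_systems=["mail-gateway", "crm"],
    data_compromised=True,
    severity=Severity.HIGH,
)
print(incident.severity.name, "->", ", ".join(incident.contacts()))
```

The point is not the code but the discipline: severity levels and escalation paths are agreed in advance, so nobody has to improvise them while an incident is unfolding.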

The next step is to identify in detail which data has been compromised and what immediate actions can be taken to limit the damage. Subsequently, the plan must describe how to contact the staff needed and what they must do to prevent further harm and to recover. Responsibilities have to be allocated clearly to prevent a duplication of efforts when time is short. In a recent webinar, KuppingerCole Principal Analyst Martin Kuppinger made the point that IT teams responsible for cybersecurity should shift their focus from protection to recovery. While a lot of cybersecurity investment nowadays still goes into protection, this is no longer enough. “You need to be able to restart your business and critical services rapidly,” Martin explained.

Cyber-attacks are not an IT-only job

Apart from the necessary actions described above, which will be executed by IT and cybersecurity professionals, a process must be defined that lays out how corporate communications deals with an attack. In big companies there is an explicit top-down information chain. If a grave cyber-attack occurs, the Chief Communications Officer (CCO) has to be informed. Imagine the CCO being called in the morning by a journalist without knowing anything about the incident. This puts the company in a weak position, where it loses control over crisis communication. Depending on the severity of the incident, a press release must be sent out and customers must be informed. It is always better when companies are confident and show the public that they care, instead of waiting until public pressure urges them to act.

Can companies deal with cybercrime all by themselves?

When it comes to personal user data being compromised, cyber-attacks can have legal consequences, so it is wise to consult internal or external lawyers. External support from dedicated experts for specific cyber incidents is usually part of an action plan, too. To react as quickly as possible, a list of experts for external support, categorized by topic and containing contact persons and their availability, should be created.

Since cyber-attacks can never be entirely prevented, it is of the utmost importance to have a plan and to know exactly how to react. This can prevent many of the mistakes that are often made after an incident has already been identified. In the end, it can keep the company from losing customer confidence and revenue.

To understand and learn this process, to build necessary awareness and know how to deal with cybercrime in detail, you can attend our Incident Response Boot Camp on November 12 in Berlin.

Authentication and Education High on CISO Agenda

Multifactor authentication and end-user education emerged as the most common themes at a CISO forum with analysts held under Chatham House Rules in London.

Chief information security officers across a wide range of industry sectors agree on the importance of multifactor authentication (MFA) to extending desktop-level security controls to an increasingly mobile workforce, with several indicating that MFA is among their key projects for 2020 to protect against credential stuffing attacks.

In highly-targeted industry sectors, CISOs said two-factor authentication (2FA) was mandated at the very least for mobile access to corporate resources, with special focus on privileged account access, especially to key databases.

Asked what IT suppliers could do to make life easier for security leaders, CISOs said providing MFA with everything was top of the list, along with full single sign on (SSO) capability, with some security leaders implementing or considering MFA for customer/consumer access to accounts and services.

The pursuit of improved user experience along with more secure access appears to have led some security leaders to standardise on Microsoft products and services that enable collaboration, MFA and SSO, reducing the reliance on username/password combinations alone for access control.

Training end users

End user security education and training is another key area of attention for security leaders to increase the likelihood that any gaps in security controls will be bridged by well-informed users.

However, there is also a clear understanding that end users cannot be held responsible as a front line of defense, that there needs to be a zero-blame policy to encourage engagement and participation of end users in security, that end users need to be supported by appropriate security controls and effective incident detection and response processes, and that communication is essential to ensure end users understand the cyber threats to them at home and at work, as well as the importance of each security control.

Supporting end users

CISOs are helping to protect end users by implementing browser protections and URL filtering to prevent access to malicious sites, improving email defenses to protect users from spoofing, phishing and spam, introducing tools that make it easy to report suspected phishing, and conducting regular phishing simulation exercises to keep end users vigilant.

Some CISOs are also using the implementation of the Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol, which is designed to help ensure the authenticity of the sending domain, to drive user awareness by highlighting emails from an external source.
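
For readers unfamiliar with the mechanics: DMARC is published as a DNS TXT record on the sending domain, instructing receiving mail servers how to treat messages that fail SPF/DKIM alignment and where to send aggregate reports. A typical record, with illustrative domain and mailbox, looks like this:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Organizations commonly start with a policy of p=none to monitor the reports, then tighten to quarantine or reject once all legitimate senders are correctly aligned.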

Some security leaders believe there should be a special focus on board members and other senior executives in terms of anti-phishing training and awareness, because while this group is likely to be most often targeted by phishing and spear phishing attacks, they are less likely to be attuned to the dangers and the warning signs.

Some CISOs have also provided password managers to help end users choose and maintain strong, unique passwords, reducing the number of passwords that each person is required to remember.

Encouraging trends

It is encouraging that security leaders are focusing on better authentication by moving to MFA and that they understand the need to support end users, not only with security awareness and education, but the necessary security controls, processes and capabilities, including effective email and web filtering, network monitoring, incident detection and response, and patch management.

If you want to take a deep dive into this topic, be sure to read our Leadership Compass Consumer Authentication. For unlimited access to all our research, buy your KC PLUS subscription here.

Nok Nok Labs Extends FIDO-Based Authentication

Nok Nok Labs has made FIDO certified multi-factor authentication (which seeks to eliminate dependence on password-based security) available across all digital channels by adding a software development kit (SDK) for smart watches to the latest version of its digital authentication platform, the Nok Nok S3 Authentication Suite.

In truth, the SDK is only for Apple watchOS, but it is the first, and currently the only, SDK available that does all the heavy lifting for developers seeking to enable FIDO certified authentication via smart watches that do not natively support FIDO. It is a logical starting point due to Apple’s strong position in the smart watch market (just over 50%), with SDKs for other smart watch operating systems expected to follow.

This means that business to consumer organizations can now use the Nok Nok S3 Authentication Suite to enable strong, FIDO-based authentication and access policy controls for Apple Watch apps as well as mobile apps, mobile web and desktop web applications.

The new SDK, like its companion SDKs from Nok Nok, provides a comprehensive set of libraries and application programming interfaces (APIs) for software developers to enable FIDO certified multi-factor authentication that uses public and private key pairs, making it resistant to man-in-the-middle attacks because the private key never leaves the authenticator – or, in this case, the smart watch.
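
The principle is easy to demonstrate. The following sketch, using the third-party Python cryptography library, shows generic public-key challenge-response, the mechanism FIDO builds on; it is not Nok Nok’s SDK or the actual FIDO/WebAuthn message format, which additionally covers attestation, origin binding and signature counters.

```python
# Simplified sketch of the public-key challenge-response principle behind
# FIDO, using the third-party "cryptography" library (pip install cryptography).
# Real FIDO/WebAuthn additionally involves attestation, origin binding,
# signature counters, and CBOR-encoded messages; this is illustration only.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the key pair is generated on the device (the authenticator);
# only the public key is handed to the server.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Authentication: the server sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it locally; the private key never leaves the device.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies the signature with the stored public key;
# verify() raises InvalidSignature if the check fails.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified - user authenticated")
```

Because only signatures over fresh challenges cross the network, a captured response cannot be replayed and there is no shared secret for an attacker to intercept.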

As global smart watch sales continue to grow, the devices are becoming an increasingly important channel for digital engagement, particularly with 24 to 35-year-olds. At the same time, smart watch usage has grown beyond fitness applications to include banking, productivity apps such as Slack, ecommerce such as Apple Pay, and home security such as NEST.

A further driver for the use of smart watch applications is the fact that consumers often find it easier to access information on a watch without the need for passwords or one-time passcodes, especially on smart watches like the Apple Watch that do not rely on having a smartphone nearby.

The move is a strategic one for Nok Nok because it not only satisfies customer requirements, but also fulfils one of the key goals for Nok Nok as a company and the FIDO Alliance as a whole.

From the point of view of S3 Authentication Suite end-user organizations, the new SDK will make it easier to make applications available to consumers on smart watches as a new client platform in its own right and meet the security and privacy requirements of both smart watch users and global, regional and industry-specific regulations, especially in highly-regulated industries such as telecommunications and financial services.

In addition, the SDK for smart watches offers end-user organisations an opportunity to simplify their backend infrastructure by having a single authentication method for all digital channels, enabled by a unified backend authentication infrastructure, thereby reducing cost through lower complexity and operational overhead.

From a Nok Nok point of view, the SDK delivers greater value to existing customers and is likely to win new customers as organisations, particularly in the financial services sector, seek to engage consumers across all available digital channels.

Enabling the same strong FIDO-backed authentication across all digital channels is also a key goal of Nok Nok, both as a company and as a founding member of the FIDO (Fast IDentity Online) Alliance.

The FIDO Alliance is a non-profit consortium of technology industry partners – including Amazon, Facebook, Google, Microsoft and Intel – working to establish standards for strong authentication to address the lack of interoperability among strong authentication devices as well as the problems users face with creating and remembering multiple usernames and passwords.

The FIDO Alliance plans to change the nature of authentication by developing specifications that define an open, scalable, interoperable set of mechanisms that supplant reliance on passwords to securely authenticate users of online services via FIDO-enabled devices.

The new S3 SDK from Nok Nok for Apple watchOS offers a stronger authentication alternative to solutions that typically store OAuth tokens or other bearer tokens in their smart watch applications. These tokens provide relatively weak authentication and need to be renewed frequently because they can be stolen.

In contrast, FIDO-based authenticators provide strong device binding for credentials, providing greater ease of use as well as additional assurance that applications are being accessed only by the smart watch owner (authorized user).

While commercially a strategic move for Nok Nok to be the first mover in enabling strong FIDO-based authentication via its S3 Authentication Suite, the real significance of the new SDK for Apple Watches is that it moves forward the IT industry’s goal of achieving stronger authentication and reducing reliance on password-based security.

AI for Governance and Governance of AI

Artificial Intelligence is a hot topic, and many organizations are now starting to exploit these technologies; at the same time, there are many concerns about the impact this will have on society. Governance sets the framework within which organizations conduct their business in a way that manages risk and compliance and ensures an ethical approach. AI has the potential to improve governance and reduce costs, but it also creates challenges that need to be governed.

The concept of AI is not new, but cloud computing has provided the access to data and the computing power needed to turn it into a practical reality. However, while there are some legitimate concerns, the current state of AI is still a long way from the science-fiction portrayal of a threat to humanity. Machine Learning technologies provide significantly improved capabilities to analyze large amounts of data in a wide range of forms. While this poses the threat of a “Big Other”, it also makes these technologies especially suitable for spotting patterns and anomalies, and hence potentially useful for detecting fraudulent activity, security breaches and non-compliance.

AI covers a range of capabilities, including ML (machine learning), RPA (Robotic Process Automation), NLP (Natural Language Processing) amongst others. But AI tools are simply mathematical processes which come in a wide variety of forms and have relative strengths and weaknesses.

ML (Machine Learning), in its currently dominant form, is based on artificial neural networks, inspired by the way in which animal brains work. These networks of machine learning algorithms can be trained to perform tasks using data as examples, without needing any preprogrammed rules.

For ML training to be effective, it needs large amounts of data, and acquiring this data can be problematic. The data may need to be obtained from third parties, and this can raise issues around privacy and transparency. The data may contain unexpected biases and, in any case, needs to be tagged or classified with the expected results, which can take significant effort.

One major vendor successfully applied this approach to detect and protect against identity-led attacks. This was not a trivial project and took 12 people over 4 years to complete. However, the results were worth the cost, since the system is now much more effective than the hand-crafted rules that were previously used. It is also capable of automatically adapting to new threats as they emerge.

So how can this technology be applied to governance? Organizations are faced with a tidal wave of regulation and need to cope with the vast amount of data that is now regularly collected for compliance. The current state of AI technologies makes them very suitable for meeting these challenges. ML can be used to identify abnormal patterns in event data and detect cyber threats while they are in progress, as sketched below. The same approach can help to analyze the large volumes of data collected to determine the effectiveness of compliance controls. Its ability to process textual data makes it practical to process regulatory texts to extract the obligations and compare these with the current controls. It can also process textbooks, manuals, social media and threat-sharing sources to relate event data to threats.
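
As a hedged illustration of the "abnormal patterns in event data" use case: an unsupervised model such as an isolation forest can be fitted on features of normal events and then flag outliers for an analyst to review. The features and numbers below are synthetic, invented purely for this sketch.

```python
# Illustrative anomaly detection over synthetic login-event features using
# scikit-learn; the features and numbers are invented for this sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" events: business hours, modest traffic, few failures.
normal_events = np.column_stack([
    rng.normal(13, 3, 1000),    # hour of day
    rng.normal(200, 50, 1000),  # kilobytes transferred
    rng.poisson(0.2, 1000),     # failed login attempts
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_events)

# Score new events: a 2 p.m. login vs. a 3 a.m. login with heavy transfer
# and many failed attempts.
new_events = np.array([
    [14.0, 210.0, 0.0],  # looks normal
    [3.0, 5000.0, 8.0],  # looks anomalous
])
print(model.predict(new_events))  # 1 = normal, -1 = anomaly
```

No labeled attack data is required here, which is what makes this family of techniques attractive for compliance and threat monitoring, though flagged events still need human review.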

However, the system needs to be trained by regulatory professionals to recognize the obligations in regulatory texts and to extract these into a common form that can be compared with the existing obligations documented in internal systems to identify where there is a match. It also needs training to discover existing internal controls that may be relevant or, where there are no controls, to advise on what is needed.

Linked with a conventional GRC system, this can augment existing capabilities and help to consolidate new and existing regulatory requirements into a central repository used to classify complex regulations, helping stakeholders across the organization to process large volumes of regulatory data. It can help to map regulatory requirements to internal taxonomies, business structures and basic GRC data, thus connecting regulatory data to key risks, controls and policies, and linking that data to an overall business strategy.

Governance also needs to address the ethical challenges that come with the use of AI technologies. These include unintentional bias, the need for explanation, avoiding misuse of personal data and protecting personal privacy as well as vulnerabilities that could be exploited to attack the system.

Bias is a very current issue, with bias related to gender and race as top concerns. Training depends upon the data used, and many datasets contain an inherent, if unintentional, bias. For example, see the 2018 paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. There are also subtle differences between human cultures, and it is very difficult for humans to develop AI systems that are culturally neutral. Great care is needed in this area.

Explanation – In many applications, it may be very important to provide an explanation for conclusions reached and actions taken. Rule-based systems can provide this to some extent, but ML systems in general are poor at it. Where explanation is important, some form of human oversight is needed.

One of the driving factors in the development of ML is the vast amount of data that is now available, and organizations would like to get maximum value from it. Conventional analysis techniques are very labor-intensive, and ML provides a potential way to get more from the data with less effort. However, organizations need to beware of breaching public trust by using personal data, which may have been legitimately collected, in ways for which they have not obtained informed consent. Indeed, this is part of the wider issue of surveillance capitalism – see Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.

ML systems, unlike humans, do not use understanding; they simply match patterns. This makes them open to attacks using inputs that are invisible to humans. Recent examples of reported vulnerabilities include one where the autopilot of a Tesla car was tricked into changing lanes into oncoming traffic by stickers placed on the road. A wider review of this challenge can be found in A survey of practical adversarial example attacks.

In conclusion, AI technologies, and ML in particular, provide the potential to assist governance by reducing the costs associated with onboarding new regulations, managing controls and processing compliance data. The exploitation of AI within organizations needs to be well governed to ensure that it is applied ethically and to avoid unintentional bias and misuse of personal data. The ideal areas for the application of ML are those with a limited scope and where explanation is not important.

For more information attend KuppingerCole’s AImpact Summit 2019.

If you liked this text, feel free to browse our Focus Area: AI for the Future of Your Business for more related content.

Akamai to Block Magecart-Style Attacks

Credit card data thieves, commonly known as Magecart groups, typically use JavaScript code injected into compromised third-party components of e-commerce websites to harvest data from shoppers to commit fraud.

A classic example was a Magecart group’s compromise of Inbenta Technologies’ natural language processing software, which UK-based ticketing website Ticketmaster used to answer user questions.

The Magecart group inserted malicious JavaScript into the Inbenta JavaScript code, enabling the cyber criminals to harvest all the customer credit card data submitted to the Ticketmaster website.  

As a result, Ticketmaster is facing a £5m lawsuit on behalf of Ticketmaster customers targeted by fraud as well as a potential GDPR fine by the Information Commissioner’s Office, which is yet to publish the findings of its investigation.

A data breach at British Airways linked to similar tactics, potentially by a Magecart group, resulted in the Information Commissioner’s Office announcing in July 2019 that it was considering a fine of more than €200m for the company.

According to security researchers, the breach of Ticketmaster customer data was part of a larger campaign that targeted at least 800 websites.

This is a major problem for retailers: an Akamai tool called Request Map shows that more than 90% of content on most websites comes from third-party sources, over which website owners have little or no control.

These scripts effectively give attackers direct access to website users, and once they are loaded in the browser, they can link to other malicious content without the knowledge of website operators.

Current web security offerings are unable to address and manage this problem, and a Content Security Policy (CSP) alone is inadequate to deal with potentially thousands of scripts running on a website. Akamai is therefore developing and bringing to market a new product dedicated to helping retailers reduce the risk posed by the third-party links and elements of their websites used for things like advertising, customer support and performance management.
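
For context, a CSP is delivered as an HTTP response header that whitelists the origins a browser may load scripts from; the origins below are illustrative:

```
Content-Security-Policy: script-src 'self' https://cdn.example.com https://tags.example.net
```

Maintaining such an allow-list across dozens of tag managers and nested third-party loaders is precisely the scale problem described above, and a whitelisted origin that is itself compromised, as in the Inbenta case, passes the policy unchallenged.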

The new service, dubbed Page Integrity Manager, has completed initial testing and is now entering the beta testing phase with up to 25 volunteer customers covering a range of different data types.

The aim of Akamai Page Integrity Manager is to enable website operators to detect and stop third-party breaches before their users are impacted. The service is designed to discover and assess the risk of new or modified JavaScript, control third-party access to sensitive forms or data fields using machine learning to identify relevant information, enable automated mitigation using policy-based controls, and block bad actors using Akamai threat intelligence to improve accuracy.

The service works by inserting a JavaScript into customer web pages to analyze all content received by the browser from the host organization and third parties. This identifies and blocks any scripts trying to access and exfiltrate financial or other personal data (form-jacking) and notifies the website operator.

Third-party JavaScripts massively increase the attack surface and ramp up the risk for website operators and visitors alike, and until now there has been no practical, effective way for website operators to detect the threat and mitigate the risk. That is set to change with the commercial availability of Akamai’s Page Integrity Manager, expected in early 2020.

Microsoft Partnership Enables Security at Firmware Level

Microsoft has partnered with Windows PC makers to add another level of cyber attack protection for users of Windows 10 to defend against threats targeting firmware and the operating system.

The move is in response to attackers developing threats that specifically target firmware as the IT industry has built more protections into operating systems and connected devices. This trend appears to have been gaining popularity since security researchers at ESET found that Russian espionage group APT28 – also known as Fancy Bear, Pawn Storm, Sofacy Group, Sednit, and Strontium – was exploiting firmware vulnerabilities to distribute the LoJax malware.

The LoJax malware, which targeted European government organizations, exploited a firmware vulnerability to effectively hide inside the computer's flash memory. As a result, the malware was difficult to detect and able to persist even after an operating system reinstall, because whenever the infected PC booted up, the malware would re-execute.

In a bid to gain more Apple-like control over the hardware on which its Windows operating system runs, Microsoft has worked with PC and chip makers on an initiative dubbed “Secured-core PCs”, which applies the security best practices of isolation and minimal trust to the firmware layer. The aim is to protect Windows devices from attacks that exploit the fact that firmware has a higher level of access and higher privileges than the Windows kernel, which means attackers can undermine protections such as secure boot and other defenses implemented by the hypervisor or operating system.

The initiative appears to be aimed at industries that handle highly-sensitive data, including personal, financial and intellectual property data, such as financial services, government and healthcare rather than the consumer market. However, consumers using new high-end hardware like the Surface Pro X and HP's Dragonfly laptops will benefit from an extra layer of security that isolates encryption keys and identity material from Windows 10.

According to Microsoft, Secured-core PCs combine identity, virtualization, operating system, hardware and firmware protection to add another layer of security underneath the operating system. To prevent firmware attacks, they use new hardware Dynamic Root of Trust for Measurement (DRTM) capabilities from AMD, Intel and Qualcomm to implement Microsoft’s System Guard Secure Launch as part of Windows Defender in Windows 10.

This effectively removes trust from the firmware. Although Microsoft introduced Secure Boot in Windows 8 to mitigate the risk posed by malicious bootloaders and rootkits that relied on Unified Extensible Firmware Interface (UEFI) firmware, the firmware itself is trusted to verify the bootloaders, which means that Secure Boot on its own does not protect against threats that exploit vulnerabilities in that trusted firmware.

The DRTM capability also helps to protect the integrity of the virtualization-based security (VBS) functionality implemented by the hypervisor from firmware compromise. VBS relies on the hypervisor to isolate sensitive functionality from the rest of the OS, which helps to protect that functionality from malware that may have infected the normal OS, even with elevated privileges. According to Microsoft, protecting VBS is critical because it is used as a building block for important operating system security capabilities such as Windows Defender Credential Guard, which protects against malware maliciously using OS credentials, and Hypervisor-protected Code Integrity (HVCI), which ensures that a strict code integrity policy is enforced and that all kernel code is signed and verified.

It is worth noting that the Trusted Platform Module 2.0 (TPM) has been implemented as one of the device requirements for Secured-core PCs to measure the components that are used during the secure launch process, which Microsoft claims can help organisations enable zero-trust networks using System Guard runtime attestation.

ESET has responded to its researchers’ UEFI rootkit discovery by introducing a UEFI scanner to detect malicious components in firmware, and some chip manufacturers are aiming to do something similar with specific security chips. Microsoft’s Secured-core PC initiative, however, is aimed at blocking firmware attacks rather than just detecting them, and it is cross-industry, involving a wide range of CPU architectures and Original Equipment Manufacturers (OEMs). This means that the firmware defence will be available to all Windows 10 users, regardless of the PC maker and form factor they choose.

It will be interesting to see what effect this initiative has in reducing the number of successful ransomware and other BIOS/UEFI or firmware-based cyber attacks on critical industries. A high success rate is likely to see commoditization of the technology and result in availability for all PC users in all industries.

Can Your Antivirus Be Too Intelligent Sometimes?

Current and future applications of artificial intelligence (or should we rather stick to a more appropriate term “Machine Learning”?) in cybersecurity have been one of the hottest discussion topics in recent years. Some experts, especially those employed by anti-malware vendors, see ML-powered malware detection as the ultimate solution to replace all previous-generation security tools. Others are more cautious, seeing great potential in such products, but warning about the inherent challenges of current ML algorithms.

One particularly egregious example of “AI security gone wrong” was covered in an earlier post by my colleague John Tolbert. In short, to reduce the number of false positives produced by an AI-based malware detection engine, developers have added another engine that whitelisted popular software and games. Unfortunately, the second engine worked a bit too well, allowing hackers to mask any malware as innocent code just by appending some strings copied from a whitelisted application.

However, such cases, where bold marketing claims contradict not just common sense but reality itself and thus force engineers to fix their ML model’s shortcomings with clumsy workarounds, are hopefully not particularly common. Still, every ML-based security product faces the same challenge: whenever a particular file triggers a false positive, there is no way to tell the model to just stop. After all, machine learning is not based on rules; you have to feed the model with lots of training data to gradually guide it to a correct decision, and re-labeling just one sample is not enough.

This is exactly the problem the developers of the Dolphin Emulator have recently faced: for quite some time, every build of their application has been flagged by Windows Defender as malware, based on Microsoft’s AI-powered behavior analysis. Every time the developers would submit a report to Microsoft, the build would be dutifully added to the application whitelist, and the case would be closed – until the next build, with a different file hash, was released.

Apparently, the way this cloud-based ML-powered detection engine is designed, there is simply no way to fix a false positive once and for all future builds. However, Microsoft obviously does not want to make the same mistake as Cylance and inadvertently whitelist too much, creating potential false negatives. Thus, the developers and users of the Dolphin Emulator are left with only one option: submit more and more false-positive reports and hope that sooner or later the ML engine will “change its mind” on the issue.

Machine learning-enhanced security tools are supposed to eliminate tedious manual labor for security analysts; however, this issue shows that sometimes just the opposite happens. Antimalware vendors, application developers, and even users must do more work to overcome this ML interpretation problem. Yet does it really mean that incorporating machine learning into an antivirus was a mistake? Of course not. But giving too much authority to an ML engine that is, in a sense, incapable of explaining its decisions and does not react well to criticism probably was.

Potential solutions for these shortcomings do exist, the most obvious being the ongoing work on making machine learning models more explainable, giving insights into the way they make decisions on particular data samples instead of presenting themselves to users as a kind of black box. However, we have yet to see commercial solutions based on this research. In the future, a broader approach towards the “artificial intelligence lifecycle” will surely be needed, covering not just developing and debugging models but stretching from initial training data management all the way up to the ethical and legal implications of AI.

By the way, we’re going to discuss the latest developments and challenges of AI in cybersecurity at our upcoming Cybersecurity Leadership Summit in Berlin. Looking forward to meeting you there! If you want to read up on Artificial Intelligence and Machine Learning, be sure to browse our KC+ research platform.
