Blog posts by Anne Bailey

Compromise of IOTA

Turning a blind eye to security in favor of optimism

If you have any take-away from reading KuppingerCole research, hopefully it is that APIs are a critical element to protect. This is true regardless of the industry. Even cryptocurrencies.

IOTA, the blockchain-like cryptocurrency and transaction network was compromised in mid-February. The API access to the IOTA crypto wallet via a payment service was targeted and exploited for potentially two to three weeks. Approximately 50 accounts were compromised, leading to the eventual theft of around 2 million Euros.

There is a risk in trusting the promises of hyped technology. Blockchain is often praised as being tamperproof and highly secure, and that reputation largely holds. The blockchain – or more specifically, the DAG protocol that is similar to blockchain – didn’t cause the vulnerability. However, somebody – perhaps network overseers, third-party services, or Content Delivery Networks – trusted this claim a little too much and neglected to protect the more mundane aspects of the solution.

Do we want decentralization?

A delay in communication allowed the attacker to escape with their payload. The third-party service that was compromised became aware of the breach on February 10th and closed the entry point the attacker had been using to steal private key information. Only five days later did the third-party service communicate and collaborate with the IOTA Foundation to freeze the network and all transactions. In that window, the attacker was able to empty the compromised accounts of approximately 2 million Euros.

The damage to individual accounts wasn’t higher because the IOTA Foundation has some degree of control over the network. This level of control allows the network to be arbitrarily halted, and for the Foundation to implement a claims registration tool to offer some degree of user protection. These basic safeguards are completely absent from fully decentralized solutions like Bitcoin or Ethereum. But in instances like this, perhaps some centralized support is not amiss.

Ambient Intelligence Can’t Mature Without an Identity Protocol

Every day we are experiencing the intersection of IoT and AI. The interactions of users, sensors, robots, vehicles, smart buildings, and much more are creating a new status quo for digital experiences. This growing range of smart devices – both in the IoT sense and the intelligent AI sense – means we are moving beyond a singular focus on the smartphone. This heightened immersion into increasingly distributed, decentralized digital networks is what KuppingerCole has termed “Ambient Intelligence”.

The synergy of AI and IoT that Ambient Intelligence enables will be a key driver for the machine-to-machine (M2M) economy, with businesses and consumers already demanding it in daily tasks. However, advancing the M2M economy is held back by the lack of a reliable, secure identity protocol for objects and devices. Without this sort of protocol, companies use strange workarounds to meet the demands of users – using a smartphone as a proxy for a vehicle, for example.

When a phone isn’t a phone

Artist Simon Weckert publicized photo and video evidence of his performance art piece on 1 February 2020, showing himself pulling a wagon filled with 99 smartphones through Berlin. His message of how the presence of a smartphone impacts the way traffic is represented in Google Maps received over three million views. With so many smartphones, he easily turned a “green”, traffic-free section of road to “red” and congested, rerouting actual vehicles to side streets. Weckert’s purpose was to illuminate the influence that digital tools such as Google Maps have over our choices in the physical world, but this performance invites other takeaways, such as the misrepresentation of devices.

IOTA’s role in delivering an identity protocol

IOTA has very promising contributions to digital identity, for humans as well as devices and objects. It is an open-source, blockchain-like protocol that among other things can host the emerging Unified Identity Protocol that would enable a secure M2M exchange of data – including ambient devices. Established in Germany as a non-profit foundation to provide “the next generation of protocols for the connected world”, IOTA is both the name of the foundation and the DAG-based technology layer which is free for public use. It stands out as a digital (and decentralized) identity solution for a number of reasons:

  • The use of Directed Acyclic Graphs (DAG) solves the chronic scalability weakness of typical blockchains: a participating node’s transaction is only confirmed once it approves two previous transactions, so the more transactions flow through IOTA, the faster each one can be approved.
  • Transactions are fee-less, again because of the blockchain-like DAG structure. Unlike the typical proof-of-work consensus mechanism which incentivizes honest participation by awarding cryptocurrencies and charging transaction fees, a participating node’s incentive in IOTA is to have its own transaction approved.
  • Microtransactions (such as sharing the battery status of an electric vehicle every few minutes) are possible because transactions are fee-less. The potential for objects and devices to share data on a constant basis using IOTA is much more feasible if there are not prohibitive costs associated with it.
  • IOTA is bringing Self-Sovereign Identity for human and device and object identity. It arranges an identity ecosystem made up of Holder, Issuer, and Verifier roles following emerging standards for DID and Verifiable Credentials. By employing cryptographic techniques such as zero-knowledge proofs, users – and objects – can prove an identity attribute is true without over-revealing information.
  • The partner ecosystem includes key players that are highly invested in bringing IoT and Industry 4.0 to maturity. Siemens alone has been granted 13 patents for IOTA-based identification and authentication technologies.
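
The tip-approval mechanic described in the first bullet can be sketched in a few lines. This is a toy model of mine, not IOTA's implementation: the class name, uniform tip selection, and string transaction IDs are all simplifications (real IOTA selects tips via a weighted random walk over the Tangle and attaches proof of work). It only illustrates how each new transaction confirms up to two unapproved "tips":

```python
import random

class Tangle:
    """Toy model of a DAG ledger: each new transaction approves up to two tips."""

    def __init__(self):
        # The genesis transaction approves nothing.
        self.approvals = {"genesis": []}  # tx_id -> list of approved tx_ids
        self.tips = {"genesis"}           # transactions not yet approved by anyone

    def add_transaction(self, tx_id):
        # Pick up to two current tips to approve (uniform choice here;
        # real IOTA uses a weighted random walk for tip selection).
        chosen = random.sample(sorted(self.tips), k=min(2, len(self.tips)))
        self.approvals[tx_id] = chosen
        self.tips -= set(chosen)   # approved transactions stop being tips
        self.tips.add(tx_id)       # the newcomer starts out as a tip
        return chosen

tangle = Tangle()
tangle.add_transaction("tx1")   # approves genesis
tangle.add_transaction("tx2")   # approves tx1, the only remaining tip
tangle.add_transaction("tx3")
```

The key property the bullet describes falls out directly: issuing a transaction is also the act of validating earlier ones, so throughput and confirmation speed rise together.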

Multiple decentralized identity solutions are under development, but this typically means blockchain. While they have their merits, IOTA is a non-blockchain option for decentralized identity that may outpace them all. To learn more about the future of digital and decentralized identity, join KuppingerCole at the European Identity & Cloud Conference in May 2020 for over 200 sessions and insight from a wide array of global experts.

Three Critical Elements Required to Close the Cybersecurity Skills Gap

The state of cybersecurity is fairly clear: 82% of employers report that their cybersecurity skills are not sufficient to handle the rising number of cyber incidents (Center for Strategic & International Studies, 2019. The Cybersecurity Workforce Gap). There is a gap between the skills needed for strong cybersecurity and the skills organizations actually have. It is an individual problem, but also an enterprise problem and a global problem. The vast majority of the world simply does not have the skills to keep up with the cyber risks that we know exist.

Three Critical Elements to Closing the Skills Gap

KuppingerCole research shows that there are three critical elements required to close the cybersecurity skills gap: education, tools, and collaboration. Skills require having adequate knowledge: what are the typical attack vectors of a cyber incident? What are the best processes to have in place? Skills also require using the correct tools: a skilled carpenter would never use a welder for woodwork. So why do many still cut corners by jury-rigging inadequate tools for security purposes? Lastly, these skills require collaboration. Some aspects of cybersecurity should be handled in-house; others would be far more efficient coming from a Managed Security Service Provider (MSSP). Deciding on the appropriate balance requires insight into your own team’s capabilities.

The Role of Organizational Change Management

Closing the cybersecurity skills gap is also an organizational change problem. Very often, incident response management programs do not have the full support of senior management, or face implementation challenges when employees do not fully understand new processes. Experience plays a dominant role here; the misconception is that only a few people are relevant to cybersecurity programs when in fact, every person in an organization should play an active role. Taking the time to build allies in an organization, communicate with and train coworkers, and assess progress is fundamental to building cybersecurity skills in an organization.

This skills shortage paradigm is shifting. Having identified the critical elements to building cybersecurity capacity, KuppingerCole Analysts pulled from years of experience working alongside companies to implement top-of-the-line cybersecurity programs to create a master class bringing pivotal knowledge to the right people. Every individual is a critical actor in a cybersecurity program. The global economy does lack trained cybersecurity professionals, but training for these skills is no longer inaccessible.

A Solution to the Skills Gap

The first steps to building up cybersecurity begin with knowing the organization in question. An analysis of capabilities already covered in an organization should be made, and careful consideration should be given to where an organization should supplement with MSSPs. KuppingerCole can help support this process. The KC Master Class facilitates a tight relationship with the trainer, a senior KC analyst. Individualized workshops, 1:1 problem-solving sessions, and decision support are built into the Master Class. A modern learning style combines a digital/analog instructional environment with real-world, bootcamp-style meetings and eLearning materials. The process is conducted in close contact with the trainer and expert community, using standard collaboration software such as MS Teams.

Lead Analyst Alexei Balaganski writes: “the primary reason for not doing security properly is insufficient guidance and a lack of widely accepted best practices in every area of cybersecurity.” Each individual has the capacity to change this reality. KuppingerCole can help do this.

RPA and AI: Don’t Isolate Your Systems, Synchronize Them

We already hear a lot about artificial intelligence (AI) systems being able to automate repetitive tasks. But AI is such a large term that encompasses many types of very different technologies. What type of solutions are really able to do this?

Robotic Process Automation (RPA) configures software to mimic human actions on a graphical user interface (GUI) to carry out a business process. For example, an RPA system could open a relevant email, extract information from an attached invoice, and input it in an internal billing system. Although modern RPA solutions are already relying on various AI-powered technologies like image recognition to perform their functions, positioning RPA within the spectrum of AI-powered tools is still somewhat premature: on its own, RPA is basically just an alternative to scripting for non-technical users.
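
The invoice example can be illustrated as a script. This is a hypothetical sketch of the data-extraction step only: the field names, text layout, and regular expressions are invented for illustration, and a real RPA tool would drive the GUI rather than call Python functions. It also shows the brittleness discussed below – any deviation from the expected format fails outright:

```python
import re

def extract_invoice_fields(email_body):
    """Pull an invoice number and amount out of an email body (hypothetical format)."""
    number = re.search(r"Invoice\s+#(\d+)", email_body)
    amount = re.search(r"Total:\s+EUR\s+([\d.,]+)", email_body)
    if not (number and amount):
        # Like a recorded RPA flow, this fails hard on any layout deviation.
        raise ValueError("email does not match the expected invoice format")
    return {"invoice_no": number.group(1),
            "amount_eur": float(amount.group(1).replace(",", ""))}

def post_to_billing(record, billing_system):
    """Stand-in for the 'input into the internal billing system' step."""
    billing_system.append(record)

billing = []
body = "Invoice #4711 attached.\nTotal: EUR 1,250.00\nThanks!"
post_to_billing(extract_invoice_fields(body), billing)
```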

Enterprises that are just beginning to automate prescribed tasks hope to progress to more advanced capabilities such as data-driven analytics and machine learning, ending with cognitive decision making; however, they should realize that existing RPA solutions might not yet be intelligent enough for such aspirations.

Filling in the Gaps

If RPA sounds limited, that is because it is; it is not a one-stop shop for intelligent automation. RPA only automates the button clicks of a multi-step process across multiple programs. If you’re under the impression that RPA can deliver end-to-end process automation, pause and reassess. RPA can do a limited and explicitly defined set of tasks well, but faces serious limitations when flexibility is required.

As soon as any deviation from the defined process is needed, RPA cannot and does not function. However, it can be part of a larger business process orchestration that operates from an understanding of what must be done instead of how. RPA delivers some value in isolation, but much more is possible when coordinated with other AI systems.

The weaknesses of RPA systems overlap nicely with the potential that machine learning (ML)-based AI can offer. ML is capable of adding flexibility to a process based on data inputs. Solutions are becoming available that learn from each situation – unlike RPA – and produce interchangeable steps, so that the system can assess the type of issue to be solved and build the correct process to handle it from the repository of already-learned steps. This widens the spectrum of actions that an RPA system can take.

Synchronization Adds Value

AI does have strengths that overlap with RPA weaknesses, like handling unstructured data. An AI-enabled RPA system can process unstructured data from multiple channels (email, document, web) in order to input information later in the RPA process. The analytics functionality of ML can add value to an RPA process, such as identifying images of a defective product in a customer complaint email and downloading them to the appropriate file. There are aspects that the pairing of RPA and AI does not solve, such as end-to-end process automation, or understanding context (at least not yet).
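
The division of labor described here – ML classifies the unstructured input, RPA replays an appropriate recorded step sequence – can be sketched as follows. The step library and the keyword "classifier" standing in for a trained ML model are illustrative assumptions of mine, not any vendor's design:

```python
# A repository of already-learned step sequences (hypothetical names).
STEP_LIBRARY = {
    "refund":    ["open_crm", "lookup_order", "issue_refund", "send_confirmation"],
    "complaint": ["open_crm", "lookup_order", "attach_evidence", "escalate_to_agent"],
}

def classify(text):
    """Trivial keyword rule standing in for a trained intent classifier."""
    return "refund" if "refund" in text.lower() else "complaint"

def build_process(text):
    """Assemble the right sequence of learned steps for this message."""
    return STEP_LIBRARY[classify(text)]

steps = build_process("The product arrived broken, please send a refund.")
```

The point of the design is that the steps themselves stay as rigid and replayable as any RPA recording; the flexibility lives entirely in the classification layer that chooses among them.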

Overall, RPA’s value to a process increases when used in combination with other relevant AI tools.

API Platforms as the Secure Front Door to Your Identity Fabric

Identity and Access Management (IAM) is on the cusp of a new era: that of the Identity Fabric. An Identity Fabric is a new logical infrastructure that acts as a platform to provide and orchestrate separate IAM services in a cohesive way. Identity Fabrics help the enterprise meet the current expanded needs of IAM, like integrating many different identities quickly and securely, allowing BYOID, enabling accessibility regardless of geographic location or device, linking identity to relationship, and more.

The unique aspect of Identity Fabrics is the many interlinking connections between IAM services and front- and back-end systems. Application Programming Interfaces (APIs) are the secure access points to the Identity Fabric, and can make or break it. APIs are defined interfaces that can be used to call a service and get a defined result, and have become a far more critical tool than simply for the benefit of developers.

Because APIs are now the main form of communication and delivery of services in an Identity Fabric, they – by default – become the security gatekeeper. With an API facilitating each interface between aspects of the fabric, each one is a potential weakness.
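
As a rough illustration of that gatekeeper role, the sketch below puts a single verification check in front of every call into the fabric. The HMAC scheme, service names, and request shape are placeholders of mine; a production Identity Fabric would validate OAuth 2.0 / OIDC tokens (signed JWTs) at the API layer instead:

```python
import hashlib
import hmac

# Hypothetical shared secret; real deployments use signed tokens, not raw HMACs.
API_SECRET = b"demo-secret"

def sign(payload):
    """Compute the request signature a legitimate caller would attach."""
    return hmac.new(API_SECRET, payload.encode(), hashlib.sha256).hexdigest()

def gateway(request):
    """Every call into the fabric passes this check before reaching a service."""
    expected = sign(request["payload"])
    if not hmac.compare_digest(expected, request.get("signature", "")):
        return {"status": 401, "body": "rejected at the API boundary"}
    return {"status": 200, "body": f"forwarded to {request['service']}"}

ok = gateway({"service": "directory", "payload": "lookup:alice",
              "signature": sign("lookup:alice")})
bad = gateway({"service": "directory", "payload": "lookup:alice",
               "signature": "forged"})
```

The design point is the same one the paragraph makes: because every interface funnels through the API layer, a single weak check there exposes every service behind it.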

API security should be comprehensive, serving the key areas of an Identity Fabric. These include:

  • Directory Services, one or more authoritative sources managing data on identities of humans, devices, things, etc. at large scale
  • Identity Management, i.e. the Identity Lifecycle Management capabilities required for setting up user accounts in target systems, including SaaS applications; this also covers Identity Relationship Management, which is essential for digital services where the relationship of humans, devices, and things must be managed
  • Identity Governance, supporting access requests, approvals, and reviews
  • Access Management, covering the key element of an Identity Fabric, which is authenticating the users and providing them access to target applications; this includes authentication and authorization, and builds specifically on support for standards around authentication and Identity Federation
  • Analytics, i.e. understanding the user behavior and inputs from a variety of sources to control access and mitigate risks
  • IoT Support, with the ability of managing and accessing IoT devices, specifically for Consumer IoT – from health trackers in health insurance business cases to connected vehicles or traffic control systems for smart traffic and smart cities

API security is developing as a market space in its own right, and it is recommended that enterprises that are moving towards the Identity Fabric model of IAM be up to date on API security management. The recent Leadership Compass on API Management and Security has the most up-to-date information on the API market, critical to addressing the new era of identity.

Dive deep into API Management and Security with Alexei Balaganski's Leadership Compass.

Regulating AI's Limitless Potential

Regulation has the uncomfortable task of limiting untapped potential. I was surprised when I recently received the advice to think of life like a box. “The walls of this box are all the rules you should follow. But inside the box, you have perfect freedom.” Stunned as I was at the irony of having complete freedom to think inside the box, those at the forefront of AI development and implementation are faced with the irony of limiting projects with undefined potential.

Although Artificial General Intelligence – the ability of a machine to intuitively react to situations that it has not been trained to handle in an intelligent, human way – is still unrealized, narrow AI that enables applications to independently complete a specified task is becoming a more accepted addition to a business’ digital toolkit. Regulations that address AI are built on preexisting principles, primarily data privacy and protection against discrimination. They deal with the known risks that come with AI development. In 2018, biometric data was added to the European GDPR framework to require extra protection. In both the US and Europe, proposals are currently being discussed to monitor AI systems for algorithmic bias and govern facial recognition use by public and private actors. Before implementing any AI tool, companies should be familiar with the national laws for the region in which they operate.

These regulations have a limited scope, and in order to address the future unknown risks that AI development will pose, a handful of policy groups have published guidelines that attempt to set a model for responsible AI development.

The major bodies of work include:

The principles developed by each body are largely similar. The main principles, which all of the guidelines address, are the need for developers and AI implementers to protect human autonomy, obey the rule of law, prevent harm, promote inclusive growth, maintain fairness, develop robust, prudent, and secure technology, and ensure transparency.

The single outstanding feature is that only one document provides measurable and immediately implementable action. The EU Commission included an assessment for developers and corporate AI implementers to conduct to ensure that AI applications become and remain trustworthy. The assessment is currently in a pilot phase and will be updated in January 2020 to reflect the comments from businesses and developers. The other guidelines offer compatible principles but are general enough to allow any of the public, private, or individual stakeholders interacting with AI to deflect responsibility.

This collection of guidelines from the international community are not legally binding restrictions, but are porous barriers that allow sufficiently cautious and responsible innovations to grow and expand as the trustworthiness of AI increases. The challenge in creating regulations for an intensely innovative industry is to build in flexibility and the ability to mitigate unknown risks without compromising the artistic license. These guidelines attempt to set an ethical example to follow, but it is essential to use tools like the EU Commission’s assessment tool which establish an appropriate responsibility, no matter the status as developer, implementor, or user.

Alongside the caution from governing bodies comes a clear recognition that AI development can bring significant economic, social, and environmental growth. The US issued an executive order in February 2019 to prioritize AI R&D projects, while the EU takes a more cautiously optimistic approach: recognizing the opportunities, but prioritizing building and maintaining a uniform EU strategy for AI adoption.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

What Does AI in Human Resources Mean for the Small Business?

Thanks to an incessant desire to remove repetitive tasks from our to-do lists, researchers and companies are developing AI solutions to HR – namely to streamline recruiting, improve the employee experience, and to assess performance.

AI driven HR management will look different in small businesses than in large companies and multinationals. There are different barriers that will have to be navigated, but also different priorities and opportunities that small businesses will have with AI.

Smaller budgets create price barriers to implementing an AI system, and likely psychological barriers as the self-made CEO resists delegating tasks that would otherwise rely on his or her gut instinct. Access to a sufficient quantity of data to optimize algorithms is perhaps the largest challenge that small businesses will face when integrating AI into their HR practices. Companies typically gather data from their own databases, assembling a wide range of hiring documents, employee evaluations, and so on. Large companies have decades of stored HR data from thousands of employees, and clearly have an advantage when it comes to gathering a large volume of usable data.

In terms of priorities, there is a huge divide between the value proposition that AI offers to large and small businesses. Big companies need to leverage time-saving aspects, especially to create a customized connection for thousands of employees. Routine communication, building employee engagement, and monitoring employee attrition are all aspects that minimize repetitive work and save time. In a sense, the goal is to give institutional bureaucracy a personal touch – like a small business has. A small company’s strengths come from its unique organizational culture, which is heavily dependent on natural, human interaction and well-designed teams. It is this “small company” feel that large companies try to imitate with AI customization features.

Of course, small companies also need to save time, especially because many do not have a dedicated HR department – in some cases, the department consists of one person dividing time between their main role and HR tasks. Their time is limited, so instead of implementing FAQ chatbots that make the organization feel small and accessible, small businesses should focus on another area which consumes too much time: recruiting and promoting visibility.

Finding qualified and competitive candidates is challenging when a firm’s circle of influence is geographically limited. A factor often contributing to success in small firms is the ability to hire for organizational fit, thus building tightly knit teams to deliver agile service. To increase the chances of attracting highly qualified candidates, small businesses should focus on using AI systems to support recruiting and hiring for organizational fit.

Small businesses are always under pressure to do more with less. When implementation costs are high and internal resources limited, small businesses can consider plug and play tools which rely on external datasets. For those who are open to experiment, they can look for AI projects that have overlap with their goals. For example, socially minded companies looking to attract more diverse applicants can participate in studies like AI-enabled refugee resettlement, placing people in areas where they will be most likely to find employment. A project like this could shift setup costs for implementing new technology and achieve wider HR goals that the company may have, like gaining employees with specific skills that are not common in the area, opening up more opportunities for innovation through diversity, gaining different language capabilities, and so on.

The risk of using AI technologies to support hiring has already played out in the case of Amazon. With the best intentions, the research team designing a hiring tool to select the highest qualified candidates based on their resumes noticed that their algorithm had learned to value traits that indicated the candidate was male, and penalize indicators that the candidate was female. The cause was embedded in their input data: the CVs and associated data given to the system to learn from were influenced by years of gendered hiring practices. The project was quietly put to rest. This example was luckily only a pilot version and wasn’t the deciding factor in any applications, but provides a valuable lesson to developers and adopters of recruitment AI: maintaining transparency throughout development and beyond will illuminate weaknesses with time. Robust checks by outside parties will be necessary, because one’s own biases are most difficult to see.
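
One routine outside check of the kind suggested here is comparing selection rates across candidate groups, in the spirit of the "four-fifths" rule of thumb used in US hiring guidance. The sketch below uses invented numbers purely for illustration:

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, picks = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picks[group] = picks.get(group, 0) + (1 if selected else 0)
    return {g: picks[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 6 of 10 group-A CVs pass, 3 of 10 group-B CVs.
outcomes = ([("A", True)] * 6 + [("A", False)] * 4 +
            [("B", True)] * 3 + [("B", False)] * 7)
ratio = disparate_impact(outcomes)  # 0.3 / 0.6 = 0.5, well below the 0.8 rule of thumb
```

A check like this would not have explained why Amazon's model penalized certain CVs, but it would have surfaced the skewed outcome early, which is exactly the kind of transparency the lesson calls for.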

AI can have a role to play in small business HR strategies just as much as the large corporations. But as with any strategy, the decision should be aimed at delivering clear advantages with a plan to mitigate any risks.
