Blog posts by Mike Small

5G and Identity

5G Identity and Authentication

5G is the next generation of cellular mobile communications intended to support the massive increase in capacity and connectivity that will be required for the future cloud of things and to provide the enhanced bandwidth needed for new mobile data services.  The security of both depends upon being able to identify not only the people but also the things that are using the network services.  Organizations need to act now to take account of how 5G will impact on their identity and access management governance and processes.

5G identifiers

First it is important to understand how identity is handled by public cellular networks.  This is because many “things”, including cars, delivery vehicles and others, will use 5G and identify themselves in this way to access the network.  The original objective for this identification was for the network provider to ensure that only authorized customers have access to the service and that this use cannot be repudiated.  How this is achieved has evolved over time to meet the increasing challenges of fraud and cybercrime.

There are two primary identifiers for a cellular device.  The first is the IMEI (International Mobile Equipment Identity), which identifies the handset and is used to block access by stolen devices.  Within the device is a SIM (Subscriber Identity Module), a trusted, tamper-resistant hardware module.  The SIM contains two key pieces of data: the IMSI (International Mobile Subscriber Identity), known in 5G as the SUPI (Subscription Permanent Identifier), and a shared key Ki.  This key is used in the AKA (Authentication and Key Agreement) protocol when the device connects to the cellular network.

5G Authentication

The device would normally connect to the Home Network (i.e. the network with which the user has a contract) and be authenticated directly.  However, when the user is roaming, the device could connect via a Serving Network.  One of the weaknesses of 4G and earlier was the involvement of the Serving Network in the authentication process, and one of the key changes in 5G is that the authentication decision is always made by the Home Network.

The authentication process starts with the device requesting access by sending its IMSI / SUPI to the network.  Previously this was sent unencrypted, posing a threat that it could be intercepted and reused; in 5G it is always encrypted using the public key of the Home Network.  The Home Network responds to this request by sending an Authentication Vector (a large random number) to the device.  The device must encrypt this using the shared key Ki and send the result as its response.  Since the Home Network holds a copy of the key, it can check that the response corresponds to the value that was originally sent.  Prior to 5G this check could be made by the Serving Network, which held a copy of the expected response.  5G and 4G also provide mutual authentication, allowing the device to authenticate the network using the AUTN (Authentication Token) returned by the network and the shared key.
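
To make the shared-key idea concrete, here is a minimal sketch of a challenge-response exchange in Python.  It is purely illustrative: it uses a generic HMAC in place of the actual 3GPP algorithm set, and it omits sequence numbers and the network authentication token, but it shows how both sides can prove possession of Ki without ever transmitting it.

    import hmac, hashlib, secrets

    # Illustrative only: real 5G-AKA uses the 3GPP MILENAGE/TUAK functions,
    # sequence numbers and an AUTN token; this sketch shows just the
    # shared-key challenge-response principle.
    Ki = secrets.token_bytes(16)            # key provisioned in the SIM and held by the Home Network

    def response(challenge: bytes, key: bytes) -> bytes:
        # Prove knowledge of Ki by computing a keyed MAC over the challenge
        return hmac.new(key, challenge, hashlib.sha256).digest()

    rand = secrets.token_bytes(16)          # random challenge issued by the Home Network
    res = response(rand, Ki)                # computed on the device using the SIM's copy of Ki
    xres = response(rand, Ki)               # expected response computed by the Home Network

    assert hmac.compare_digest(res, xres)   # authentication succeeds only if both keys match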

Once the device has been authenticated in 5G, the protocol goes on to agree how the traffic will be encrypted, and subsequent messages use a SUCI (Subscription Concealed Identifier) rather than the plain SUPI to identify the device.  In 5G traffic is encrypted throughout the infrastructure, whereas in earlier generations it was only encrypted over the radio link.

Figure 1: 5G SIM based Authentication Overview

5G Extensible Authentication Protocol

However, not all devices in all circumstances will want to use the 5G-AKA protocol.  For example, in a private 5G network a large manufacturing plant may wish to use a central directory of devices to control access.  To enable this the 5G architecture provides a unified authentication framework that makes 5G authentication both open and network agnostic.  It supports access through other kinds of network, including Wi-Fi and cable, via the N3IWF (Non-3GPP Interworking Function).

For specific use cases such as IoT and private 5G networks EAP-TLS (Extensible Authentication Protocol Transport Layer Security) is also supported. These other forms of authentication could be useful for very cheap IoT devices, for example small sensors, for which it would be prohibitively expensive to require SIM cards.  In addition, the 5G system can be deployed as a replacement for Wi-Fi in an enterprise or factory setting and this can reuse the existing public key and certificate infrastructure for network access authentication.

When selected as the authentication method, EAP-TLS is performed between the device and the AUSF (Authentication Server Function) through the SEAF (Security Anchor Function), which transparently forwards EAP-TLS messages back and forth between the device and the AUSF.  For mutual authentication, both the device and the AUSF verify each other’s certificate.  This makes it possible for the AUSF to integrate with any of the common enterprise authentication protocols, including RADIUS, Kerberos and others.
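
The certificate-based trust model behind EAP-TLS can be illustrated with ordinary mutual TLS.  The sketch below is not the EAP-TLS message flow itself, and the certificate file names are hypothetical placeholders; it simply shows a server requiring and verifying a client certificate, which is the mutual-authentication principle the AUSF relies on.

    import socket, ssl

    # Sketch of mutual certificate authentication; file names are hypothetical.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # network-side credentials
    context.load_verify_locations(cafile="device-ca.crt")                 # CA that issued device certificates
    context.verify_mode = ssl.CERT_REQUIRED                               # the device must present a certificate

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()   # handshake fails unless both certificates verify
            print("device certificate subject:", conn.getpeercert().get("subject"))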

                 4G                      5G AKA                  5G EAP-TLS
Identity         SIM                     SIM                     Other
Trust Model      Shared Symmetric Key    Shared Symmetric Key    Public Key Certificate

Recommendations

5G will impact many industry sectors including logistics, manufacturing, transport, healthcare, the media and smart buildings, as well as local government.  Managing which people and which devices can access what applications and services is critical to ensure security and safety.  Managing access by a potentially enormous number of devices adds to the significant existing identity and access challenges.  Organizations should act now to:

  • Integrate your IoT security architecture with existing IAM and Access Governance frameworks.
  • Define authentication and access controls for IoT devices based on the levels of threats to your use cases.
  • Use strong authentication for sensitive devices (consider SIM / embedded SIM or certificates).
  • Take care to secure and remove vulnerabilities from IoT devices – especially change default passwords.
  • Integrate IoT device management with other IT asset management systems.
  • Review how your existing PKI processes will be able to support the number of IoT devices.

For more details on this subject see KuppingerCole Leadership Brief 5G Impact on Organizations and Security 80238.  Also attend the Public & On-Premise Cloud, Core IT Hosting, Edge IoT Track at EIC in Munich on May 13th, 2020.

5G - How Will This Affect Your Organization?

What is it that connects Covent Garden in London, The Roman Baths in Bath and Los Angeles? The answer is 5G mobile communications used by media organizations. On January 29th I attended the 5G Unleashed event at the IET in London. (The IET is the body that provides professional accreditation for Engineers in the UK). At this event there were several presentations describing real world use cases of 5G as well as deep dives into the supporting infrastructure. While 5G is being sold to consumers as superfast mobile broadband there is a lot more to it than that. It has the potential to impact on a wide range of organizations as well as public infrastructure. 5G provides the communications needed to enable clouds of things but it also creates risks which need to be managed.

Key Features of 5G

The key features that 5G offers include the ability to handle up to 1000 times greater data volumes and up to 100 times more connected devices than with 4G today. It can also reduce communications latency and enable up to ten-year battery life for low power devices.

Figure: Key Features of 5G

Example Applications of 5G

There are many areas where 5G is set to transform the way things are done. Mobile robots within factories and better control over supply chain processes will benefit manufacturing. Connected and Automated Vehicles will require low latency, high volume communication creating terabytes of data every day. Remote healthcare applications using 5G can enable the delivery of healthcare services at a lower cost. Guaranteed low latency and high bandwidth capabilities provided by 5G have the potential to dramatically reduce the cost and equipment needed for real time news broadcasting. 5G could replace the need for the bulky satellite equipment that is currently used for this.

For example: on May 31st, 2019 the BBC made its first outside news broadcast using 5G from Covent Garden in London. This didn’t go as well as expected! Their SIM ran out of data and the location was so full of tourists sharing photos and videos that the mobile service they were using could not provide the bandwidth needed. These teething problems will be overcome now that 5G has been fully launched.

During 2018 and 2019 in the 5G Smart Tourism project BBC Research & Development trialled an app at the Roman Baths in Bath, to visualise the Baths at times before and after the Roman period. The app tells the story of three periods: the mythical discovery of the hot springs by King Bladud, the Baths falling into disrepair when the Romans left, and their renovation in Victorian times. This used the so-called ‘magic window’ paradigm where, as the user moves their device around, they see a view appropriate to where they are looking. Once again, this immersive experience depended upon the high bandwidth and low latency provided by a 5G mobile network.

Worcester Bosch has become the first factory in Britain to have 5G wireless access. This has been used in a trial to run sensors in the Worcester Bosch factory for preventative maintenance and real-time feedback, whilst also using data analytics to predict any potential failures.

5G and Edge Computing

To obtain maximum benefit from the low communications latency that 5G can offer, the end-to-end system architecture needs to change. The computing power needs to move physically closer to the edge to avoid delays across the backhaul – the network between the cellular radio base station and the data centre. In practical terms this means locating the data centre next to the mobile radio station.

This is exactly what AWS has done in partnership with Verizon in Los Angeles. This was announced in December 2019 as the first AWS Local Zone. AWS say that Los Angeles was chosen to support the local industries that would benefit from the high bandwidth and low latency including media & entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning. AWS also has similar partnerships with mobile service providers in other parts of the world.

5G and Security

Given that the data payload carried by 5G will include massive amounts of potentially sensitive data, security is essential. The potential to compromise the traffic control across a whole city or to disconnect energy supplied through millions of smart meters means that 5G must be treated as a component of critical national infrastructure.

The 5G standards provide security mechanisms that include enhancements in the areas of encryption, authentication and user privacy. However, they do not protect against all possible threats, for example DDoS and radio jamming. Protecting against these will depend on the actual deployment.

However, many IoT devices contain numerous known technical vulnerabilities and some even feature a fixed, unchangeable root password. It is essential that IoT devices are designed, built and deployed with security in mind. One of the opportunities provided by 5G is that, by using an embedded SIM, the device identity can be more rigorously authenticated and the integrity and confidentiality of communication better ensured.

So, you need to consider what opportunities 5G could bring to your industry sector. This technology will provide greater mobile connectivity and capacity – how will this affect your organization? Industry sectors that are likely to see benefits are those that require the new capabilities that this technology will provide. These include logistics, manufacturing, transport, healthcare, the media and local government.

Make sure that you build in end-to-end Security from the start – there is often a tendency when exploring new areas to focus on functionality and consider security as an afterthought. The consequences of security failure in many of the potential use cases go way beyond data leakage to include physical harm and large-scale disruption.

For more details on this subject see KuppingerCole Leadership Brief 5G Impact on Organizations and Security 80238. Also attend the Public & On-Premise Cloud, Core IT Hosting, Edge IoT Track at EIC in Munich on May 13th, 2020.

Cisco Promises Future of the Internet – But Can They Deliver?

On December 11th I attended an analyst webcast from Cisco entitled “The Future of the Internet”. At this Cisco unveiled its plans for its next generation of networking products. While this was interesting, it did not meet my expectations for a deeper vision of the future of the internet.

The timing is interesting because 50 years ago in 1969 there were several events that were seminal to the internet. Many people will remember Apollo 11 and the moon landing – while this was an enormous achievement in its own right – it was the space race that led to the miniaturization and commercialization of the silicon-based technology upon which the internet is based.

The birth of the Internet

1969 also marked the birth of Unix. Two Bell Labs computer scientists, Ken Thompson and Dennis Ritchie, had been working on an experimental time-sharing operating system called Multics as part of a joint research group with General Electric and MIT. They decided to take the best ideas from Multics and implement them on a smaller scale – on a PDP-7 minicomputer at Bell Labs. That marked the birth of Unix, the ancestor of Linux, the most widely deployed computing platform in the cloud, as well as macOS, iOS and Android.

October 29th, 1969 was another important date: it marked the first computer-to-computer communication using a packet-switched network, ARPANET. In 1958, US President Dwight D. Eisenhower formed the Advanced Research Projects Agency (ARPA), bringing together some of the best scientific minds in the US with the aim of helping American military technology stay ahead of its enemies. Among ARPA’s projects was a remit to test the feasibility of a large-scale computer network.

In July 1961 Leonard Kleinrock at MIT published the first paper on packet switching theory and the first book on the subject in 1964. Lawrence Roberts, the first person to connect two computers, was responsible for developing computer networks at ARPA, working with scientist Leonard Kleinrock. When the first packet-switching network was developed in 1969, Kleinrock successfully used it to send messages to another site, and the ARPA Network, or ARPANET, was born—the forerunner of the internet.

These were the foundational ideas and technologies that led to the largest man-made artifact – the internet.

Cisco announces new products based on single multi-purpose ASIC

The internet today depends upon the technology provided by a range of vendors including Cisco. During the webcast, Chuck Robbins, CEO of Cisco, made the comment that he believed that 90% of the traffic on the internet flows through silicon created by Eyal Dagan, SVP silicon engineering at Cisco. Obviously, this technology is an important part of the internet infrastructure. So, what did Cisco announce that is new?

The first and most significant announcement was that Cisco has created the first single multi-purpose ASIC (computer chip) that can handle both routing and switching efficiently, with high performance and with lower power consumption. According to Cisco, this is a first, and in the past, it was generally thought that the requirements of routing and switching were so different that it was not possible for a single chip to meet all the requirements for both these tasks. Why does this matter?

The current generation of network appliances is built with different chips for different jobs, which means that multiple software stacks are needed to manage the different hardware combinations. A single chip means a single software stack which is smaller, more efficient and easier to maintain. Cisco’s new network operating system, IOS XR7, implements this and is claimed to be simpler, to support modern APIs and to be more trustworthy. Trustworthiness is ensured through a single hardware root of trust.

One of the problems that network operators have when deploying new hardware is testing to ensure that the service is maintained after the change. Cisco also announced a new cloud service that helps them with this problem. The end user uploads their current configuration to the service which generates tests for the new platform and greatly speeds up testing as well as providing more assurance that the service can be cut over without problems.

Cisco delivers infrastructure but is that enough?

It is easy to forget, when you click on your phone, just how many technical problems had to be overcome to provide the seamless connectivity that we all now take for granted. The vision, creativity, and investment by Cisco over the five years that it took for this development is to be applauded. It is excellent news to hear that the next generation of infrastructure components that underlie the internet will provide more bandwidth, better connectivity and use less energy. However, this infrastructure does not define the future of the internet.

The internet has created enormous opportunities but has also brought significant challenges. It has provided opportunities for new kinds of business but also enabled the scourge of cybercrime by providing an unpoliced hiding place for cybercriminals. It has opened windows across the world to allow people from all cultures to connect with each other but has also provided a platform for cyberbullying, fake news and political interference. It is enabling new technologies such as artificial intelligence which have the potential to do great good but also raise many ethical concerns.

Mankind has created the internet, with the help of technology from companies like Cisco. The internet is available to mankind, but can mankind master its creation?

AI for Governance and Governance of AI

Artificial Intelligence is a hot topic and many organizations are now starting to exploit these technologies; at the same time there are many concerns around the impact this will have on society. Governance sets the framework within which organizations conduct their business in a way that manages risk and compliance as well as ensuring an ethical approach. AI has the potential to improve governance and reduce costs, but it also creates challenges that need to be governed.

The concept of AI is not new, but cloud computing has provided the access to data and the computing power needed to turn it into a practical reality. However, while there are some legitimate concerns, the current state of AI is still a long way from the science fiction portrayal of a threat to humanity. Machine Learning technologies provide significantly improved capabilities to analyze large amounts of data in a wide range of forms. While this poses the threat of a “Big Other”, it also makes these technologies especially suitable for spotting patterns and anomalies, and hence potentially useful for detecting fraudulent activity, security breaches and non-compliance.

AI covers a range of capabilities, including ML (machine learning), RPA (Robotic Process Automation), NLP (Natural Language Processing) amongst others. But AI tools are simply mathematical processes which come in a wide variety of forms and have relative strengths and weaknesses.

ML (Machine Learning) is based on artificial neural networks, inspired by the way in which animal brains work. These are networks of simple connected nodes whose weights can be trained to perform tasks using data as examples, without needing any preprogrammed rules.

For ML training to be effective it needs large amounts of data and acquiring this data can be problematic. The data may need to be obtained from third parties and this can raise issues around privacy and transparency. The data may contain unexpected biases and, in any case, needs to be tagged or classified with the expected results which can take significant effort.

One major vendor successfully applied this to detect and protect against identity-led attacks. This was not a trivial project and took 12 people over 4 years to complete. However, the results were worth the cost since the system is now much more effective than the hand-crafted rules that were previously used. It is also capable of automatically adapting to new threats as they emerge.

So how can this technology be applied to governance? Organizations are faced with a tidal wave of regulation and need to cope with the vast amount of data that is now regularly collected for compliance. The current state of AI technologies makes them very suitable to meet these challenges. ML can be used to identify abnormal patterns in event data and detect cyber threats while in progress. The same approach can help to analyze the large volumes of data collected to determine the effectiveness of compliance controls. Its ability to process textual data makes it practical to process regulatory texts to extract the obligations and compare these with the current controls. It can also process textbooks, manuals, social media and threat sharing sources to relate event data to threats.
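
As a simple illustration of how ML can flag abnormal patterns in event data, the sketch below trains an unsupervised anomaly detector on a handful of hypothetical login events; the features, figures and thresholds are invented, and a real deployment would need far richer data and tuning.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical features per user-hour: [logins, failed-login ratio, MB downloaded]
    normal_events = np.array([
        [5, 0.02, 12], [7, 0.01, 30], [4, 0.05, 8], [6, 0.03, 25],
        [5, 0.00, 15], [8, 0.04, 40], [6, 0.02, 18], [7, 0.01, 22],
    ])

    # Learn what "normal" activity looks like, without labelled examples of attacks
    detector = IsolationForest(contamination=0.05, random_state=42).fit(normal_events)

    # Score new events: -1 flags an anomaly worth investigating, 1 looks normal
    new_events = np.array([[6, 0.02, 20],       # ordinary activity
                           [90, 0.60, 900]])    # burst of failed logins and bulk download
    print(detector.predict(new_events))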

However, the system needs to be trained by regulatory professionals to recognize the obligations in regulatory texts and to extract these into a common form that can be compared with the existing obligations documented in internal systems to identify where there is a match. It also needs training to discover existing internal controls that may be relevant or, where there are no controls, to advise on what is needed.

Linked with a conventional GRC system, this can augment the existing capabilities and help to consolidate new and existing regulatory requirements into a central repository used to classify complex regulations and help stakeholders across the organization to process large volumes of regulatory data. It can help to map regulatory requirements to internal taxonomies, business structures and basic GRC data, thus connecting regulatory data to key risks, controls and policies, and linking that data to an overall business strategy.

Governance also needs to address the ethical challenges that come with the use of AI technologies. These include unintentional bias, the need for explanation, avoiding misuse of personal data and protecting personal privacy as well as vulnerabilities that could be exploited to attack the system.

Bias is a very current issue, with bias related to gender and race as top concerns. Training depends upon the data used and many datasets contain an inherent, if unintentional, bias. For example, see the 2018 paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. There are also subtle differences between human cultures, and it is very difficult for humans to develop AI systems that are culturally neutral. Great care is needed in this area.

Explanation – In many applications, it may be very important to provide an explanation for conclusions reached and actions taken. Rule-based systems can provide this to some extent but ML systems, in general, are poor at this. Where explanation is important some form of human oversight is needed.

One of the driving factors in the development of ML is the vast amount of data that is now available, and organizations would like to get maximum value from this. Conventional analysis techniques are very labor-intensive, and ML provides a potential solution to get more from the data with less effort. However, organizations need to beware of breaching public trust by using personal data, which may have been legitimately collected, in ways for which they have not obtained informed consent. Indeed, this is part of the wider issue of surveillance capitalism - Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.

ML systems, unlike humans, do not use understanding; they simply match patterns. This makes them open to attacks using inputs that are invisible to humans. Recent examples of vulnerabilities reported include one where the autopilot of a Tesla car was tricked into changing lanes into oncoming traffic by stickers placed on the road. A wider review of this challenge is reported in A survey of practical adversarial example attacks.

In conclusion, AI technologies, and ML in particular, provide the potential to assist governance by reducing the costs associated with onboarding new regulations, managing controls and processing compliance data. The exploitation of AI within organizations needs to be well governed to ensure that it is applied ethically and to avoid unintentional bias and misuse of personal data. The ideal areas for the application of ML are those with a limited scope and where explanation is not important.

For more information attend KuppingerCole’s AImpact Summit 2019.

If you liked this text, feel free to browse our Focus Area: AI for the Future of Your Business for more related content.

Sustainable Data Management

Getting competitive advantage from data is not a new idea; however, the volume of data now available and the way in which it is being collected and analysed has led to increasing concerns. As a result, there are a growing number of regulations over its collection, processing and use. Organizations need to take care to ensure compliance with these regulations as well as to secure the data they use. This is increasing costs, and organizations need a more sustainable approach.

In the past organizations could be confident that their data was held internally and was under their own control. Today this is no longer true: as well as internally generated data being held in cloud services, the Internet of Things and social media provide rich external sources of valuable data. These innovations have increased the opportunities for organizations to use this data to create new products, get closer to their customers and improve efficiency. However, this data needs to be controlled to ensure security and to comply with regulatory obligations.

The recent EU GDPR (General Data Protection Regulation) provides a good example of the challenges organizations face. Under GDPR the definition of data processing is very broad, it covers any operation that is performed on personal data or on sets of personal data. It includes everything from the initial collection of personal data through to its final deletion. Processing covers every operation on personal data including storage, alteration, retrieval, use, transmission, dissemination or otherwise making available. The sanctions for non-compliance are very severe and critically, the burden of proof to demonstrate compliance with these principles lies with the Data Controller.

There are several important challenges that need to be addressed – these include:

  •     Governance – externally sourced and unstructured data may be poorly governed, with no clear objectives for its use;
  •     Ownership – external and unstructured data may have no clear owner to classify its sensitivity and control its lifecycle;
  •     Sharing - Individuals can easily share unstructured data using email, SharePoint and cloud services;
  •     Development use - developers can use and share regulated data for development and test purposes in ways that may breach the regulatory obligations;
  •     Data Analytics - the permissions to use externally acquired data may be unclear and data scientists may use legitimately acquired data in ways that may breach obligations.


To meet these challenges in a sustainable way, organizations need to adopt Good Information Stewardship. This is a subject that KuppingerCole have written about in the past. Good Information Stewardship takes an information-centric, rather than technology-centric, approach to security, and information-centric security starts with good data governance.

Figure: Sustainable Data Management

Access controls are fundamental to ensuring that data is only accessed and used in ways that are authorized. However, additional kinds of controls are needed to protect against illegitimate access, since cyber adversaries now routinely target access credentials. Further controls are also needed to ensure the privacy of personal data when it is processed, shared or held in cloud services. Encryption, tokenization, anonymization and pseudonymization technologies provide such “data centric” controls.

Encryption protects data against unauthorized access but is only as strong as the control over the encryption keys. Rights Management using public / private key encryption is an important control that allows controlled sharing of unstructured data. DLP (Data Leak Prevention) technology and CASBs (Cloud Access Security Brokers) are also useful to help to control the movement of regulated data outside of the organization.  
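
A tiny sketch of symmetric encryption illustrates the point about keys: whoever holds the key can read the data, so key management is where the real control lies. The cryptography package and the example values here are for illustration only.

    from cryptography.fernet import Fernet

    # Whoever holds this key can decrypt the data, so protecting the key is the real control
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"customer file: account 12345")   # ciphertext safe to store or share
    print(cipher.decrypt(token))                              # only possible with the key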

Pseudonymisation is especially important because it is recognized by GDPR as an approach to data protection by design and by default. Properly pseudonymized data carries a much lower risk, although it remains personal data under GDPR; only fully anonymized data falls outside the scope of its obligations. Pseudonymization allows operational data to be processed while it is still protected, which is very useful since pseudonymized personal data can be used for development and test purposes, as well as for data analytics. Indeed, according to the UK Information Commissioner’s Office “To protect privacy it is better to use or disclose anonymized data than personal data”.
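
A minimal sketch of keyed pseudonymization is shown below. It assumes a secret key that is held and governed separately from the data; the same key always maps an identifier to the same pseudonym, so records can still be joined for testing and analytics, while re-identification requires access to the key. This illustrates the technique only and is not a claim that any particular implementation meets GDPR requirements.

    import hmac, hashlib, secrets

    # The pseudonymization key must be stored and governed separately from the data
    PSEUDONYM_KEY = secrets.token_bytes(32)

    def pseudonymize(identifier: str, key: bytes = PSEUDONYM_KEY) -> str:
        # Deterministic keyed hash: the same input and key always give the same pseudonym
        return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    record = {"customer_id": "alice@example.com", "order_value": 42.50}
    safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
    print(safe_record)   # the order data is preserved, the direct identifier is replaced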

In summary, organizations need to take a sustainable approach to managing the potential risks both to the organization and to those outside (for example to the data subjects) from their use of data. Good information stewardship based on a data centric approach, where security controls follow the data, is best. This provides a sustainable approach that is independent of the infrastructure, tools and technologies used to store, analyse and process data.

For more details see: Advisory Note: Big Data Security, Governance, Stewardship - 72565

AI Myths, Reality and Challenges

The dream of being able to create systems that can simulate human thought and behaviour is not new. Now that this dream appears to be coming closer to reality there is both excitement and alarm. Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race”. Should we be alarmed by these developments and what in practice does this mean today?

The origins of today’s AI (Artificial Intelligence) can be traced back to the seminal work on computers by Dr Alan Turing. He proposed an experiment that became known as the “Turing Test”, to define the standard for a machine to be called "intelligent". A computer could only be said to "think" if a human was not able to distinguish it from a human being through a conversation with it.

The theoretical work that underpins today’s AI and ML (Machine Learning) was developed in the 1940s and 1950s. The early computers of that era were slow and could only store limited amounts of data, and this restricted what could practically be implemented. This has now changed – the cloud provides the storage for vast amounts of data and the computing power needed for ML.

The theoretical basis for ML stems from work published in 1943 by Warren McCulloch and Walter Pitts on a computational model for neural networks based on mathematics and algorithms called threshold logic. Artificial neural networks provide a framework for machine learning algorithms to learn based on examples without being formally programmed. This learning needs large amounts of data and the significant computing power which the cloud can provide.
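
The threshold logic idea can be shown in a few lines. The sketch below is a single McCulloch-Pitts style unit with hand-chosen weights; in machine learning the weights would instead be adjusted automatically from example data.

    # A single threshold unit: it fires (outputs 1) when the weighted sum of its
    # inputs reaches the threshold. Weights and threshold are set by hand here.
    def threshold_neuron(inputs, weights, threshold):
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # With weights (1, 1) and threshold 2 the unit behaves as a logical AND
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", threshold_neuron([a, b], [1, 1], 2))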

Analysing the vast amount of data now available in the cloud creates its own challenges, and ML provides a potential solution to these. Normal statistical approaches may not be capable of spotting patterns that a human would see, and programming individual analyses is laborious and slow. ML provides a way to supercharge the human ability to analyse data. However, it changes the development cycle from programming to training based on curated examples overseen by human trainers. Self-learning systems may provide a way around the programming bottleneck. However, the training-based development cycle creates new challenges around testing, auditing and assurance.

ML has also provided a way to enhance algorithmic approaches to understanding visual and auditory data. It has for example enabled facial recognition systems as well as chatbots for voice-based user interactions. However, ML is only as good as the training and is not able to provide explanations for the conclusions that it reaches. This leads to the risk of adversarial attacks – where a third party spots a weakness in the training and exploits this to subvert the system. However, it has been applied very successfully to visual component inspection in manufacturing where it is faster and more accurate than a human.

One significant challenge is how to avoid bias – there are several reported examples of bias in facial recognition systems. Bias can come from several sources. There may be insufficient data to provide a representative sample. The data may have been consciously or unconsciously chosen in a way that introduces bias. This latter is difficult to avoid since every human is part of a culture which is inherently founded on a set of shared beliefs and behaviours which may not be the same as in other cultures.

Another problem is one of explanation – ML systems are not usually capable of providing an explanation for their conclusions. This makes training ML doubly difficult because when the system being trained gets the wrong answer it is hard to figure out why. The trainer needs to know this to correct the error. In use, an explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify why a decision was taken in a court of law.

A third problem is that ML systems do not have what most people would call “common sense”. This is because currently each is narrowly focussed on one specialized problem. Common sense comes from a much wider understanding of the world and allows a human to recognize and discard what may appear to be a logical conclusion because, in the wider context, it is clearly stupid. This was apparent when Microsoft released a chatbot that was supposed to train itself but did not recognize mischievous behaviour.

Figure 1: AI, Myths, Reality and Challenges

In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular science fiction. ML is ready for practical application and major vendors offer tools to support this. The problems to which AI can be applied today can be described in two dimensions – the scope of knowledge required and the need for explanation. Note that the need for explanation is related to the need for legal justification or where the potential consequences of mistakes are high.

Organizations are recommended to look for applications that fit the green area in the diagram and to use caution when considering those that would lie in the amber areas. The red area is still experimental and should only be considered for research.

For more information on this subject attend the AI track at EIC in Munich in May 2019.

Are You Prepared for a Cyber-Incident?

According to the Ponemon Institute, cyber incidents that take over 30 days to contain cost $1m more than those contained within 30 days. However, less than 25% of organizations surveyed globally say that their organization has a coordinated incident response plan in place. In the UK, only 13% of businesses have an incident management process in place according to a government report. This appears to show a shocking lack of preparedness, since it is a question of when, not if, your organization will be the target of a cyber-attack.

Last week on January 24th I attended a demonstration of IBM’s new C-TOC (Cyber Tactical Operations Centre) in London. The C-TOC is an incident response centre housed in an 18-wheel truck. It can be deployed in a wide range of environments, with self-sustaining power, an on-board data centre and cellular communications to provide a sterile environment for cyber incident response. It is designed to provide companies with immersion training in high pressure cyber-attack simulations to help them to prepare for and to improve their response to these kinds of incidents.

The key to managing incidents is preparation. There are 3 phases to a cyber incident, these are the events that led up to the incident, the incident itself and what happens after the incident. Prior to the incident the victim may have missed opportunities to prevent it. When the incident occurs, the victim needs to detect what is happening, to manage and contain its effects. After the incident the victim needs to respond in a way that not only manages the cyber related aspects but also deals with potential customer issues as well as reputational damage.

Prevention is always better than cure, so it is important to continuously improve your organization’s security posture, but you still need to be prepared to deal with an incident when it occurs.

The so-called Y2K (Millennium) bug is an example of an incident that was so well managed that some people believe it was a myth. In fact I, like many other IT professionals, spent the turn of the century in a bunker ready to help any organization experiencing this problem. However, I am glad to say that the biggest problem I met was that when I returned to my hotel the next morning, I had to climb six flights of stairs because the lifts had been disabled as a precaution. There were many pieces of software that contained the error, and it was only through recognition of the problem, rigorous preparation to remove the bug, and planning to deal with it where it arose that major problems were averted.

In the IBM C-TOC I participated in a cyber response challenge involving a fictitious international financial services organization called “Bane and Ox”. This organization has a cyber security team and a so-called “Fusion Centre” to manage cyber security incident response. The exercise started with an HR onboarding briefing welcoming me into the team.

We were then taken through an unfolding cyber incident and asked to respond to the events as they occurred, with phone calls from the press, attempts to steal money via emails exploiting the situation, a ransom demand, physical danger to employees, customers claiming that their money was being stolen, a data leak and an attack on the bank’s ATMs. I then underwent a TV interview about the bank’s response to the event with hostile questioning by the news reporter – not a pleasant experience!

According to IBM, organizations need a clear statement of the “Commander’s Intent”. This is needed to ensure that everyone works together towards a common goal that everyone can understand when under pressure and making difficult decisions. IBM gave the example that the D Day Commander’s Intent statement was “Take the beach”.

The next priority is to collect information. “The first call is the most important”, whether it is from the press, a customer or an employee. You need to get the details, check the details and determine the credibility of the source.

You then need to implement a process to resolve where the problems lie and to take corrective action as well as to inform regulators and other people as necessary. This is not easy unless you have planned and prepared in advance. Everyone needs to know what they must do, and management cover is essential to ensure that resources and budget are available as needed. It may also be necessary to enable deviation from normal business processes.

Given the previously mentioned statistics on organizational preparedness for cyber incidents, many organizations need to take urgent action. The preparation needed involves many parts of the organization, not just IT; it must be supported at the board level and involve senior management. Sometimes the response will require faster decision making with the ability to bypass normal processes - only senior management can ensure that this is possible. An effective response needs planning, preparation and above all practice.

KuppingerCole advice:

  • Obtain board level sponsorship for your incident response approach;
  • Identify the team of people / roles that must be involved in responding to an incident;
  • Ensure that it is clear what constitutes an incident and who can invoke the response plan;
  • Make sure that you can contact the people involved when you need to;
  • You will need external help – set up the agreement for this before you need it;
  • Planning, preparation and practice can avoid pain and prosecution;
  • Practice, practice and practice again.

KuppingerCole Advisory Note: GRC Reference Architecture – 72582 provides some advice on this area.

Can Autonomous Systems Improve Security Posture?

Last week I attended the Oracle Open World Europe 2019 in London. At this event Andrew Sutherland, VP of technology, told us that security was one of the main reasons why customers were choosing the Oracle autonomous database. This is interesting for two reasons: firstly, it shows that security is now top of mind amongst the buyers of IT systems and, secondly, that buyers have more faith in technology than in their own efforts.

The first of these reasons is not surprising. The number of large data breaches disclosed by organizations continues to grow and enterprise databases contain the most valuable data. The emerging laws mandating data breach disclosure, such as GDPR, have made it more difficult for organizations to withhold information when they are breached. Being in a position of responsibility when your organization suffers a data breach is not a career enhancing event.

The second reason takes me back to the 1980s when I was working in the Knowledge Engineering Group in ICL. This group was working as part of the UK Government sponsored Alvey Programme, which was created in response to the Japanese report on 5th generation computing. This programme had a focus on Intelligent Knowledge Based Systems (IKBS) and Artificial Intelligence (AI). One of the most successful products from this group was VCMS, a performance advisor for the ICL mainframe. This was commercially very successful with a large uptake in the VME customer base. However, one interesting observation was that it prompted customers to upgrade their mainframes. It became apparent that the buyers of these systems were more ready to accept advice from VCMS than from the customer service representatives.

From an AI point of view managing computer systems is relatively easy. This is because computer systems are deterministic, and their behaviour can be described using rules. These rules may be complex and there may be a wide range of circumstances that need to be considered. However, given the volume of metered data available and the rules an AI system can usually make a good job of optimizing performance. Computer systems manufacturers also have the knowledge needed.
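
The rule-based style of advice that such systems gave can be illustrated with a small sketch. The metrics, thresholds and recommendations below are invented for the example; a real advisor such as VCMS encoded far more extensive expert knowledge.

    # Invented example of rule-based performance advice: deterministic metered
    # data plus explicit expert rules yields recommendations.
    RULES = [
        (lambda m: m["cpu_utilisation"] > 0.90,    "CPU saturated: rebalance workload or add capacity"),
        (lambda m: m["page_faults_per_sec"] > 500, "Excessive paging: increase the memory allocation"),
        (lambda m: m["disk_queue_length"] > 8,     "I/O bottleneck: spread hot files across volumes"),
    ]

    def advise(metrics):
        return [advice for condition, advice in RULES if condition(metrics)]

    sample = {"cpu_utilisation": 0.95, "page_faults_per_sec": 120, "disk_queue_length": 12}
    for recommendation in advise(sample):
        print(recommendation)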

That is not to diminish the remarkable achievements by Oracle to make their database systems autonomous. Mark Hurd, CEO of Oracle (who had to present via a satellite link because he had been unable to renew his passport because of the US government shutdown), described how the NetSuite team had spent 15 years creating 9,000 specialized table indexes for performance. The autonomous database created 6,000 indexes in a short time, which improved performance by 7%.

Oracle’s strategy for AI extends beyond the autonomous database. Melissa Boxer, VP Fusion Adaptive Intelligence, described how Oracle Adaptive Intelligent Apps (Oracle AI Apps) provide a suite of prebuilt AI and data-driven applications across Customer Experience (CX), Human Capital Management (HCM), Enterprise Resource Planning (ERP), and Manufacturing. These use a shared data model to provide Adaptive Intelligence and Machine Learning across all the different pillars. They include a personalized user experience (OAUX) and intelligent chat bots.

So, to return to the original question – can Autonomous Systems (as defined by Oracle) improve security posture? In my opinion the answer is both yes and no. The benefits of autonomous systems are that they can automatically incorporate patches and implement best practice configuration. Since many data breaches stem from these areas this is a good thing. They can also automatically tune themselves to optimize performance for an individual customer’s workload and, since they discover what is normal, they can potentially identify abnormal activity.

The challenge is that AI technology is not yet able to defend against adversarial activity. Machine Learning is only as good as its training and, at the current state of the art, it does not provide good explanations for the conclusions reached. This means that sometimes the learnt behavior contains flaws that can be exploited by a malicious agent to trick the system into the wrong conclusions. Given that cyber crime is an arms race, we must assume that the cyber adversaries are working with the same technology and that it will be deployed against us as well as for us.

For security, AI can help to avoid the simple mistakes, but defending against a concerted attack still needs a human in the loop.

IBM Acquires Red Hat: The AI potential

On October 28th IBM announced its intention to acquire Red Hat. At $34 Billion, this is the largest software acquisition ever.  So why would IBM pay such a large amount of money for an Open Source software company? I believe that this acquisition needs to be seen not just in terms of DevOps and Hybrid Cloud, but in the context of IBM’s view of where the business value from IT services will come from in the future. The acquisition provides near-term tactical benefits from Red Hat’s OpenShift Platform and its participation in the Kubeflow project, and it strengthens IBM’s capabilities to deliver the foundation for digital business transformation. However, digital business is increasingly based on AI delivered through the cloud. IBM recently announced a $240M investment in a 10-year research collaboration on AI with MIT, and this reflects that strategy. It adds to the significant investments that IBM has already made in Watson, setting up a division in 2016, as well as in cloud services.

Red Hat was founded in 1993 and in 1994 released the Red Hat version of Linux. This evolved into a complete development stack (JBoss) and recently released Red Hat OpenShift - a container- and microservices-based (Kubernetes) DevOps platform. Red Hat operates on a business model based on open-source software development within a community, professional quality assurance, and subscription-based customer support.

The synergy between IBM and Red Hat is clear. IBM has worked with Red Hat on Linux for many years and both have a commitment to Open Source software development. Both companies have a business model in which services are the key element. Although these are two fairly different types of services – Red Hat’s being service fees for software, IBM’s being all types of services including consultancy and development – they both fit well into IBM’s overall business.

One critical factor is the need for tools to accelerate the development lifecycle for ML projects, which can be much less predictable than for software projects. In the non-ML DevOps world, microservices and containers are the key technologies that have helped here. How can these technologies help with ML projects?

There are several differences between developing ML and coding applications. Specifically, ML uses training rather than coding and, in principle, this in itself should accelerate the development of much more sophisticated ways to use data. The ML development lifecycle can be summarized as follows (a minimal sketch of these steps appears after the list):

  • Obtain, prepare and label the data
  • Train the model
  • Test and refine the model
  • Deploy the model
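
A minimal sketch of these four steps, using scikit-learn and a built-in toy dataset as a stand-in for real data, is shown below. It is deliberately simplistic and is not tied to OpenShift or Kubeflow; in practice each step would run as a separate, orchestrated stage.

    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # 1. Obtain, prepare and label the data (a toy dataset stands in for real data)
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # 2. Train the model
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)

    # 3. Test and refine the model (here just a single accuracy check)
    print("held-out accuracy:", model.score(X_test, y_test))

    # 4. Deploy the model: serialize it so a serving container can load it
    joblib.dump(model, "model.joblib")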

While the processes involved in ML development are different to conventional DevOps, a microservices-based approach is potentially very helpful. ML Training involves multiple parties working together and microservices provide a way to orchestrate various types of functions, so that data scientists, experts and business users can just use the capabilities without caring about coding etc. A common platform based on microservices could also provide automated tracing of the data used and training results to improve traceability and auditing. It is here that there is a great potential for IBM/Red Hat to deliver better solutions.

Red Hat OpenShift provides a DevOps environment to orchestrate the development to deployment workflow for Kubernetes based software. OpenShift is, therefore, a potential solution to some of the complexities of ML development. Red Hat OpenShift with Kubernetes has the potential to enable a data scientist to train and query models as well as to deploy a containerized ML stack on-premises or in the cloud.

In addition, Red Hat is a participant in the Kubeflow project. This is an Open Source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Their goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures.

In conclusion, the acquisition has strengthened IBM’s capabilities to deliver ML applications in the near term. These capabilities complement and extend IBM’s Watson and improve and accelerate their ability and the ability of their joint customers to create, test and deploy ML-based applications. They should be seen as part of a strategy towards a future where more and more value is delivered through AI-based solutions.

Read as well: IBM & Red Hat – and now?

The Ethics of Artificial Intelligence

Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." The ethical questions around Artificial Intelligence were discussed at a meeting led by the BCS President Chris Rees in London on October 2nd. This is also an area covered by KuppingerCole under the heading of Cognitive Technologies and this blog provides a summary of some of the issues that need to be considered.

Firstly, AI is a generic term and it is important to understand precisely what this means. Currently the state of the art can be described as Narrow AI. This is where techniques such as ML (machine learning) combined with massive amounts of data are providing useful results in narrow fields. For example, the diagnosis of certain diseases and predictive marketing. There are now many tools available to help organizations exploit and industrialise Narrow AI.

At the other extreme is what is called General AI where the systems are autonomous and can decide for themselves what actions to take. This is exemplified by the fictional Skynet that features in the Terminator games and movies. In these stories this system has spread to millions of computers and seeks to exterminate humanity in order to fulfil the mandates of its original coding. In reality, the widespread availability of General AI is still many years away.

In the short term, Narrow AI can be expected to evolve into Broad AI, where a system will be able to support or perform multiple tasks, applying what is learnt in one domain to another. Broad AI will evolve to use multiple approaches to solve problems, for example by linking neural networks with other forms of reasoning. It will be able to work with limited amounts of data, or at least data which is not well tagged or curated – for example, in the cyber-security space, to identify a threat pattern that has not been seen before.

What is ethics and why is it relevant to AI? The term is derived from the Greek word “ethos” which can mean custom, habit, character or disposition. Ethics is a set of moral principles that govern behaviour or the conduct of an activity; it is also a branch of philosophy that studies these principles. The reason why ethics is important to AI is the potential for these systems to cause harm to individuals as well as to society in general. Ethics considerations can help to better identify beneficial applications while avoiding harmful ones. In addition, new technologies are often viewed with suspicion and mistrust. This can unreasonably inhibit the development of technologies that have significant beneficial potential. Ethics provides a framework that can be used to understand and overcome these concerns at an early stage.

Chris Rees identified 5 major ethical issues that need to be addressed in relation to AI:

  • Bias;
  • Explainability;
  • Harmlessness;
  • Economic Impact;
  • Responsibility.

Bias is a very current issue with bias related to gender and race as top concerns. AI systems are likely to be biased because people are biased, and AI systems amplify human capabilities. Social media provides an example of this kind of amplification where uncontrolled channels provide the means to share beliefs that may be popular but have no foundation in fact - “fake news”. The training of AI systems depends upon the use of data which may include inherent bias even though this may not be intentional. The training process and the trainers may pass on their own unconscious bias to the systems they train. Allowing systems to train themselves can lead to unexpected outcomes since the systems do not have the common sense to recognize mischievous behaviour. There are other reported examples of bias in facial recognition systems.

Explanation - It is very important in many applications that AI systems can explain themselves. Explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify after the event why a decision was taken in a court of law. While rule-based systems can provide a form of explanation based on the logical rules that were fired to arrive at a particular conclusion, neural networks are much more opaque. This poses a problem not only in explaining to the end user why a conclusion was reached, but also for the developer or trainer who needs to understand what must be changed to correct the behaviour of the system.

Harmlessness – the three laws of robotics that were devised by Isaac Asimov in the 1940s, and subsequently extended to include a zeroth law, apply equally to AI systems. However, the use or abuse of these systems could breach those laws and special care is needed to ensure that this does not happen. For example, the hacking of an autonomous car could turn it into a weapon, which emphasizes the need for strong inbuilt security controls. AI systems can be applied to cyber security to accelerate the development of both defence and offence, and could be used by the cyber adversaries as well as the good guys. It is therefore essential that this aspect is considered and that countermeasures are developed to cover the malicious use of this technology.

Economic impact – new technologies have both destructive and constructive impacts. In the short-term the use of AI is likely to lead to the destruction of certain kinds of jobs. However, in the long term it may lead to the creation of new forms of employment as well as unforeseen social benefits. While the short-term losses are concrete the longer-term benefits are harder to see and may take generations to materialize. This makes it essential to create protection for those affected by the expected downsides to improve acceptability and to avoid social unrest.

Responsibility – AI is just an artefact and so if something bad happens who is responsible morally and in law? The AI system itself cannot be prosecuted but the designer, the manufacturer or the user could be. The designer may claim that the system was not manufactured to the design specification. The manufacturer may claim that the system was not used or maintained correctly (for example patches not applied). This is an area where there will need to be debate and this should take place before these systems cause actual harm.

In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular fiction. However, the ethical aspects of this technology need to be considered and this should be done sooner rather than later. In the same way that privacy by design is an important consideration we should now be working to develop “Ethical by Design”. GDPR allows people to take back control over how their data is collected and used. We need controls over AI before the problems arise.

