Artificial Intelligence is a hot topic, and many organizations are now starting to exploit these technologies; at the same time, there are many concerns about the impact this will have on society. Governance sets the framework within which organizations conduct their business in a way that manages risk and compliance and ensures an ethical approach. AI has the potential to improve governance and reduce costs, but it also creates challenges that themselves need to be governed.

The concept of AI is not new, but cloud computing has provided access to the data and the computing power needed to turn it into a practical reality. However, while there are some legitimate concerns, the current state of AI is still a long way from the science fiction portrayal of a threat to humanity. Machine Learning technologies provide significantly improved capabilities to analyze large amounts of data in a wide range of forms. While this poses the threat of a “Big Other”, it also makes them especially suitable for spotting patterns and anomalies, and hence potentially useful for detecting fraudulent activity, security breaches and non-compliance.

AI covers a range of capabilities, including ML (Machine Learning), RPA (Robotic Process Automation) and NLP (Natural Language Processing). At heart, AI tools are simply mathematical processes that come in a wide variety of forms, each with relative strengths and weaknesses.

ML (Machine Learning) is often based on artificial neural networks, inspired by the way in which animal brains work. These networks can be trained to perform tasks using data as examples, without needing any preprogrammed rules.
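
The idea of learning from examples rather than preprogrammed rules can be illustrated with a minimal sketch: a single perceptron (the simplest neural building block) learning the logical OR function purely from labelled examples. The function and data here are illustrative only, not part of any product described in this article.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labelled examples alone -- no hand-written rules."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # compare prediction to example
            w[0] += lr * err * x1         # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The logical OR function, given only as example data:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 1, 1, 1]
```

No rule for OR was ever written down; the behavior emerges entirely from the training examples, which is the property the paragraph above describes.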

For ML training to be effective, it needs large amounts of data, and acquiring this data can be problematic. The data may need to be obtained from third parties, which can raise issues around privacy and transparency. The data may contain unexpected biases and, in any case, needs to be tagged or classified with the expected results, which can take significant effort.

One major vendor successfully applied this approach to detect and protect against identity-led attacks. This was not a trivial project: it took 12 people over 4 years to complete. However, the results were worth the cost, since the system is now much more effective than the hand-crafted rules that were previously used. It is also capable of automatically adapting to new threats as they emerge.

So how can this technology be applied to governance? Organizations are faced with a tidal wave of regulation and need to cope with the vast amount of data that is now regularly collected for compliance. The current state of AI technologies makes them well suited to meeting these challenges. ML can be used to identify abnormal patterns in event data and detect cyber threats while in progress. The same approach can help to analyze the large volumes of data collected to determine the effectiveness of compliance controls. Its ability to process textual data makes it practical to process regulatory texts to extract the obligations and compare these with the current controls. It can also process textbooks, manuals, social media and threat sharing sources to relate event data to threats.
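
Spotting abnormal patterns in event data can be as simple as asking which observations deviate strongly from the rest. The sketch below uses a basic z-score test rather than ML proper, as a stand-in for the statistical intuition behind anomaly detection; the data and threshold are invented for illustration.

```python
import statistics

def flag_anomalies(event_counts, threshold=3.0):
    """Flag events whose count deviates more than `threshold`
    standard deviations from the mean (a simple z-score test)."""
    mean = statistics.mean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(event_counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts; the spike at index 5 stands out.
counts = [3, 4, 2, 5, 3, 120, 4, 3, 2, 4, 3, 5]
print(flag_anomalies(counts))  # → [5]
```

A trained ML model plays the same role at scale, learning what "normal" looks like across many correlated signals instead of a single count.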

However, the system needs to be trained by regulatory professionals to recognize the obligations in regulatory texts and to extract them into a common form that can be compared with the existing obligations documented in internal systems, in order to identify matches. It also needs training to discover existing internal controls that may be relevant or, where there are no controls, to advise on what is needed.
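
To make the idea of extracting obligations into a common form concrete, here is a deliberately crude sketch: it flags sentences containing modal cues such as "shall" or "must" and normalizes them for later matching. The cue list and sample text are assumptions for illustration; a real system would be trained by regulatory professionals, as described above, rather than relying on a fixed keyword list.

```python
import re

# Hypothetical obligation cues -- a stand-in for a trained classifier.
OBLIGATION_CUES = re.compile(r"\b(must|shall|is required to)\b", re.I)

def extract_obligations(regulatory_text):
    """Return sentences that look like obligations, in a simple
    common form (lower-cased, stripped) for later matching."""
    sentences = re.split(r"(?<=[.;])\s+", regulatory_text)
    return [s.strip().lower() for s in sentences if OBLIGATION_CUES.search(s)]

text = ("Controllers shall maintain a record of processing activities. "
        "This section provides background only. "
        "Processors must notify the controller of any breach.")
for ob in extract_obligations(text):
    print(ob)
```

The normalized output could then be compared against obligations already documented in internal systems to find matches and gaps.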

Linked with a conventional GRC system, this can augment the existing capabilities and help to consolidate new and existing regulatory requirements into a central repository used to classify complex regulations and help stakeholders across the organization to process large volumes of regulatory data. It can help to map regulatory requirements to internal taxonomies, business structures and basic GRC data, thus connecting regulatory data to key risks, controls and policies, and linking that data to an overall business strategy.

Governance also needs to address the ethical challenges that come with the use of AI technologies. These include unintentional bias, the need for explanation, avoiding misuse of personal data and protecting personal privacy as well as vulnerabilities that could be exploited to attack the system.

Bias is a very current issue, with bias related to gender and race as top concerns. Training depends upon the data used, and many datasets contain an inherent, if unintentional, bias. See, for example, the 2018 paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. There are also subtle differences between human cultures, and it is very difficult for humans to develop AI systems that are culturally neutral. Great care is needed in this area.

Explanation – In many applications, it may be very important to provide an explanation for conclusions reached and actions taken. Rule-based systems can provide this to some extent, but ML systems, in general, are poor at it. Where explanation is important, some form of human oversight is needed.

One of the driving factors in the development of ML is the vast amount of data that is now available, and organizations would like to get maximum value from it. Conventional analysis techniques are very labor-intensive, and ML provides a potential solution to get more from the data with less effort. However, organizations need to beware of breaching public trust by using personal data that may have been legitimately collected in ways for which they have not obtained informed consent. Indeed, this is part of the wider issue of surveillance capitalism - Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.

ML systems, unlike humans, do not use understanding; they simply match patterns. This makes them open to attacks using inputs that are invisible to humans. Recent examples of reported vulnerabilities include one where the autopilot of a Tesla car was tricked into changing lanes into oncoming traffic by stickers placed on the road. A wider review of this challenge is given in A survey of practical adversarial example attacks.
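
The fragility of pure pattern matching can be shown with a toy example. Below, a tiny linear "classifier" (weights invented for illustration) is flipped from one decision to the other by a perturbation far too small for a human to consider meaningful, in the spirit of the adversarial attacks surveyed above.

```python
def score(w, x):
    """Linear model: dot(w, x); positive → "threat", else "benign"."""
    return sum(wi * xi for wi, xi in zip(w, x))

w = [0.8, -0.5, 0.3, 0.9, -0.2]   # illustrative learned weights
x = [0.1, 0.2, 0.1, 0.0, 0.3]     # input scored as benign

# Nudge every feature by a tiny amount in the direction of its weight
# (the idea behind gradient-sign attacks on linear models).
eps = 0.05
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(w, x))      # ≈ -0.05 → "benign"
print(score(w, x_adv))  # ≈  0.085 → flips to "threat"
```

Each feature changed by at most 0.05, yet the decision flipped, because the model matches a pattern in the numbers rather than understanding what the input represents.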

In conclusion, AI technologies, and ML in particular, provide the potential to assist governance by reducing the costs associated with onboarding new regulations, managing controls and processing compliance data. The exploitation of AI within organizations needs to be well governed to ensure that it is applied ethically and to avoid unintentional bias and misuse of personal data. The ideal areas for the application of ML are those with a limited scope and where explanation is not important.

For more information attend KuppingerCole’s AImpact Summit 2019.

If you liked this text, feel free to browse our Focus Area: AI for the Future of Your Business for more related content.
