Regulation has the uncomfortable task of limiting untapped potential. I was surprised when I recently received the advice to think of life like a box: "The walls of this box are all the rules you should follow. But inside the box, you have perfect freedom." Just as I was struck by the irony of having complete freedom only inside the box, those at the forefront of AI development and implementation face the irony of placing limits on projects whose potential is still undefined.

Although Artificial General Intelligence – the ability of a machine to react intelligently, as a human would, to situations it has not been trained to handle – remains unrealized, narrow AI, which enables applications to complete a specified task independently, is becoming an accepted addition to a business's digital toolkit. Regulations that address AI are built on preexisting principles, primarily data privacy and protection against discrimination, and they deal with the known risks of AI development. When the European GDPR took effect in 2018, it classified biometric data as requiring extra protection. In both the US and Europe, proposals are currently being discussed to monitor AI systems for algorithmic bias and to govern the use of facial recognition by public and private actors. Before implementing any AI tool, companies should be familiar with the national laws of the region in which they operate.

These regulations have a limited scope. To address the unknown future risks that AI development will pose, a handful of policy groups have published guidelines that attempt to set a model for responsible AI development.

The major bodies of work include the EU Commission's Ethics Guidelines for Trustworthy AI and the OECD AI Principles.

The principles developed by each body are largely similar. All of the guidelines address the need for developers and AI implementers to protect human autonomy, obey the rule of law, prevent harm, promote inclusive growth, maintain fairness, develop robust, prudent, and secure technology, and ensure transparency.

The single outstanding difference is that only one document provides measurable, immediately implementable actions. The EU Commission included an assessment for developers and corporate AI implementers to conduct to ensure that AI applications become and remain trustworthy. The assessment is currently in a pilot phase and will be updated in January 2020 to reflect comments from businesses and developers. The other guidelines offer compatible principles but are general enough to allow any of the public, private, or individual stakeholders interacting with AI to deflect responsibility.

This collection of guidelines from the international community is not a set of legally binding restrictions, but rather a porous barrier that allows sufficiently cautious and responsible innovation to grow and expand as the trustworthiness of AI increases. The challenge in creating regulations for an intensely innovative industry is to build in flexibility and the ability to mitigate unknown risks without stifling creative license. These guidelines attempt to set an ethical example to follow, but it is essential to use tools like the EU Commission's assessment, which assigns appropriate responsibility to every stakeholder, whether developer, implementer, or user.

Alongside the caution from governing bodies comes a clear recognition that AI development can bring significant economic, social, and environmental benefits. The US issued an executive order in February 2019 to prioritize AI R&D projects, while the EU takes a more cautiously optimistic approach, recognizing the opportunities but prioritizing the building and maintenance of a uniform EU strategy for AI adoption.

