In the last 10 years machine learning has become ubiquitous and touches our lives in ways that were unimaginable before. Machines can make decisions that once required considerable human effort, at much greater speed and lower cost, and with little human oversight. As a result, machines not only have more influence than ever in shaping our lives, but are also under increased scrutiny from both regulators and user rights advocates.
The adage “with great power comes great responsibility” has a long history, from the French Revolution to superhero comics. It has never been truer: the great power that machine learning wields is now in the hands of almost anyone building a software product. The stakes range from decisions about access to funds that can alter a person’s life path, to medical diagnoses that can dramatically lengthen or shorten their life expectancy, to social media feeds that not only serve content that keeps people engaged but can also polarise their beliefs by reinforcing their existing notions.
With the growing influence of AI technologies and the corresponding scrutiny, the way AI development happens is beginning to change. The full data science lifecycle needs to incorporate the elements of responsible AI, and the professionals who know how to design and implement them will be the ones employers look for.
For many years, public concern about technological risk has focused on the misuse of personal data, with GDPR, simultaneously one of the most hated and most loved regulations, as one of the results. With the huge success of LLMs and generative AI systems such as ChatGPT, artificial intelligence will soon be omnipresent in products and processes, which will shift regulators’ attention to the potential for bad or biased algorithmic decisions. Just imagine the consequences of a false medical diagnosis, or of a correct diagnosis produced by an AI and then rejected by the doctor. Not to mention all the other fields where bad AI can be harmful, such as autonomous cars or algorithms deciding on your future creditworthiness. Inevitably, many governments will feel regulation is essential to protect consumers from these risks.
In this panel discussion we will try to jointly compile a list of the risks that most urgently need regulation, and to form an idea of how this future regulation will affect the way we use AI in our business and private lives.