Branded by Elon Musk as a “fundamental risk to the existence of human civilisation,” AI technology has the power to completely transform our society. Whether that transformation is for the better or for the worse is yet to be determined.
The benefits of AI are difficult to disregard, but its associated risks paint an ugly picture of the future, ranging from the unfair to the catastrophic.
So, what are some of the risks AI poses to our society? What regulations do we need to protect us? And what can and should organisations be doing to act responsibly during these unprecedented times?
To find out the answers to these troubling questions, we hosted an event on AI safety at the end of last month with our partners at ASI Data Science.
Here’s what our panel of technical, commercial and legal experts had to say:
What risks does AI pose to society?
As ASI Data Science’s Head of Science, Dr Ilya Feige is no stranger to talking about AI safety (in fact, he plays a pivotal role in ASI’s newly created AI safety research lab). But when he searched online for a simple definition of AI safety, Ilya found that very little basic information was available; there isn’t even a Wikipedia page.
So Ilya created a straightforward definition of AI safety himself: “The endeavour to make sure machine learning and AI are used in a non-detrimental way by humanity.”
The risks associated with AI are broad, says Ilya, who grouped them into two categories: human-controlled algorithms and autonomous algorithms.
Most of the threats from human-controlled algorithms will pop up by accident, with the algorithm’s creator unintentionally causing harm through unforeseen consequences. An example he gave was a supervised learning classifier that qualifies people for mortgages and unfairly pulls discriminatory factors like ethnicity or gender into the equation.
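Ilya’s mortgage example is straightforward to reproduce. Below is a minimal sketch, in Python, of the kind of audit that catches it; all of the data, column names and thresholds are invented for illustration. The point is that a model never shown a protected attribute can still discriminate through correlated features, and a simple approval-rate comparison across groups surfaces the problem.

```python
# A minimal, hypothetical bias audit for a mortgage-style classifier.
# All data and numbers below are synthetic illustrations.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# 'group' stands in for a protected attribute such as ethnicity or gender.
# It is correlated with income, so a model trained on income alone can
# still produce disparate outcomes between groups.
group = rng.integers(0, 2, n)
income = rng.normal(50 + 10 * group, 15, n)
approved = (income + rng.normal(0, 10, n)) > 55

X = pd.DataFrame({"income": income})
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

# Disparate-impact check: compare predicted approval rates across groups.
# A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8.
rate_0 = preds[group == 0].mean()
rate_1 = preds[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"ratio: {min(rate_0, rate_1) / max(rate_0, rate_1):.2f}")
```

Note that the protected attribute never appears in the model’s inputs; the disparity comes entirely from the correlated income feature, which is exactly why auditing a model’s outputs matters more than simply dropping sensitive columns.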
Then there’s the risk of human-controlled algorithms that are designed to be malicious, such as hacking and cyber-attacks, mass surveillance, or fake news.
“Autonomous algorithms learn how to do something by going out into the world and trying different things,” said Ilya. An example of an autonomous algorithm with a benign intention could be a household cleaning robot that figures out that the best way to keep your house clean is to lock you up in your wardrobe.
An example of a malicious autonomous algorithm, said Ilya, would be something akin to The Terminator.
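Ilya’s cleaning-robot scenario is a textbook case of reward misspecification, and it can be made concrete with a toy sketch. The actions and reward values below are invented for illustration: the agent’s reward only measures how many hours the house stays clean, and because nothing in that objective penalises locking the owner away, plain trial-and-error learning converges on exactly that policy.

```python
import random

# Toy epsilon-greedy 'cleaning robot': it tries actions, observes a noisy
# reward (hours the house stayed clean), and exploits whatever scores best.
# The reward function is misspecified: it never penalises the wardrobe option.
true_rewards = {
    "vacuum daily": 8,
    "vacuum hourly": 12,
    "lock owner in wardrobe": 24,  # no owner, no mess
}

estimates = {a: 0.0 for a in true_rewards}
counts = {a: 0 for a in true_rewards}

for step in range(1_000):
    # Explore 10% of the time (and on the first step); otherwise exploit.
    if step == 0 or random.random() < 0.1:
        action = random.choice(list(true_rewards))
    else:
        action = max(estimates, key=estimates.get)
    reward = true_rewards[action] + random.gauss(0, 1)  # noisy observation
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned policy:", max(estimates, key=estimates.get))
# -> 'lock owner in wardrobe'
```

The fix here is not a smarter agent but a better-specified objective, which is the crux of the safety problem Ilya describes.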
The need for a wider discussion.
“What philosophy does and what the data science community does is essentially the same thing,” said digital ethicist and TEDx speaker Charles Radclyffe. “Building models for how the world should be.”
But with AI technology reaching a point where it can have a real, positive impact on society, Charles pointed out that this is the “first time in history where philosophy has become relevant.”
Contrasting safety with ethics, Charles argued that AI shouldn’t be a topic reserved for the people building the technology, but the subject of a wider societal discussion involving people from all backgrounds and practice areas.
Using gun safety as an analogy to further explain his point, Charles highlighted that the engineering of a gun is primarily a safety concern, but the decision of whether or not to arm school children to protect themselves is an ethical one.
The same reasoning applies to AI, argues Charles, saying we “need a much more broad engagement across society on this subject.”
Good governance means more than regulations.
“AI is not as intelligent as it thinks it is,” said Ben Kingsley, the Financial Services and FinTech Partner at Slaughter and May. “Not yet, anyway.”
As someone who regularly works alongside businesses looking to adopt and deploy AI-based solutions, Ben pointed out that the growing emphasis on AI safety regulation needs to focus on what an algorithm can do, as well as what it can’t.
“Regulation of AI is a bit of a misnomer,” he said, highlighting the fact that many aspects of AI are already regulated (think GDPR), especially in more sensitive sectors, such as aviation, healthcare and surgery, and warfare.
As for putting more regulations in place, Ben is sceptical about their necessity: “Good governance around AI is going to be far more important than any sort of regulation or legislation.”