Artificial intelligence (AI) isn’t a future concept – it’s something that’s being deployed in a number of settings right here and now. And while AI technology has the ability to transform our society for the better, many are concerned about the harmful side effects that might come alongside it.
I recently hosted a panel discussion on AI safety with our partners at ASI Data Science, and I was surprised to find that most of the risks discussed weren’t immediate, pressing concerns, but more academic problems that could materialise in five or ten years’ time.
So, I put the question to our panel of tech, legal and commercial experts: Is AI safety a real-world problem today, or purely academic speculation?
AI poses threats both now and in the future.
There’s a good reason why academics are so strongly focused on the future risks of AI, said ASI Data Science’s Head of Science, Dr Ilya Feige. “The largest existential risk to humanity is this,” he claimed, arguing that AI is a far more pressing concern than the likes of climate change.
The academic community is mainly focused on threats posed by autonomous algorithms, which we are still a few years away from fully developing. The main concern with autonomous robots, said Ilya, is that once they’re activated, we don’t know how to switch them off.
“Generally intelligent autonomous robots that go out and solve problems, those I think should be regulated,” said Ilya.
Human-controlled algorithms, on the other hand, pose less of a threat to society as a whole and shouldn’t be regulated any more than they already are. Even so, these algorithms create real-world problems that are affecting us today, particularly as organisations look to incorporate AI technology into their business offerings.
The argument for greater regulation.
Digital Ethicist and TEDx Speaker, Charles Radclyffe, believes AI is a real-world problem right now, and expressed a strong belief that more regulation needs to be put in place.
Drawing comparisons with the industrial revolution, Charles argued that the drive for technological innovation took precedence over what was best for society as a whole, and that more regulation throughout that period could perhaps have prevented some of its less favourable side effects.
Mentioning internet giants such as Facebook, Twitter and Netflix, Charles argued that the “different economic climate we have today is very much a winner takes all economic model,” and while technology has done some great things, it has also “broken and exploited some of the weaknesses in the capitalist economic structure.”
At the other end of the spectrum, Charles highlighted the general confusion that surrounds AI technology. “It’s just math,” he said, pointing out that “people fundamentally misunderstand the distinction between reality and science fiction, and I think that’s a dangerous thing.”
Governmental regulations will hinder AI innovation.
Financial Services and FinTech Partner at Slaughter and May, Ben Kingsley, said AI safety isn’t a purely academic topic – it is a real-world problem right now and will result in real-world casualties, just like any other revolution.
“AI is already in use in some quite mundane contexts,” Ben pointed out. “I don’t think the solution or the means of achieving safety in that deployment is through some generic form of legislation or regulation.”
Government regulation, he argued, “would have a very negative impact on all of the good that is already being done through research and development of AI solutions.”