How to prevent creeping artificial intelligence becoming creepy


My latest post for The Guardian on artificial intelligence:

With successful AI emerging slowly and almost without us noticing, we must improve the understanding between human and computer.

The traditional view of benevolent artificial intelligence (AI) is as a companion, someone who understands and enhances us. Think of the computer in Star Trek or JARVIS in Iron Man. They don’t just have vast knowledge and extraordinary computational abilities; they also exhibit emotional intelligence and remain subservient to their human masters. This is the utopian view of AI.

The alternative dystopia has been expressed by Tony Stark’s real-life counterpart, Elon Musk. What if such an intelligence isn’t satisfied with such a back-seat role? What if it becomes self-aware and begins to use its knowledge and computational resources against us?

If we compare computers in the real world with those in the movies, they tick the computational box and the knowledge box, but they fall down when it comes to emotional intelligence.

One of the pleasures of knowing someone is understanding how they think and how they will react. At the moment, when we project this idea onto our relationship with computers, we are frustrated because the machine doesn’t know how to react to us. Machines are pedantic, requiring us to formalise what we mean before we can communicate with them. They can’t sense how we are feeling.

Compare this with our longstanding human companion, the dog. In computational terms, and with regard to access to knowledge, they are limited, but in terms of emotional intelligence they are well ahead of their silicon rivals. They can even seem to understand when we need emotional support. At some level we understand our dogs and they understand us.

Successful AI is emerging slowly, almost without people noticing. A large proportion of our interactions with computers are already dictated by machine learning algorithms: the ranking of your posts on Facebook, the placement of adverts by Google, and recommendations from Amazon. When it is done well, though, we don’t notice it is happening. We can think of this phenomenon as creeping AI.

Continue reading on The Guardian here.

Picture credit: C_osett – Creative Commons Public Domain Mark 1.0

Neil Lawrence

Neil holds the collaborative Chair in Neuroscience and Computer Science at the University of Sheffield. Neil’s main research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He was Program Chair of NIPS in 2014 and is General Chair for 2015. Neil has a monthly column in The Guardian and is writing a book about Data Ethics. He enjoys cycling and last summer he spent six weeks cycling up and down the Alps. Many have tried, but no one has figured out how he fits everything in.

