Artificial Intelligence: Winning Vs Humanness


Do you care about winning?  It turns out that Artificial Intelligence does too.

Google DeepMind has discovered that in highly competitive situations, the more intelligent an AI becomes, the more aggressive it becomes too. More than this, though, intelligent AI will also become more aggressive even when resources are relatively abundant.

Of course, AI will simply do what we ask of it, and it'll achieve the task assigned to it in the most efficient way possible. In human terms, AI can appear ruthless. This study suggests that more intelligent AIs become 'territorial' and take out potential rivals for resources (in this case, virtual apples) even when there are plenty to go around.

In a second scenario, the researchers created a cooperative situation in which two bots worked together to maximise their returns. The result was that they teamed up to take out 'lone wolf' rivals. DeepMind researchers noted that: “When the two ‘wolves’ capture the ‘prey’ together, they can better protect the ‘carcass’ from scavengers and hence receive a higher reward.”
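The incentive the researchers describe can be sketched as a toy reward function. This is not DeepMind's actual code, and the reward value and scavenger probability are made-up illustrative numbers; the point is simply that if a lone capture risks losing the 'carcass' to scavengers while a joint capture is protected, cooperating 'wolves' earn more on average, so learning agents drift toward teamwork.

```python
CAPTURE_REWARD = 10  # hypothetical reward for capturing the 'prey'

def expected_reward(wolves_present, scavenger_steal_prob=0.5):
    """Expected per-wolf reward for a single capture.

    Toy assumption: a lone wolf loses the carcass to scavengers with
    probability `scavenger_steal_prob`, while two or more wolves
    protect it fully, and every wolf in range receives the reward.
    """
    if wolves_present >= 2:
        # Protected carcass: each cooperating wolf gets the full reward.
        return CAPTURE_REWARD
    # Risky lone capture: reward discounted by the chance of losing it.
    return CAPTURE_REWARD * (1 - scavenger_steal_prob)

print(expected_reward(1))  # lone wolf: 5.0
print(expected_reward(2))  # cooperating pair: 10 each
```

Under these assumptions, cooperation strictly dominates going it alone, which is exactly the pressure that pushed DeepMind's agents to team up.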

Beyond the AI Lizard Brain

Even though the researchers used the names of mammals, this is essentially lizard-brain thinking, and it appears to be a good analogy for the current state of general AI. Mammals advanced by creating complex ways to cooperate beyond guarding a carcass. In fact, recent research from Cambridge University into cooperative breeding suggests that this is why mammals are able to adapt and thrive in harsh environments, similar to those in which we humans emerged. In the DeepMind example, today's human intelligence would surely cooperate by farming more apples, eventually.

Humans have taken this to another level entirely, with shared narratives, values, morals and laws. We're competitive but also ultra-cooperative (at least when we are at our best) and deeply sociable. And here's the rub: because humans are wired to be sociable beings, we have an innate tendency to seek out the 'human' in everything, even our cars. From C-3PO to Amazon Alexa, this means that we innately want to find the 'human' in AI too.


Human(e) AI

But the reality is that, as much as we'll try to personify it, AI is only as good as the human inputs given to it. Even our most advanced deep neural network AIs have the agency of lizard brains at best. Without inputting human values and traits, such as cooperation, we risk interacting with fairly dumb general AI as if it were human, and therefore expecting a 'human values' level of response. As Artificial Intelligence advances ever more rapidly, now is a good time to tackle this issue.

It’s early days, but the UK’s Royal Society and the Institute of Electrical and Electronics Engineers (IEEE) have both recently started initiatives on the ethics of AI, which are great to see. It’s only by establishing the core principles of ‘Humane AI’ that we’ll be able to progress AI to a level where it truly benefits all of us, equally, as humans.

StJohn Deakins

StJohn founded CitizenMe with the aim to take on the biggest challenge in the Information Age: helping digital citizens gain control of their digital identity. Personal data has meaning and value to everyone, but there is an absence of digital tools to help people realise its value. With CitizenMe, StJohn aims to fix that. With a depth of experience digitising and mobilising businesses, StJohn aims for positive change in the personal information economy. Oh… and he loves liquorice.

