“Follow The Bloody Algorithm!”
For the first time ever, I missed my train last Sunday night. I was travelling up to this week’s Advances in Data Science event in Manchester to talk about algorithms and humans (‘do they mix?’).

I figured traffic would be light on a Sunday, so I took an Uber across town to Euston train station. However, once in the car, Uber suggested routes through the back streets due to heavy traffic on the A40(M) motorway. My Uber driver insisted that he knew better and that the motorway was best: “It’s *always* clear on a Sunday night! Don’t worry!” He’d done the route many times before and he was certain, so I went along with it. Of course, we hit traffic, couldn’t get off the motorway – and the 37-minute journey through the back streets recommended by Uber’s A.I. became a 62-minute journey on a clogged-up motorway.

Just before I jumped out to run between the motionless cars, I found myself saying to the driver, “Look, just follow the bloody sat-nav – it has algorithms that know this shit!” Of course, the train was slowly pulling away from the platform when I finally arrived. It was OK – there was another train 30 minutes later – but the episode highlights a very important point about A.I.

Trust in machine learning is not just a one-way thing. It covers both the data we give A.I. – the inputs – and the outputs we receive: the insights, recommendations and nudges. We all know about the issues with trust around the collection of data by big companies. Our data is the ‘input’ into their machine learning engines, which are designed to work out who we are and what we want.

We know what *should* happen: we should have transparency about who will use our data and for what purpose, and we should be asked for our clear consent. Even though many organisations still struggle to do this (or try to avoid it), we can be hopeful that new legislation such as the European Union General Data Protection Regulation (GDPR) will start to sort this issue out – at least in Europe.

However, there’s also the issue of trust in the output. Do we trust what the A.I. tells us? Algorithms increasingly help direct our lives, from Amazon purchase recommendations, to navigating traffic jams, to helping us make more serious physical and mental health decisions. So how do we know that we can trust the advice we’re receiving? None of these examples come with any sort of independent certification. We have to trust our gut feeling that the brand providing the advice or recommendation is legitimate – and trustworthy.


Beyond this, we also have to trust that these algorithms know the answers better than we do. In the case of my Uber driver last Sunday night, he believed that he knew far better than the algorithm. It’s understandable – he’s human. It’s even more understandable with London black cab drivers, who spend four years learning “The Knowledge” of every street, road and lane in London. Mention “Uber” to them and watch their blood boil as they spit expletives about “bloody algorithms” taking their jobs. For my Uber driver, however, his big competitive advantage is that he has powerful algorithms on his side. Uber, Google Maps and Waze were all glowing at him from the dashboard with solutions. He should have had this one sorted out easily, but his own memories and experience wrestled with the A.I. He couldn’t bring himself to trust it, and he was wrong.

As A.I. algorithms become pervasive in our lives, it’ll be interesting to see how our natural A.I. skepticism develops. Will we quickly learn to trust A.I., or will growing distrust lead to a form of A.I. inertia? As the Information Age continues to accelerate, we’re likely to find out very soon.

P.S. The Advances in Data Science conference held by the University of Manchester is a great event with awesome speakers – including Google DeepMind, Amazon and, er, Uber. I thoroughly recommend it next year if you’re interested in where A.I. is headed and what its practical implementation means for everyday humans (and Uber drivers).

StJohn Deakins

StJohn founded CitizenMe with the aim of taking on the biggest challenge in the Information Age: helping digital citizens gain control of their digital identity. Personal data has meaning and value to everyone, but there is an absence of digital tools to help people realise its value. With CitizenMe, StJohn aims to fix that. With a depth of experience digitising and mobilising businesses, StJohn aims for positive change in the personal information economy. Oh… and he loves liquorice.
