OpenAI won’t benefit humanity without data-sharing
My latest post for The Guardian:
Artificial intelligence experts welcome the launch of the Elon Musk-backed venture, but open algorithms are only the first step
There is a common misconception about what drives the digital-intelligence revolution. People seem to have the idea that artificial intelligence researchers are directly programming an intelligence; telling it what to do and how to react. There is also the belief that when we interact with this intelligence we are processed by an “algorithm” – one that is subject to the whims of the designer and encodes his or her prejudices.
OpenAI, a new non-profit artificial intelligence company that was founded on Friday, wants to develop digital intelligence that will benefit humanity. By sharing its algorithms with all, the venture, backed by a host of Silicon Valley billionaires, including Elon Musk and Peter Thiel, wants to avoid the existential risks associated with the technology.
OpenAI’s launch announcement was timed to coincide with this year’s Neural Information Processing Systems conference: the main academic outlet for scientific advances in machine learning, which I chaired. Machine learning is the technology that underpins the new generation of AI breakthroughs.
One of OpenAI’s main ideas is to collaborate openly, publishing code and papers. This is admirable and the wider community is already excited by what the company could achieve.
OpenAI is not the first company to target digital intelligence, and certainly not the first to publish code and papers. Both Facebook and Google have already shared code, and both were present at the same conference. All three companies hosted parties with open bars, aiming to entice the best and brightest minds.
However, the way machine learning works means that making algorithms available isn’t necessarily as useful as one might think. A machine-learning algorithm is subtly different from the popular perception.
Continue reading on The Guardian here.
Picture credit: Eric Patnoudes
Neil holds the collaborative Chair in Neuroscience and Computer Science at the University of Sheffield. Neil’s main research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He was Program Chair of NIPS in 2014 and is General Chair for 2015. Neil has a monthly column in The Guardian and is writing a book about data ethics. He enjoys cycling, and last summer he spent six weeks cycling up and down the Alps. Many have tried, but no one has figured out how he fits everything in.