London News & Search
A London-based artificial intelligence expert has played down fears of a runaway AI “consuming everything in its path” in scenes similar to Terminator 2 after Facebook shut down two chat robots that developed their own language.
James Pollock, Head of Technology at TicTrac, a health technology company that uses AI to help people live healthier lifestyles, said “we shouldn’t expect a Skynet or Terminator situation occurring anytime soon” after Facebook developers were forced to cancel their artificial intelligence experiment.
Mr Pollock described the worst-case scenario for the developers of artificial intelligence as a ‘Technological Singularity’ or ‘AI Singularity’, which can happen when technology begins to teach itself or becomes self-aware.
Mr Pollock said: “If we build an AI that can train itself, it might cause a runaway intelligence that will exceed its masters’ (humans’) intelligence very rapidly and may cause significant and dangerous changes to human civilisation.
“To give an example, if we were to build a robot that was trained to make paperclips as efficiently as possible, a runaway intelligence may find that it can make paperclips out of any raw material and start to consume everything in its path.”
But he said he does not believe this is what happened in the case of the Facebook AI.
“If a researcher builds a system that’s trying to improve itself but cannot describe the behaviour it starts to exhibit, they may get concerned and want to shut it down.
“In this specific example [Facebook’s AI], I think they simply turned it off because it wasn’t generating what they wanted (a bot that could talk and negotiate well with people), rather than fear of a singularity.”
It comes after the social media company was forced to shut down the experiment when the two chatbots appeared to develop an entirely new language that their developers could not understand.
The AIs, Bob and Alice, were attempting to imitate human speech but created a machine language of their own with no human input, scientists said.
Following the discovery, Facebook decided to shut the bots down.
A spokesman for Facebook’s Artificial Intelligence Research (Fair) wrote in a blog: “During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent.
“While the other agent could be a human, Fair used a fixed supervised model that was trained to imitate humans.
“The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.”
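The training setup Fair describes — a learning agent updating its parameters from conversations with a partner that is deliberately kept fixed — can be illustrated with a toy sketch. This is not Facebook’s actual code: the vocabularies, the reward rule, and the update scheme below are all invented for illustration. The idea is simply that if only utterances the fixed partner recognises as human-like are rewarded, the learner has no incentive to drift into a private shorthand.

```python
import random

# Hypothetical vocabularies for illustration only.
HUMAN_TOKENS = ["i", "want", "two", "balls"]          # human-like words
DRIFT_TOKENS = ["ball ball ball", "to me to me"]      # invented shorthand

class FixedPartner:
    """Stands in for the fixed supervised model: it rewards utterances
    made entirely of human-like tokens, and is never updated itself."""
    def reward(self, utterance):
        return 1.0 if all(tok in HUMAN_TOKENS for tok in utterance) else 0.0

class LearningAgent:
    """Toy policy over tokens, updated from conversations with the
    fixed partner (a crude stand-in for reinforcement learning)."""
    def __init__(self, vocab):
        self.weights = {tok: 1.0 for tok in vocab}

    def speak(self, length=3):
        tokens = list(self.weights)
        wts = [self.weights[t] for t in tokens]
        return random.choices(tokens, weights=wts, k=length)

    def update(self, utterance, reward, lr=0.5):
        # Reinforce only the tokens used in rewarded utterances.
        for tok in utterance:
            self.weights[tok] += lr * reward

random.seed(0)
agent = LearningAgent(HUMAN_TOKENS + DRIFT_TOKENS)
partner = FixedPartner()
for _ in range(500):
    utterance = agent.speak()
    agent.update(utterance, partner.reward(utterance))

# Shorthand tokens are never rewarded, so their weights stay flat,
# while human-like tokens accumulate weight.
```

Had both agents been updated together, nothing would anchor the reward to human language, and any mutually understood shorthand would be reinforced on both sides — which, per the Fair blog, is exactly the divergence the researchers observed.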