"Robot Intelligence Is Dangerous": Experts' Warning After Facebook AI "Develop Their Own Language"
Artificial intelligence researchers have been speaking out in recent days against media reports that dramatize AI research conducted by Facebook.
An academic paper that Facebook published in June describes routine research in which researchers got artificial agents to negotiate with each other in chat messages after being shown conversations of humans negotiating. The agents gradually improved through trial and error.
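To illustrate the general setup described above, here is a rough sketch, not Facebook's actual code: two toy agents repeatedly negotiate a split of a small pool of items and adjust a single acceptance threshold by trial and error based on the reward they receive. The real FAIR work used neural dialogue models trained on human negotiation transcripts; every name and number below (the item pool, the Agent class, the threshold update) is an illustrative assumption.

```python
# Toy sketch of trial-and-error negotiation between two agents.
# NOT the FAIR model: just an illustration of agents improving from rewards.
import random

ITEMS = {"book": 2, "hat": 1, "ball": 3}  # quantities up for negotiation (assumed)

class Agent:
    def __init__(self):
        # Private, randomly drawn valuations for each item type.
        self.values = {item: random.randint(0, 5) for item in ITEMS}
        # "Policy": accept any offer worth at least this much; learned by trial and error.
        self.accept_threshold = 5.0

    def propose(self):
        # Propose keeping a random share of each item for itself.
        return {item: random.randint(0, count) for item, count in ITEMS.items()}

    def offer_value(self, keep):
        # Value (to this agent) of the items it would keep under a split.
        return sum(self.values[item] * n for item, n in keep.items())

    def decide(self, offered):
        return self.offer_value(offered) >= self.accept_threshold

    def update(self, reward):
        # Trial-and-error adjustment: demand more after good outcomes, less after bad ones.
        self.accept_threshold += 0.1 * (reward - self.accept_threshold)

def negotiate(a, b, max_turns=6):
    proposer, responder = a, b
    for _ in range(max_turns):
        keep = proposer.propose()
        give = {item: ITEMS[item] - n for item, n in keep.items()}
        if responder.decide(give):
            rewards = {proposer: proposer.offer_value(keep),
                       responder: responder.offer_value(give)}
            return rewards[a], rewards[b]
        proposer, responder = responder, proposer  # roles alternate each turn
    return 0, 0  # no deal: both walk away with nothing

a, b = Agent(), Agent()
for episode in range(1000):
    reward_a, reward_b = negotiate(a, b)
    a.update(reward_a)
    b.update(reward_b)
print("learned acceptance thresholds:",
      round(a.accept_threshold, 2), round(b.accept_threshold, 2))
```

The point of the sketch is only that nothing mysterious is required for agents to get better at negotiating: repeated interaction plus a reward signal is enough for simple behavior to emerge.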
But in the past week or so, some media outlets have published reports on the work that are alarmist in tone. "Facebook shuts down robots after they invent their own language," London's Telegraph newspaper reported. "'Robot intelligence is dangerous': Expert's warning after Facebook AI 'develop their own language,'" as London's Sun tabloid put it.
At times, some of the chatter between the agents did deviate from standard English. But that wasn't the point of the paper; the point was to get the agents to negotiate effectively. The researchers completed their experiment, and indeed they found that the agents even figured out how to feign interest in something they didn't actually want, "only to later 'compromise' by conceding it," Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh and Dhruv Batra of Facebook's Artificial Intelligence Research group wrote in the paper.