Earlier this week, news broke that Professor Stephen Hawking, Tesla Motors CEO Elon Musk, and more than 1,000 other robotics engineers and scientists have signed an open letter warning that killer AIs (artificial intelligences) would become a very dangerous problem if militaries are allowed to start developing them.
The letter, published on Monday (July 27, 2015), calls for a ban on autonomous weapons development in order to prevent the start of a global arms race focused on killer robots.
The experts warn that if any one of the world’s major military powers were to begin developing AI weapons, the aforementioned global arms race would become impossible to avoid.
Killer robots would be fairly cheap to make, and the raw materials they require would not be hard to find. Once the first one is developed, all of the major military powers would start to mass-produce them and use them for tasks such as carrying out assassinations, subduing populations, destabilizing nations, and selectively killing specific ethnic groups.
In response to the news, The New York Times published an article about the potential uses of artificial intelligence, making a reference to the sci-fi movie Ex Machina along the way.
The main AI in the movie is called Ava, and in the last 30 minutes of the film, Nathan Bateman, Ava’s creator, describes the AI as having used “self-awareness, imagination, manipulation, sexuality, empathy” in order to escape captivity, the implication being: what’s more human than that?
Without further ado, here are the five things you should know about the use of artificial intelligence in real life:
1) Field experts say that the development of autonomous weapons might happen sooner than we think. In fact, if the abovementioned ban is not enforced, we may see our first killer robot in just a few years, and the 1,000+ people who signed the letter fear that “autonomous weapons will become the Kalashnikovs of tomorrow”.
2) Robotics engineers point out that an AI of Ava’s intelligence and complexity is far from being developed in the real world. Some experts say we may be able to start building them in 25 years, while others say it may take decades for the technology to reach that point. The takeaway is that none of them see it as an immediate concern.
Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence (Seattle), recently stated that Hollywood robots are a lot more advanced than real-world robots; many of today’s real-world AIs “can’t even grip things”.
Toby Walsh, professor of artificial intelligence at the University of New South Wales (Sydney, Australia), offered a statement of his own, saying that the hardware for an Ava-like AI would be harder to obtain than the software. He theorizes that the software is less than 50 years away from being developed, but that the hardware may take anywhere between 50 and 100 years.
3) Here’s the scary part. Bart Selman, a computer science professor at Cornell University (Ithaca, New York), shared that field experts have already developed facial recognition technology that is better than humans at detecting targets. If this technology were combined with video from surveillance cameras, the military would end up with quite a capable assassin.
4) But there’s good news too. Most AI developers are focused on creating smart machines that would benefit humans, not destroy them. Some of the main priorities are building AIs that can assist in search and rescue missions and AIs that can reduce medical errors.
5) As of right now, the United States is the world leader in AI development, for both military and civilian use. China currently holds second place but is quickly catching up. Professor Walsh noted that anyone who knows they are at risk of facing one of these militarized AIs will actively focus on developing their own.
Image Source: nyt.com