Fearing Artificial Intelligence

by Ali Minai

Artificial Intelligence is on everyone's mind. The message from a whole panel of luminaries – Stephen Hawking, Elon Musk, Bill Gates, Apple co-founder Steve Wozniak, Lord Martin Rees, Astronomer Royal of Britain and former President of the Royal Society, and many others – is clear: Be afraid! Be very afraid! To a public already immersed in the culture of Star Wars, Terminator, The Matrix and the Marvel universe, this message might sound less like an expression of possible scientific concern and more like a warning of looming apocalypse. It plays into every stereotype of the mad scientist, the evil corporation, the surveillance state, drone armies, robot overlords and world-controlling computers à la Skynet. Who knows what “they” have been cooking up in their labs?

Asimov's three laws of robotics are being discussed in the august pages of Nature, which has also recently published a multi-piece report on machine intelligence. In the same issue, four eminent experts discuss the ethics of AI. Some of this is clearly being driven by reports such as the latest one from Google's DeepMind, claiming that its DQN system has achieved “human-level intelligence”, or that a chatbot called Eugene had “passed the Turing Test”. Another legitimate source of anxiety is the imminent possibility of lethal autonomous weapon systems (LAWS) that will make life-and-death decisions without human intervention. This concern has recently led to the circulation of an open letter opposing such weapons, signed by hundreds of scientists, engineers and innovators, including Musk, Hawking and Gates. Why is this happening now? What are the factors driving this rather sudden outbreak of anxiety?

Looking at the critics' own pronouncements, there seem to be two distinct levels of concern. The first arises from rapid recent progress in the automation of intelligent tasks, including many involving life-or-death decisions. This concern divides further into two sub-problems: the socioeconomic worry that computers will take away the jobs that humans do, including the ones that require intelligence; and the moral dilemma posed by intelligent machines making life-or-death decisions without human involvement or accountability. These are concerns that must be faced in the relatively near term – over the next decade or two.

The second level of concern, which features prominently in the pronouncements of Hawking, Musk, Wozniak, Rees and others, is the existential risk that truly intelligent machines will take over the world and destroy or enslave humanity. This threat, for all its dark fascination, remains a distant one, though perhaps not as distant as we might like.

In this article, I will consider these two levels of concern separately.
