Fearing Artificial Intelligence

by Ali Minai

Artificial Intelligence is on everyone's mind. The message from a whole panel of luminaries – Stephen Hawking, Elon Musk, Bill Gates, Apple co-founder Steve Wozniak, Lord Martin Rees, Astronomer Royal of Britain and former President of the Royal Society, and many others – is clear: Be afraid! Be very afraid! To a public already immersed in the culture of Star Wars, Terminator, the Matrix and the Marvel universe, this message might sound less like an expression of possible scientific concern and more like a warning of looming apocalypse. It plays into every stereotype of the mad scientist, the evil corporation, the surveillance state, drone armies, robot overlords and world-controlling computers a la Skynet. Who knows what “they” have been cooking up in their labs? Asimov's three laws of robotics are being discussed in the august pages of Nature, which has also recently published a multi-piece report on machine intelligence. In the same issue, four eminent experts discuss the ethics of AI. Some of this is clearly being driven by reports such as the latest one from Google's DeepMind, claiming that their DQN system has achieved “human-level intelligence”, or that a chatbot called Eugene Goostman had “passed the Turing Test”. Another legitimate source of anxiety is the imminent possibility of lethal autonomous weapon systems (LAWS) that will make life-and-death decisions without human intervention. This has led recently to the circulation of an open letter expressing concern about such weapons, signed by hundreds of scientists, engineers and innovators, including Musk, Hawking and Gates. Why is this happening now? What are the factors driving this rather sudden outbreak of anxiety?

Looking at the critics' own pronouncements, there seem to be two distinct levels of concern. The first arises from rapid recent progress in the automation of intelligent tasks, including many involving life-or-death decisions. This issue can be divided further into two sub-problems: The socioeconomic concern that computers will take away all the jobs that humans do, including the ones that require intelligence; and the moral dilemma posed by intelligent machines making life-or-death decisions without human involvement or accountability. These are concerns that must be faced in the relatively near term – over the next decade or two.

The second level of concern that features prominently in the pronouncements of Hawking, Musk, Wozniak, Rees and others is the existential risk that truly intelligent machines will take over the world and destroy or enslave humanity. This threat, for all its dark fascination, is still a distant one, though perhaps not as distant as we might like.

In this article, I will consider these two cases separately.

The Rise of Intelligent Algorithms:

The socioeconomic and moral concerns emerge from the fact that the algorithms being developed under the aegis of artificial intelligence are slowly but surely automating capabilities that, until recently, were considered uniquely human. These include perceptual tasks such as speech recognition and visual processing, motor tasks such as driving cars, and even cognitive tasks such as summarizing documents, writing texts, analyzing data, making decisions and even discovering new knowledge. Of course, the same thing happened to repetitive tasks such as assembly line work decades ago, but humans had always seen that kind of work as mechanical drudgery, and though it caused great socioeconomic upheaval in the blue-collar workforce, society in general saw such automation as a positive. The automation of large-scale calculation raised even fewer issues because that was not something humans could do well anyway, and computers simply enhanced efficiency. The new wave of automation, however, is very different.

At the socioeconomic level, this process threatens white-collar workers and strikes at the core of what had been considered fundamentally human – work that could only be done by thinking beings. The concrete fear is that, eventually, intelligent machines will take over all the jobs humans do, including the most complex ones. What, then, will become of humans? A recent article in the Economist lays out both the nature of this threat and the reasons why it does not justify a general panic about AI as implied in the warnings of Hawking et al. The article argues – correctly – that the algorithms underlying the panoply of automated tasks are currently a disparate set of narrowly-focused tools, and their integration into a single generalized “true” artificial intelligence is only a remote possibility. A thoughtful article by Derek Thompson in the July/August issue of the Atlantic goes even further, exploring ways in which a work-free society might be more creative and intellectually productive than one where most time is spent “on the job”. Whatever happens, one thing is certain: Intelligent algorithms will transform human society in major ways. And one of these will be to challenge our bedrock notions of ourselves: As more and more abilities that we had considered essentially human – thinking, planning, linguistic expression, science, art – become automated, it will become harder to avoid the question of whether these too are, like routine tasks and calculations, just material processes after all.

Among the many human capacities being automated is the capacity to make complex, life-or-death decisions with no human involvement – which poses the moral dilemma motivating much of the immediate anxiety about AI. The explicit concern is currently about weapons, but similar situations can arise with medical procedures, driverless cars, etc. The question being asked is this: Who is responsible for a lethal act committed as the result of an algorithm whose outcomes its human builders did not explicitly program and/or could not reasonably have predicted? In one sense, this is not really a new issue at all. A “dumb” heat-seeking missile can also make a lethal “decision” in a way that its human designers could not have anticipated. However, what distinguishes LAWS from such cases is the presence of deliberation: A smart autonomous weapon would analyze data, recognize patterns and make a calculated decision. This is seen as much closer to cold-blooded assassination than the errant heat-seeking missile, whose strike would be regarded as an accident. However, this dilemma results mainly from a failure to recognize the nature of the mind, and especially the nature of autonomy.

Autonomy is a vexing concept that moves one quickly from the safe terrain of science and engineering into the quagmire of philosophy. Broadly speaking, we ascribe some autonomy to almost all animals. The ant pushing a seed and the chameleon snagging an insect are considered to be acting as individuals. However, we also assume that animals such as these are basically obeying the call of their instinct or responding to immediate stimuli. Moving further up the phylogenetic tree, we gradually begin to assign greater agency to animals – the bird that cares for its young, the dog that understands verbal commands, the bull elephant that leads its herd – but this agency is still rather limited, and does not rise to the point of holding the animal truly responsible for its actions. That changes when we reach the human animal. Suddenly (evolutionarily speaking), we have an animal that thinks, understands, evaluates, and makes moral choices. This animal has a mind! With mind comes intelligence and the ability to make deliberate, thoughtful decisions – free will. But is this a reasonable description of reality?

Nothing that we know of in terms of its physical nature suggests that the human animal is in any way qualitatively different from other animals with central nervous systems. Thus, the attributes identified with the mind – including intelligence – must either arise from some uniquely human non-material essence – soul, spirit, mind – or be present in all animals, though perhaps in varying degree. The former position – termed mind-body dualism (or just dualism) – is firmly rejected by modern science, which postulates that mind must emerge from the physical body. Indeed, this is the whole basis of the artificial intelligence project, which seeks to build other physical entities with minds. How to do this is an issue that has engaged computer scientists, philosophers and biologists for decades and is far from settled. I have expressed my own opinions on this elsewhere, and will return to them briefly at the end of this piece. The pertinent point is that intelligence – real or artificial – is not an objectively measurable, all-or-none attribute that currently exists only in humans and will emerge suddenly in machines one day. Rather, it is a convenient label for a whole suite of capabilities that can be, and already are, present in animals and machines to varying degrees as a consequence of their physical structures and processes. Basically, from a modern scientific, materialistic viewpoint, animals and machines are not fundamentally different – though it is simplistic, in my opinion, to simply reduce them both to entirely conventional notions of information processing. From a mind-as-matter scientific viewpoint, the “smart” autonomous missile is not qualitatively different from the “dumb” heat-seeking missile, since both are equally at the mercy of their physical being – or embodiment – and their environments. The smart missile just has far more complex processes.

Scientifically valid as this view may be, it is not shared by most people. The problem is that there exists an unspoken mismatch between the scientific and societal conceptions of responsibility, but this issue has mainly been an academic one so far. AI algorithms are now forcing us to confront it in the real world. Since time immemorial, the human social contract has been based on the idea that people have the freedom to choose their actions autonomously – with constraints, perhaps, but without compulsion. And yet, the current scientific consensus indicates that this cannot be the case; that true “free will”, in the sense of being potentially able to choose action B even at the instant that one chooses action A, is an illusion – a story we tell ourselves after the fact. This follows from the materialistic view of humans, which may have room for the unpredictability and even indeterminacy of actions, but no room for explicit choice. The activity of neurons and muscles always follows only one path, so only one choice is ever made. There is no way for an individual to make “the other decision” because, by the time a decision occurs, the individual's physical state simply is that decision. In a sense, freedom of choice lies wholly in the post facto apprehension of the counterfactual.

Of course, this is problematic from a philosophical perspective that seeks to assign responsibility, but that is a modern – even post-modern – problem. Until now, the convention has been to assign responsibility based on motivation, with the implicit assumption that human evaluators (judges, juries, prosecutors, etc.) can ascertain the motivations of other humans because of their fellowship in the same species and a common system of values grounded in universally human drives and emotions. Even here, cultural differences can lead to very serious problems, but imagine having to ascertain the motives of a totally alien species whose thought processes – whatever they might be – are totally opaque to us. Do we assign responsibility based on human criteria? Is that “fair”? Are the processes occurring in the machine even “thought”? If we follow the logic of the Turing Test and decide to accept the appearance of complex, autonomous intelligence as “true” intelligence, we have a much more complex world. Currently, we only have human-on-human violence, but once we have three more kinds – machine-on-human, human-on-machine, and machine-on-machine – how do we assign priority? Does privilege go automatically to humans, and why is that ethical? We already face some of these issues with animals, but there it has tacitly been agreed that the human is the only species with true responsibility, and privilege is to be determined by human laws – even if it occasionally ends up punishing the human (as in the case of poaching or cruelty to animals). The co-existence of two autonomous, intelligent – and therefore responsible – species changes this calculus completely, requiring us to draw a moral boundary that has never been drawn before. Having to ascertain responsibility in machines will force us to define the physical basis of human volition with sufficient clarity that it can be applied to machines. In the process, humans will need either to acknowledge their own material nature and consequent lack of true free will, or give in to dualism and deny that purely material machines can ever be truly intelligent and moral agents in a human sense. In the former case – which modern science would recommend – we would have to accept that not only logical processes, but also those most cherished human attributes of emotion, empathy, desire, etc. can emerge from purely material systems. The caricature of the machine that cannot “feel” – all the angst of Mr. Data – would have to go, to be replaced by the much more disorienting possibility that, far from being coldly calculating, truly intelligent machines would turn out to be like us, and, even more disconcertingly, that we have been like our machines all along!

For most people outside the areas of science and technology, this perspective on machine intelligence is deeply problematic – much as the idea of human evolution has been. A unified view of humans and machines strikes at the core ideas of soul, mind, consciousness, intentionality and free will, reducing them to “figments of matter”, so to speak. Our moral philosophies, social conventions or legal systems may not be ready for this transition, but intelligent algorithms are going to force it upon us anyway. Some of the complexities involved are already being discussed by philosophers of law, though mainly in the context of relatively simple artificial agents such as bots and shopping websites.

The Existential Threat of AI:

The more sensational part of the alarm raised about AI is the dire threat of human destruction or enslavement by machines. Steve Wozniak says that robots will keep humans as pets. Lord Rees warns that we are “hurtling towards a post-human future”. Given that science considers both animals and machines to be purely material entities, why are such brilliant people so concerned about smarter machines?

Perhaps part of the answer can be found by posing the question: Would we be more afraid of a real tiger or an equally dangerous robot tiger? Most people would probably choose the latter. Scientific consensus or not, we remain dualists at heart, recognizing in animals a kinship of the mind and spirit – an assumption that, in the end, they are creatures with motivations and feelings similar to ours. About the machine, we are not certain, though it would be hard to explain this on a purely rational basis. For many, the very idea that the machine could have motivations and feelings is absurd – which is pure dualism – but even those who accept the possibility in principle hesitate to acknowledge the equivalence. Most of the concerned scientists who are unwilling to trust an autonomous machine to make a lethal choice are not pacifists, and are willing to allow human pilots or gunners to make the same decision. Rees, Hawking and co. may be concerned about intelligent robots that could enslave or destroy humanity, but are far less concerned that humans may do the same. Why is that? I think that the answer lies in a natural and prudent fear of alien intelligence.

Yes, alien! Though they are our constructions, most people see machines as fundamentally different, cold, non-empathetic, mechanical – and believe that any intelligence that may emerge from them will inherit these attributes. We may prize Reason as a crowning achievement of the human intellect, but both experience and recent scientific studies indicate that, in fact, humans are far from rational in matters of choice: Much of human (and animal) behavior emerges from emotions, biases, drives, passions, etc. We recognize intuitively that these – not Reason – form the bedrock of the values in which human actions are ultimately grounded. The duel between Reason and Passion, the Head and the Heart, Calculation and Compassion has pervaded the literature, philosophy, culture and language of all human societies, and has found expression in all spiritual, moral and legal systems. Those who lack empathy and emotion are regarded as sociopathic, psychopathic, heartless and inhuman. In this framework, machines appear to lie at the hyper-rational extreme – driven by algorithms, making choices through pure calculation with no room for empathy or compassion. Where, then, would a machine's values come from? Would it even have a notion of right and wrong, good and evil, kindness, love, devotion? Or would it only understand correct and incorrect, accurate and inaccurate, useful and useless? Would it have a purely rational value system with no room for humanity? It turns out that we humans fear the hyper-rational as much as we fear the irrational cruelty of a Caligula or Hitler. But in the case of intelligent machines, this fear is compounded further by at least three attributes that would make them more powerful than any human tyrant:

· Open-Ended Adaptation: As engineers, we humans build very complex machines, but their behavior is determined wholly by our design. Everything else is a malfunction, and all our engineering processes are geared to squeezing out the possibility of such malfunctions from the machines we build. Our best machines are reliable, stable, optimized, predictable and controllable. But this is a formula to exclude intelligence. A viable intelligent machine would be adaptive – capable of changing its behavioral patterns based on its experience in ways that we cannot possibly anticipate when it rolls off the factory floor. That is what makes it intelligent, and also – from a classical engineering viewpoint – unreliable, unpredictable and uncontrollable. Even more dangerously, we would have no way of knowing the limits of its adaptation, since it is inherently an open-ended process.

· Accelerated Non-Biological Evolution: It has taken biological evolution more than three billion years to get from the first life-forms on Earth to the humans and other animals we see today. It is a very slow process. And though we now understand the processes of life – including evolution – quite well, we are still not at the point where we are willing – or able – to risk engineering wholly new species of animals. But once machines are capable of inventing and building better machines, they could super-charge evolution by turning it into an adaptive engineering process rather than a biological one. Even the machines themselves may lose control of the still greater machines that could emerge rapidly through such hyper-evolution.

· Invincibility and Immortality: Whatever their dangers, humans and animals eventually die or can be killed. We know how long they generally live, and we know how to kill them if necessary. There's a lot of solace in that! Machines made from ever stronger metal alloys, polymers and composites could be infinitely more durable – perhaps self-healing and self-reconfiguring. They may draw energy from the Earth or the sun as all organisms do, but in many more ways and much more efficiently. Their electronics would work faster than ours; their size would not be as limited by metabolic constraints as that of animals. Suddenly, in the presence of gleaming transformers, humans and their animal cousins would seem puny and perishable. Who then would be more likely to inherit the Earth?

Ironically, as argued earlier, the idea of the intelligent machine as hyper-rational is probably just a caricature. If what we believe about animal and human minds is correct, machines complex enough to be intelligent will also bring with them their own suite of biases, emotions and drives. But that is cold comfort. We have no way of knowing what values these “emotional” machines might have, or if they would be anything like ours. Nor can we engineer them to share our values. As complex adaptive systems, they will change with experience – as humans do. Even the child does not entirely inherit the parents' value system.
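To make this point about learning from experience concrete, here is a minimal, purely illustrative sketch in Python of the kind of mechanism that underlies adaptive systems such as the DQN mentioned earlier: the designer specifies only a learning rule, an environment and a reward signal, and the behaviour itself is acquired from the agent's particular history of interactions. The toy world, reward values and parameters below are all invented for illustration.

```python
import random

# A toy illustration of "open-ended adaptation": the designer specifies only a
# learning rule and a reward signal, not the behaviour itself. What the agent
# ends up doing is a product of its particular history of experience.
# Everything here (environment, rewards, parameters) is invented for illustration.

ACTIONS = ["left", "right"]
N_STATES = 5            # positions 0..4 on a tiny line-world
GOAL = 4                # reaching the right end yields a reward

def step(state, action):
    """Move one position along the line; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == "right" else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-values start at zero: the behaviour is not designed in, it is acquired.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Occasionally act at random (explore); otherwise exploit what was learned.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # The update rule is fixed by the designer; the resulting policy is not.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned policy emerges from experience, not from explicit programming.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

A system like DQN replaces the table above with a deep neural network trained on raw screen images, which is part of why its behaviour after training cannot simply be read off from its design.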

The fears expressed by the critics of AI are more philosophical than real today, but they embody an important principle called the Precautionary Principle, which states that, if a policy poses a potentially catastrophic risk, its proponents have the burden of proof to show that the risk is not real before the policy can be adopted. In complex systems – and intelligent entities would definitely be complex systems – macro-level phenomena emerge from the nonlinear interactions of a vast number of simpler, locally acting elements, e.g., the emergence of market crashes from the actions of investors, or of hurricanes from the interaction of particles in the atmosphere. The “common sense” analysis on which we humans base even our most important decisions often fails completely in the face of such emergence. As a result, when it comes to building artificial complex systems or intervening in existing ones – through wars, social reforms, market interventions, etc. – almost all consequences are unintended. In addition to the many “known unknowns” inherent in a complex system due to its complexity, there is an infinitely greater set of “unknown unknowns” for which neither prediction nor vigilance is possible. The warnings by Hawking et al. are basically the application of the Precautionary Principle to the unleashing of the most complex system humans have ever tried to devise. Even thinkers as brilliant as Hawking or Musk cannot know what would actually happen if true AI were to emerge in machines; they are just not willing to take the risk because the worst-case consequences are too dire. And since the complexity of true AI would forever preclude a positive consensus on its benignness, the burden of proof imposed by the Precautionary Principle can never be met.
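The practical force of this argument can be shown even with a toy far simpler than any genuinely complex system. The sketch below – my own illustration, not anyone's model of AI – iterates the logistic map, a one-line deterministic rule, from two starting points that differ by a millionth; within a few dozen steps the trajectories have nothing to do with one another. A machine complex enough to be intelligent, embedded in a world of billions of interacting agents, is incomparably harder to predict, which is why a guarantee of benign behaviour is not to be had.

```python
# The logistic map, x -> r*x*(1-x), with r = 4.0, where its behaviour is chaotic.
# Two trajectories that start a hair's breadth apart soon diverge completely,
# defeating long-range prediction even though every step is perfectly specified.
# All numbers here are arbitrary and purely illustrative.

r = 4.0
x, y = 0.400000, 0.400001   # initial conditions differing by one millionth

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}:  x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.6f}")
```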

But can such machines ever be built in the first place? How will true AI emerge? How, if ever, will the issues of intelligence, autonomy, free will and responsibility be resolved? Thinkers much more accomplished than me have engaged with this problem, and continue to do so without achieving any consensus. However, let me suggest a principle inspired by Dobzhansky's famous statement, “Nothing in biology makes sense except in the light of evolution”. As we grapple with the deepest, most complex aspect of the human animal, I suggest that we adopt the following maxim: Nothing about the mind can make sense except in the light of biology. The mind is a biological phenomenon, not an abstract informational one. Focusing on the mind-as-algorithm – something to which I also plead guilty – may be formally justifiable, but it unwittingly ends up promoting a kind of abstract-concrete mind-body dualism, detaches mental functions from their physical substrate, and, by implying that these functions have some sort of abstract Platonic existence, blinds us to their most essential aspects. The mind – human or otherwise – emerges from a material, multicellular biological organism with a particular structure and specific processes, all shaped by development over an extended period in the context of a complex environment, and ultimately configured by three billion years of evolution. We will only understand the nature of the mind through this framework, and only achieve artificial intelligence by applying insights obtained in this way. And when we do, those machines will not only be intelligent, they will be alive – like us!

Finally, a word on when all this might happen. Simple extrapolation from current progress in AI would suggest that we are far from that time. However, progress in technology is often highly nonlinear. There are several developments underway that could supercharge progress towards AI. These include: The development of very large-scale neural networks capable of generalized learning without explicit guidance; the paradigm of embodied robotics with an emphasis on emergent behavior, development and even evolution; much better understanding of the structure and function of the nervous system; and rapid growth in technologies for neural implants and brain-controlled prosthetics. This topic is too vast to be covered here, but one point is worth considering. It is quite possible that the transition to intelligent machines will occur through transitional stages involving the replacement or enhancement of living animals and humans with implants and prosthetics integrated into the nervous system. Once it is possible to integrate artificial sensors into the nervous system and control artificial limbs with thought, how long will it be before the militaries of all advanced countries are building cyborg soldiers? And how long will these soldiers retain their human parts as better ones become available from the factory? The Singularity may or may not be near, but the Six Million Dollar Man may be just around the corner.