Artificial Stupidity

by Ali Minai

"My colleagues, they study artificial intelligence; me, I study natural stupidity." —Amos Tversky, (quoted in “The Undoing Project” by Michael Lewis).

Not only is this quote by Tversky amusing, it also offers profound insight into the nature of intelligence – real and artificial. Most of us working on artificial intelligence (AI) take it for granted that the goal is to build machines that can reason better, integrate more data, and make more rational decisions. What the work of Daniel Kahneman and Amos Tversky shows is that this is not how people (and other animals) function. If the goal in artificial intelligence is to replicate human capabilities, it may be impossible to build intelligent machines without "natural stupidity". Unfortunately, this is something that the burgeoning field of AI has almost completely lost sight of, with the result that AI is in danger of repeating, in building intelligent machines, the same mistakes that classical economists made in their understanding of human behavior. If this does not change, homo artificialis may well end up being about as realistic as homo economicus.

The work of Tversky and Kahneman focused on showing systematically that much of intelligence is not rational. People don’t make all decisions and inferences by mathematically or logically correct calculation. Rather, most decisions and inferences are based on rules of thumb – or heuristics – driven not by analysis but by values grounded in instinct, intuition, and emotion: Kludgy short-cuts that are often “wrong” or sub-optimal, but usually “good enough”. The question is why this should be the case, and whether it is a “bug” or a “feature”. As with everything else about living systems, Dobzhansky’s brilliant insight provides the answer: This too makes sense only in the light of evolution.

The field of AI began with the conceit that, ultimately, everything is computation, and that reproducing intelligence – even life itself – was only a matter of finding the “correct” algorithms. As six decades of relative failure have demonstrated, this hypothesis may be true in an abstract formal sense, but is insufficient to support a practical path to truly general AI. To paraphrase Feynman, Nature’s imagination has turned out to be much greater than that of professors and their graduate students. The antidote to this algorithm-centered view of AI comes from the notion of embodiment, which sees mental phenomena – including intelligence and behavior – as emerging from the physical structures and processes of the animal, much as rotation emerges from a pinwheel when it faces a breeze. From this viewpoint, the algorithms of intelligence are better seen, not as abstract procedures, but as concrete dynamical responses inherent in the way the structures of the organism – from the level of muscles and joints down to molecules – interact with the environment in which they are embedded.

When an animal produces a fruitful or futile behavior, it is because of how the electrical and chemical activity of its cells (including the neurons of the nervous system) is shaped by this interaction. While embodiment is often studied in terms of mechanical behaviors such as walking, it should be clear that it is just as relevant at the level of cellular networks in the nervous system as for muscles and joints responding to gravity, friction, or applied force. Information enters the animal through the activation of sensory receptors (such as the cells of the retina or the hairs of the inner ear), and flows through the brain and the body, interacting with the state of the system, dwelling in its networks, and being reshaped by them. This process – this flow – activates the neurons that innervate muscles, leading to movement; it also activates neurons in other parts of the brain, whose activity corresponds to percepts, memories, thoughts, plans, emotions, decisions. In this fundamental sense, there is no distinction between perception, thought, and action. All aspects of intelligence – including deep deliberation – must arise from this dynamics. But most importantly, so must all the mundane inferences and decisions on which the survival of the individual depends. The choices we make must be configured implicitly into our biology by the three adaptive processes that shape it: Evolution, development, and learning.

One of the biggest gaps between AI and natural intelligence is speed. Animals – including humans – live their lives in a very complex, ever-changing, dangerous world, and there are many things they must “know” from birth or learn quickly based on just a few experiences, and use this knowledge in real-time for such critical purposes as evading predators and finding food. Even more remarkably, they must do this using networks of slow, messy, error-prone biological cells rather than high-speed information processors or a vast, error-free digital memory. How does that happen? While AI systems have recently achieved spectacular successes in learning complex tasks, the learning that powers them depends crucially on five elements: 1) The availability of large amounts of data; 2) The ability to store this data off-line in memory, and to access it repeatedly for rehearsal; 3) The computational capacity to extract the requisite information from the data; 4) The time to carry out the computationally expensive process of repeatedly going through a lot of data to learn incrementally; and 5) The energy to sustain the whole process. None of these is available to a real animal, or has been available to humans through most of our species’ history. Nor have they been needed. Ideas that require great effort to understand or tasks that require a lifetime of practice to master are relatively recent developments even in human history, and are probably not a significant part of the experience of other animals outside of laboratory or circus settings. For an intelligent machine to learn chess or Go is remarkable, but says little about real intelligence. It is more useful to ask how a human child can recognize dogs accurately after seeing just one or two examples, or why a human being, evolved to operate at the speed of walking or running, can learn to drive a car through traffic at 70 mph after just a few hours of experience. This general capacity for rapid learning is the real key to intelligence – and is not very well-understood.
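To make the contrast concrete, here is a minimal sketch (in Python, with invented data; nothing in it comes from the article) of the conventional training loop that depends on the five elements above: a large dataset held in memory and rehearsed over many times, at considerable computational cost, before the model becomes competent.

```python
# A minimal, hypothetical sketch of why current machine learning leans on the
# five elements listed above: a large stored dataset (1, 2), repeated rehearsal
# over it (2, 4), and heavy computation sustained over time (3, 5).
# Toy logistic-regression classifier trained by batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# 1) "Large amounts of data", held off-line in memory (2).
n_samples, n_features = 50_000, 20
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = (X @ true_w + 0.5 * rng.normal(size=n_samples) > 0).astype(float)

# 3, 4, 5) Repeated passes (epochs) over the whole stored dataset, adjusting
# the model incrementally; cost grows with data size and number of passes.
w = np.zeros(n_features)
learning_rate = 0.1
for epoch in range(200):                       # many rehearsals of the same data
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # predictions for every example
    grad = X.T @ (p - y) / n_samples           # gradient of the log-loss
    w -= learning_rate * grad                  # small incremental update

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
print(f"accuracy after 200 passes over 50,000 stored examples: {accuracy:.3f}")
```

An animal fleeing a predator gets no such luxury: it cannot store the world, replay it hundreds of times, or wait for a loss function to converge.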

When an artificial system such as a car or computer is first deployed, it is not surprising to see it work perfectly. The reason is that the designers of these systems are expected to have configured the system’s functionality with precision, and the manufacturer to have implemented this fully functional design accurately. But these designed systems only work for predictable situations, and even if they adapt, it is in predictable ways (as when a voice-activated controller adapts to its user’s pronunciation). Animals face a very different situation: The environment in which they must make choices is extremely complex and unpredictable; their own bodies and brains are extremely complex; and – most importantly – there is no team of designers or engineers to guarantee functionality. And yet, they thrive! The reason, of course, is prior biases.

The fundamental genius of evolution has been to preconfigure biases – or priors – into the very structure of animal bodies and brains – available, if not always fully developed, at birth. Some capacities need experience to develop, and this occurs during the early developmental phase in complex animals such as birds and mammals. And finally, much of the fine tuning occurs as learning shapes the neural networks of the nervous system in the context of the body embedded in its environment. As a result, when an animal senses a predator, it does not need to run an explicit “program” to recognize it and to figure out an optimal escape path. A viable response strategy is already configured in its brain and body, and is immediately triggered through the mediation of emotions such as fear or anxiety. Formally, this may be seen as an “algorithm”, but practically, it is embodied in the “hardware” – or “wetware” – of the animal itself. Ultimately, all intelligence is a toolbox of interacting priors – some related to behavior, others to perception or cognition. It is these priors that shape experience in response to sensory stimuli. And, in sufficiently complex animals such as humans, they also shape the inner workings of the mind: Thought, planning, imagination, creativity. Intelligence can only be understood properly in terms of these priors, which are the mind’s natural substrate.

The functional role of the priors is to produce useful and timely responses automatically – without explicit thinking. If walking, for example, required thinking about, planning, and evoking each movement of every muscle, no one could ever walk. But walking is embedded as a prior pattern of activity in the neural networks of the brain and spinal cord, and the musculoskeletal arrangement (also a network) of bones, muscles, and tendons. It can be evoked in its entirety by a simple command from a higher brain region – as can other behaviors such as running, chewing, laughing, coughing, etc. The heuristics of inference and decision making are similarly preconfigured, to be triggered automatically without explicit thinking – a characteristic captured in the notions of instinct, intuition, snap judgment, and common sense. Equally important, however, is the perceptual and cognitive infrastructure that must underlie these operational heuristics. Why is it that children can learn to recognize dogs or tables based on only a few exemplars whereas AI programs require thousands of iterations over thousands of examples? The answer is that the human brain already has filters configured to recognize salient features – not just in dogs or tables, but in the world. Of all the infinite variety of features – shapes, color combinations, structures, sizes, etc. – the infant brain learns early in development – long before it needs to recognize dogs and tables – which limited set of features is likely to be useful in the real world. In doing so, it sets the expectations for what can be recognized in that world, and also for what gets ignored. This is the mind’s most fundamental prior, its deepest bias. It turns animals into compulsive pattern recognizers and predictors – seeing patterns and predicting outcomes automatically even when there isn’t enough data to justify them, because waiting for enough data or grinding through to a perfect prediction would be fatal. A successful animal is one with the right instincts, not one with the best calculations. This, ultimately, is the root of cognitive irrationality: Everything we perceive without effort, every inference and decision we make automatically, and every action we take without deliberation is shaped purely by our priors – our biases, prejudices, preconceptions, preferences – and addictions. This is our “nature” at the most basic level, the part that is most ingrained and most difficult to change, except possibly at a young age. It is our “lizard brain”.
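As a rough illustration of how priors can stand in for data, consider the following sketch (the feature "filters", the data, and the category names are all invented for this example): once useful feature detectors exist, a new category can be acquired from a couple of examples simply by storing a prototype in that feature space, with no iterative training at all.

```python
# Hedged sketch: if useful feature "filters" already exist (standing in for
# priors laid down by evolution, development, and early learning), a new
# category can be learned from one or two examples by storing a prototype
# in that feature space. All names and data here are invented.
import numpy as np

rng = np.random.default_rng(1)

def prior_features(x, filters):
    """Project raw input through fixed 'innate' filters (a stand-in for priors)."""
    return np.tanh(filters @ x)

raw_dim, feat_dim = 100, 16
filters = rng.normal(size=(feat_dim, raw_dim))   # fixed before any "dog" is ever seen

# Two raw examples each of two new categories, clustered around category templates.
dog_template, table_template = rng.normal(size=raw_dim), rng.normal(size=raw_dim)
dog_examples = [dog_template + 0.3 * rng.normal(size=raw_dim) for _ in range(2)]
table_examples = [table_template + 0.3 * rng.normal(size=raw_dim) for _ in range(2)]

# "Learning" = storing one prototype per category: no gradient descent, no epochs.
prototypes = {
    "dog": np.mean([prior_features(x, filters) for x in dog_examples], axis=0),
    "table": np.mean([prior_features(x, filters) for x in table_examples], axis=0),
}

def classify(x):
    f = prior_features(x, filters)
    return min(prototypes, key=lambda c: np.linalg.norm(f - prototypes[c]))

test = dog_template + 0.3 * rng.normal(size=raw_dim)   # a never-before-seen "dog"
print(classify(test))   # expected: "dog"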
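```

All the heavy lifting in this sketch is hidden in the filters themselves, which is exactly where evolution, development, and early learning do their slow, expensive work.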

Psychologists have long recognized a distinction between “automatic” and “effortful” cognitive and behavioral tasks. Considerable experimental evidence has also accumulated from neuroscience about the existence of preconfigured neurocognitive networks in the brain, which are activated as a whole during particular situations and, presumably, underlie the generated response. Recently, Daniel Kahneman has formalized the notion of looking at mental function as an interaction between two systems: System 1 (or the Fast System), which rapidly and automatically generates intuitive perceptions, inferences, judgments, decisions, and behaviors; and System 2 (or the Slow System), which works more deliberately, but is called into action only for complex tasks (and often not even then). System 1, acting with minimal explicit thought, is prone to logical errors, fallacies, and illusions in some situations, but its value as a real-time, almost effortless system outweighs such problems. The deliberative System 2, in contrast, is much less likely to make simple errors, but requires too much time, information, and effort to be used for general-purpose real-time tasks. The critical point here is that the irrationality of System 1 – its “natural stupidity” – is a feature, not a bug. It is an essential price that must be paid for useful real-time perception, thought, and behavior to be possible for a realistic system. A System 2-like insistence on deliberate responses would leave the animal “buried in thought” (to quote Edwin Guthrie’s critique of Edward Tolman’s model of behavior).
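The division of labor Kahneman describes can be caricatured in a few lines of code. The sketch below is purely schematic; every function, threshold, and option in it is invented for illustration. A cheap heuristic answers by default, and the expensive deliberative routine is consulted only when the stakes are high or the quick answer looks unreliable.

```python
# A schematic rendering of the two-system interaction described above.
# All functions, thresholds, and options are invented for illustration.

def system1(options):
    """Fast path: grab the option with the single most salient cue."""
    return max(options, key=lambda o: o["salience"])

def system2(options):
    """Slow path: exhaustively score every option on every attribute."""
    return max(options, key=lambda o: sum(o["attributes"]))

def decide(options, stakes, confidence_threshold=0.8):
    quick = system1(options)
    # Deliberate only when the quick answer looks unreliable or a mistake is costly.
    if stakes > 0.9 or quick["confidence"] < confidence_threshold:
        return system2(options)
    return quick

options = [
    {"name": "A", "salience": 0.9, "confidence": 0.95, "attributes": [0.2, 0.3, 0.1]},
    {"name": "B", "salience": 0.4, "confidence": 0.70, "attributes": [0.5, 0.4, 0.6]},
]
print(decide(options, stakes=0.2)["name"])    # low stakes: System 1 answers ("A")
print(decide(options, stakes=0.95)["name"])   # high stakes: System 2 answers ("B")
```

In this toy setting the fast path sometimes picks the "wrong" option, which is precisely the price it pays for answering instantly.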

It may be asked whether evolution could have configured more rational priors, or animals capable of learning such priors. At an abstract level, in fact, one can look at biological evolution entirely as a process of generating animals with better priors, capable of more complex useful actions. Indeed, the very emergence of deliberative, System 2 thinking in humans and other higher mammals demonstrates this. But as the universe of behavior becomes more complex (think of the behavioral choices facing an earthworm against those available to a human), being effective becomes both more possible and more difficult. The difficulty arises because, as more elementary behaviors become available, the number of choices offered by their combinations increases exponentially and each choice is itself more complex. Thus, although the best of these choices may be very effective, they are also exponentially harder to find through a learning process, and much more expensive to execute because of the cognitive load they entail: The deer that looks for the best possible path before it flees from a lion is likely to be killed. In fact, this limitation applies not only to animals, but to all complex systems, e.g., foreseeing all possibilities and making correct predictions in a complex economy is far more difficult than doing so in a simple one. The elegant solution found by nature is to select choices that are “good enough” – and to preconfigure them into the system. This is an instance of the satisficing principle proposed by Herbert Simon – one of the founding fathers of AI, complex systems, and behavioral economics – as a fundamental feature of human behavior. Humans and animals succeed by satisficing, not optimizing: Decisions are made at the lowest sufficient level of deliberation. The opportunity cost of being fully optimal or rational is too high for it to be biologically viable in most situations. This also means that more complex animals are likely to be more profoundly irrational in their instinctive choices because these choices are made in more complex frameworks – an insight to consider in thinking about AI.
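Simon's satisficing idea is easy to state in code. In the hypothetical sketch below (the actions, plan length, aspiration level, and scoring function are all made up), the space of composite plans grows exponentially, yet the satisficer commits to the first plan that clears its aspiration level after evaluating only a tiny fraction of them.

```python
# Illustrative sketch of satisficing over a combinatorial choice space.
# With n elementary behaviors combined into plans of length k there are n**k
# candidate plans, so exhaustive optimization quickly becomes infeasible,
# while a satisficer stops at the first "good enough" plan.
import itertools
import random

random.seed(0)

actions = list(range(10))          # 10 elementary behaviors
plan_length = 6                    # 10**6 = 1,000,000 candidate plans

def score(plan):
    """Stand-in for how well a plan works; any black-box evaluation will do."""
    return sum(random.random() for _ in plan) / len(plan)

def satisfice(aspiration=0.6):
    evaluated = 0
    for plan in itertools.product(actions, repeat=plan_length):
        evaluated += 1
        if score(plan) >= aspiration:      # "good enough": stop searching
            return plan, evaluated
    return None, evaluated

plan, evaluated = satisfice()
print(f"good-enough plan found after evaluating {evaluated} of "
      f"{len(actions) ** plan_length:,} possible plans")
```

An optimizer, by contrast, would have to evaluate all one million plans before acting, which is exactly the deer pausing to compute the best possible escape route.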

While the two-system view of mental function is conceptually useful, there is little experimental evidence for an explicit division between two qualitatively different systems. It is probably more accurate to think of a continuum from automatic to effortful mental function, with various tasks requiring different levels of cognitive effort. As anyone who has learned to drive a car, play a sport, or speak a new language knows, behaviors that initially require significant effort can become automatic with practice. However, the levels of effort required for learning in these cases are quite different. One can learn to drive or ride a bicycle in a fairly short time, but learning to speak a language or hit a fastball takes far more effort. Of course, many tasks never become automated, either because they are too complex or because they are not performed frequently enough to warrant such learning. From this continuum perspective, System 1 and System 2 are the two extreme ends of the same system. More importantly, it is not that System 1 is all heuristics and System 2 fully rational, but that it’s heuristics all the way: The heuristics just get slower, more information-intensive, more complex, and more rational as the tasks become more complex. And, as such, System 2 is just as prone to making mathematically incorrect or sub-optimal choices as System 1, but the errors made by System 2 are more complicated, and are seen as being of a different kind. A gambler who picks a sub-optimal gamble may be regarded as irrational, but what of decision makers who, after seeing a lot of data and going through extensive deliberations, still make poor decisions? Are they, too, irrational, or is there another way to see this?

Two things may provide some insight here. First, even with slow and deliberative thinking, there is usually no hope of actually considering all options in truly complex situations. No chess or Go player can evaluate all potential moves, nor can a general consider every single battlefield possibility. Even inferences and decisions made carefully are, in the end, based on prior biases, albeit on biases that unfold more gradually and with much more complexity. This idea is captured in Herbert Simon's concept of bounded rationality. The second – related – point is that more complex mental processes are, in fact, built on the foundation of simpler ones, all the way down to the deepest, most primitive heuristics of the lizard brain. Deliberation is not a separate, qualitatively different process from automatic choice, nor is operational Reason (as opposed to idealized Reason) anything other than a set of much more complex heuristics built from the simpler ones provided by instinct and rooted in emotion. Not only does the rational agent of economics not exist, it cannot exist. The complexity of the world does not allow perfect rationality: It is too contingent, too nonlinear, and has too many interacting parts, leading to a fundamental undermining of linear causality.

A two-system view of the mind invites questions about how these two systems differ, what different substrates they are embedded in, and what different processes they use. The continuum view, in contrast, is rooted in an evolutionary perspective. Seen in this framework, the earliest animals capable of behavior would have been purely stimulus-response agents with behavior emerging from a direct linkage between their sensory and motor systems without any intervening “thinking” process. Gradually, as animals evolved more elaborate sensors and more complex bodies along with larger nervous systems, more complex relationships would have emerged between stimulus and response, including the possibility that certain responses could be inhibited or triggered selectively by interacting processes within the nervous system. That is the most primitive level of thought and choice. Then, as brains and bodies became even more complex, so would the linkage between stimulus and response, involving remote memories, concepts, categories, associations, language – eventually abstraction. And as this more complex machinery for perception, thought, and action evolved, the processes for triggering and inhibiting them would also have become more complex – coming to be recognized by humans in terms of values and emotions.

All this has fundamental implications for analyzing human epistemology and understanding the parameters of human societal organization, but these issues are too vast to be explored here. However, the implications of a “continuum of irrationality” viewpoint for building artificial intelligence are also worth exploring.

The AI project is an outgrowth of the larger Reason-based Science endeavor that has so transformed the world in the last few hundred years. It is difficult for most people to appreciate how radical the enterprise of Science is, and what a profound challenge it poses to human nature by explicitly rejecting instinct, intuition, and common sense as the basis of knowledge – substituting in their place empirical observation, mathematical analysis, and perpetually provisional models of reality. As such, Science offers the possibility of transcending the limits of inherent human irrationality, allowing humanity to address truly complex problems. Grounded in materialism and reductionism, and scoring success after success, it engenders the hope of total comprehension through complete rationality. But, as has already been argued above, this is an illusion. Science and technology have indeed transformed human epistemology in radical ways, but the complexity of both physical and biological systems remains a fundamental impediment to complete knowledge and total control. In fact, Science has itself helped to establish these limitations rigorously through concepts such as incompleteness, uncertainty, chaos, emergence, and complexity. This is especially relevant for human systems such as societies, economies, and organizations, which are complex systems of interacting agents that are themselves complex and irrational. Modeling humans as completely rational in formalisms such as classical economics and game theory is useful, but not very realistic, which is the prime motivation for that burgeoning science of irrationality: Behavioral economics. Will AI change this?

With exponential growth in computational power and availability of data, the unthinkable (literally, that which could not be thought) is now almost possible: Optimal – or near-optimal – choices can be calculated in real-time from vast amounts of data, even in some very complex tasks. And, thanks to the magic of machine learning, the mechanisms underlying these choices do not have to be specified by brain-limited humans; they can be inferred by the machines using the available data. So is AI finally going to give us the idealized rational agents of economists’ dreams? That is extremely doubtful! True, unlike the mechanisms of most human learning, the algorithms of machine learning are often based on rational objectives, but, like humans, machines must also learn from finite – albeit much larger – amounts of data. Thus, like humans, they too must fill in the gaps in data with heuristics – interpolating, extrapolating, simplifying, and generalizing just as humans do, but possibly in very different ways. And therein lies the rub! For now, machines try to learn something close to the human notion of rationality, which is already quite different from human thinking. But as intelligent machines progress to increasingly complex real-world problems and learn from increasingly complex data, the inferences they make will become less comprehensible, not more, because the complexity of the tasks will make the decision-making more opaque. And if machines are to become truly intelligent, they must become capable of learning rapidly like humans and other animals. But what they learn in that case will necessarily be even more biased by their priors and even less clearly interpretable to human observers – especially since many of these priors will themselves be acquired through learning.
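A toy example of this gap-filling (with a made-up target function and data) is sketched below: two models fit the same handful of points almost equally well, yet give sharply different answers outside the range of their experience, because each fills the gap with its own inductive bias.

```python
# Illustrative sketch: finite data force any learner to fill gaps with its own
# biases. Two models fit the same nine points, then extrapolate differently.
import numpy as np

x_train = np.linspace(0, 3, 9)
y_train = np.sin(x_train) + 0.05 * np.random.default_rng(2).normal(size=9)

linear = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
cubic = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

x_new = 6.0                         # well outside the training range
print(f"linear model's guess at x=6: {linear(x_new):+.2f}")
print(f"cubic  model's guess at x=6: {cubic(x_new):+.2f}")
print(f"true value sin(6)          : {np.sin(x_new):+.2f}")
```

Which extrapolation counts as "rational" is not decided by the data at all; it is decided by the prior built into the learner.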

A popular cartoon notion of intelligent machines imagines them as unable to have “feelings” because they are irretrievably rational. This is seen as posing a danger because such hyper-rational machines would lack empathy and make “heartless” decisions. In fact, the truth is more disturbing: Once they are sufficiently complex and autonomous, machines will have “feelings”, but they will not be the same as those we recognize in humans. These feelings may have no names, or they may seem like emotions we recognize, but with unexpected twists. In humans (and presumably in other animals) emotions function as a grounding mechanism, providing value for behaviors. They are the internal arbiters of what actually gets said or done. These arbiters – these filters – have been configured by evolution over eons, and shaped in each individual by their human experiences. They are inescapably, irrevocably path-dependent, and cannot simply be replicated top-down in a machine built by engineers. They must be “bred in the bone” and learned by experience. The largely human design of their bodies and the more data-driven learning of intelligent machines will inevitably lead to the emergence of a value system in each individual machine, based on its own experience and embodiment. And while some of these values may be visible to – or even controllable by – its designers, the fundamentally emergent nature of complex learning means that much will be neither predictable nor visible. AI is not the creation of artificial and obedient humans; it is the generation of new species. There’s no reason to believe that the continuum of irrationality that begins with System 1 and the lizard brain will stop with human System 2, and not continue expanding in these new species with very different bodies and brains. Intelligent machines will not be more rational; they will probably be more profoundly irrational (or boundedly rational) than humans in unpredictable and inscrutable ways.

In the spring of 2016, Microsoft deployed a sweet chatbot named Tay on Twitter. It used AI to learn from the tweets of those it followed, and generate its own tweets. Before long, it was sending out the vilest, most bigoted and racist tweets imaginable, having learned from the vast reservoir of such material on Twitter. Less than a day after its deployment, Microsoft was forced to shut it down. This and other experiences have caused some to worry that AI left to learn from experience will develop biases and prejudices like humans. This concern is fully justified: There can be no AI without prejudices, because to be intelligent is to have prior expectations – innate or learned. But the concern does not go far enough. If and when they come to pass, truly intelligent machines will have their own irrationalities, their own instincts, intuitions, and heuristics. They will make choices based on values that have emerged within their embodiment as a result of their development and learning. And their heuristics and their choices will often not be consistent with the common values that most humans share because of a shared biological origin. The “heartless” decisions computers sometimes make today on issues like medical care and loans are merely the result of rules coded into algorithms by humans. Wait until the machines come up with their own rules. At that point, we may need a science of “artificial stupidity” as well. If we survive…