Artificial Stupidity

by Ali Minai

“My colleagues, they study artificial intelligence; me, I study natural stupidity.” —Amos Tversky (quoted in “The Undoing Project” by Michael Lewis)

Not only is this quote by Tversky amusing, it also offers profound insight into the nature of intelligence – real and artificial. Most of us working on artificial intelligence (AI) take it for granted that the goal is to build machines that can reason better, integrate more data, and make more rational decisions. What the work of Daniel Kahneman and Amos Tversky shows is that this is not how people (and other animals) function. If the goal of artificial intelligence is to replicate human capabilities, it may be impossible to build intelligent machines without “natural stupidity”. Unfortunately, the burgeoning field of AI has almost completely lost sight of this, with the result that AI is in danger of making the same mistake in building intelligent machines that classical economists made in their understanding of human behavior. If this does not change, homo artificialis may well end up being about as realistic as homo economicus.

The work of Tversky and Kahneman focused on showing systematically that much of intelligence is not rational. People don’t make all decisions and inferences by mathematically or logically correct calculation. Rather, they make them using rules of thumb – heuristics – driven not by analysis but by values grounded in instinct, intuition and emotion: kludgy short-cuts that are often “wrong” or sub-optimal, but usually “good enough”. The question is why this should be the case, and whether it is a “bug” or a “feature”. As with everything else about living systems, Dobzhansky’s brilliant insight provides the answer: this too makes sense only in the light of evolution.

The field of AI began with the conceit that, ultimately, everything is computation, and that reproducing intelligence – even life itself – was only a matter of finding the “correct” algorithms. As six decades of relative failure have demonstrated, this hypothesis may be true in an abstract formal sense, but it is insufficient to support a practical path to truly general AI. To paraphrase Feynman, Nature’s imagination has turned out to be much greater than that of professors and their graduate students. The antidote to this algorithm-centered view of AI comes from the notion of embodiment, which sees mental phenomena – including intelligence and behavior – as emerging from the physical structures and processes of the animal, much as rotation emerges from a pinwheel when it faces a breeze. From this viewpoint, the algorithms of intelligence are better seen not as abstract procedures, but as concrete dynamical responses inherent in the way the structures of the organism – from the level of muscles and joints down to molecules – interact with the environment in which they are embedded.
