Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

by Yohan J. John

Scientists have long acknowledged the power of metaphor and analogy. Properly understood, analogical and metaphorical thinking are not merely ornamental aspects of language, but serve as a bridge from the known to the unknown. Perhaps the most important example of this process was the one that epitomizes the scientific revolution: Isaac Newton's realization that both heavenly and terrestrial bodies were governed by the same physical laws. A precise mathematical analogy exists between the falling of an apple and the orbit of the moon around the earth. The moon can be thought of as a really big and far-away apple that's "perpetually falling". Newton's analogy rests upon a broadening of the concept of free-fall — in other words, it involves a more abstract concept of motion. A couple of centuries later, James Clerk Maxwell recognized the process of generalization and abstraction as central to the scientific enterprise. The new sense of an idea, "though perfectly analogous to the elementary sense, is wider and more general. These generalized forms of elementary ideas may be called metaphorical terms in the sense in which every abstract term is metaphorical." We might go so far as to call metaphor the alchemy of thought — the essence of creativity.
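(For the quantitatively inclined, the analogy can be made concrete with the classic "moon test". The figures below are standard textbook values rather than anything from Newton or from this essay, so treat this as an illustrative sketch.)

```latex
% Newton's "moon test", using standard textbook values (my assumption; these
% numbers are not in the essay). Surface gravity, scaled by the inverse-square
% law out to the Moon's distance (about 60 Earth radii):
\[
a_{\mathrm{fall}} \approx g \left(\frac{R_E}{r}\right)^2
  \approx \frac{9.8\ \mathrm{m/s^2}}{60^2}
  \approx 2.7 \times 10^{-3}\ \mathrm{m/s^2}
\]
% Centripetal acceleration of the Moon's (roughly circular) orbit, with
% r = 3.84 x 10^8 m and orbital period T = 27.3 days (about 2.36 x 10^6 s):
\[
a_{\mathrm{orbit}} = \frac{4\pi^2 r}{T^2}
  \approx \frac{4\pi^2 \times 3.84 \times 10^{8}\ \mathrm{m}}{(2.36 \times 10^{6}\ \mathrm{s})^2}
  \approx 2.7 \times 10^{-3}\ \mathrm{m/s^2}
\]
```

The two numbers agree: the moon "falls" toward the earth at just the rate the inverse-square law predicts for an apple placed at the moon's distance.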

Words like "abstraction" and "generalization" can often appear neutral, or even positive, depending on your intellectual tastes. But there are drawbacks to these unavoidable consequences of analogical thinking. The one that most often receives comment from scientists and philosophers is the fact that analogies are only ever partial: there are always differences between things and processes — that's how we know that they aren't identical in the first place. In other words, every abstraction involves a loss of specificity.

If scientists discover processes that are "perfectly analogous" to each other, as in the Maxwell quote above, then this loss is so minuscule that it doesn't cause any real problems. But in areas of active research, much more circumspection is required. When we propose that one system serve as a model for another that we don't understand, we must be careful not to lose sight of the inevitable differences between the model and reality. Otherwise we may confuse the map with the territory, forgetting that a map can only serve as a map by being less detailed than what it represents. As a model becomes more detailed, it eventually becomes just as complex as the real thing, and therefore useless as a tool for understanding. As the cybernetics pioneers Arturo Rosenblueth and Norbert Wiener joked, "the best material model for a cat is another, or preferably the same cat."

So the loss inherent in the process of analogy cannot be avoided through additional detail or specificity. In any case, fastidious adherence to strictly literal language severely retards our ability to create new knowledge. If we seek any kind of usable understanding, we have to use analogy, taking care to watch out for the places where our analogies will inevitably break down.

"No alarms and no surprises" Factoryman

People who study the mind and brain often confront the limits of metaphor. In the essay 'Brain Metaphor and Brain Theory', the vision scientist John Daugman draws our attention to the fact that thinkers throughout history have used the latest material technology as a model for the mind and body. In the Katha Upanishad (which Daugman doesn't mention), the body is a chariot and the mind is the reins. For the ancient Greeks, hydraulic metaphors for the psyche were popular: imbalances in the four humors produced particular moods and dispositions. By the 18th and 19th centuries, mechanical metaphors predominated in western thinking: the mind worked like clockwork. The machine metaphor has remained with us in some form or other since the industrial revolution: for many contemporary scientists and philosophers, the only debate seems to be about what sort of machine the mind really is. Is it an electrical circuit? A cybernetic feedback device? A computing machine that manipulates abstract symbols? Some thinkers are so convinced that the mind is a computer that they invite us to abandon the notion that the idea is a metaphor. Daugman quotes the cognitive scientist Zenon Pylyshyn, who claimed that "there is no reason why computation ought to be treated merely as a metaphor for cognition, as opposed to the literal nature of cognition".

Daugman reacts to this Whiggish attitude with a confession of incredulity that many of us can relate to: "who among us finds any recognizable strand of their personhood or of their experience of others and of the world and its passions, to be significantly illuminated by, or distilled in, the metaphor of computation?" He concludes his essay with the suggestion that "[w]e should remember that the enthusiastically embraced metaphors of each "new era" can become, like their predecessors, as much the prisonhouse of thought as they at first appeared to represent its liberation."

This sort of wariness may be a necessary form of humility for any scientist, aligning with Rosenblueth and Wiener's warning: "The price of metaphor is eternal vigilance."[1] But when there are no competing metaphors available, it seems as if there is little choice but to work with what is known. Moreover, we can discern a common thread linking all prior metaphors for the mind: they were all technologies, and as such humans felt a sense of control over them. So perhaps we have been using the same metaphor all along?

At the heart of all technological metaphors is control. And control requires clear prediction — no surprises. This became clear to Norbert Wiener when, during World War II, he found himself contemplating how the movements of a pilot could be predicted by an automated anti-aircraft gun:

"It does not seem even remotely possible to eliminate the human element as far as it shows itself in enemy behavior. Therefore, in order to obtain as complete a mathematical treatment as possible of the over-all control problem, it is necessary to assimilate the different parts of the system to a single basis, either human or mechanical. Since our understanding of the mechanical elements of gun pointing appeared to us to be far ahead of our psychological understanding, we chose to try to find a mechanical analogue of the gun pointer and the airplane pilot." [2]

In other words, any composite system consisting of human and machine — pilot and plane, as well as gunner and gun — must be abstracted into a generalized mechanical scheme rather than a generalized human scheme, since it is only machines that we are currently capable of predicting and controlling. This reveals something about how the "human element" is understood from the perspective of cybernetics, a perspective that has permeated virtually all fields that study biological agents. The field of cybernetics was christened by Wiener, who defined it as the science of control and communication in the animal and the machine. In principle, the human element can be made legible only if a suitable mechanical analog is found. In other words, human beings are controllable to the extent that they can be accurately modeled as machines. So the machine metaphor seems unavoidable — if, that is, we see understanding and control as the same thing. From this perspective it makes sense that the only other tangible metaphor that arises when talking about the self is an animal metaphor: we seek to tame the animal spirits we discover in ourselves and in society. A domesticated animal is a lot like a machine, in that it is predictable and reliable. It does what it is expected to do. Ludwig von Bertalanffy, a pioneer of general systems theory, elaborated on this theme in 1968:

"… it is the "human element" which is precisely the unreliable component of their creations. It either has to be eliminated altogether and replaced by the hardware of computers, self-regulating machinery and the like, or it has to be made as reliable as possible, that is, mechanized, conformist, controlled and standardized. In somewhat harsher terms, man in the Big System is to be — and to a large extent has become — a moron, button-pusher or learned idiot, that is, highly trained in some narrow specialization but otherwise a mere part of the machine. This conforms to a well-known systems principle, that of progressive mechanization — the individual becoming ever more a cogwheel dominated by a few privileged leaders, mediocrities and mystifiers who pursue their private interests under a smokescreen of ideologies". [3]


Wiener also recognized the ominous implications of the cybernetic view of human beings. In his book Cybernetics, the following warning appears, perhaps more relevant now, in our age of anxiety about automation, than when it was written in 1948:

"Perhaps I may clarify the historical background of the present situation if I say that the first industrial revolution, the revolution of the "dark satanic mills," was the devaluation of the human arm by the competition of machinery. There is no rate of pay at which a United States pick-and-shovel laborer can live which is low enough to compete with the work of a steam shovel as an excavator. The modern industrial revolution is similarly bound to devalue the human brain, at least in its simpler and more routine decisions. Of course, just as the skilled carpenter, the skilled mechanic, the skilled dressmaker have in some degree survived the first industrial revolution, so the skilled scientist and the skilled administrator may survive the second. However, taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that is worth anyone's money to buy.

"The answer, of course, is to have a society based on human values other than buying and selling. To arrive at this society, we need a good deal of planning and a good deal of struggle…". [4]

You don't have to be a leftist or a conspiracy theorist to wonder if the darkest predictions of Wiener and von Bertalanffy are already becoming a reality: the economic system we find ourselves in seems bent on maximizing the number of 'useless' people in society, and rendering machine-like the behavior of those who have not yet been made redundant. This conjures up the picture of modernity painted by the song 'Fitter, Happier', from Radiohead's 1997 album OK Computer. The song features a cold computerized voice reading out a list of the kinds of traits that the model neoliberal subject typically aspires to: "Not drinking too much / Regular exercise at the gym […] Slower and more calculated […] Concerned but powerless / An empowered and informed member of society / Pragmatism not idealism". It ends with the following lines:

"Fitter, healthier and more productive

A pig in a cage on antibiotics"

"I may be paranoid, but no android"

The machine metaphor of mind is useful from the perspective of science, even if we don't yet have anything like Maxwell's "perfectly analogous" models. It serves as a conceptual foundation from which to construct new theories and experiments. And if we are as careful as Daugman suggests, we will keep an eye out for phenomena that don't quite fit the metaphor. Perfect models of human beings may well render us cogs in the machine if placed in the wrong hands, but knowledge always has liberatory potential too: widespread public understanding of the human mind could help us bridge the personal and the political, identifying how the pathologies of societies, economic systems and the environment are mirrored in the pathologies of the individual. But the mind-as-machine metaphor may, ironically, hinder our socio-economic progress even as it assists our scientific and technological research. To approach this idea, we must recognize that there is more to metaphor than its ostensive practical meaning. This is especially true when we are talking about human beings. Every linguistic concept has both denotative and connotative meanings. A scientific metaphor denotes specific features of a real phenomenon by aligning them with analogous features in a model.

The idiosyncratic psychologist Julian Jaynes, in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, invites us to pay special attention to metaphor. He starts by introducing some terminology to help make explicit the structure of metaphor. A metaphor consists of two parts: the metaphrand, which is the phenomenon to be explained, and the metaphier, which is the more familiar phenomenon. So in the mind-as-computer metaphor, mind is the metaphrand, computer the metaphier. Jaynes introduces two other terms to describe how a metaphor becomes fleshed out: paraphier and paraphrand. Paraphiers are aspects of the metaphier that cast more light on the metaphrand by mapping onto paraphrands of the metaphrand (whew!). The paraphiers for a computer include processors, memory units, input devices (mouse and keyboard), and output devices (the screen). We can use these parts to identify corresponding parts of the mind or brain. The paraphrands in the brain might be the prefrontal cortex, the medial temporal lobe, sensory cortex, and motor cortex. (Any self-respecting neuroscientist would consider this a ridiculously oversimplified and misleading metaphor of course, but the point here is to establish the terminology.)

The four-way relationship between metaphier, metaphrand, paraphier and paraphrand shows up implicitly in the cognitive and brain sciences in the concept of a mental or neural representation. A representation is a model of some thing or process. A neural process can serve as a model of some phenomenon if its internal structure resembles the internal structure of the phenomenon. In other words, the representation(s) of a perceived object in the world must be isomorphic (isos = 'equal', morphe = 'form') with the object. For a metaphier to serve as a model, its paraphiers must be isomorphic to the paraphrands of the metaphrand. (I know, this is starting to sound like gibberish.) [5] The parallel relationships between metaphier and paraphiers on the one hand, and between metaphrand and paraphrands on the other, are internal or intrinsic relationships.
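(If it helps to see the structure-preservation idea in a more concrete form, here is a toy sketch in Python. The component names are just the deliberately oversimplified computer/brain mapping from the previous paragraph; nothing about actual brains is being claimed.)

```python
# Toy sketch of Jaynes's terminology, using the essay's (deliberately
# oversimplified) computer-as-metaphier example. The only point being made:
# a metaphor "works" when relations among paraphiers carry over, intact,
# to relations among the corresponding paraphrands.

# Paraphiers: parts of the metaphier (the computer) and some relations among them.
computer_relations = {
    ("input devices", "processor"),
    ("processor", "memory"),
    ("processor", "output devices"),
}

# Hypothesized paraphier -> paraphrand mapping (the metaphrand is the mind/brain).
mapping = {
    "processor": "prefrontal cortex",
    "memory": "medial temporal lobe",
    "input devices": "sensory cortex",
    "output devices": "motor cortex",
}

# Structure preservation ("isomorphism" in this loose sense): every relation
# among paraphiers becomes a claimed relation among paraphrands.
brain_relations = {(mapping[a], mapping[b]) for a, b in computer_relations}
for source, target in sorted(brain_relations):
    print(f"{source} -> {target}")
```

The point of the sketch is purely structural: wherever a relation among paraphiers fails to map onto any sensible relation among paraphrands, that is exactly where the metaphor starts to break down.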

But no concept is ever truly isolated: its denotative or explicit meaning is always embedded in a web of connotative and implicit meanings. These connotative relationships are extrinsic rather than intrinsic: they are arational, context-sensitive and association-based. If you say that an atom is like a solar system, you invite people to wonder if there are tiny planets orbiting the nucleus, with even tinier beings on them, peering down their own microscopes. If you say the universe is a simulation, you invite people to ask who programmed the simulation, and why.

When you use a metaphor to explain a complex concept to anyone, you have no control over the associations the metaphor will evoke. This may explain why thinkers in technical fields invent new terms and use abstruse jargon: ostensibly un-metaphorical labels serve as cognitive speed bumps to slow down free-association. (If "brainwaves" had only ever been described as "neural oscillations", we might have far fewer tinfoil hat types in the world.)

The associations that can be triggered by the use of a word often lurk just below the surface of conscious awareness. When terms like 'mental illness' were used to replace words like 'madness' or 'insanity', it was hoped this would reduce stigma. But the opposite may often occur: words like 'illness' and 'disease' evoke all kinds of responses other than sympathy. The concepts of contagion and quarantine may arise, unbidden. Such terms can also bring up unpleasant memories of hospitals: the smell of disinfectant, the sickly green walls, the waiting rooms. The sensory content of such memories may bring on avoidance responses rather than compassion. One study from a few years ago suggested that the metaphors of crime as a beast or crime as a virus could bias some people's attitudes about crime-reduction policies. People who were presented with crime statistics and the beast metaphor were more likely to support 'tough on crime' policing than people presented with the same statistics and the virus metaphor.

We might say that the conscious and intentional interpretations of metaphor are the visible tip of a semiotic iceberg: beneath the surface lurk unconscious and uncontrollable impressions. If the denotative meaning of a metaphor is prosaic, then the connotative meaning is poetic. And poetry is one of our oldest methods for bypassing rational cognitive roadblocks. Slavoj Žižek often touches on this, in typically provocative fashion: he notes that poets have often provided aesthetic 'justification' for totalitarian and fascist regimes. Impressionistic imagery involving nationalism, pride, fear, or rage can bypass our flimsy cognitive defenses.

The denotation/connotation scheme relates to Daniel Kahneman's division of thinking into 'System 1' and 'System 2': System 1 is 'fast, automatic, frequent, emotional, stereotypic, subconscious'. System 1, in other words, is quick to use simplified models and analogies, which in turn trigger meanings linked through prior association. System 2, by contrast, is 'slow, effortful, infrequent, logical, calculating, conscious.' System 2 is necessary to slow down the runaway analogies and associations — to keep the paraphiers in check.

When we are told that the mind is a machine, all kinds of images, meanings and reactions are summoned in a manner that is (ironically?) beyond our control. Our System 2 defenses may even be lulled into a false sense of security, given that it is society's System 2, the community of scientists, that is the source of the metaphor. Even if public intellectuals keep repeating that their use of the machine metaphor is not dehumanizing, they will have to contend with the fact that the general public cannot easily shed the associations that the machine concept has accumulated over the centuries. Machines, as most people understand them, are devoid of volition. They do not have feelings. They are predictable. They serve very specific purposes — purposes that are not self-generated, but are imposed upon them from the outside. Machines are designed. They are composed of parts that can be repaired or replaced. They are discarded when they cease to be useful.

"Hysterical and useless"

We must be very careful about the metaphors we unleash into the wild, especially the ones we use to define people. When we popularize a particular model of the self, people will inevitably use it to assess themselves and others. As the psychologist Barry Schwartz wrote recently:

"Planets don't care what scientists say about their behavior. They move around the sun with complete indifference to how physicists and astronomers theorize about them. Genes are indifferent to our theories about them also. But this is not true of people. Theories about human nature can actually produce changes in how people behave. What this means is that a theory that is false can become true simply by people believing it's true." [6]

Despite the best efforts of many scientists, philosophers, politicians, and propagandists, human beings remain stubbornly un-machinelike. Each of us remains a mystery, even — or perhaps especially — to ourselves. But by simply believing that we are machines, or repeating this belief so often that it becomes 'common sense', we may eventually transform ourselves into machines, judging ourselves and others according to the harsh criteria of usefulness.

Many people are perfectly okay with this [7]. Others, like me, are wary. Some even see in the vagaries of human behavior — in our organic capriciousness and unreliability — the essence of freedom and creativity [8]. There will always be discontents and dissidents who instinctively shrug off whatever society's latest definition of the self happens to be. "Whatever People Say I Am, That's What I'm Not". We might call this a version of "apophatic psychology" – the "not this, not that" approach. [9]

We might even call the apophatic method the anti-metaphor: rather than saying I am isomorphic with some thing or process, I list all the forms in the world, and say that I resemble none of them.

This may well be 'useless' obscurantism, but perhaps that is key to its paradoxical appeal: does everything, including my self, have to be useful?


_______

References and Notes

[1] I have been unable to find the primary source of this quote. Richard Lewontin has attributed this line to Rosenblueth and Wiener several times, since at least 1963.

[2] Norbert Wiener, I Am a Mathematician (1956), quoted in Galison (1994).

[3] Ludwig von Bertalanffy, General System Theory (1968)

[4] Norbert Wiener, Cybernetics (1948)

[5] The metaphier and the metaphrand have a relationship similar to that between the signifier and the signified in semiotics, but a signifier, by virtue of being arbitrary and convention-based, need not have any structural similarity with the signified.

[6] Barry Schwartz, Why We Work (2015)

[7] Science fiction may help. The emotive robots, androids and cyborgs we encounter in books and movies may eventually weaken our past associations with the word 'machine'.

[8] In the film I'm Not There, one of the many Bob Dylans delivers the following lines:

"People are always talking about freedom. Freedom to live a certain way, without being kicked around. Course the more you live a certain way, the less it feels like freedom. Me, uhm, I can change during the course of a day. I wake and I'm one person, when I go to sleep I know for certain I'm somebody else.":

[9] Apophatic psychology comes up in my 3QD essay on randomness. A random pattern, as it turns out, must be defined 'apophatically': not this pattern, not that pattern.

~

Several of the issues here were brought up in a new discussion group on the history of cognitive science that some friends and I recently started. If you are in the Boston area, get in touch if you'd like to attend. I've also covered metaphor in a more celebratory essay called 'Metaphor: the Alchemy of Thought'.