Cerebral Imperialism

The present is where the future comes to die, or more accurately, where an infinite array of possible futures all collapse into one. We live in a present where artificial intelligence hasn't been invented, despite a quarter century of optimistic predictions. John Horgan in Scientific American suggests we're still a long way from developing it (although when it does come, it may well arrive as a sudden leap into existence, a sudden achievement of critical mass). However and whenever (or if ever) it arrives, it's an idea worth discussing today. But, a question: Does this line of research suffer from “cerebral imperialism”?

___________________________________

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James “J” Hughes. One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some Transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

Hughes: You’re right. A lot of our Transhumanist subculture comes out of computer science—male computer science—so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer – one of many Hughes offers in the interview – but I was going somewhere else: toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

___________________________

Why “artificial intelligence,” after all, and not an “artificial identity” or “personality”? The name itself reveals a bias. Aren't we confusing computation with cognition, and cognition with identity? Neuroscience suggests that metabolic processes drive our actions and our thoughts to a far greater degree than we've realized until now. Is there really a little being in our brains, or contiguous with our brains, driving the body?

To a large extent, isn't it the other way around? Don't our minds often build a framework around actions we've decided to take for other, more physical reasons? When I drink too much coffee I become more aggressive. I drive more aggressively, but I'm always thinking thoughts as I weave through traffic: “I'm late.” “He's slow.” “She's in the left lane.” “This is a more efficient way to drive.”

______________________

Why do we assume that there is an intelligence independent of the body that produces it? I'm well aware of the scientists who are challenging that assumption, so this is not a criticism of the entire artificial intelligence field. There's a whole discipline called “friendly AI” which recognizes the threat posed by the Skynet/Terminator “computers come alive and eliminate humanity” scenario. A number of these researchers are looking for ways to make artificial “minds” more like artificial “personalities.”

Why not give them bodies? Sure, you could create a computer simulation of a body, but wouldn't they just override that?

______________________

Intelligence co-developed with other processes embedded in the body and shaped by evolution – love, for example, and empathy. A non-loving and non-empathetic humanlike intelligence is a terrifying thing.

In fact, we already have non-loving, non-empathetic autonomous creations that function by using humanlike intelligence. They're powerful and growing, and they operate along perfectly logical lines in order to ensure their own survival and well-being. Here are two of them: British Petroleum and Goldman Sachs. Each of them is an artificially intelligent “being” (whose intelligence is borrowed from a number of human brains), designed by humans but now acting strictly in its own self-interest.

How's that working out?

_____________________

This isn't a “science” vs. “religion” argument, either. “Cerebral imperialism” in its present form is a computer science phenomenon, but religion runs the same risks – on a far greater or more immediate scale, in fact. Religious fanaticism is selfless heroism when viewed through a certain lens of belief. And the Eastern religions that so many of us hold in warm regard have the potential, if misused, to turn anybody into an “unfriendly AI.” Buddhism and Hinduism revere life. But by emphasizing the insubstantiality of life and the relative nature of human values, any of these religious philosophies runs the risk of encouraging participants toward amorality.

Aum Shinrikyo, the Japanese cult that conducted sarin gas attacks on Tokyo's subways, blended some Christian iconography with a melange of Buddhist and other concepts. They were able to lead their followers through a step-by-step process that stripped them of their attachment to transient existence and then removed their resistance to violence. It's a remarkable testament to the power of the Eastern spiritual tradition that there haven't been dozens of such groups during its history.

_________________________________

The Fourth Century Christian schismatics known as Donatists had a group called the “Lord's Athletes,” or Agonistici, who attacked the “impure” Catholics and other believers, driving them from sacred sites the way the Taliban does to Sufis in Pakistan today. And Sufism, the loving and gentle branch of Islam, is open to similar forms of abuse. Hassan-i-Sabbah was reportedly influenced by Sufism when he formed the hashashin (the original “assassins”) in the 11th Century. Sufis have been among the most gentle and loving of historical figures, and the Persian Sufi poet Rumi is the most popular poet in North America, seven centuries after his death (although mostly in highly bowdlerized New Age translations). Yet this popular quote is attributed to Rumi: “Out beyond right and wrong there is a field. I'll meet you there.”

Um, no thanks.

When mystics like Rumi or the Buddhist masters discuss going “beyond right and wrong,” it's after a rigorous framework of training, and it's grounded in a cosmology that inclines toward benevolence. “Friendly AI” researchers may want to study these philosophies. If an “artificial intelligence” isn't rooted in a body, it might be a good idea to make sure it's a Sufi or a Buddhist.

_____________________

Could it be that there is no intelligence without a body? That there's only computation? That cognition is the byproduct of biological processes, and never the driver of them?

__________________________

There is also the possibility that “pure intelligence,” devoid of body and emotion, might sometimes or always be sociopathic.

______________________

I've written before about the Turing Test's value and its cultural and religious roots. Conversation is an output of mind, but that doesn't mean conversation is impossible without mind. The whole discussion seems to confuse “selfhood” with “mind,” and “mind” with the products of mind. At best, it confuses output with structure or essence.

After all, the factory that produces synthetic leather isn't an “artificial cow.”

__________________________

Couldn't this over-emphasis on cognition as the core part of identity really be an attempt to suppress unruly and unwelcome emotions? That would be the same impulse that leads people to misuse the mystical experience, as the hashashin and Aum Shinrikyo did. “Unfriendly AI” is a frightening prospect, but the most immediate danger is to live in a society where we are collectively detached from our emotions – one where we create a false ideal of cognition and then worship it to the exclusion of other values. That's how we got BP and Goldman Sachs, two far more immediate dangers, isn't it?

Gehirn, Gehirn über alles! Brain, brain above all … we might want to give that a second thought. Our current “unfriendly AIs,” the mega-corporations that control our world, have already given us as much disembodied, emotionless logic as we can stand.


(1) About the term: I was going to use “cognitive imperialism,” but a quick Google search to see if it was taken found 1,380 results for an anthropological term that describes a form of cultural bias. It's a related concept, but a different one. So I hit on “cerebral imperialism,” which is even better because a) it may reflect the idea even more accurately and b) it sounds kinda cooler. A Google search of that phrase found only nine hits, for a Jungian term of some kind. Oh, well … Virtually any combination of two words in the English language will have been used for one purpose or another, and in this kind of Google contest the low number wins.

Sorry, Jungians.

Image of brain neurons licensed via Creative Commons from Dr. Jonathan Clarke.