The Real Galbraith


JOHN KENNETH GALBRAITH, the Harvard economist, diplomat, and author of nearly four dozen books, loved words–especially his own, but no less those about him. So it’s too bad that he’s not here to correct so many of the hundreds of articles about him that have appeared since he died last weekend at the age of 97. As his biographer, I was sorry for him, too, that so many admirers and detractors alike miscast him as the last of a dying breed: “a liberal,” “a Keynesian economist,” or “an apostle of ‘big government.'” That’s not who he was at all, at least not as those terms are used today.

Ken Galbraith was far too protean and nuanced for such labels, and those who use them are guilty of what he called “the conventional wisdom”–“the means by which the majority protects itself from thought.” Understanding Galbraith doesn’t require that we end up agreeing with him. (Quite the contrary: He would have found a million little Galbraiths abhorrently dull.) But it does mean grasping how he thought–and why.

more from Boston Globe Ideas here.

Mango Mania in India

From The New York Times:

A crescendo of mangoes takes place March through May every year in India. They roll into the markets in small numbers at the start of the season, expensive and aloof; by the time the harvest peaks this month they are all over the place, playfully cheap and ready to be squeezed and inspected by all.

Right now, mango frenzy is in full swing, not least in Mumbai, a city where people know better than anyone how to reincarnate a mango: street vendors across the city start squeezing mango juice for around 20 rupees (about 45 cents, at about 44 rupees to $1); fashionable bars mix mango martinis for around 20 times as much; and restaurants at five-star hotels launch mango minifestivals featuring expensive avant-garde mango curiosities.

Indians have become very fond indeed of a fruit that is absent for so much of the year. (Outside the season many must console themselves with their mothers’ homemade mango pickles.) The first mangoes of the year make newspaper headlines and herald the coming of summer. India has its own heavily processed answer to Coca-Cola in Frooty, a ubiquitous sugary mango-flavored drink (the Coca-Cola Company has retaliated with its own version called Maaza).

More here.

Lesbians Respond Differently to “Human Pheromones”

From The National Geographic:

Lesbian women respond differently than straight women when exposed to suspected sexual chemicals, according to a new brain-imaging study. The finding builds on previous research suggesting that gay men respond in a way more similar to heterosexual women than to heterosexual men when exposed to a synthetic chemical.

The natural version of this chemical reportedly appears in high concentrations in male sweat. The new study extends the research to homosexual women. It found that lesbians’ brains respond in a fashion more similar to that of heterosexual men than of heterosexual women when exposed to the sweat chemical and a synthetic chemical that has been detected in female urine.

More here.

Tuesday, May 9, 2006

T & A


Teen sex comedies—each of those words defined incredibly loosely—blossomed from 1982 to 1985.[1] These movies burgeoned in the cultural airspace cleared by ’70s porn, back when porn thought it needed plot. Usually structured around a crude story about a group of high school or college students who want sex, and featuring plenty of nude or near-nude female bodies but no close-ups of genitals, sex comedies are like the nonalcoholic beer of porn. Twelve-year-olds may get intoxicated, but that’s about it. With a lose-our-virginity-or-bust belief system, the films and their characters pole-vault over ethics to get at sex—like they could crash maturity as they would a party. In the mid-’80s, the pole snapped. The movies didn’t have enough heart to make it, though the formula came out of retirement in 1999 to execute one improbably graceful vault in the form of American Pie.

Teens, sex, comedy: sounds like a new holy trinity of American popular culture. But these teens were, as sex-seeking robots, too one-dimensional to be sympathetic. And their ideas about sex were off-balance—a mixture of debauched aggression and deep weakness (even the repeatedly uttered goal of “getting laid” ultimately implies passivity). It’s sex without a connection—male-female relations as a grudge match. And it’s hard to appeal to the groin and the funny bone at the same time; the movies are, with a few exceptions, witless.

more from The Believer here.

Tahar Ben Jelloun

While he was interned in Morocco under the iron fist of King Hassan II, Tahar Ben Jelloun found an escape in James Joyce. Books were not allowed but he asked his brother for the thickest paperback he could find, and the smuggled gift was a French translation of Ulysses. In captivity, he was fascinated “by this writer’s liberty”.

The young Moroccan composed his first poems, in French, during those 18 months in army camp, after his arrest in 1966 for taking part in student demonstrations in Casablanca. The experience was pivotal.

“At 21, I discovered repression and injustice – that the army would shoot students with real bullets,” he says. He sought exile in Paris in 1971, and, now aged 61, is one of France’s most fêted writers, and its most prominent author from the Maghreb. As well as poetry, fiction, plays and essays, he writes for France’s Le Monde, Italy’s La Repubblica and Spain’s El País.

Much of his fiction is set in Morocco, though his main inspiration, Tangier – “where it’s possible to see the Atlantic and the Mediterranean at the same time” – is “more a memory than a city”.

more from Guardian Unlimited Books here.

One Thing They Aren’t: Maternal

From The New York Times:

Here is a mother guinea hen, trailed by a dozen cotton-ball chicks. Here a mother panda and a baby panda share a stalk of bamboo, while over there, a great black eagle dam carries food to her waiting young. We love you, Mom, you’re our port in the storm. You alone help clip Mother Nature’s bloodstained claws.

But wait. That guinea hen is walking awfully fast. In fact, her brood cannot quite keep up with her, and by the end of the day, whoops, only two chicks still straggle behind. And the mama panda, did she not give birth to twins? So why did just one little panda emerge from her den? As for the African black eagle, her nest is less a Hallmark poem than an Edgar Allan Poe. The mother has gathered prey in abundance, and has hyrax carcasses to spare. Yet she feeds only one of her two eaglets, then stands by looking bored as the fattened bird repeatedly pecks its starving sibling to death.

What is wrong with these coldhearted mothers, to give life then carelessly toss it away? Are they freaks or diseased or unnatural? Cackling mad like Piper Laurie in “Carrie”? In a word — ha. As much as we may like to believe that mother animals are designed to nurture and protect their young, to fight to the death, if need be, to keep their offspring alive, in fact, nature abounds with mothers that defy the standard maternal script in a raft of macabre ways. There are mothers that zestily eat their young and mothers that drink their young’s blood. Mothers that pit one young against the other in a fight to the death and mothers that raise one set of their babies on the flesh of their siblings.

More here.

Dolphins play name game

From Nature:

We are not the only animals to give ourselves names, says research on bottlenose dolphins. The dolphins’ distinctive whistles may function as individual calling cards, allowing them to recognize each other and even refer to others by name. The research reveals that bottlenose dolphins (Tursiops truncatus) each have their own personalized whistle, which is recognized by other dolphins even from a synthetic version played through a speaker. This suggests that the creatures recognize these as names in their own right, rather than identifying individuals based simply on the sound quality of their voice.

The dolphins have also been heard using each others’ names in their ‘conversation’ — meaning that they may be able to call their comrades during social interactions. The calls may be used to bind groups together in the wild where individuals cannot always see each other, or to coordinate their delicately complex hunting manoeuvres.

More here.

Monday, May 8, 2006

Below the Fold: Inequality in a Predatory World

We live in a predatory world. The poor, the helpless, or simply the less well off find themselves dehumanized and victimized around the world. They are often defenseless against the degradation and violence visited upon them by the better off, or by the states the better off control. Without economic equality, human well-being, a life rich in the possibilities of self-fulfillment, is impossible. Without economic equality, any gains in achieving full citizenship, including racial, gender and political equality, are unsustainable.

Indeed, quite the opposite occurs routinely. Disadvantage awakens in the advantaged a desire for gain at the expense of others, even a desire for conquest over others less powerful. The English philosopher Thomas Hobbes argued that when people found themselves in a state of equality, their gnawing fear of losing their status would transform their society into a war of all against all. The world’s rich are showing that Hobbes, if anything, underestimated the power of circumstances. Even overwhelming economic superiority does not quiet the fear of losing. As the saying goes, you can never be too rich, though for reasons the wags never fathomed. Those who have it all never have enough; having it all only quickens their desire for more. It also arouses in them a need to dominate and degrade the disadvantaged masses beneath them. They enact their sovereignty by violating the dispossessed. The rich become what Hobbes believed the sovereign must become — a monstrous Leviathan capable of instilling shock, awe, and death, this time among the world’s poor.

Perhaps only the cynical Manicheans trying to run the world from the White House understand this need for the Leviathan. The endless desire for more wealth, the fear of the poor from whom the wealth is extracted, and the need to make the masses stand in fear suggest a reason for our period’s particular cruelty. Endless wars, mass annihilations, horrific tortures, barbaric incarcerations, and above all a policy of lawlessness are Leviathan’s means. Its works produce grisly as well as material satisfactions for the rich and a ghastly theater of violence and subjection for the rest.

I argue that the more economically unequal the world becomes, the more of an inferno our lives will become. Liberal intellectuals and policy makers, or perhaps one should say the rest of the world’s ruling elite, seem inured to the relationship between growing inequality and growing inhumanity. Instead of demanding economic equality, they focus on poverty reduction, hoping that reducing poverty will make a dent in economic inequality. Perhaps cynically too, they hope that modest improvements in living standards will dampen popular resistance to the rule of the rich, to which they, though less than the rich themselves, are acclimated.

“Let us abandon the fight against inequality,” writes Foreign Policy editor Moises Naim in a recent Financial Times op-ed. “Let us stop fighting a battle we cannot win and concentrate all efforts on a fight that can succeed. The best tools to achieve a long-term, sustained decline in inequality are the same as those that are now widely accepted as the best available levers to lift people out of poverty.” By fighting poverty through health, education, jobs and housing, Naim argues, we will wear inequality down.

Naim expresses, albeit from the liberal side, the consensus view of the rich-country development community, the World Bank, and an international effort such as the UN Millennium Project. Poverty reduction is the goal because it is achievable, and it is saleable as a strategy precisely because poverty reduction does not call for a redistribution of world resources. Thus, liberals, either naïve or too mindful of the Leviathan, content themselves with lifting up the abject. They either do not countenance or reject outright liberating the dispossessed from subjection.

The trouble with the liberal position, though very different from Manichean murder and terror, is that it is rather wishful, and it ignores rather well established facts. Eliminating poverty does not achieve equality, and it doesn’t take a Nobel-winning economist to show it. The United States hit its lowest historical level of economic inequality in 1968, a time of great prosperity and government intervention to eliminate poverty. The level we reached then was equivalent to the economic inequality we would find in many poor countries today, which is to say a pretty abysmal level. Note too that the good times of the Clinton era and the recent recovery during the Bush regime have not stopped economic inequality from growing. In fact, inequality in America has been accelerating, not slowing.

Economic growth alone does not eliminate poverty. Many economists forecast that even at its remarkable rate of economic growth, China will need almost 30 years to eliminate dire poverty, leaving the massive job of lifting up to another half billion people out of three-to-four-dollar-a-day poverty. Perhaps cognizant of this, the Chinese state is taking dramatic steps to redistribute income to the rural peasantry: eliminating land taxes, providing free public education, and rebuilding a rural health system. Yet even as Chinese poverty proves a difficult problem to solve, a middle class will be living at the level of today’s Korean middle class, and the great wave of capitalist development will have created a massive new generation of the truly, world-level wealthy. Inequality will get worse, and one can only wish the Chinese peasants good luck.

The first lesson here is that economic growth creates the wealthy first, and brings along the masses later – far later than the time necessary to earn their way to equality through labor or enterprise. It happens inside countries like our own. It happens across countries. Consider evidence accumulated by World Bank economist Branko Milanovic that the ratio of inequality, rich country to poor country, has grown from 19 to 1 in 1960 to 37 to 1 in 2000. This is true despite the spread of industrialization, thought to be the holy grail of development, and rising income levels in Asia.

The second lesson is that if you don’t go after economic equality, and settle instead for poverty reduction, there is little prospect that the disadvantaged can hold on to their gains, given the predations of the rich. Again, the US is a paradigm case. Even as the rich have gotten richer over the past quarter century, the American state has actually contrived to take back a variety of welfare benefits from the poor. As America’s median family income has stagnated since the seventies, the poor have become objectively poorer. The state has ignored these facts and refused increases in the supports of life: income supplements, housing assistance, health care, education, and food assistance.

The only solution that will work, whether at the national or the international level, is redistribution of the wealth. The rich must be made poorer and the poorer their equals, if the goal is a modicum of well being for all.

We know how to do this at the national level, and again the evidence for its success is widely known. Taxes work. They increased equality in America starting with World War I and again during the New Deal; and inequality increased as taxation radically declined, starting with the Reagan Administration in 1981.

At the international level, how to proceed is less certain, given that no international body possesses the means to compel peoples via their states to contribute tax monies to the common good of all. The amounts necessary to raise are not hard to calculate; we are masters of calculation in this age. Currently, rich countries cannot even come up with 1% of their Gross Domestic Product in transfer payments to poor countries, a figure once considered the minimum moral response to global destitution. Despite six years of posturing about supporting the UN Millennium initiative to eliminate much of the world’s less-than-a-dollar-a-day poverty, rich-country support is declining rather than increasing. It is important to put redistribution at the top of the global agenda rather than engage in the bait and switch of poverty reduction.

Economic equality requires an obviously enormous and lasting redistribution of wealth worldwide. Yet someone once calculated that there is US$5,000 in wealth for every person on the planet, roughly the per capita Gross Domestic Product of Uruguay. Imagine the world as a big Uruguay. Things could be worse: people in Uruguay live as long as Americans do, their child mortality rate is even with ours, and less than 4% of their children suffer malnutrition.

The beaches are beautiful, Montevideo is a dream, and no one expects an Uruguayan invasion of Iran any time soon.

Teaser Appetizer: The Adipose American, A Few Facts

Evolutionary pressures banish unfit biological species into extinction. The American descendants of Homo sapiens will explode into extinction at the midriff. A walk down Main Street, USA, will convince any skeptic of the veracity of this prediction. And it will all happen due to the adipose state of the nation.

Fact: 65% of the US population is either overweight or obese.
Fact: The proportion of obese Americans zoomed from 14.5% in 1976 to 30.5% in 2000.

Millions of Americans are obese, diabetic, hypertensive and hyperlipidemic, and succumb to this murderous metabolic syndrome. Strokes, heart attacks, fatty liver, osteoporosis, cancer, depression, arthritis and sleep apnea ravage the obese. The chart below, reproduced from Baylor College of Medicine, depicts the havoc unleashed by obesity:


We have an epidemic. We spend $117 billion directly or indirectly on obesity and its complications; we eat more, exercise less and our bodies have become a battleground of conflicting hormones and peptides.

We thought our loads of fat were meant only for aesthetic shame, but in 1994 scientists told us that adipose tissue is an endocrine organ! Yes, an endocrine organ, like the thyroid and adrenal glands. Like them it secretes into the blood a hormone — in this case, leptin — which travels to the distant hypothalamus and suppresses appetite.

In reverse, lack of leptin stimulates appetite and encourages overeating, thus increasing fat storage. (This probably conferred an evolutionary advantage, helping store a reservoir of fat for lean days of starvation.) Mice made leptin-deficient by gene knockout (ob/ob mice) are obese, and leptin replacement cures their obesity. Leptin-gene deficiency with obesity is rare in humans and improves with leptin therapy.

Corollary: if leptin were administered to obese people, they should lose weight. So investigators tried it, but with only partial success. It so happens that obese people have high — not low — levels of leptin. Their cells lack the receptors to which leptin attaches, and so they resist leptin therapy. Thus, the obese are either leptin deficient or leptin-receptor deficient.

Leptin is not the only attention grabber; ghrelin entered the stage in 1999. The stomach secretes ghrelin in response to hunger; a hungry man has high ghrelin. Circulating ghrelin stimulates appetite centers in the hypothalamus; once a meal is eaten, ghrelin secretion stops, and a satiated man has low ghrelin. Now add to this complexity insulin, cholecystokinin and GLP-1. Low insulin levels stimulate hunger and initiate the act of eating. The fat, and probably the protein, in the meal stimulates cholecystokinin secretion from the upper small bowel, which suppresses appetite and slows gastric emptying, causing fullness and satiation; the act of eating stops. GLP-1 oozes out of the lower small bowel to suppress the appetite further.
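The hunger-to-satiety sequence just described can be caricatured in a few lines of code. This is purely an illustrative sketch of the feedback logic: the hormone names come from the text, but the all-or-nothing “high/low” levels are a cartoon simplification, not physiology.

```python
# Toy sketch of the meal-signal feedback described above.
# "high"/"low" is a deliberate simplification, not physiology.
def gut_signals(state):
    """Map a gut state ("hungry" or "fed") to rough hormone levels."""
    if state == "hungry":
        # The stomach secretes ghrelin; low insulin helps initiate eating.
        return {"ghrelin": "high", "insulin": "low",
                "cholecystokinin": "low", "GLP-1": "low",
                "appetite": "on"}
    elif state == "fed":
        # Fat (and probably protein) in the meal triggers cholecystokinin
        # from the upper small bowel; GLP-1 oozes from the lower bowel;
        # ghrelin secretion stops and appetite switches off.
        return {"ghrelin": "low", "insulin": "high",
                "cholecystokinin": "high", "GLP-1": "high",
                "appetite": "off"}
    raise ValueError(f"unknown state: {state}")

print(gut_signals("hungry")["appetite"])  # prints "on": eating begins
print(gut_signals("fed")["appetite"])     # prints "off": satiation
```

The point of the toy is only that each hormone is a signal in a closed loop, which is why knocking out any one of them (as with leptin above) derails the whole system.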

But that is not all; in science it gets complex before it gets simple. See the diagram below, reproduced from Baylor College of Medicine:


Adiposity is regulated by a set of short- and long-term signals. The short-term signals determine the size and frequency of a meal; the long-term signals determine fat storage.

The mechanism of appetite regulation and fat deposition is an interaction of competing and feedback signals. The following are some of the mediators:

  1. Neurotransmitters in the hypothalamus, like NPY, AgRP and 5-HT
  2. Gut hormones like leptin, ghrelin, cholecystokinin and GLP-1
  3. Other circulating hormones like cortisol and thyroxine
  4. Sensory input from stomach and intestines
  5. External input like smell, taste and emotions

Currently the US obesity hormones are in a state of misalignment: the USA is a leptin-resistant, ghrelin-deficient, cholecystokinin-inefficient and insulin-abundant nation.

And we still don’t know which molecule is the master conductor of this orchestra, or how to transform this cacophony into harmony. The mechanism of appetite regulation and fat deposition is plainly complex, which explains the general failure of any single mode of therapy. Unrealistic individual weight-reduction goals further thwart success. The therapy of obesity must include a combination of the following:

  1. Eat less: A daily deficit of 500 to 1000 calories is reasonable. This is the single most important component of therapy and most difficult to adhere to.
  2. Exercise a lot: Strenuous aerobic activity for over 200 minutes per week maintained over a long period of time with calorie restriction is effective. Physical activity conserves fat free mass, improves glucose tolerance and lipid profile. Fact: Moderate exercise like walking 45 minutes a day for 5 days a week has minimal effect on weight loss.
  3. Modify behavior to avoid the temptation to engorge on food. This warrants lifestyle change and altering one’s emotional response to food. Self-monitoring and social support are essential.
  4. Use drug therapy: Only two drugs have been approved by the FDA for long-term therapy.
    • Sibutramine causes anorexia by blocking neuronal monoamine reuptake.
    • Orlistat decreases fat absorption.
  5. Get surgery if morbidly obese and nothing else helps.
    • Gastric bypass to channel food directly into mid intestine thus decreasing absorption
    • Gastric banding and stapling to diminish the size of the stomach
    • Combination of bypass and stomach size reduction.

Fact: Even moderate weight loss of 5% decreases the complications significantly
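The arithmetic behind the eat-less prescription is easy to sketch. Assuming the common rule of thumb that a pound of body fat stores roughly 3,500 kcal (an assumption of this sketch, not a figure from the article), the recommended 500-to-1000-calorie daily deficit translates into weight loss at a predictable, if slow, pace:

```python
# How long does the "moderate 5%" weight loss take at the recommended
# daily deficit? Assumes ~3500 kcal per pound of body fat, a common
# rule of thumb (not a figure from the article).
KCAL_PER_LB_FAT = 3500.0

def weeks_to_lose(weight_lb, fraction, daily_deficit_kcal):
    """Weeks to lose `fraction` of body weight at a steady calorie deficit."""
    pounds_to_lose = weight_lb * fraction
    total_kcal = pounds_to_lose * KCAL_PER_LB_FAT
    return total_kcal / (daily_deficit_kcal * 7)

# A 220 lb person aiming for a 5% loss, at the two ends of the
# recommended 500-1000 kcal/day deficit:
print(round(weeks_to_lose(220, 0.05, 500), 1))   # 11.0 weeks
print(round(weeks_to_lose(220, 0.05, 1000), 1))  # 5.5 weeks
```

Even at the aggressive end of the range, the “moderate” 5% loss takes well over a month of sustained deficit, which helps explain why compliance, not arithmetic, is the hard part.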

The eat-less, exercise-more, modify-behavior prescription is still the best choice, but compliance has been pathetic. On average, a person starting a weight-reduction diet has already tried and failed three to six diets before. This failure has created an enormous market opportunity for fad-diet authors and manufacturers. Some examples:

  1. Eat fewer carbohydrates (Atkins, South Beach)
  2. Eat less fat (Ornish, Pritikin)
  3. Eat less of both (Weight Watchers, Jenny Craig)
  4. Eat a very-low-calorie diet: 400 calories (Optifast, Cambridge)

The failure has also challenged scientists to discover new therapies, and many new drugs are in various stages of development. One exciting possibility is the recent understanding of the endocannabinoid (endogenous cannabis-like molecules) system. When investigators were working to understand the molecular action of Cannabis sativa, they found cannabinoid receptors (CB1) in the central nervous system and in adipose tissue. Stimulation of CB1 in the brain increases appetite, and stimulation in fat cells increases fat deposition. It seems this system is in perpetual overdrive in the obese, and blockade of the receptors decreases appetite and promotes weight loss. Rimonabant, a drug now in clinical trials, blocks the CB1 receptors and may prove an exciting new weapon against obesity.

Many other drugs are under development, but only an accurate understanding of the mechanism of obesity will lead to better therapy. Science travels from the metaphorical to the mathematical; the journey is both exciting and agonizing. The investigation meanders, loses its way, finds it again, races to the next stop, falters, sprints and trundles along with hope towards exhilarating simplicity and elegance. The investigation of obesity is scurrying through the difficult middle stretch at present. We had better arrive soon, or the speed of decline of American civilization will be directly proportional to the rate of expansion of its girth.

The sobering fact is:

I think and breathe and live because I eat
I eat therefore I am
But soon I will not be,
Because I ate.

Random Walks: Narnia, Schmarnia

[Author’s Note: Some of you may have received an earlier, unfinished version of this particular column. It was not, as one reader suggested, an avant-garde literary choice — Behold! The Half-Finished Post! — but a sad case of an inexperienced blogger accidentally hitting “Publish Now” when she really meant to save it in “Draft” mode. Really, it’s a miracle she is allowed to blog at all. But she promises to never do it again.]

C.S. Lewis’ Chronicles of Narnia have long enjoyed enormous popularity among readers of all ages, particularly among those with Christian leanings. That’s not surprising, since Lewis was himself an avowed Christian and made no bones about the fact that the series was intended as a reworking of the traditional Christian “myth” (and I use that term in the literary sense). But it’s not obvious to everyone, as I discovered when a friend of mine recently went to see the much-anticipated film version of The Lion, the Witch and the Wardrobe. A staunch agnostic, she was horrified to find that somehow, in the translation to the silver screen, the subtleties of Lewis’ mythical retelling were lost, leading to what she considered to be little more than a ham-fisted, didactic advertisement for the Christian religion.

My friend is not alone in her objections to the film (I share them) — indeed, it is a common refrain when discussing Lewis’ literary output. There are many people who view Lewis with suspicion, precisely because he has been so warmly embraced by evangelical Christians. And in the case of bestselling children’s writer Philip Pullman, author of the His Dark Materials trilogy (a wonderful read in its own right), suspicion gives way to outright hostility. Pullman is among Lewis’ most outspoken critics, as is clearly evidenced by a 1998 article in The Guardian, in which he dismisses the Narnia books as “one of the most ugly and poisonous things I’ve ever read.” More recently, in remarks at the 2002 Guardian Hay festival, he denounced his rival’s work as being “blatantly racist,” “monumentally disparaging of women,” and blatant Christian propaganda. (Pullman in turn has been unjustly attacked by right-wing naysayers as “the most dangerous author in Britain” and “semi-Satanic”; he is, in many respects, the anti-Lewis.)

Pullman has made some valid points in his public comments about Lewis and the Narnia chronicles. In addition to his avowed Christianity, Lewis was a conservative product of his era, with all its attendant prejudices. And he was not, by any means, “nice,” possessing a flinty, intellectually stringent, sometimes slightly bullying disposition that didn’t always win friends and influence people. Lewis did not suffer fools gladly, if at all. I doubt many of the evangelical Christians who deify Lewis today would have much cared for him in person, and vice versa. Yet he was hardly evil incarnate. I am not a diehard fan of Lewis’ work, but I will be so bold as to suggest that the truth lies somewhere between the two extremes of beloved saint and recalcitrant sinner. Lewis was a man, plain and simple, with all the usual strengths and foibles.

As for the charge that Lewis’ work is blatant Christian propaganda, Pullman somewhat overstates the case. Certainly Lewis deliberately evoked the themes and symbols of Christian mythology in much of his writing, but so did many of the greatest writers in Western literature: Dante, Milton, and Donne, to name just a few. The problem lies not with the choice of themes, but with Lewis’ decidedly heavy-handed style. In his hands, the subtle symbolism of myth more often than not devolved into overly simplistic allegory — a far less satisfying approach, artistically.

Lewis certainly understood the power of myth. He’d been fascinated with mythology since his childhood, particularly the Norse myths, and within those, relished the story of Balder the Beautiful, struck down by an errant arrow as a result of the meddlesome Loki. Balder is the Christ figure of the North. Norse mythology was an enthusiasm Lewis shared with J.R.R. Tolkien when the two men met at Oxford in the 1930s. (If nothing else, we may owe The Lord of the Rings trilogy in part to Lewis, who was the first to read early drafts of Tolkien’s imagined world and who encouraged his friend. Tolkien himself later credited Lewis with “an unpayable debt” for convincing him the “stuff” could be more than a “private hobby.”) Along with several other Oxford-based writers and scholars, they began meeting regularly at a local pub called The Eagle and Child, fondly dubbed The Bird and the Baby.

The Oxford Inklings, as they came to be called, were arguably the literary mythmakers of the mid-20th century, at least in England. In addition to Lewis and Tolkien, the group included the lesser-known Charles Williams, who penned fantastical tales in which, for example, the symbolism of the Major Arcana in the traditional tarot deck becomes manifest (The Greater Trumps), while the Earth is invaded not by aliens from outer space, but by the Platonic Ideal Forms (The Place of the Lion). The Platonic Lion featured in the latter may have influenced Lewis’ choice of that animal to represent his Narnia Christ figure, Aslan.

Ironically, it was Lewis’ love of myth that eventually led to his conversion. He was a notoriously prickly atheist for much of his early academic career; in fact, he was as dogmatic about his atheism as he was later about his Christian beliefs, so if nothing else, the man was consistent in his character.  He was also rigorously trained in logic, thanks to an early tutor, W.T. Kirkpatrick. An anecdote related in Humphrey Carpenter’s book, The Inklings, tells of Lewis’ first meeting with Kirkpatrick. Disembarking onto the train platform in Surrey, England, Lewis sought to make small talk by remarking that the countryside was more wild than he’d expected. Kirkpatrick pounced on this innocuous observation and led his new student through a barrage of questions and challenges to his assumptions, concluding, “Do you not see that your remark was meaningless?”

As Carpenter writes, the young Lewis thereby “learned to phrase all remarks as logical propositions, and to defend his opinions by argument.” Among the irrational concepts Lewis rejected was belief in God, or any religion, writing to his Belfast friend Arthur Greeves, “I believe in no religion. There is absolutely no proof for any of them, and from a philosophical standpoint Christianity is not even the best. All religions, that is, all mythologies… are merely man’s own invention.” For Lewis, Christianity was merely “one mythology among many.”

Personally, I’m inclined to agree with the young Lewis on that point (although I, too, have an affinity for myths both ancient and modern); it’s a shame he lost that rigorous clarity later on. I disagree with his early rejection of the thrill of imagination; he insisted it must be kept “strictly separate from the rational.” So what changed? That’s not entirely clear. Over a period of several years, Lewis learned to embrace his childhood love of myth and story, particularly the emotional sensation he called “Joy,” which would come to symbolize, for him, the divine, in the form of the Christian god. Through long discussions with Tolkien and another Oxford colleague, Owen Barfield (ironically, a fellow atheist, albeit one who propounded the story-telling power of myth), he changed his tune. Tolkien in particular played a role, convincing him that the Christ story was the “true” version of the age-old “dying god” motif in mythology — familiar to anyone who has read Joseph Campbell’s compelling The Hero with a Thousand Faces — but unlike, say, the story of Balder, Tolkien maintained that the Christ myth brought with it “a precise location in history and definite historical consequences.” It was myth become fact, yet still “retaining the character of myth,” as Carpenter tells it.

My problem is not with Lewis’ acceptance of the view that Christianity is rooted in the ancient “dying god” mythology; that should be patently obvious to lovers of story and myth. But it takes a certain special kind of arrogance to assume that, out of all the versions of this prevailing myth that have been told throughout the ages, the one of Jesus is the only “true” one. Lewis was too rigorous a logician not to realize this, and correctly concluded the point was logically unprovable. At some point, he chose to ignore his lingering misgivings and make a leap of faith. That is why they call it faith, after all. Lewis knew his Dante; he recognized that cold hard logic (personified in The Divine Comedy by the poet Virgil) could only lead him to Purgatory, not Paradise. But he hadn’t yet found his Beatrice.  He took that leap of faith anyway, which might be why he became so dogmatic about his adopted religion: he knew he was on logically shaky ground, just as his earlier atheistic foundation was shaken by his love for myth and the experience of “Joy.”

However enriching Lewis may have found his faith personally, I (and many others) would argue that his writing suffered for it. He was hardly a slouch in the writing department, but he lacked the subtlety and complexity of his friends Tolkien and Williams. His innate Christian bias seeped into everything he produced. Since he was a medievalist, this was less of a problem for his scholarly criticism, because the great works from that period in literary history are firmly rooted in the Christian tradition. But the didacticism hurt his fiction. Even Tolkien, a fellow believer, found the Narnia chronicles distasteful in their cavalier, overly literal approach to mythology, announcing, “It simply won’t do, you know!”

Nonetheless, there are bright spots. Lewis’ science fiction trilogy (Out of the Silent Planet, Perelandra and That Hideous Strength) owes as much to the conventions of medieval literature as it does to his Christian faith. And those able to look beyond the overtly Christian trappings of The Screwtape Letters may find a highly intelligent, perceptive, and mercilessly satirical exposition of human frailty. One can also see shades of Milton’s Paradise Lost in Screwtape’s insistence that Hell’s demons fight with an unfair disadvantage: since all creation is “good,” by virtue of emanating from God — a.k.a., “the Enemy” — everything “must be twisted before it is of any use to us.”

One of my favorite passages in these fictional letters from a senior demon to his nephew, a junior tempter, concerns the sin dubbed “gluttony of delicacy,” or the “All I want…” phenomenon. For instance, the target’s mother has an irritating habit of refusing anything offered to her in favor of a simple piece of toast and weak tea, rationalizing her finicky behavior with the reassurance that her wishes are quite modest, “smaller and less costly than what has been set before her.” In reality, this modesty cleverly disguises “her determination to get what she wants, however troublesome it may be to others.”

That particular insight — like many of those contained in the book — is just as apt today, with our modern obsession with fad diets. More and more restaurants are tailoring menu items to meet the needs of their customers, whether they’re watching their carbs, cutting down on fat, avoiding meat and dairy, or choosing to subsist entirely on dry toast and weak tea. Starbucks’ entire rationale seems to be to afford its customers the ability to order their caffeinated beverages to the most precise specifications. (In that respect, I’m as guilty as the next person. You’ll pry my grande soy chai tea latte from my cold dead fingers before you’ll get me to go back to drinking Folger’s instant coffee or that standard-issue Lipton orange pekoe tea bag. At least offer me the option of selecting a nice darjeeling or Earl Grey blend from Twinings or something. Gluttony of delicacy, indeed.)

But I digress. For all my distaste for Lewis’ Christian didacticism, I forgive all on the merits of just one book: the unjustly ignored novel, Till We Have Faces. It is a mythical retelling of Cupid and Psyche, told from the perspective of the ugly elder sister, Orual, who eventually becomes queen of Lewis’ fictional realm. Despite her role in bringing about her sister’s downfall, Orual is a good queen, and a sympathetic character. But the book ends with a shattering moment of painful self-awareness, when the dying Orual — who has long held a grudge against the gods for their treatment of her — finally has the opportunity to come before those gods and read her “complaint,” a book she has been carefully composing over the course of her entire life. It is the mythology she has created of her experience, the story she tells herself, the persona she has created to present to the world. But in the presence of the eternal, she realizes that her once-great work is now “a little, shabby, crumpled” parchment, filled not with her usual elegant handwriting, but with “a vile scribble — each stroke mean and yet savage.” This is her true self, her true voice, stripped of all the delusions and lies she has been hiding behind all those years.

Lewis is unflinching in his depiction of Orual’s metaphorical “unveiling.” And therein lies the novel’s lasting power. Narnia, Schmarnia; those books are highly overrated. For once, Lewis achieved the essence of myth without lapsing into the cheap didacticism that characterizes so much of his overtly Christian writing. Why hasn’t someone made the film version of Till We Have Faces? The same over-arching themes are present, but explored in a richer, far less literal (and less overtly Christian) context. Perhaps it is no coincidence that the novel — which Lewis rightly considered his best work — was written in 1955, after he had met and married Joy Davidman. She was his Beatrice, bringing his faith and understanding of mythology (not to mention himself) to a new, deeper level; everything up to that point had been Purgatory, mere pretense, in comparison. Alas, the marriage was short-lived; Joy succumbed to cancer in 1960, and Lewis wrote a wrenching poem in the days before her death, declaring,

… everything you are was making

My heart into a bridge by which I might get back

From exile, and grow man. And now the bridge is breaking.

Joy’s death precipitated a crisis of faith, and while Lewis weathered it and stubbornly clung to belief, I think it is clear from his later writings that he emerged with a deeper kind of faith, something closer to the spirit of mythology than any blind adherence to, or easy acceptance of, conventional religious dogma. He never quite got all the way to true Paradise; he lost his “bridge” midway. But he got farther in his lifetime than many modern believers who might not be quite as willing to ask the hard questions, nor bring the same rigorous, unflinching logic to bear on their faith. (That spotlight is uncomfortably unforgiving, and few of us can wholly withstand the glare.)

There is much to find objectionable in the life and work of C.S. Lewis, if one doesn’t happen to share his religious (or political, or moral) beliefs. But there is also much to praise. Give the man credit for his insights into what seems to be an innate human need to tell stories that make sense of our existence and give it broader meaning. That longing goes beyond the gods of any specific religion, and this is what lifts Till We Have Faces so far above Lewis’ other work and makes it timeless. Like Orual, Lewis’ entire life was spent weaving a “story,” but in the end it was always the same one, worked, and reworked, until he finally managed to hit the truth of the matter and say what he really meant. As Orual concludes in her moment of realization, “I saw well why the gods do not speak to us openly, nor let us answer. Till that word can be dug out of us, why should they hear the babble that we think we mean? How can they meet us face to face, till we have faces?”

When not taking random walks on 3 Quarks Daily, Jennifer Ouellette waxes whimsical on various aspects of science and culture on her own blog, Cocktail Party Physics.

The Best Poems Ever

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at the NLA.

The best poems ever: a collection of poetry’s greatest voices, edited by Edric S. Mesmer (Scholastic Inc., 2001).

Of course it can’t be anything of the kind. However, it is no more foolish than Fade To Grey, which is my imaginary name for various anthologies that come to mind. How could ‘best poems ever’ leave out ‘Sir Gawain and the Green Knight’, Goethe and Auden? Then there is the work chosen. Carl Sandburg’s ‘Buffalo Dusk’ sits next to ‘Ode On A Grecian Urn’, and Gertrude Stein’s ‘A Red Hat’ follows Shakespeare’s Sonnet 130. Where is Gwen Harwood? What happened to Hart Crane and Yeats?

However, strange as the contents may be for someone who knows the history of poetry, I can see where the editor is coming from, since this is a booklet designed for younger readers, and readers new to poetry. From that point of view Best poems ever is interesting, especially for the younger teenagers at whom it seems mainly to be aimed. This publication raises a very important question: how do you go about teaching poetry? Just as it would be wrong to introduce opera to children with Parsifal, so it would be unwise to try for slabs of Paradise Lost or The Cantos with younger readers, although there are always going to be a few who will take to the unlikeliest reading material like ducks to water. Here there is some real depth, plus some effective set pieces of the kind that appeal to younger readers, plus some banalities. All it requires is the right teacher to inculcate habits of critical reading, which can be done over time. Rush jobs don’t work in education. You have to think in year terms, not days or weeks. I can see how a good teacher could use this little edition—seventy-one pages—to get younger readers motivated. I always think longer recitation pieces work well, none of which are included here—Australian ballads, ‘The Pied Piper of Hamelin’, Alfred Noyes’ ‘The Highwayman’ or Eliot’s Old Possum’s Book of Practical Cats. Children enjoy speaking verse aloud and begin to appreciate what language can achieve in poetic form. That is the main thing—to get children enjoying language.

It is not compromising art to put ‘The Highwayman’ before children. It isn’t one of the world’s great poems, but it is certainly well made, with excellent versification, music and rhyme. There is a poem from the Best poems ever which fills the bill to a degree— ‘When Great Dogs Fight’ by Melvin B. Tolson: ‘He padded through the gate that leaned ajar, / Maneuvered toward the slashing arcs of war, / Then pounced upon the bone; and winging feet / Bore him into the refuge of the street. // A sphinx haunts every age and every zone: / When great dogs fight, the small dog gets a bone.’

This poem would work well in class. Next to it is an extract from Shelley’s ‘To A Skylark’. Already you are on much more difficult ground, but it is still a poem that could be usefully looked at in the classroom. All children relate to birds, the idea of freedom, and escape: ‘In the golden light’ning / Of the sunken sun, / O’er which clouds are bright’ning, / Thou dost float and run, / Like an unbodied joy whose race is just begun.’

There is one poem by Emily Dickinson—’My Life had stood—a Loaded Gun’—which puts you into provocative territory, as Dickinson always does. However, young readers enjoy Dickinson on a certain level, just as they take to Robert Frost immediately. ‘The Road Not Taken’ is the poem used here.

Another thing in this edition’s favour is that it includes an equal number of female and male writers. Amongst the women, apart from Stein and Dickinson, there is Lucille Clifton, Anne Bradstreet, Aphra Behn, Lorna Cervantes, Sylvia Plath, Sor Juana Inés de la Cruz, Phillis Wheatley, H.D., Emily Brontë, Gwendolyn Brooks, Barbara Guest, Christina Rossetti, Edna St. Vincent Millay, Elizabeth I, Angelina Grimké, Elizabeth Browning and Marianne Moore. There is more than a touch of the politically-correct about this selection, and some of the poems aren’t up to much, in my opinion, but at least there’s a consciousness about representation. A teacher can do a great deal with these poems. Preparation for future experience comes readily to hand, as in this poem by Elizabeth I whose opening lines read: ‘The doubt of future foes exiles my present joy, / And wit me warns to shun such snares as threaten mine annoy. / For falsehood now doth flow and subject faith doth ebb, / Which would not be if reason ruled or wisdom weaved the web.’

Millay’s ideas don’t seem very interesting, but, once again, younger readers can relate to ‘My heart is warm with friends I make, / And better friends I’ll not be knowing; / Yet there isn’t a train I wouldn’t take, / No matter where it’s going.’

Each person is going to come up with their own anthology of poems, be it for younger readers, or readers generally. Here, gathering work for use in the classroom is the thing, and that is difficult. It’s no good putting together something the size of, say, the Faber Collected Poems of Ted Hughes, which students can’t be expected to lug around with them. Best poems ever isn’t big enough, but it is portable, and small enough to read on demand.

Older teenagers can get into Owen’s ‘Dulce Et Decorum Est’—there’s plenty of material close to hand for consideration there—or Rilke’s ‘Archaic Torso of Apollo’—a pity to have missed the opportunity to put the original German next to the English translation. ‘Dover Beach’ waits with its sober melancholy. Favourites like Thomas’ ‘Do Not Go Gentle Into That Good Night’ and Elizabeth Browning’s Sonnet 43 are good choices since these are two examples of memorable speech, hard to better, which is why they are rightly famous, ‘best’, if you like.

If I were putting together an anthology for use in schools I would do it differently. For one thing, I think it helps to relate some biography and history to place poems in an historical context. Photos of authors as children, and adults, are good too, so that readers realise poets are no different to them. An editor has to have done some hard thinking for teachers, always hard-pressed for time and harassed by the extraordinary demands made on them. You have to provide some work material concerning the poems chosen, and then at least the teacher can choose to use the associated material, or take the lesson along paths they’ve predetermined.

Well, everyone’s a critic. Best poems ever isn’t that, but it makes a stab in the direction of getting together a collection of poems that could be usefully taught in the classroom. At a time when actual study of poetry seems to be giving way to rap songs, film scripts, advertising and text messaging, and when textbooks themselves are fast disappearing from the classroom, that is praiseworthy.

Selected Minor Works: Where’s the Philosophy?!

Justin E. H. Smith

(An extensive archive of Justin Smith’s writing can be found online.)

Now that I am a tenured professor of philosophy, and thus may resign from service in my profession’s pep squad without fear of losing my salary, I’m going to come right out and say it: after all this time as a student, and then as a graduate student, and then as a professor of philosophy, I still have absolutely no idea what philosophy is, and therefore what it is I am supposed to be doing.  I do not know what the special competences are that I, qua philosopher, am expected to have. It’s clear that I am expected to say “qua” a lot, and to give off other such social cues through language, gesture, and dress.  But it is that thing that I can do because I am a philosopher that a surgeon, or an archeologist, or a thoughtful sales clerk cannot do, because they are not philosophers, that remains elusive.

Well, one might reply, there’s “critical thinking.” But this is something that, in the ideal situation, any active participant in the civic life of a free society would be able to employ in reading the newspaper, listening to the speeches of politicians, etc. There’s formal logic, but if I agree with Heidegger on anything it is that logic, like shortpants, is for schoolboys. In the good old days, when one learned anything at all at school, one learned the forms of argumentation, the fallacies together with their Latin names, etc. This is all really just advanced critical thinking, and if I can see that q follows from p on a symbol-dense page, I still don’t believe that counts as knowing anything. As Wittgenstein said, everything is left the same. Finally, of course, there’s the stuff about God and the soul, which used to be the stock-in-trade of philosophy and which philosophy still can’t really dispense with, in spite of its general awkwardness around the topics. There I am certainly as ignorant as every other human being is and always has been.

I do have some special competences. For instance, I know how to read Latin, and I use this in my research. But that doesn’t count as a distinctively philosophical competence, since I could be employing it to read the Pope’s encyclicals, and those sure as hell aren’t philosophy. Some people, unlike me, claim to have distinctively philosophical competences. ‘Round hiring season, one hears quite a bit about these from young Ph.D.’s without jobs. When philosophy departments run ads in the professional publication for new hires, they ask for candidates with competences in “philosophy of mind,” “philosophy of science,” etc. I’ve even seen “philosophy of sports and leisure.” When the candidates come for their interviews, they are asked: “Can you do philosophy of mind?” And they had better reply: “Oh yes I can. I can do philosophy of mind.” And then the hopeful young things will go on to list all the other varieties of philosophy they can “do.” Doing these is crucial. These days, one “does” philosophy, and one does not “philosophize.” Eager young grad students have now sprouted up throughout America who innocently speak of “rolling up their sleeves” and “doing some philosophy” as if this were a group activity facilitated by a hacky-sack or a water bong.

Now I’ve read countless books filed under “philosophy.”  I’ve thought about what these books have to say, and I’ve written as much as I’ve been able in response.  But I don’t remember ever having “done” philosophy.  I don’t think I belong to the same world as one capable of saying that.

The question lingers, though: is a specialist in “philosophy of mind” comparable to an organic chemist or an archeologist of neolithic burial mounds with respect to some specialized body of expert knowledge?  Perhaps, but this is still not some body of expert knowledge that every philosopher, qua philosopher (there’s that “qua” again), must have, since as I have already said I am a tenured philosopher and I have only an inkling of an idea about it.  It is not that I am not interested in it.  I am about as interested in it as I am in organic chemistry, and rather less than I am in neolithic burial mounds. And, well, vita brevis

So then why not just say that having expert knowledge in philosophy of mind is a sufficient but not necessary condition for being a philosopher, and that there is a cluster of such bodies of expert knowledge, with family resemblances between them, and that is what makes up philosophy?  There are a few problems with this approach.  One is that the millions of scruffy undergraduates cannot be entirely wrong when they see a page of, e.g., Jerry Fodor’s “A Theory of Content” and think to themselves: that’s not philosophy!  The kids want Dasein, and will-to-power, and différance, and other stuff they can be sure they won’t understand.  I am not saying that curriculum decisions should be turned over to the students. That would be a disaster.  But Richard Rorty is at least right to say that what philosophy departments offer fails largely to live up to the sense that newcomers have that the discipline ought to be doing something rather more, well, important.  Another problem with the family-resemblance approach is that there simply are no traits that occur regularly throughout the various subdisciplines. We cannot be a family if it’s not even clear that we’re the same species.

Again, the only common threads seem to be sociological, rather than doctrinal. We recognize each other by our ability to rattle off the names of philosophy professors who have become major public personalities; to note “where they’re at” now, Harvard, Oxford, etc.; perhaps to mention that we’ve heard how much they get paid. Reading Brian Leiter’s “Gourmet Report” is particularly helpful for generating this sense of cohesion, and anyone aspiring to join the club would do well to study it. Learn the cues. Get remarked—to use Pierre Bourdieu’s sardonic term to describe the autoreproduction of homo academicus—by someone who’s been remarked in the Gourmet Report, and you’re well on your way to being a remarkable philosopher.

The long war between the “analytic” and “continental” philosophers, too, has more to do with the sociology of groups than with beliefs. “Continental” philosophers go to their own conferences, where they tend to pick up the same speech habits, even the same distinctive North American Continental Philosophy accent.  They tend to say “imbricate” a lot, which sounds a good deal more precious in English than it does in the French from which it is lifted, but the majority of “continental” philosophers do not speak French.  Analytic philosophers have moved over the past few decades from a demand for “rigor” to an interest in being, like Donald Davidson, “charitable.”  They have also gone more postmodern than they like to imagine, and nowadays before they claim anything, in writing or in conference, they describe to you the “story” they’re about to “tell.”

There is also professional humor, of course, as an important factor in giving philosophers a feeling of belonging to a community. For the most part, though, it is about as funny as the slogans accompanying images of cats wearing sunglasses that one often finds in secretaries’ cubicles. It is palliative, occupational humor, like Dilbert, or like a bumpersticker on a union van that reads “Electricians Conduit Better”: a futile effort to overcome the poverty of a life that has been reduced to and identified with the career that sustains it.

But clearly there’s some common ground that is truly philosophical, isn’t there?  Brains in vats?  Moral dilemmas involving railway switching stations?  These topics do come up, but I must say I think about them as little as possible.  My own work is on the problem of sexual generation as it was understood and debated by what used to be called “natural philosophers” in the period extending from roughly 1550 to 1700.  No brains in vats, not even any trains, let alone switching stations.

Recently, most of my reading has consisted in 16th-century botanical compendia, or, as the Germans call them, Kräuterbücher. I am permitted to work on this topic, as a philosopher, because as a matter of historical fact many of the people who cared in the period about the topic that interests me today happen to be recognized, today, as “philosophers”: Descartes, Leibniz, and so on.  Thank God for them. Their shout-outs to, e.g., Antoni van Leeuwenhoek, who did not go down in history as a philosopher, permit me, as a philosopher, to read his work on the microscopical study of fleas’ legs and on the composition of semen.  And he’s fascinating. 

What used to be called “natural philosophy” and has since been parted out into the various science departments is, in general, fascinating. It asks whether frogs emerge de novo from slime, and whether astral influx is responsible for the growth of crystals. I know in advance the answer to both of these questions, but I can’t shake the feeling that reading these texts, and witnessing their authors struggling with these questions, is more edifying, and more important, than seeking to solve the problems that happen to be on the current disciplinary agenda.

Of course, as Steven Shapin—that truly brilliant outside observer of philosophy’s “doings”—has said, anti-philosophy, like philosophy, is the business of the philosophers. Periodically, after a long spell of failed system-building and bottom-heavy foundationalism, some guy comes along with a Ph.D. in philosophy and says: Philosophy! Who needs it! Rorty is a good recent example, though certainly just the latest in a long line. Diogenes of Sinope, in his own way, eating garbage and pleasuring himself in the agora, was out to show what a waste of time it is to theorize instead of simply to live, to live! There are plenty of people who go much further, such as those who drop out of grad school after one semester because they got a B+ they didn’t like, and go into investment banking and spend their lives berating those who waste theirs in the Ivory Tower. Now that is anti-philosophy. Rorty and Diogenes, on the other hand, remain susceptible to Shapin’s jab. They are insiders, and their denunciations only work because their social identities were already secured through a demonstration of concern for and interest in philosophy.

I do not know that I would like to join them.  I think I would sooner choose to masturbate at the mall than hope to take on Rorty’s establishment-gadfly role.  I think I would just like to keep writing about what interests me, without being asked, as I all too often am by the short-sighted philosophical presentists who hear of my various research concerns: Where’s the philosophy?!  For that is precisely the question I have been asking of them.

Monday Musing: The Palm Pilot and the Human Brain, Part III

Part III: How Brains Might Work, Continued…

In Part I of this twice-extended column, I tried to explain how it is that very complex machines such as computers (like the Palm Pilot) are designed and built by using a hierarchy of concepts and vocabularies. I then used this idea to segue into how attempts to understand the workings of the brain must reverse-engineer the design which, in the case of the brain, has been provided by natural selection. In Part II, I began presenting an interesting new theory of how the brain works, put forth by Jeff Hawkins, inventor of the Palm Pilot and a respected neuroscientist, in his book On Intelligence. Today, I want to wrap up that presentation. While it is not completely necessary to read Part I to understand what I will be talking about today, it is necessary to read at least Part II. Please do that now.

Last time, at the end of Part II, I was speaking of what Hawkins calls invariant representation. This is what allows us, for example, to recognize a dog as a dog, whether it is a great dane or a poodle. The idea of “dogness” is invariant at some level in the brain, and it ignores the specific differences between different breeds of dog, just as it would ignore the specific differences in how the same individual dog, Rover say, is presented to our senses in different circumstances, and would recognize it as Rover. Hawkins points out that this sense of invariance in mental representation has been remarked for some time, and even Plato’s theory of forms (if stripped of its metaphysical baggage) can be seen as a description of just this sort of ability for invariant representation.

This is not just true for the sensory side of the brain. The same invariant representations are present at the higher levels of the motor side. Imagine signing your name on a piece of paper in a two-inch-wide space. Now imagine signing your name on a large blackboard so that your signature sprawls several feet across it. Despite the fact that completely different nerve and muscle commands are used at the lower levels to accomplish the two tasks (in the first case, only your fingers and hand are really moving, while in the second case those parts are held still while your whole arm and other parts of your body move), the two signatures will look very much the same, and could easily be recognized as your signature by an expert. So your signature is represented in an abstract way somewhere higher up in your brain. Hawkins says:

Memories are stored in a form that captures the essence of relationships, not the details of the moment. When you see, feel, or hear something, the cortex takes the detailed, highly specific input and converts it to an invariant form. It is the invariant form that is stored in memory, and it is the invariant form of each new input pattern that it gets compared to. Memory storage, memory recall, and memory recognition occur at the level of invariant forms. There is no equivalent concept in computers. (On Intelligence, p. 82)
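The signature example makes it possible to show, in miniature, what storing an “invariant form” rather than raw detail might mean. The sketch below is my own toy illustration, not anything from Hawkins’s book: it reduces a stroke trajectory to a representation that ignores where on the page the signature was written and how large it was, so the desk-sized and blackboard-sized versions compare as equal.

```python
import math

def invariant_form(points):
    """Toy invariant representation of a 2-D stroke (my illustration,
    not Hawkins's model): remove position by centering on the centroid,
    then remove scale by normalizing the overall size."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]       # ignore where it was written
    scale = math.sqrt(sum(x * x + y * y for x, y in centered)) or 1.0
    return [(x / scale, y / scale) for x, y in centered]   # ignore how big it was

small = [(0, 0), (1, 2), (2, 0), (3, 2)]                   # desk-sized signature
large = [(10, 10), (40, 70), (70, 10), (100, 70)]          # same shape, 30x bigger, shifted
assert all(math.isclose(a, b, abs_tol=1e-9)
           for p, q in zip(invariant_form(small), invariant_form(large))
           for a, b in zip(p, q))
```

The two trajectories differ in every raw coordinate, yet their invariant forms are identical, which is the spirit (if hardly the mechanism) of what the quote above claims the cortex does.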

We’ll be coming back to invariant representations later, but first some other things.


Imagine, says Jeff Hawkins, opening your front door and stepping outside. Most of the time you will do this without ever thinking about it, but suppose I change some small thing about the door: the size of the doorknob, or the color of the frame, or the weight of the door, or I add a squeak to the hinges (or take away an existing squeak). Chances are you’ll notice right away. How do you do this? Suppose a computer were trying to do the same thing. It would have to have a large database of all the door’s properties, and would painstakingly compare every property it senses with the whole database, but if this is how our brains did it, then, given how much slower neurons are than computers, it would take 20 minutes instead of the two seconds that it takes your brain to notice anything amiss as you walk through the door. What is actually happening at all times at the lower level sensory portions of your brain is that predictions are being made about what is expected next. Visual areas are making predictions about what you will see, auditory areas about what you will hear, etc. What this means is that neurons in your sensory areas become active in advance of actually receiving sensory input. Keep in mind that all this occurs well below the level of consciousness. These predictions are based on past experience of opening the door, and span all your senses. The only time your conscious mind will get involved is if one or more of the predictions are wrong. Perhaps the texture of the doorknob is different, or the weight of the door. Otherwise, this is what the brain is doing all of the time. Hawkins says the primary function of the brain is to make predictions and this is the foundation of intelligence.
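The contrast between database lookup and prediction can be sketched in a few lines of code. This is my own toy illustration, not Hawkins’s model: a layer memorizes what each familiar context should feel like, predicts that sensation in advance, and escalates to “consciousness” only when the prediction fails, which is the door-with-a-new-squeak case.

```python
# Toy sketch (mine, not from On Intelligence): prediction-on-mismatch.
class PredictiveLayer:
    def __init__(self):
        self.memory = {}                 # context -> expected sensation

    def learn(self, context, sensation):
        """Store past experience so it can be predicted next time."""
        self.memory[context] = sensation

    def perceive(self, context, sensation):
        """Compare incoming sensation against the prediction.
        Return None when the prediction holds (nothing reaches awareness),
        or a surprise message when it fails."""
        expected = self.memory.get(context)
        if expected == sensation:
            return None                  # confirmed prediction stays unconscious
        return f"surprise: expected {expected!r}, got {sensation!r}"

door = PredictiveLayer()
door.learn("doorknob", "smooth brass")
door.learn("hinges", "silent")

assert door.perceive("doorknob", "smooth brass") is None   # matches: unnoticed
assert "squeak" in door.perceive("hinges", "squeak")       # mismatch surfaces
```

Nothing here is compared against a full database of the door’s properties; each context is checked against a single standing prediction, which is why a mismatch is cheap to detect.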

Even when you are asleep the brain is busy making its predictions. If a constant noise (say the loud hum of a bad compressor in your refrigerator) suddenly stops, it may well awaken you. When you hear a familiar melody, your brain is already expecting the next notes before you hear them. If one note is off, it will startle you. If you are listening to a familiar album, you are already expecting the next song as one ends. When you hear the words “Please pass the…” at a dinner table, you simultaneously predict many possible words to follow, such as “butter,” “salt,” “water,” etc. But you do not expect “sidewalk.” (This is why a certain philosopher of language rather famously managed to say “Fuck you very much” to a colleague after a talk, while the listener heard only the expected thanks.) Remember, predictions are made by combining what you have experienced before with what you are experiencing now. As Hawkins puts it:

These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence. (Ibid, p. 104)


Let us focus on vision for a moment, as this is probably the best understood of the sensory areas of the brain. Imagine the cortex as a stack of four pancakes. We will label the bottom pancake V1, the one above it V2, the one above that V4, and the top one IT. This represents the four visual regions involved in the recognition of objects. Sensory information flows into V1 (over one million axons from your retinas feed into it), but information also flows down from each region to the one below. While parts of V1 correspond to parts of your visual field, in the sense that neurons in a part of V1 will fire when a certain feature (say an edge) is present in a certain part of the retina, at the topmost level, IT, there are cells which become active when a certain object is anywhere in your visual field. For example, a cell may fire only if there is a face present somewhere in your visual field. This cell will fire whether the face is tilted, seen at an angle, light, dark, whatever. It is the invariant representation for “face.” The question, obviously, is how to get from the chaos of V1 to the stability of the representation at the IT level.

The answer, according to Hawkins, lies in feedback. There are as many or more axons going from IT to the level below it as there are going in the upward direction (feedforward). At first people did not pay much attention to these feedback connections, but if you are going to be making predictions, then you must have axons going down as well as up. The axons going up carry information about what you are seeing, while the axons going the other way carry information about what you expect to see. Of course, exactly the same thing occurs in all the sensory areas, not just vision. (There are also association areas even higher up which connect one sense to another, so that, for example, if I hear my cat meowing and the sound is approaching from around the corner, then I expect to see it in the next instant.) Hawkins’s claim is that each level of the cortex forms a sort of invariant representation of the more fragmented sensory input from the level below. It is only when we get to the levels available to consciousness, like IT, that we can give these invariant representations easily understood names like “face.” Nevertheless, V2 forms invariant representations of what V1 is feeding it, by making predictions of what should come in next. In this way, each level of cortex develops a sort of vocabulary in terms that are built upon repeated patterns from the layer below. So now we see that the problem was never how to construct invariant representations in IT, like “face,” from the three layers below it; rather, each layer forms invariant representations based on what comes into it. In the same way, association layers above IT may form invariant representations of objects based on the input of multiple senses. Notice that this also goes along well with Mountcastle’s idea that all parts of the cortex basically do the same thing! (Keep in mind that this is a simplified model of vision, ignoring much complexity for the sake of expository convenience.)
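To make the feedforward/feedback idea concrete, here is a toy sketch, entirely my own and vastly simpler than anything in the book: each region memorizes which pattern tends to follow which, checks its prediction against the next input, and escalates to the region above only when the prediction fails.

```python
# Toy illustration of the memory-prediction idea (my own construction,
# not Hawkins's): a region predicts its next input from learned
# transitions, and only surprises propagate up the hierarchy.

class Region:
    def __init__(self, name):
        self.name = name
        self.transitions = {}   # pattern -> the pattern that followed it
        self.last = None        # most recent input seen
        self.higher = None      # the region above (receives surprises)

    def observe(self, pattern):
        """Return True if the input was predicted (handled locally),
        False if it was a surprise (escalated up the hierarchy)."""
        predicted = self.transitions.get(self.last)
        if self.last is not None:
            self.transitions[self.last] = pattern   # learn what follows what
        self.last = pattern
        if predicted == pattern:
            return True                             # expected: stays below
        if self.higher is not None:
            self.higher.observe(self.name + ":" + pattern)  # escalate upward
        return False

v1 = Region("V1")
v2 = Region("V2")
v1.higher = v2

# First pass through a sequence: everything is novel, so every input
# is a surprise and gets passed up to V2.
for p in ["edge", "corner", "edge", "corner"]:
    v1.observe(p)

# Second pass: the transitions are now memorized, so every input is
# predicted and handled locally.
results = [v1.observe(p) for p in ["edge", "corner", "edge"]]
```

On the first pass nothing is predicted and everything propagates upward; once the sequence is familiar, the input matches the prediction and never leaves the lowest region, which is roughly the sense in which expected input never reaches consciousness.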

In other words, every single cortical region is doing the same thing: it is learning sequences of patterns coming in from the layer below and organizing them into invariant representations that can be recalled. This is really the essence of Hawkins’s memory-prediction framework. Here’s how he puts it:

Each region of cortex has a repertoire of sequences it knows, analogous to a repertoire of songs… We have names for songs, and in a similar fashion, each cortical region has a name for each sequence it knows. This “name” is a group of cells whose collective firing represents the set of objects in the sequence… These cells remain active as long as the sequence is playing, and it is this “name” that gets passed up to the next region in the hierarchy. (Ibid., p. 129)

This is how greater and greater stability is created as we move up in the hierarchy, until we get to stages which have “names” for the common objects of our experience, and which are available to our conscious minds as things like “face.” Much of the rest of the book is spent on describing details of how the cortical layers are wired to make all this feedforward and feedback possible, and you should read the book if you are interested enough.
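The “naming” idea can also be sketched in toy form (again my own construction, with made-up helpers like `make_region`): each region collapses a run of inputs into a single stable token and passes only that token upward, so each level up sees a slower, more invariant stream.

```python
# Toy sketch of sequence "naming" (my own, not from the book): a region
# groups a fixed-length run of inputs into one stable token and passes
# only that token up, so the level above sees a more invariant stream.

def make_region(prefix, chunk_size):
    buffer = []   # inputs accumulated so far in the current sequence
    names = {}    # tuple of inputs -> the "name" this region coined for it

    def observe(pattern):
        buffer.append(pattern)
        if len(buffer) < chunk_size:
            return None                 # still mid-sequence: nothing to pass up
        seq = tuple(buffer)
        buffer.clear()
        if seq not in names:
            names[seq] = "%s%d" % (prefix, len(names))  # coin a new name
        return names[seq]               # stable token passed upward

    return observe

low = make_region("lo", 2)    # groups pairs of raw inputs
high = make_region("hi", 2)   # groups pairs of low-level names

stream = ["eye", "nose", "eye", "nose"] * 2   # eight raw inputs
tokens_up = [low(p) for p in stream]          # what the lower region emits
top = [high(t) for t in tokens_up if t is not None]
```

Eight rapidly changing raw inputs become four repetitions of one low-level name, and those in turn become a single repeated high-level name: each step up the hierarchy is slower and more stable, which is the invariance the framework is after.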


As I mentioned six weeks ago when I wrote Part I of this column, complexity in design (whether done by humans or by natural selection) is achieved through hierarchies which build layer upon layer of complexity. Hawkins takes this idea further and says that the neocortex is built as a hierarchy because the world is hierarchical, and the job of the brain, after all, is to model the world. For example, a person is usually made of a head, torso, arms, legs, etc. The head has eyes, a nose, a mouth, etc. A mouth has lips, teeth, and so on. In other words, since eyes and a nose and a mouth occur together most of the time, it makes sense to give this regularity in the world (and in the visual field) a name: “face.” And this is what the brain does.

Have a good week! My other Monday Musing columns can be seen here.

Sunday, May 7, 2006

A room of her own

Sean Coughlan for BBC News Magazine

A homeless woman in London has been living in a car since last summer. But by writing a blog she has put herself in touch with an international audience…

…A woman becomes homeless, so she gets into her car and drives. Except she has nowhere to go – so she stays in the car, with all her possessions heaped in the back, sleeping in the front seats, parking in secluded streets. 

But this is the information age. And even though she doesn’t speak to anyone, she can go into a library where she can access the internet and write an online journal – a homelessness blog – which she uses to describe all her unspoken experiences and feelings.

From her Blog:

Gender: female

  Astrological Sign: Gemini

  Occupation: unemployed

  Location: Woodland : United Kingdom


  • Hot food 
  • mugs of steaming tea 
  • warmer weather 
  • feather mattresses

Mearsheimer and Walt Respond

John Mearsheimer and Stephen Walt respond to the criticisms of their article on “The Israel Lobby”, in the LRB.

One of the most prominent charges against us is that we see the lobby as a well-organised Jewish conspiracy. Jeffrey Herf and Andrei Markovits, for example, begin by noting that ‘accusations of powerful Jews behind the scenes are part of the most dangerous traditions of modern anti-semitism’ (Letters, 6 April). It is a tradition we deplore and that we explicitly rejected in our article. Instead, we described the lobby as a loose coalition of individuals and organisations without a central headquarters. It includes gentiles as well as Jews, and many Jewish-Americans do not endorse its positions on some or all issues. Most important, the Israel lobby is not a secret, clandestine cabal; on the contrary, it is openly engaged in interest-group politics and there is nothing conspiratorial or illicit about its behaviour. Thus, we can easily believe that Daniel Pipes has never ‘taken orders’ from the lobby, because the Leninist caricature of the lobby depicted in his letter is one that we clearly dismissed. Readers will also note that Pipes does not deny that his organisation, Campus Watch, was created in order to monitor what academics say, write and teach, so as to discourage them from engaging in open discourse about the Middle East.

Several writers chide us for making mono-causal arguments, accusing us of saying that Israel alone is responsible for anti-Americanism in the Arab and Islamic world (as one letter puts it, anti-Americanism ‘would exist if Israel was not there’) or suggesting that the lobby bears sole responsibility for the Bush administration’s decision to invade Iraq. But that is not what we said. We emphasised that US support for Israeli policy in the Occupied Territories is a powerful source of anti-Americanism, the conclusion reached in several scholarly studies and US government commissions (including the 9/11 Commission). But we also pointed out that support for Israel is hardly the only reason America’s standing in the Middle East is so low. Similarly, we clearly stated that Osama bin Laden had other grievances against the United States besides the Palestinian issue, but as the 9/11 Commission documents, this matter was a major concern for him. We also explicitly stated that the lobby, by itself, could not convince either the Clinton or the Bush administration to invade Iraq. Nevertheless, there is abundant evidence that the neo-conservatives and other groups within the lobby played a central role in making the case for war.

Oh No She Didn’t!

Lindsay Beyerstein, in a tour de force, rightly takes down Caitlin Flanagan.

It’s not hypocritical for Flanagan to have servants, as some have claimed. She’s just obtuse about how her privilege shapes her experience. I’m sure it’s lovely to stay at home and arrange flowers in between manicures–and perfectly traditional, too. I just don’t see how Flanagan’s rarified existence is relevant to any larger social issues, except perhaps as an implied argument for a more progressive tax structure.

The thing is, Caitlin Flanagan is a phony. She doesn’t have an exceedingly traditional lifestyle. She doesn’t even fit the event planner/fucktoy model of housewifery that she exalts. Yet she lectures other women about how they ought to aspire to this fantasy life, setting herself up as living proof of concept.

Flanagan isn’t any kind of housewife. Like most parents, she’s working and raising a family. Flanagan happens to be a staff writer at the New Yorker with a regular column at the Atlantic Monthly, a recent op/ed in Time Magazine, and a big new book. She even flew out from California to promote her book on the Colbert Report. (Interestingly, Mrs. Traditional writes under “Flanagan” and not “Hudnut”, her husband’s name.)

She sounds like the woman who has it all. How does she get it? By telling other women that they can’t possibly have it all. Hypocrite.

YouTube Lets Sam Anderson Contemplate Lip-Syncing

Sam Anderson in Slate:

The range of material on the Web site YouTube is almost literally incredible—it’s like the largest talent show in the history of the world crossed with your boring uncle’s home video collection. You can see virtuoso guitarists playing TV theme songs, college guys pretending to be repulsed by ice cream, a robot dancer who might actually be a robot, and (for some reason) a girl eating an apple. There are kids’ bands covering inappropriate songs, James Lipton reciting bad rap lyrics like they were Keats poems, and endless footage of George Bush’s awkwardness at press conferences. If you like home video of iguanas, you have about 70 choices. The site has no organizing aesthetic or agenda. It’s a kind of anti-TV-network: an incoherent, totally chaotic accretion of amateurism—pure webcam footage of the collective unconscious. It can be a little overwhelming. And its users add 35,000 videos every day…

For the cultural critic, however, YouTube is an invaluable resource. It allows us to study phenomena that have flown for centuries under the analytical radar. Take, for instance, the formerly mysterious art of lip-syncing. Once merely a private folk art, syncing has risen over the last 20 years to displace jazz, baseball, and rock ‘n’ roll as the great American pastime. It’s become the sole prerequisite of post-MTV fame and one of our most lucrative global exports. (We ridiculed Ashlee Simpson not because we suddenly discovered she was syncing—everyone knew that—but because she bungled it so publicly: It was a national embarrassment, like an Austrian ski-jumper crashing in the Olympics.) In bedrooms from Maine to Oregon, lip-syncing is the last real connection between a celebrity overclass and its fan base. It has become such a powerful symbol of Western culture that it was outlawed last year in Turkmenistan. And yet we know very little about it. What, for instance, makes a good lip-sync so funny that you want to forward it to your entire address book, and a bad one so painful that you want to hurt the syncer?

The Aquariums of Pyongyang


It is a depressing truth of some books that the stories they tell, like Tolstoy’s happy families, resemble each other enough to constitute a genre. To make that observation of the genre to which The Aquariums of Pyongyang belongs – that of Gulag memoir – is not to diminish either the individual or collective suffering described, only to observe that human cruelty tends to lack originality. (There are, for instance, a limited number of ways in which human beings can be brutally interrogated, and you find them in accounts of torture from Buenos Aires to Abu Ghraib.) The interest, then, apart from the salutary but depressing reminder that such things are all around us, lies largely in the detail. In the case of this book, the detail is vivid and revealing.

more from The New Statesman here.

liberty is sweet

What with the noise, the heat, and the danger of being forced back into slavery, sometimes it’s good to get out of the city. Such, at least, was the assessment of Harry Washington, who, in July of 1783, made his way to the salty, sunbaked docks along New York’s East River and boarded the British ship L’Abondance, bound for Nova Scotia. A clerk dutifully noted his departure in the “Book of Negroes,” a handwritten ledger listing the three thousand runaway slaves and free blacks who evacuated New York with the British that summer: “Harry Washington, 43, fine fellow. Formerly the property of General Washington; left him 7 years ago.”

Born on the Gambia River around 1740, not far from where he would one day die, Harry Washington was sold into slavery sometime before 1763. Twelve years later, in November, 1775, he was grooming his master’s horses in the stables at Mount Vernon when the royal governor of Virginia, Lord Dunmore, offered freedom to any slaves who would join His Majesty’s troops in suppressing the American rebellion. That December, George Washington, commanding the Continental Army in Cambridge, received a report that Dunmore’s proclamation had stirred the passions of his own slaves. “There is not a man of them but would leave us if they believed they could make their escape,” a cousin of Washington’s wrote from Mount Vernon, adding bitterly, “Liberty is sweet.”

from a review of two new books on the history of slavery at The New Yorker.