the permanent night-time of his elected trade


When John le Carré published A Perfect Spy in 1986, Philip Roth, then spending a lot of time in London, called it ‘the best English novel since the war’. Not being such a fan of A Perfect Spy, I’ve occasionally wondered what Roth’s generous blurb says about the postwar English novel. As a le Carré bore, however, I’ve also wondered how Roth managed to overlook Tinker Tailor Soldier Spy (1974), the central novel in le Carré’s career, in which George Smiley – an outwardly diffident ex-spook with a strenuously unfaithful wife and an interest in 17th-century German literature – comes out of retirement to identify the turncoat in a secret service that’s explicitly presented as a metaphorical ‘vision of the British establishment at play’. If you sit up late enough watching DVDs of the BBC adaptation starring Alec Guinness, or Martin Ritt’s version of The Spy Who Came in from the Cold with Richard Burton, it’s possible to persuade yourself that le Carré might even be the greatest English novelist alive. Unfortunately, looking at his other books the next morning makes this seem less likely, in part because the classic phase of his career ended earlier than we bores like to remember, and in part because some of his early strengths have become, in a changed context, weaknesses.

more from the LRB here.

to organize the world’s information and make it universally accessible and useful


Every weekday, a truck pulls up to the Cecil H. Green Library, on the campus of Stanford University, and collects at least a thousand books, which are taken to an undisclosed location and scanned, page by page, into an enormous database being created by Google. The company is also retrieving books from libraries at several other leading universities, including Harvard and Oxford, as well as the New York Public Library. At the University of Michigan, Google’s original partner in Google Book Search, tens of thousands of books are processed each week on the company’s custom-made scanning equipment.

Google intends to scan every book ever published, and to make the full texts searchable, in the same way that Web sites can be searched on the company’s engine at google.com. At the books site, which is up and running in a beta (or testing) version, at books.google.com, you can enter a word or phrase—say, Ahab and whale—and the search returns a list of works in which the terms appear, in this case nearly eight hundred titles, including numerous editions of Herman Melville’s novel.
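The kind of lookup described above—type a couple of terms, get back the list of works containing them—is classically done with an inverted index, which maps each word to the set of documents it appears in. Here is a toy sketch of the idea; the titles and text snippets are made up for illustration and have nothing to do with Google's actual system:

```python
from collections import defaultdict

# Made-up "library": title -> full text (normally this would be scanned pages).
books = {
    "Moby-Dick": "call me ishmael ahab hunts the white whale",
    "Typee": "a peep at polynesian life",
    "Whale Ships of New Bedford": "the whale fishery and its ships",
}

# Inverted index: word -> set of titles containing that word.
index = defaultdict(set)
for title, text in books.items():
    for word in text.split():
        index[word].add(title)

def search(*terms):
    """Return the titles containing ALL of the given terms."""
    results = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*results) if results else set()

print(sorted(search("Ahab", "whale")))  # ['Moby-Dick']
```

A query like "Ahab and whale" then reduces to intersecting two precomputed sets, which is why full-text search over millions of books can still answer in a fraction of a second.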

more from The New Yorker here.

Snake Bites the Toxic Toad That Feeds It–and Spreads Its Poison

From Scientific American:

It sounds like something straight out of a video game: A snake collects toxin by biting a poisonous toad and uses that venom as a defense against hawks and other predators. But that is exactly what researchers say the Asian snake Rhabdophis tigrinus does, based on studies of glandular fluid from hatchlings and adult snakes on two Japanese islands.

Some R. tigrinus snakes carry toxins called bufadienolides in their nuchal glands, sacs located under a ridge of skin along their upper necks. When threatened, they arch their necks, exposing the poisonous ridge to an antagonist. The clawing and biting of hawks and other predators most likely rips the skin and lets the poison ooze out, potentially blinding the snake’s attackers, says herpetologist Deborah Hutchinson of Old Dominion University in Norfolk, Va. “It might not kill the predator but it would be noxious enough to deter predation,” she says.

More here.

‘Hobbit’ human ‘is a new species’

From BBC News:

The finds caused a sensation when they were announced to the world in 2004. But some researchers argued the bones belonged to a modern human with a combination of small stature and a brain disorder called microcephaly. That claim is rejected by the latest study, which compares the tiny people with modern microcephalics. Microcephaly is a rare pathological condition in humans characterised by a small brain and cognitive impairment.

In the new study, Dean Falk, of Florida State University, and her colleagues say the remains are those of a completely separate human species: Homo floresiensis. They have published their findings in Proceedings of the National Academy of Sciences. The remains at the centre of the Hobbit controversy were discovered at Liang Bua, a limestone cave on the Indonesian island of Flores, in 2003.

More here.

A Case of the Mondays: The Blank Slate and Other Phantom Theories

Reading Steven Pinker’s The Blank Slate reminded me of most other polemical books I’d read that attempt to integrate some science into their arguments. In theory it’s a science book, a longwinded defense of both evolutionary psychology and its obvious social implications. But in practice, it’s mostly a political book; the science is provided only as a backdrop against which Pinker sets up his attacks on a host of social, political, and cultural notions that stand in opposition to crude evolutionary psychology (which I’ll abbreviate as EP in the rest of this post).

Pinker frames his view as that of modern science, represented by such tools as genetics, neurobiology, and post-Williams Revolution evolutionary biology, versus that of three closely interlinked demons. The first demon, which he focuses on the most, is the view that at birth the human mind is a blank slate to be shaped by environmental forces. The second is romantic affection for the noble savage, uncorrupted by pernicious civilization. And the third is the dualist notion that people are ghosts inhabiting the machines that are their own bodies.

The problems with the book’s thesis start right at the beginning, when Pinker claims that a) all three views are interlinked, and b) all three views were very respectable until the science of EP started to overthrow them. The best way of seeing why Pinker is wrong there is by looking at the three philosophical positions he associates with the three demons—empiricism for the blank slate, romanticism for the noble savage, and dualism for the ghost in the machine.

By and large, the philosophers who developed empiricism, romanticism, and dualism in modern times disagreed with one another. Descartes’ dualism isn’t a component of Locke’s empiricism; on the contrary, they disagree on the fundamental issue of whether all knowledge comes from experience. Romanticism developed mostly after the Enlightenment, and was only associated with empiricism or dualism when it mythologized European progress rather than the noble savage.

Zooming in on empiricism, it’s easy to see another error of Pinker’s: Lockean empiricism does not strictly speaking say the mind is a blank slate, at least not in the way that is relevant to EP. The main point of EP is that the human brain is hardwired to be prone to certain forms of learning and modes of behavior. The EP-derived view that men are on average better than women at math is not that men are born knowing more math than women but that men are born with a greater aptitude for math than women. In contrast, Locke’s main contention is that knowledge comes directly from experience. He never concerned himself with social learning, which only became a serious subject of study a century or two after his death.

More importantly, the people Pinker criticizes for distorting science by claiming that IQ is not meaningful or not hereditary, or even that the mind is indeed a blank slate, have nothing to do with the other two demons. Marxist theory, which the people Pinker labels radical scientists adhere to, is extremely anti-romantic and anti-dualist. Among all the radical ideologies in existence—libertarianism, fascism, religious fundamentalism, anarchism—it is certainly the most pro-modern. Lewontin’s politics is largely doctrinaire Marxist: in Biology as Ideology, he trumpets the triumph of progress, even as he indicates this progress should come from accepting socialism more than from ordinary capitalist improvements.

The relationship between Pinker and Lewontin is an interesting one. Pinker notes that although Lewontin claims that he thinks the dominant force in evolution is the interaction between gene, organism, and environment, in terms of social implications he ignores everything but environment. On that Pinker is certainly right: Biology as Ideology is an anti-science polemic that distorts facts to fit Lewontin’s agenda (my take on Lewontin was subsequently debated at length here). However, Pinker commits the same transgression: he says he believes in the sensible moderate view that human behavior is determined by both inborn and environmental factors, and goes on to not only ignore the implications of the environmental part but also defend racists and sexists who have used pseudoscience as cover.

For instance, he starts by ridiculing people who called Richard Herrnstein a racist for a 1970 paper about intelligence and heredity. Although the paper as Pinker describes it is not racist per se, Herrnstein was indeed a racist. The screed he published with Charles Murray in 1994, The Bell Curve, is not only wrong, but also obviously wrong. Even in 1994, there were metastudies about race and intelligence that showed that the IQ gap disappears once one properly controls for environmental factors, for example by considering the IQ scores of children born to single mothers in Germany by American fathers in World War Two.

The truth, or what a reasonable person would believe to be the truth, is never oppressive. If there is indeed an innate component to the racial IQ gap, or to the gender math score gap, then it’s not racist or sexist to write about it. It remains so even if the innate component does not exist, but the researcher has solid grounds to believe it does.

However, Murray and Herrnstein had no such solid grounds. They could quote a few studies proving their point, but when researchers publish many studies about the same phenomenon, some studies are bound to detect statistically significant effects that do not exist. By selectively choosing one’s references, one can show that liberals are morally superior or morally inferior to conservatives, or that socialism is more successful or less successful than capitalism. At times there are seminal studies, which do not require any further metastudy. There weren’t any in 1994, while existing metastudies suggested that the racial IQ gap was entirely environmental. As I will describe below, the one seminal study done in 2003 moots not only Murray and Herrnstein’s entire argument but also much of Pinker’s.
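The point about many published studies inevitably containing some spurious "significant" results is easy to demonstrate with a quick simulation. The numbers below are entirely hypothetical; the sketch just shows the statistical mechanism: if there is no real effect at all, roughly 5% of studies run at the conventional 0.05 significance level will still come out "significant" by chance.

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def one_study(n=50):
    """Run one 'study': two groups drawn from the SAME distribution,
    so any significant difference between them is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch's t statistic for the difference in group means
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        statistics.variance(a) / n + statistics.variance(b) / n
    )
    return abs(t) > 1.96  # roughly p < 0.05, two-sided

studies = 1000
false_positives = sum(one_study() for _ in range(studies))
print(f"{false_positives} of {studies} null studies came out 'significant'")
```

With no real effect anywhere, on the order of fifty out of a thousand studies will still clear the significance bar, so a motivated reviewer can always assemble a handful of citations for either side; that is exactly why metastudies, which pool across the whole literature, matter more than any individual study.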

To rebut claims of racism and sexism, Pinker invokes the moral argument—in other words, that to be against racism and sexism one need only vigorously oppose discrimination, without believing that without any discrimination there would be no gaps in achievement. In theory, that is correct. But in practice, that narrow view makes it impossible to enforce any law against discrimination.

Worse, Pinker invokes anti-feminist stereotypes that are born not of serious scholarship, but of ideologically motivated conservative thinking. He supports Christina Hoff-Sommers’ spurious distinction between equity feminism and gender feminism. Although there are many distinctions among different kinds of feminists, some of which track the degree of radicalism, none of the serious ones has anything to do with Hoff-Sommers’. In theory, equity feminism means supporting equality between women and men, while gender feminism means supporting a view of the world in which the patriarchy is omnipresent. In practice, the people who make that distinction, including Pinker, assign everyone who supports only the forms of equality that are uncontroversial in the United States, like equal pay laws and suffrage, to equity feminism, and everyone who supports further changes or even existing controversial ones to gender feminism.

As a case study, take family law activist Trish Wilson. Wilson’s activism focuses on divorce law; she has written articles and testified in front of American state legislatures against laws mandating presumptive joint custody, mainly on the grounds that it hurts children. In addition, she has written exposés of ways abusive men exploit legal loopholes, including presumptive joint custody, to gain custody of children. In pushing for equality in the courtroom, she is a liberal feminist’s liberal feminist. And yet, her attacks on the men’s rights movement for protecting abusive men have caused every conservative who makes distinctions between equity and gender feminism to deride her as a gender feminist.

Any reasonable distinction between a more radical feminist stream and a more conventional one would put Betty Friedan and her organization NOW on the less radical side. Friedan was anti-radical enough to devote much of her tenth anniversary afterword to The Feminine Mystique to attacking radical feminists, by which she means not Catharine MacKinnon or Andrea Dworkin, but Kate Millett. NOW has focused on legal equality, primarily abortion rights and secondarily laws cracking down on employment discrimination and sexual harassment. But Pinker assigns Friedan as well as Bella Abzug to the gender feminism slot, using entirely trivial statements of theirs to paint them as radicals. Friedan he attacks for suggesting extending compulsory education to the age of 2; Abzug he attacks for saying equality means fifty-fifty representation everywhere.

To his credit, Pinker never quite claims that there is no gender discrimination. However, he makes an earnest effort to undermine every attempt to counteract it, however well founded. For instance, he claims that it’s absurd to say that women’s underrepresentation in science in the United States is due to discrimination, on the grounds that they’re even more underrepresented in math, and it’s not likely mathematicians are more sexist than scientists. Instead, he suggests, women are just not interested in math and science.

However, it is legitimate to ask why this interest gap exists. There is no EP-based argument why it should be innate. On the contrary, independent evidence from, for example, the proportion of female mathematicians who come from families of mathematicians versus the proportion of male mathematicians, suggests it is environmental. Indeed, the educational system of the United States has long encouraged women to ignore the hard sciences. Other educational systems produce near-parity: while 13% of American scientists and engineers are women, many other countries, such as Sweden and Thailand, have percentages higher than 30 or even 40.

Furthermore, one of the most important pieces of information about biases in education, the stereotype threat, receives no mention from Pinker. It’s an established fact that telling girls who are about to take a math test that boys do better will make them do worse. In fact, telling them that the test measures aptitude, or even asking them to fill out an oval for gender before the test, will hurt their performance. And yet somehow Pinker glosses over that fact in a book that purports to be about a combination of genetics and environment.

There is hardly a single thing Pinker gets right about rape in his book, except that Susan Brownmiller is wrong. His explanation of rape is that it is a male biological urge, as evidenced in the fact that in many species males rape females. However, that theory says nothing about why straight men rape other men in prison, or in general about the dynamics of male-on-male rape. He provides scant circumstantial evidence for his theory of rape; instead, he prefers to rant about how Brownmiller’s feminist theories are dominant, even though in fact the dominant view among criminologists is that rape is simply a violent crime, rather than a case of passionate sex gone awry or a mechanism of reinforcing the patriarchy.

Pinker commits not only a sin of omission in writing about rape or violence in general, but also a sin of commission, in writing that nobody really knows what causes violence. In fact, criminologists have fairly good ideas about how social ills like poverty and inequality cause crime, although they know it about murder more than about other violent crimes. Still, the rates of all violent crimes are closely correlated; the major exception is the United States’ murder rate, which is higher than its general violent crime rate predicts presumably because of its lax gun control laws.

Finally, Pinker quotes a 2001 study by Eric Turkheimer as showing that the Darwin wars ended and the gene-centric side, led by Richard Dawkins, prevailed over the more environment-based side, led by Stephen Jay Gould. Thence Pinker concludes that attempts to raise children in ways more conducive to growth are futile, since much of their future behavior is genetic, and almost all of what is not genetic is due to developmental noise rather than environmental influence.

However, in 2003 Turkheimer published another study, which sealed the questions of race and IQ and of environmental influences on children in general. Turkheimer’s starting point was that earlier studies about the heritability of IQ often focused on adopted children in middle- and upper-class families, where environmental influences might be different from those in lower-class families. By examining a large array of data spanning multiple races and social classes, he saw that on the one hand, within the middle class IQ is highly genetic, with a heritability level of 0.72 and no significant environmental effects. But on the other, within the lower class, which includes most blacks and Hispanics in the US, the heritability of IQ drops to 0.1, and environmental factors such as the depth of poverty or the level of schooling predominate.
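It is worth spelling out what those heritability figures mean, since "heritability" is just a ratio: the share of the total variation in a trait, within a given population, attributable to genetic variation. The variance components below are made-up round numbers chosen only to echo the pattern described above; they are not Turkheimer's actual estimates:

```python
def heritability(genetic_var, shared_env_var, nonshared_var):
    """h^2 = proportion of total phenotypic variance due to genetic variance."""
    total = genetic_var + shared_env_var + nonshared_var
    return genetic_var / total

# Illustrative, invented variance components (arbitrary units):
# in an affluent sample almost all the remaining IQ variation tracks genes,
# while in a poor sample environmental differences dominate.
middle_class = heritability(genetic_var=72, shared_env_var=8, nonshared_var=20)
lower_class = heritability(genetic_var=10, shared_env_var=60, nonshared_var=30)
print(round(middle_class, 2), round(lower_class, 2))  # 0.72 0.1
```

The key consequence is that heritability is population-relative: the same trait can look "highly genetic" in one environment and barely genetic in another, which is why a single heritability number cannot settle the group-difference arguments Murray and Herrnstein were making.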

Obviously, it would be futile to blame Pinker for not mentioning Turkheimer’s 2003 study. The Blank Slate was published in 2002. However, all other facts I have cited against Pinker’s thesis and its purported social implications predate 2002. The Turkheimer study does not show by itself that Pinker’s book is shoddy; it merely shows that much of it is wrong. What establishes Pinker’s shoddiness is the treatment of social problems like sexism, racism, and crime, which is based not on examination of the available evidence or even the views that are mainstream among social scientists who study them, but on what think tanks whose views align with Pinker’s say.

Even a cursory examination of the current mainstream social scene will show that the myths of the noble savage and the ghost in the machine are nonexistent. That fringe scholars sometimes believe in them is no indication of their level of acceptability; there are fringe scholars who believe in 9/11 conspiracy theories, too. Even the theory of the blank slate, at least in its most extreme form, is a phantom ideology. Lewontin adheres to it, but Lewontin is a contrarian; non-contrarian scientists do not publish books comparing modern biology departments to Medieval Christianity. Pinker likes to poke fun at theories that suggest everyone or almost everyone can succeed in life, but he never gets around to actually refuting them. All he does is attack extreme caricatures such as the blank slate and other phantom theories.

Shia and Sunni, A Ludicrously Short Primer

Even now, many people who hear these terms daily on the news are confused about what the real differences are between Sunni and Shia Muslims, so I, having been brought up in a very devout Shia household in Pakistan, thought I would explain these things, at least in rough terms. Here goes:

It all started hours after Mohammad’s death: while his son-in-law (and first cousin) Ali was attending to Mohammad’s burial, others were holding a little election to see who should succeed Mohammad as the chief of what was by now an Islamic state. (Remember that by the end of his life, Mohammad was not only a religious leader, but the head-of-state of a significant polity.) The person soon elected to the position of caliph, or head-of-state, was an old companion of the prophet’s named Abu Bakr. This was a controversial choice, as many felt that Mohammad had clearly indicated Ali as his successor, and after Abu Bakr took power, these people had no choice but to say that while he may have become the temporal leader of the young Islamic state, they did not recognize him as their divinely guided religious leader. Instead, Ali remained their spiritual leader, and these were the ones who would eventually come to be known as the Shia. The ones who elected Abu Bakr would come to be known as Sunni.

This is the Shia/Sunni split which endures to this day, based on this early disagreement. Below I will say a little more about the Shia.

So early on in Islam, there was a split between political power and religious leadership, and to make a long story admittedly far too short, this soon came to a head within a generation when the grandson of one of the greatest of Mohammad’s enemies (Abu Sufian) from his early days in Mecca, Yazid, took power in the still nascent Islamic government. Yazid was really something like a cross between Nero and Hitler and Stalin; just bad, bad in every way: a decadent, repressive dictator (and one who flouted all Islamic injunctions), for whom it became very important to obtain the public allegiance of Husain, the pious and respected son of Ali (and so, grandson of Mohammad). And this Husain refused, on principle.

Yazid said he would kill Husain. Husain said that was okay. Yazid said he would kill all of Husain’s family. Husain said he could not compromise his principles, no matter what the price. Yazid’s army of tens of thousands then surrounded Husain and a small band of his family, friends and followers at a place called Kerbala (in present day Iraq), and cut off their water on the 7th of the Islamic month of Moharram. For three days, Husain and his family had no water. At dawn on the third day, the 10th of Moharram, Husain told all in his party that they were sure to be killed and whoever wanted to leave was free to do so. No one left. In fact, several heroic souls left Yazid’s camp to come and join the group that was certain to be slaughtered.

On the 10th of Moharram, a day now known throughout the Islamic world as Ashura, the members of Husain’s parched party came out one by one to do battle, as was the custom at the time. They were valiant, but hopelessly outnumbered, and each was killed in turn. All of Husain’s family was massacred in front of his eyes, even his six-month-old son, Ali Asghar, who was pierced through the throat by an arrow from the renowned archer of Yazid’s army, Hurmula. After Husain’s teenage son Ali Akbar was killed, he is said to have proclaimed, “Now my back is broken.” But the last to die before him was his beloved brother, Abbas, killed while trying desperately to break through Yazid’s ranks and bring water back from the Euphrates for Husain’s young daughter, Sakeena. And then Husain himself was killed.

The followers of Ali (the Shia) said to themselves that they would never allow this horrific event to be forgotten, and that they would mourn Husain and his family’s murder forever, and for the last thirteen hundred years, they have lived up to this promise every year. This mourning has given rise to ritualistic displays of grief, which include flagellating oneself with one’s hands, with chains, with knives, etc. It can all seem quite strange, out of context, but remembrance of that terrible day at Kerbala has also given rise to some of the most sublime poetry ever written (a whole genre in Urdu, called Marsia, is devoted to evoking the events of Ashura), and some of us, religious or not, still draw inspiration from the principled bravery and sacrifice of Husain on that black day.

Earlier today, I took the following unlikely pictures on the ritziest road in New York City, Park Avenue:

[Image: Procession_1]

This is the procession commemorating Ashura, or the 10th of Moharram. In front, you can see a painstakingly recreated model of the tomb of Husain. The mourners are dressed mostly in black. It is a testament to the tolerance of American society that despite the best attempts of some of its cleverest citizens to proclaim a “clash of civilizations,” it allows (and observes with curiosity) such displays of foreign sentiment.

[Image: Sea_of_heads_on_park_ave]

The procession is made up of Shias of various nationalities, with the largest contingents being from Pakistan and Iran.

[Image: Punk_with_alam]

A young Shia holds up a banner, perhaps forgetting for a second that he is supposed to be mourning.

[Image: Morgan_and_coffin]

You can see one of the coffins with roses on it, which are ritualistically carried in the procession.

[Image: Hands_up_1]

The self-flagellation is in full swing at this point. (The arms are raised before coming down to beat the chest.)

[Image: Zuljana]

This is “Zuljana” or Husain’s horse, caparisoned with silks and flowers.

[Image: Blurred_matam]

The self-flagellation, or matam, reaches a climactic frenzy before ending for Asr prayers. Later in the evening, there are gatherings (or majaalis) to remember the women and children of Husain’s family who survived to be held as prisoners of Yazid.

Sojourns: Two Views of the Apocalypse

Slavoj Zizek once said “it is much easier for us to imagine the end of the world than a small change in the political system. Life on earth maybe will end but somehow capitalism will go on.” One is tempted to respond, well yes of course. It is also easier to imagine blowing up a car than designing one. Destruction is a rather simple proposition. Feats of engineering are somewhat more complicated.

And yet there is something to the apocalyptic imagination. Thinking about the end of the world can perhaps tell us something about the world that is ostensibly ending. Or so it would seem from two of the more visually arresting films to appear in the last decade, both ruminating over our final days, both set, as it happens, in England. I refer here to everyone’s favorite intellectual zombie flick 28 Days Later and the more recent dystopian thriller Children of Men.

The first thing I would point to is that it is not the “world” that is ending in these movies so much as the human race that has lorded over it for the past eon or so. It is part of our species’ arrogance to identify the world with humanity and then to wonder if our destruction would be anything other than a good thing for the rest of “life on earth.” So then let us be clear. What we are talking about here is not exactly the globe or the planet but simply the noisome breed of animals bent on mucking it up for everyone else.

Humans. We are tiresome, aren’t we? Few could deny the beauty of the depopulated London with which 28 Days Later begins: the seraphic Cillian Murphy ambling about Oxford Circus, picking detritus off the ground, alone save for the pigeons and the gulls. Humanity has perished because the “rage virus” has been loosed from a lab and made us tear each other limb from limb. We don’t die from the virus itself. It’s the rage that kills us. And so we ought to wonder how much the virus adds to our native cruelty and rancor. Perhaps Cornelius had it right after all: “Beware the beast Man, for he is the devil’s pawn. Alone among God’s primates he kills for lust or sport or greed … Let him not breed in great numbers, for he will make a desert of his home and yours.”

Actually, the conclusion (or at least the original one) of 28 Days Later is nowhere near as radical. It turns out the virus never got out of the country. Humanity is spared. The hero, his girlfriend, and an orphaned kid make an ersatz domestic hearth in the English countryside, all warm in their sweaters and waiting to be rescued. Rage may be conquered after all. Perhaps we can all just get along.

Humanity (nearly) perishes by anger in 28 Days Later. Sadness dooms us in Children of Men. Seventeen years after a global infertility crisis has brought a stop to human reproduction across the planet, “life” has pretty much ground to a halt. There’s no future generation in sight, so nations plunge into despair. War, chaos, and social entropy ensue. The sound of children’s voices is dearly missed.

Children of Men is a movie at odds with itself. At its core, the story is a saccharine humanist fable of a culture of life fighting to persist amid a culture of death. A baby springs miraculously into the fallen world and suddenly there is a future to save, as if one could only live for the sake of progeny, as if a world without humans would not be left well enough alone. Amid the rubble and squalor of the end of the world, the life-or-death struggle turns to getting the baby offshore to a group of save-the-planet scientists aptly dubbed (giving the game away) … the Human Project.

Yet, as much as the movie is committed at the level of story to a bland humanism, it is equally committed at the level of form to something quite different, to making us wonder, within the terms of the narrative, whether the human species ought not to become extinct after all. A great deal of attention has been paid to the six-minute unbroken shot in a battle-strewn internment camp. As with 28 Days Later, humanity’s end makes quite a spectacle. I would point also to an earlier scene at an abandoned and dilapidated schoolyard. Here we are supposed to be thinking about the despair left in the absence of children. But the camera does something else. We freeze on a deer that strides into the frame and occupies the place of the missing kids. It’s an arresting moment precisely because of the species difference. A non-human animal walks on the ruins of a civilization made for human children. And perhaps that is just fine.

As with 28 Days Later, humanity ends and begins again in England and is best imagined wrapped up in a cable-knit sweater while drinking Earl Grey tea (a role brilliantly played here by Michael Caine). Yet Children of Men makes the saving of humanity look and feel like it is beside the point and a waste of time. And that is why it is most interesting in spite of its own worst ideas.

So, perhaps the lesson is that thinking about the end of the world is in fact thinking about making it a better place.

No Reservations, Asad Raza-Style

Recently, my wife and I have been avid watchers of chef Anthony Bourdain’s program No Reservations on the Travel Channel (get cable, will you? And then get TiVo, too–trust me), and as I see Tony visit exotic locales and sample their various culinary offerings, I always wonder why he never replied to the late-nite letter that I once wrote him inviting him to dinner at my house, even promising to get my nephew Asad Raza to cook the incomparably zesty-yet-subtle, and completely sui generis, Pakistani dish, Nihari, for him. Now, let me tell you, Asad cooks a mean Nihari, but even the NM (Nihari Master) must go to the source for inspiration and instruction once in a while, and Asad not only went to Burns Road in Karachi (read about some of his other activities while he was there, here), he recorded his visit on video for the rest of us. So, Tony, either go to Pakistan, or come over to my place for some of Asad’s Nihari, and meanwhile, watch this video which made my mouth water (and my heart ache):

The Decline and Fall of Public Festivals

In The Nation, Terry Eagleton reviews Barbara Ehrenreich’s Dancing in the Streets.

Western liberals who are besotted with the Other should read E.M. Forster’s mischievous little novel Where Angels Fear to Tread. The well-bred young English heroine of this tale runs off with a rather roughneck young Italian, to the horror of her priggish, xenophobic, stiff-necked family. Yet just as the reader is relishing the family’s discomfort, an equally discomforting realization begins to dawn. The young Italian turns out to be an appalling brute. The parochially minded prigs were right after all.

Barbara Ehrenreich’s Dancing in the Streets refuses to fall for the romance of the Other, though its subject–popular festivity versus puritanical order–might well have tempted her to. What we have instead is an admirably lucid, level-headed history of outbreaks of collective joy from Dionysus to the Grateful Dead. It is a book that investigates orgy but declines quite properly to join in. For one thing, it recognizes in its impressively unromantic way that most carnivalesque activity over the centuries has been planned rather than spontaneous, rather as rock concerts are today. For another thing, unlike the more dewy-eyed apostles of dancing in the streets, it recognizes that popular carnival has a darker, violent dimension. In wisely agnostic manner, Ehrenreich refuses to take sides in the debate about whether carnival is a licensed displacement of popular energies (“There is no slander in an allowed fool,” remarks Olivia in Shakespeare’s Twelfth Night), or whether it is a case of the plebeians rehearsing the uprising.

An Online Conference on Danto’s The Transfiguration of the Commonplace

(Via Virtual Philosopher) an online conference on Danto's The Transfiguration of the Commonplace, 25 years later. The schedule can be found here. From Richard Shusterman's keynote:

I first encountered Arthur Danto’s philosophy as an undergraduate in Jerusalem in the early 1970s, in a course on analytic aesthetics, where we also studied the texts of Monroe Beardsley, Nelson Goodman, Richard Wollheim, George Dickie, and Joseph Margolis. Each of these philosophers has a distinctive voice, and it was not Danto’s but Nelson Goodman’s that initially won my heart and inspired my philosophical ambitions. So inseparable was his red-covered Languages of Art from my person that friends jokingly described it, with reference to Chairman Mao’s current eminence, as my little red book of cultural revolution. Goodman’s austerely uncompromising nominalism, his lean, hard-fisted logical style, his confident, even arrogant tone of conviction all appealed to me as a young Israeli shaped by that culture’s military virtues. The infatuation did not survive my doctoral studies in Oxford, and my unqualified zeal for analytic philosophy did not survive my encounter with pragmatism in the early 1990s. Now, after more than thirty years of engagement with analytic aesthetics (both from the inside and from the critical perspective of the pragmatist aesthetics I advocate), I regard Danto as having its most alluringly potent oeuvre. This paper is, in part, an effort to explain why.

Several factors contribute to Danto’s greatness and collectively conspire to take him beyond those other prominent analytic aestheticians of his generation whose conceptual and argumentative skills seem every bit as impressive and who are likewise capable of systematic philosophy. First is his lovingly intimate engagement with the visual arts, though this is something that Wollheim and Goodman certainly shared. Another factor is Danto’s superior literary style – artfully belle-lettrist but never artificial, colorful and free-flowing without sacrificing logical form, bold but not bullying in its argumentation, imaginative but not eccentric, sophisticated and complex yet easy to follow, professional but not pedantic, precise enough to satisfy the philosophical expert but sufficiently flexible and broadly comprehensible to convey its message to any intellectual interested in the arts. There is also the vibrant passion that pervades Danto’s aesthetic imagination, a passion as richly inflected with the erotic as the philosophical, fusing his sensuous and intellectual perceptions to make his arguments intriguingly compelling even when their logical architecture seems slim and shadowy in pure conceptual terms.

An Interview With Simon Blackburn

(Via Political Theory Daily Review) the Virtual Philosopher interviews Simon Blackburn:

Nigel: Since returning to England from the States, taking up a professorial chair in Cambridge, you have been prolific as a writer of popular philosophy books: Think, Being Good, Lust, and most recently Truth. Is there a particular reason for this apparent change of direction?

Simon: Actually Think was published a couple of years before I left the States, and Being Good was finished before I did so. So if there was a change in direction, I suppose it was while I was in the States. It is a change of emphasis, I think, more than a change of direction. I have always had a democratic streak: one of my earliest books, published in 1984, was supposed to be an introduction to the philosophy of language (Spreading The Word). But I also like to blend some of my own attempts at philosophy into supposedly introductory books, so for instance that book was quite influential in its moral theory, and to some extent in what it said about other things such as rule-following and truth. I try to keep on publishing “professional” papers while I also produce books like these.

Nigel: David Hume in his essay writing saw himself as an ambassador from the ‘dominions of learning to those of conversation’. Is that a position that you now identify with? Who are you writing for? Do you know who reads your books?

Simon: As so often, Hume puts it better than I could myself. Yes, that’s an admirable description of a position I identify with. I think professional philosophy can be very odd, very self-contained and narcissistic and quite out of touch with more general cultural currents. Its writings, as Bernard Williams memorably put it, can often resemble “scientific reports badly translated from the Martian”. I think good philosophy always has had to take some nourishment from surrounding politics, moral concerns, and science. It may be harder to identify what it returns, but in my books I try at least to exhibit something it can give back.

How Did The Answer To “What to Eat?” Get So Complicated?

Michael Pollan in the NYT Magazine:

Eat food. Not too much. Mostly plants.

That, more or less, is the short answer to the supposedly incredibly complicated and confusing question of what we humans should eat in order to be maximally healthy. I hate to give away the game right here at the beginning of a long essay, and I confess that I’m tempted to complicate matters in the interest of keeping things going for a few thousand more words. I’ll try to resist but will go ahead and add a couple more details to flesh out the advice. Like: A little meat won’t kill you, though it’s better approached as a side dish than as a main. And you’re much better off eating whole fresh foods than processed food products. That’s what I mean by the recommendation to eat “food.” Once, food was all you could eat, but today there are lots of other edible foodlike substances in the supermarket. These novel products of food science often come in packages festooned with health claims, which brings me to a related rule of thumb: if you’re concerned about your health, you should probably avoid food products that make health claims. Why? Because a health claim on a food product is a good indication that it’s not really food, and food is what you want to eat.

Uh-oh. Things are suddenly sounding a little more complicated, aren’t they? Sorry. But that’s how it goes as soon as you try to get to the bottom of the whole vexing question of food and health. Before long, a dense cloud bank of confusion moves in. Sooner or later, everything solid you thought you knew about the links between diet and health gets blown away in the gust of the latest study.

Last winter came the news that a low-fat diet, long believed to protect against breast cancer, may do no such thing — this from the monumental, federally financed Women’s Health Initiative, which has also found no link between a low-fat diet and rates of coronary disease. The year before we learned that dietary fiber might not, as we had been confidently told, help prevent colon cancer. Just last fall two prestigious studies on omega-3 fats published at the same time presented us with strikingly different conclusions. While the Institute of Medicine stated that “it is uncertain how much these omega-3s contribute to improving health” (and they might do the opposite if you get them from mercury-contaminated fish), a Harvard study declared that simply by eating a couple of servings of fish each week (or by downing enough fish oil), you could cut your risk of dying from a heart attack by more than a third — a stunningly hopeful piece of news. It’s no wonder that omega-3 fatty acids are poised to become the oat bran of 2007, as food scientists micro-encapsulate fish oil and algae oil and blast them into such formerly all-terrestrial foods as bread and tortillas, milk and yogurt and cheese, all of which will soon, you can be sure, sprout fishy new health claims. (Remember the rule?)

Scientists bridging the spirituality gap

From MSNBC:

Religion and science can combine to create some thorny questions: Does God exist outside the human mind, or is God a creation of our brains? Why do we have faith in things that we cannot prove, whether it's the afterlife or UFOs? The new Center for Spirituality and the Mind at the University of Pennsylvania is using brain imaging technology to examine such questions, and to investigate how spiritual and secular beliefs affect our health and behavior. The center, directed by Andrew Newberg, is not a bricks-and-mortar structure but a multidisciplinary team of Penn researchers exploring the relationship between the brain and spirituality from biological, psychological, social and ideological viewpoints.

How does the center test the relationship between the mind and spirituality? In one study, Newberg and colleagues used imaging technology to look at the brains of Pentecostal Christians speaking in tongues — known scientifically as glossolalia — then looked at their brains when they were singing gospel music. They found that those practicing glossolalia showed decreased activity in the brain’s language center, compared with the singing group.

More here.

Rare “Prehistoric” Shark Photographed Alive

From National Geographic:

Flaring the gills that give the species its name, a frilled shark swims at Japan's Awashima Marine Park on Sunday, January 21, 2007. Sightings of living frilled sharks are rare, because the fish generally remain thousands of feet beneath the water's surface. Spotted by a fisherman on January 21, this 5.3-foot (160-centimeter) shark was transferred to the marine park, where it was placed in a seawater pool. "We think it may have come to the surface because it was sick, or else it was weakened because it was in shallow waters," a park official told the Reuters news service. But the truth may never be known, since the "living fossil" died hours after it was caught.

More here.