Two recent events, the removal of an essay on the many tellings of the Indian epic the Ramayana from the curriculum of Delhi University and the firebombing of a French newspaper for printing a cartoon of the Prophet in an edition devoted to a satirical look at the Shariat, share a surface resemblance. They took place in India and Western Europe, two very different societies, but both of which take pride in a tradition of tolerance. While it is possible to read into the incidents a continuing religious intolerance for any examination of faith, it makes more sense to me to focus on the differences between the two events and what they say about the manner in which these two societies actually practice tolerance.
The essay removed from the curriculum at Delhi University was written by A.K. Ramanujan, at least in the Indian way of thinking a Hindu, drawing upon a long tradition in which the diversity within the faith is itself a source of tolerance. The opposition to this essay has come from the Hindu right, which is not a conservative but a radical force. It wants to historicize a tradition that is rooted in myth and storytelling. Uncomfortable with the elasticity of myth, it prefers the certainty it thinks history grants it. For the Hindu right, the figure of Rama, central to the epic, is not subject to the vagaries of storytelling and local lore; he is a historical figure with a kingdom and a birthplace.
This historicity is central to a version of Hinduism that goes by the name of hindutva and shores up the main opposition party in India, the Bharatiya Janata Party (BJP). Irrespective of its antecedents (it is a modern idea, born in the early twentieth century), it has come to command enough of a following to influence the norms that actually mediate tolerance in India. By tolerance, I do not mean just intellectual tolerance, which, however important, is only part of a wider idea. By tolerance I mean the wider idea that allows diverse ways of living to coexist in a society.
Reply to a message from Ricardo who wrote: How U b? . I b well enough
Work fairly regular— ’bout 4-5 hours a day at TSA, plus a couple of side jobs drawing lucky (to have work) chug chug
Keep my hand in the writing game: blogs, two local paper gigs shooting my mouth off at greedy vampire windmills sucking global blood working at finishing the downstairs room under the kitchen
Haven’t been writing poems though It comes it goes Breath comes till it won’t— interesting set of circumstances without comprehensible explanation mysterious as the sun shooting through the middle of nowhere lifting us on swells of gravity
—it rose again today! happy and bright shining on silvery frost bright and beautifuller than any precious metal a commodities speculator might hoard grass beneath more verdant and moist than the greenest lust of a banker air crisper than a fresh thousand dollar bill
as breathable as necessary as fine as sweet .. …’bout you?
In a large mausoleum on Tiananmen Square, Beijing, lies a crystal sarcophagus containing the mortal remains of Mao Zedong. Every day, masses of Chinese citizens line up on this largest of the world’s public squares to view and pay tribute to him. An immense, framed portrait of Mao gazes beatifically upon them from the high walls of the once Forbidden City, a palace fortress at the edge of the square. A few years ago, I too had arrived hoping for a glimpse of the man—the spectacle of Mao’s refrigerated body held for me nearly as much morbid fascination as my interest in his legacy and place in the Chinese imagination.
As it happened, the mausoleum was closed for renovation. Disappointed, I mused that perhaps the real reason for closing the mausoleum was to hide the evidence that Mao had been turning in his grave of late: watching China grind from feudalism to communism to capitalism in a mere half century cannot be good for his repose. If “communism” means a classless society with a centrally planned economy in which the state owns the primary means of production, then poor old Mao—as the man who fought for it, forged it, and upheld it for decades—became irrelevant long ago. And though the frozen Mao may still be revered, the pulse of China throbs now to a different beat.
For some years now, the zeitgeist in China has been closer to what Deng Xiaoping, a successor to Mao, neatly voiced in 1993: “To get rich is glorious.” And today it seems that the only thing still communist about China is the name of the party that continues to rule it with an iron hand; for while China’s communist leaders have embraced capitalism with an astonishing zeal, they have not allowed the free flow of ideas and information within China. Ordinary citizens are actively kept in the dark even about the 1989 protests in Tiananmen Square. As I stood on the square, among the crowds of locals and holiday makers, flying kites or striking “I was there!” poses in front of Mao’s portrait, it struck me that most of the people around me—and most Chinese nationals under the age of 35—do not even know about the event that transpired there.
Mikael Levin. Solar Cartography. Chinati Foundation Residency Project. Dec 2005.
“… Using two bullet holes in the Locker Plant's front windows as reference points, Levin used tautly suspended pieces of multicolored string to track the December daylight's procession through the room. The resulting sculpture acted as a sort of free-floating sundial or three-dimensional photogram. In a related work, on the Locker Plant's west wall Levin displayed a series of photographs taken from a fixed point in Chinati's Arena building charting the movement of sunlight through the space.”
This summer I had a crush on Edie Sedgwick. Recently, I tried to “be” both Edie and Andy Warhol for Halloween. It was easy, because he used to dress like her. The source of her ability to fascinate is hard to explain, even now that she's dead – and I imagine it was even harder for her devotees to explain back then. Over the summer, I read several books about Edie, all of which were half-dominated by glossy photos. In a short time, I developed the sort of crush good girls get on bad girls in junior high. It just seemed sort of fascinating and marvelous that a person could be almost nothing at all but will and whimsy, and could empty themselves of anything but surface. By this I mean to say that a young woman, very pained and twisted by the forces of childhood, could become a sort of Peter Pan and fly through New York as if there were no future.
She can hardly have been the first person to turn partying and the wearing of odd clothing into a primary form of expression, but she seems to have had a real gift (a curse-gift, of course, the kind of gift that kills you) for just that. A few writers attribute our whole fascination with androgyny (particularly, slim little boy-women) to Edie, which is maybe not the best legacy to have left the world, but from what I've read, I can't imagine she intended for generations to copy her. And while it's clear that her life in New York, and at the Factory, made her into a sort of combustible, dancing, fairy-machine that ran off of attention, I don't think that fame (at least, as we understand it now) can have held real attraction for her. I sometimes think I can understand what made her go, but have only been able to access that through poetry.
The gods of irony are smiling. I recently cited the TV show Jersey Shore as the closest thing to an insult I could fathom for myself, when comparing myself to Christians who regularly want things banned. Then, thanks to Jerry Coyne, I discovered that my old friend – my seriously old and now obviously senile friend – academia has cozied up to said show, in order to get them young folk interested in “bigger questions”.
Not so long ago, the University of Chicago held an academic conference on Jersey Shore, where the various sessions discussed important topics like “The Monetization of Being: Reputational Labor, Brand Culture, and Why Jersey Shore Does, and Does Not, Matter”, “The Construction of Guido Identity” and “Foucault’s Going to the Jersey Shore, Bitch!”. What are the merits of holding conferences on pop culture, where questions of metaphysics, ethics and “identity” (I still don’t understand that topic) are discussed? Anchoring these questions to pop-culture topics like Jersey Shore is like putting scented oils on a corpse, serving little purpose other than to keep our breakfasts down before we bury the whole mess and carry on with our actual lives.
Coyne certainly thinks it’s largely useless. He says: “(1) I’m not a huge fan of academic pop-culture studies, which seem shallow, too infested with postmodern obscurantism, and bad in that they replace more substantive material that can actually make students think deeply about things. (2) Pop-culture courses seem to me to be an easy way for professors to attract students by tapping into their t.v.-watching and music-listening habits.”
Now those are two distinct points. The first argues that pop-culture conferences are largely useless: a waste of time and resources, too indulgent of obscurantism, and a replacement of actual learning with the illusion of grappling with profound subjects because the titles invoke “big questions”. The second points out why such conferences exist at all and how professors can teach this with a straight face: it gets them students, and therefore maintains income, because more students will come to a course on Jersey Shore than to vanilla ones on Plato, etc. The second is a description and seems to me obviously true: it is one way to keep education alive, one way to secure oneself a regular job, and so on, by affixing your teaching to what your audience actually cares about.
In May, I went to the site of the Tunisian protests; in July, I talked to Spain’s indignados; from there, I went to meet the young Egyptian revolutionaries in Cairo’s Tahrir Square; and, a few weeks ago, I talked with Occupy Wall Street protesters in New York. There is a common theme, expressed by the OWS movement in a simple phrase: “We are the 99%.”
That slogan echoes the title of an article that I recently published, entitled “Of the 1%, by the 1%, for the 1%,” describing the enormous increase in inequality in the United States: 1% of the population controls more than 40% of the wealth and receives more than 20% of the income. And those in this rarefied stratum often are rewarded so richly not because they have contributed more to society – bonuses and bailouts neatly gutted that justification for inequality – but because they are, to put it bluntly, successful (and sometimes corrupt) rent-seekers.
This is not to deny that some of the 1% have contributed a great deal. Indeed, the social benefits of many real innovations (as opposed to the novel financial “products” that ended up unleashing havoc on the world economy) typically far exceed what their innovators receive.
But, around the world, political influence and anti-competitive practices (often sustained through politics) have been central to the increase in economic inequality. And tax systems in which a billionaire like Warren Buffett pays less tax (as a percentage of his income) than his secretary, or in which speculators, who helped to bring down the global economy, are taxed at lower rates than those who work for their income, have reinforced the trend.
Research in recent years has shown how important and ingrained notions of fairness are. Spain’s protesters, and those in other countries, are right to be indignant: here is a system in which the bankers got bailed out, while those whom they preyed upon have been left to fend for themselves.
Samuel McNerney, in a guest post at one of Scientific American's blogs:
Embodied cognition, the idea that the mind is not only connected to the body but that the body influences the mind, is one of the more counter-intuitive ideas in cognitive science. In sharp contrast is dualism, a theory of mind famously put forth by René Descartes in the 17th century when he claimed that “there is a great difference between mind and body, inasmuch as body is by nature always divisible, and the mind is entirely indivisible… the mind or soul of man is entirely different from the body.” In the succeeding centuries, the notion of the disembodied mind flourished. From it, western thought developed two basic ideas: that reason is disembodied because the mind is disembodied, and that reason is transcendent and universal. However, as George Lakoff and Rafael Núñez explain:
Cognitive science calls this entire philosophical worldview into serious question on empirical grounds… [the mind] arises from the nature of our brains, bodies, and bodily experiences. This is not just the innocuous and obvious claim that we need a body to reason; rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment… Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanism of neural binding.
What exactly does this mean? It means that our cognition isn’t confined to our cortices. That is, our cognition is influenced, perhaps even determined, by our experiences in the physical world. This is why we say that something is “over our heads” to express the idea that we do not understand: we are drawing upon the physical inability to see something above our heads and the accompanying mental feeling of uncertainty. Or why we associate warmth with affection: as infants and children, the subjective judgment of affection almost always corresponded with the sensation of warmth, thus giving way to metaphors such as “I’m warming up to her.”
Embodied cognition has a relatively short history. Its intellectual roots date back to early 20th century philosophers Martin Heidegger, Maurice Merleau-Ponty and John Dewey and it has only been studied empirically in the last few decades. One of the key figures to empirically study embodiment is University of California at Berkeley professor George Lakoff.
Lakoff was kind enough to field some questions during a recent phone conversation, in which I learned about his interesting history firsthand.
The simplest description of a black hole is a region of space-time from which nothing, not even light, escapes. The simplest description of consciousness is a mind that absorbs many things and attends to a few of them. Neither of these concepts can be captured quantitatively. Together they suggest the appealing possibility that endlessness surrounds us and infinity is within.
But our inability to grasp the immaterial means we’re stuck making inferences, free-associating, if we want any insight into the unknown. Which is why we talk obscurely and metaphorically about “pinning down” perception and “hunting for dark matter” (possibly a sort of primordial black hole). The existence of black holes was first hypothesized a decade after Einstein laid the theoretical groundwork for them in the theory of relativity, and the phrase “black hole” was not coined until 1968.
Likewise, consciousness is still such an elusive concept that, in spite of the recent invention of functional imaging – which has allowed scientists to visualize the different areas of the brain – we may not understand it any better now than we ever have before. “We approach [consciousness] now perhaps differently than we have in the past with our new tools,” says neuroscientist Joy Hirsch.
“The questions [we ask] have become a little bit more sophisticated and we’ve become more sophisticated in how we ask the question,” she adds – but we're still far from being able to explain how the regions of the brain interact to produce thought, dreams, and self-awareness. “In terms of understanding, the awareness that comes from binding remote activities of the brain together, still remains what philosophers call ‘the hard problem.’”
Dr. James Luther Adams, my ethics professor at Harvard Divinity School, told us that when we were his age (he was then close to 80) we would all be fighting the “Christian fascists.”
The warning, given to me 25 years ago, came at the moment Pat Robertson and other radio and televangelists began speaking about a new political religion that would direct its efforts at taking control of all institutions, including mainstream denominations and the government. Its stated goal was to use the United States to create a global, Christian empire. It was hard, at the time, to take such fantastic rhetoric seriously, especially given the buffoonish quality of those who expounded it. But Adams warned us against the blindness caused by intellectual snobbery. The Nazis, he said, were not going to return with swastikas and brown shirts. Their ideological inheritors had found a mask for fascism in the pages of the Bible.
He was not a man to use the word fascist lightly. He was in Germany in 1935 and 1936 and worked with the underground anti-Nazi church, known as The Confessing Church, led by Dietrich Bonhoeffer. Adams was eventually detained and interrogated by the Gestapo, who suggested he might want to consider returning to the United States. It was a suggestion he followed. He left on a night train with framed portraits of Adolf Hitler placed over the contents inside his suitcase to hide the rolls of home movie film he took of the so-called German Christian Church, which was pro-Nazi, and the few individuals who defied them, including the theologians Karl Barth and Albert Schweitzer.
When Tim Minchin – actor, comedian, confirmed atheist – decided to take his comedy to America's Bible belt, we were concerned he might be burnt at the stake. Here, he describes what happened next…
Tim Minchin in The Observer:
Whenever a friend or fan finds out I've started touring the States, there is an inevitable raising of the eyebrows (or eyebrow, if they are blessed with that most enviable of talents). There are two reasons behind such browular elevations, the first of which is born of comedy snobbery: Brits and Aussies are very fond of saying that Americans “don't get irony”. This is absurd; if anything, they don't get absurdity, which the Brits and the Irish probably “get” better than anyone else. Apart from that, I have observed a surprising consistency in what makes people laugh, notwithstanding geography-specific subject matter, which I avoid. (The only other cultural-comic quirks I have observed are that the English really like camp men making thinly veiled bum-sex double entendres, and Australians love swearing. We think it's fucking hilarious.)
The second thing that concerns people about me touring the US is that they fear my penchant for jaunty-but-vehement criticism of religion will at best result in empty auditoriums, and at worst get me shot. But the perception that the country is packed wall-to-wall with Christian fundies is as specious as the irony myth. There is no doubt that many Americans have what seems to be a near-erotic relationship with the two-millennium-dead Middle-Eastern Jewish magician-preacher we call Jesus. But there are frickin' loads of people in America, and even if the percentage of the population that is not religious is only 10% (it's a much greater number, surely), then there are still 33 million potential ticket-buyers.
MEASURING THE MINDS OF OTHER creatures is a perplexing problem. One yardstick scientists use is brain size, since humans have big brains. But size doesn’t always match smarts. As is well known in electronics, anything can be miniaturized. Small brain size was the evidence once used to argue that birds were stupid—before some birds were proven intelligent enough to compose music, invent dance steps, ask questions, and do math. Octopuses have the largest brains of any invertebrate. Athena’s is the size of a walnut—as big as the brain of the famous African gray parrot, Alex, who learned to use more than one hundred spoken words meaningfully. That’s proportionally bigger than the brains of most of the largest dinosaurs.
Another measure of intelligence: you can count neurons. The common octopus has about 130 million of them in its brain. A human has 100 billion. But this is where things get weird. Three-fifths of an octopus’s neurons are not in the brain; they’re in its arms. “It is as if each arm has a mind of its own,” says Peter Godfrey-Smith, a diver, professor of philosophy at the Graduate Center of the City University of New York, and an admirer of octopuses. For example, researchers who cut off an octopus’s arm (which the octopus can regrow) discovered that not only does the arm crawl away on its own, but if the arm meets a food item, it seizes it—and tries to pass it to where the mouth would be if the arm were still connected to its body.
Q. What do you get if you cross a Jane Austen novel with a crime thriller? A. The latest fiction from PD James: 'Death Comes to Pemberley'. Here the distinguished novelist explains why she decided to combine her two literary passions to produce a sequel which opens with a brutal murder at Pemberley.
A. Like many – probably most – novelists, I am happiest when plotting and planning or writing a new book, and the period in between, once the excitement of the publication is over, is usually spent considering what to write next. The prospect of becoming 90 was a time of important decision-making, since I had become increasingly aware that neither years nor creative energy last forever. After the publication of my latest Dalgliesh story, The Private Patient, in 2008, I decided that I could be self-indulgent and turn to an idea that had been in my mind for some time: to combine my two lifelong enthusiasms, namely for writing detective fiction and for the novels of Jane Austen, by setting my next book in Pemberley. My own feeling about sequels is ambivalent, largely because the greatest writing pleasure for me is in the creation of original characters, and I have never been tempted to take over another writer’s people or world, but I can well understand the attraction of continuing the story of Elizabeth and Darcy. Austen’s characters take such a hold on our imagination that the wish to know more of them is irresistible, and it is perhaps not surprising that there have been more than 70 sequels to Austen’s novels.
Pride and Prejudice, which was originally titled First Impressions, was written between October 1796 and August 1797. Austen’s father wrote to a London bookseller, Thomas Cadell, to ask if he had any interest in seeing the manuscript, but he declined by return of post. It was in 1811 and 1812 that Austen revised the novel, making it shorter, and it was published in 1813 under the title Pride and Prejudice. It is frustrating that the original manuscript has not been discovered, as it would have been fascinating to see which portions were excised and which retained and possibly extended. In Death Comes to Pemberley, I have chosen the earlier date of 1797 for the marriages of both Elizabeth and her older sister Jane, and the book begins in 1803 when Elizabeth and Darcy have been happily together for six years and are preparing for the annual autumn ball which will take place the next evening. With their guests, who include Jane and her husband Bingley, they have been enjoying an informal family dinner followed by music and are preparing to retire for the night when Darcy sees from the window a chaise being driven at speed down the road from the wild woodlands. When the galloping horses have been pulled to a standstill, Lydia Wickham, Elizabeth’s youngest sister, almost falls from the chaise, hysterically screaming that her husband has been murdered. Darcy organises a search party and, with the discovery of a blood-smeared corpse in the woodlands, the peace both of the Darcys and of Pemberley is shattered as the family becomes involved in a murder investigation.