Banality of Evil, the French Version

Maurice Papon, the Vichy bureaucrat whose trial for crimes against humanity provided one of the few instances in which the French examined their part in the Holocaust, is dead. In the Economist:

That summer he also received other orders. He was to round up a “sufficient number” of Jews and send them to a staging camp at Drancy, in northern France. And he was to make such convoys regular. This meant ordering arrests, arranging police escorts and organising express trains that would not stop at stations. He managed it with his usual competence. Between 1942 and 1944 1,690 Jews were shipped out of Bordeaux, including 223 children. Most ended up in Auschwitz.

Had he known they would? No, he insisted later, nor did he have any inkling of the Nazis’ broader plans. He had certain fears about Drancy. But people had to understand that he was not a free agent. There was a German imperium in force; Vichy was subject to it and he, after 1940, obedient to Vichy. With the coming of the Nazis numbers of civil servants had been sidelined or silenced, but he had a job to do, and “desertion was not in his ideology”. There was a duty to survive, to keep things running, to avoid gratuitous provocation that might make a bad case worse. In Bordeaux he resisted in his own way, he said: taking names off arrest-lists, tipping off families in advance, sheltering a rabbi in his house. Why, he even chartered the city trams to spare the very young or old the walk to the station, and booked passenger trains, not goods wagons, to make their journey comfortable.

These self-justifications came out at Mr Papon’s trial, one of only two trials of French officials who collaborated with the Nazis in their crimes against humanity. Hundreds more might have been charged, including all those who worked for him. But once the Vichy leaders had been executed for treason after the Liberation, a different imperative prevailed: to keep France united, to avoid recriminations and to draw a veil over the past. In this new version of history all Frenchmen had resisted, including those who were now intent on quietly protecting each other. In his mind Mr Papon, too, had spent the Occupation fighting.

On the Consequences of Carter’s Palestine

In The American Conservative, Philip Weiss looks at the political and cultural impact of Jimmy Carter’s Palestine: Peace not Apartheid:

The conventional wisdom seemed to be that Carter had damaged himself [by writing the book], and badly.

But the fury has masked a quieter trend — nodding support for the president’s views across the country. The book still ranks sixth on the New York Times bestseller list three months after publication, and Carter has taken on a moral halo among progressives and realists, the shotgun marriage of the Bush years. Film director Jonathan Demme, who mainstreamed gay rights with “Philadelphia,” is making a documentary on the book tour. “NBC Nightly News” featured the former president breaking down in tears on a panel at the Carter Center when relating a story of praying to God to give him strength before he confronted Anwar Sadat at Camp David in 1978, when Carter forged an historic peace accord between Israel and Egypt.

“I think the attacks in some ways have made the book more effective,” says Michael Brown, a fellow at the Palestine Center. “It’s extraordinary, but when people oppose a book or a movie, and make a big fuss out of it, most Americans will say, ‘I want to know what this is about.’”

Some of the fury hides an old-fashioned power struggle. For the first time since the State of Israel was created in 1948, a prominent American politician has publicly taken up the cause of the Arabs, describing Israel’s practices as oppressive. Such voices are common in Europe and in Israel itself. But they are uncommon here, where staunchly Zionist voices routinely assert that Israeli and American interests are identical, a view uniformly reflected in our politics and policies. The Carter groundswell seems to represent a real political threat to that claim. A recent batch of letters to the Houston Chronicle ran three-to-one in Carter’s favor. “Can’t Israel defend itself without subjecting all Palestinians in the occupied territories to such shameful conditions?” one asked. “Nothing justifies treating an entire group of people as if they were second-class human beings.”

A Review of The Coast of Utopia

Alexander Herzen’s My Past & Thoughts is probably my favorite autobiography. When friends of mine went to see Stoppard’s The Coast of Utopia, I was a bit curious about its portrayals of Herzen (as well as of Bakunin and others). Eric Alterman reviews the play in The Nation:

The significance of the Lincoln Center Theater’s production of Tom Stoppard’s three-part, nearly eight-hour The Coast of Utopia lies in its status as a cultural rather than a literary event. As a dramatic work the play, which follows the lives of a series of Russian intellectuals and would-be revolutionaries across Europe between 1833 and 1866, suffers from all kinds of insoluble problems. For starters, even if you’ve done all your homework–including the extra credit–it’s damn near impossible to remember who everybody is, what they thought and with whom they slept, and why it might matter seven hours (and possibly months) later. But as an occasion for serious political and philosophical argument in a culture bereft of both, Stoppard’s magnum opus is cause for celebration.

Utopia resists simple summary. It begins in the years following the crushing of the Decembrist revolt of 1825, as Stoppard’s young idealists muse about the backward nature of their nation and the beautiful future they would create if only they weren’t saddled with institutions like the czar, serfdom, censorship and the Third Section, the KGB’s pre-Revolution precursor. In doing so, they use and abuse the arguments of various German Romanticists, French proto-socialists and even the odd novelist. An enormous Ginger Cat, representing the dialectic of history passing from Hegel to Marx to Engels, has a walk-on, too.

Eventually, as the action moves from the splendor of the Bakunin family estate in Premukhino with its “500 souls” to Moscow to Paris to Rome to Nice to London and, finally, to Geneva, the arguments focus on the various disagreements between Michael Bakunin–known to most of us as one of the philosophical fathers of anarchism but who here spouts an extremely confused and romantic Hegelianism–and Alexander Herzen, who remains today the hero of Russian constitutional liberals and who ought to be a hero to liberals everywhere.

Desktop Fusion

In The New York Times:

A few small companies and maverick university laboratories, including this one at U.C.L.A. run by Seth Putterman, a professor of physics, are pursuing quixotic solutions for future energy, trying to tap the power of the Sun — hot nuclear fusion — in devices that fit on a tabletop.

Dr. Putterman’s approach is to use sound waves, called sonofusion or bubble fusion, to expand and collapse tiny bubbles, generating ultrahot temperatures. At temperatures hot enough, atoms can literally fuse and release even more energy than when they split in nuclear fission, now used in nuclear power plants and weapons. Furthermore, fusion is clean in that it does not produce long-lived nuclear waste.

Dr. Putterman has not achieved fusion in his experiments. He and other scientists form a small but devoted cadre interested in turning small-scale desktop fusion into usable systems. Although success is far away, the principles seem sound.

“Insider Luck”

From Harvard Magazine:

The compensation of top American corporate executives has soared during the past 15 years. Measured in 2005 dollars, the average annual compensation of the CEOs of the large companies in the Standard & Poor’s 500 almost tripled from 1992 to 2005, growing from $3.7 million to $10.5 million.

In this context, the opportunistic timing of executive stock-option grants, via backdating or otherwise, has attracted a great deal of news coverage, regulator attention, and public debate since the media first focused on it in the spring of 2006. The U.S. Senate’s banking and finance committees held hearings on the subject. More than 150 firms have thus far come under scrutiny, dozens of executives and directors have been forced to resign, and many companies have announced that they will have to revise their past financial statements.

But our understanding of option-grants manipulation remains incomplete. What circumstances and factors led to opportunistic timing of grants in some companies but not in others?

More here.

It Seems the Fertility Clock Ticks for Men, Too

From The New York Times:

When it comes to fertility and the prospect of having normal babies, it has always been assumed that men have no biological clock — that unlike women, they can have it all, at any age. But mounting evidence is raising questions about that assumption, suggesting that as men get older, they face an increased risk of fathering children with abnormalities. A number of studies suggest that male fertility may diminish with age.

It’s a touchy subject. “Advanced maternal age” is formally defined: women who are 35 or older when they deliver their baby may have “A.M.A.” stamped on their medical files to call attention to the higher risks they face. But the concept of “advanced paternal age” is murky. “If you look at males over 50 or 40, yes, there is a decline in the number of sperm being produced, and there may be a decline in the amount of testosterone,” Dr. Sokol said. But by and large, she added, “the sperm can still do their job.”

“The message to men is: ‘Wake up and smell the java,’” said Pamela Madsen, executive director of the American Fertility Association, a national education and advocacy group. “‘It’s not just about women anymore, it’s about you too.’”

More here.

A Case of the Mondays: Religion and Welfare

In most countries secularism is positively correlated with support for welfare, but does welfare make people more secular? Anthony Gill of the University of Washington says yes; in 2004, he and grad student Erik Lundsgaarde published a paper arguing that welfare provides a substitute for church attendance, making people less likely to attend church.

The full theory goes as follows: in the 19th century, the power of Christian churches came from their ability to provide social services such as charity, education, and health care. As the state started providing the same services without requiring or expecting church attendance, it became less economical for people to attend church, and less economical for church leaders to focus on welfare activities.

This theory has a lot of holes in it, but the study has some empirical backing. There’s a statistically significant relationship between a Christian country’s welfare spending as a percentage of GDP and the percentage of people in it who report attending church weekly, even when controlling for such variables as education and whether the country is Catholic or not. The weakness of the study comes not from its lack of data, but from flaws in how the variables are defined, failure to look for alternative explanations, and problems with individual case studies.

First, the study doesn’t explicitly say how welfare spending is measured. This is significant because, right off the bat, it fails to control for key factors. Most importantly, the most expensive part of the welfare state is social security, whose cost increases with the old-age dependency ratio. But more religious states have higher population growth rates, leading to younger demographics and lower social security costs.
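The confounding argument here can be made concrete with a toy simulation (hypothetical numbers, not the study’s actual data): suppose demographic youthfulness pushes welfare spending down (cheaper pensions) and religiosity up, with no direct link between the two. A naive regression of religiosity on welfare spending then finds a strong negative coefficient, which vanishes once the demographic control is added:

```python
import numpy as np

# Hypothetical illustration: youthfulness drives both variables,
# but welfare has no direct effect on religiosity at all.
rng = np.random.default_rng(0)
n = 2000
youth = rng.normal(0.0, 1.0, n)                       # demographic youthfulness
welfare = -0.8 * youth + rng.normal(0.0, 0.5, n)      # older => costlier pensions
religiosity = 0.9 * youth + rng.normal(0.0, 0.5, n)   # younger => more religious

def ols(y, *xs):
    """Least-squares fit of y on an intercept plus the given regressors."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols(religiosity, welfare)            # omits the demographic control
controlled = ols(religiosity, welfare, youth)  # includes it

print(f"welfare coefficient, no control:   {naive[1]:+.2f}")       # strongly negative
print(f"welfare coefficient, with control: {controlled[1]:+.2f}")  # near zero
```

The point of the sketch is only that a welfare–religiosity correlation is exactly what omitted demographics would produce on their own, which is why the study’s silence on how welfare spending is measured matters.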

It’s possible to get around that by looking at states that buck the trend and are both relatively religious and relatively old. The best case study here is Poland, which is simultaneously the most religious nation in Europe and one of the oldest. Additional examples include Spain, Portugal, and to some extent Italy. The only one of the four that appears in the scattergram plotting church attendance and welfare spending is Spain, which is considerably more religious than the regression line predicts.

In addition, even when one controls for old-age pensions, not all governments structure welfare spending the same way. The USA prefers targeted tax breaks, making its welfare system appear stingier than it actually is. Moreover, some benefits can be distributed either as welfare or as spending on health care and education, which the study doesn’t account for. A good example in the US would be free school lunches, a welfare service that adds to the education budget.

Second, the omission of education spending is crucial. A church often thrives by having its own set of parochial schools. The standard British joke about catechism is that religious education only secularizes people, though the more commonsensical effect is the opposite: greater availability of parochial schools makes the population more religious. Education spending is correlated with welfare spending via the mediating variable of economic liberalism or socialism. As such, Gill and Lundsgaarde commit a grave sin of omission by overlooking it.

Likewise, a more direct political mediating variable could account for much of the correlation. In a follow-up paper, Gill notes that the correlation between welfare and religiosity holds within US states, too. But within the US, both welfare and secularism fall under the rubric of liberal politics, contrasted with the welfare-busting and religiosity of conservative politics.

This in fact holds true in Europe and Latin America, which between them comprise all the countries in the study but two, the US and Australia. Throughout Europe and Latin America, even more so than in the US, there is a strong tradition of anti-clerical liberalism. It’s likely that Gill’s motivating example of Uruguay shows only that Uruguay has a long history of domination by the left-liberal Colorado Party.

Third, the main measure used for religiosity, reported church attendance, is deeply flawed. The USA’s real church attendance rate is half its reported rate. The church attendance variable tracks not how many people attend church, but how many would like pollsters to believe that they attend church. This variable has some value, but is overall less important than data based on actual church attendance.

The other figure used, the percentage of people who declare themselves nonreligious, is flawed as well. There are two dimensions to religious affiliation – one’s choice of religion, which tracks culture, and one’s position along the religious-secular spectrum. More religiously plural areas, especially those with strong connections between religion and culture, will have a lower percentage of people calling themselves nonreligious than less plural areas.

Fourth, many of the assertions in the study admit too many inexplicable case study exceptions. Ireland and the Philippines’ unusually high levels of religiosity are attributable to the role the Catholic Church played in pro-independence and anti-Marcos politics respectively; I presume Poland could be similarly explained away, were it in the study. But other exceptions require seriously modifying the theory.

For example, the study would predict an increase in American church attendance rates after the welfare reforms of the 1990s. Yet the American study finds only a slightly less significant correlation between welfare and religion in 1995; meanwhile, there was a measurable increase in church attendance in the two months following the 9/11 attacks.

For another example, the case study of Britain goes in almost the opposite direction from the one the study predicts. Britain hasn’t had a serious welfare system since Thatcher’s economic reforms. And yet, in the 1990s, religious belief crashed, and while children of secular parents always grew up to be secular, children of religious parents had only a 50% chance of growing up to be religious. Levels of belief crashed even among Muslims, on whom Britain in many respects forces a religious identity.

And fifth, there are alternative explanations that the study should look at but doesn’t. First, it’s legitimate to ask why support for welfare correlates so nicely with secularism in Western politics. It could be an ideological accident that modern liberalism is secular and pro-welfare and modern conservatism is religious and anti-welfare; after all, in turn-of-the-18th-century Britain, it was the Tories who were more supportive of extensive Poor Laws and the Whigs who favored a libertarian economic policy.

Or, equally well, it could be the realpolitik version of what the study is trying to say: welfare is a substitute for religion. As such, religious organizations are likely to ally themselves with political groups that oppose welfare. It holds to some extent for modern conservatives, though by no means for all. In 1900, the US populists were both pro-religion and pro-welfare, and would only embrace prosperity theology in the 1960s and 70s.

A good way of gauging such political explanations is seeing if the same trends hold for non-Western countries. Muslim organizations provide the same welfare Christian ones do; in fact, one of the main power sources of Islamist movements is their strong performance in disaster relief. Of course, Islamism has an entirely different dynamic to it – its main promise isn’t charity but change – but it’s useful to examine this dynamic and see how it can apply to the West. How relevant is the promise to change the morally uncertain status quo to the rise of American Dominionism?

I should stress that except perhaps for the problematic definitions of the variables, this study is not shoddy. A data set comparing religiosity and welfare is always useful. The study’s downfall is in using the data to confirm a theory that has no other evidence behind it. Although the study seems to satisfy the falsification criterion, in that Gill designed it so that it could have highlighted the theory’s failure, in fact it does not falsify the statement “welfare does not cause a decline in religiosity.” All it does is superficially confirm the statement that welfare does in fact cause religiosity to fall.

Of the many different angles the study could take, a direct effect of welfare on religiosity is one of the two most obvious, which is probably why Gill went with it. The other, that religious groups lobby against welfare, is more empirically plausible than the converse direction of causation, but does not fit well into Gill’s theory. But more indirect links, for example with education or political liberalism as a mediating variable, look far more fruitful. The study’s ultimate downfall is not so much that it is wrong as that it is woefully incomplete, concentrating on perhaps the least enlightening theory available.

Shrooming in Late Capitalism: The Way of the Truffle

On a winter’s night in Paris long ago, I ducked into the Grand Vefour – then a charmingly approachable temple of gastronomy, free of the rather strained merriment that signals too much money being spent – and, as one of seven guests of a rich man, sat down to a dinner that would leave me not as I was before. 

To my right was Diarmuid C.-J., an elderly esthete of some renown living among dusty art objects a stone’s throw from the restaurant.  He was well used to ordering without regard to the menu, and he did so this night.  While others were calling for appetizers, a fish course and an entrée, Diarmuid commanded a dish of eight lightly sautéed whole fresh truffles.  A little salt and pepper, a splash of cream whisked into the pan juices – that would suffice for his dinner. 

But, what were truffles?  Rare mushrooms, the man on my left quickly whispered to me. Rare, and black and growing underground.  They were the cost equivalent, I later determined, of ordering five or six personal lobsters while others in your party struggled with choices less pricey and less pure.  But cost was only part of the story.

Dinner began to arrive, the unexcitingly superb starter items of the era: delicate pike terrines, mussels steamed with shallots and Chablis.  Who isn’t happy with such?  But it all fell away when, in a footed, lidded Limoges dish, Diarmuid’s golf ball-sized truffles were borne to the table by a sly-looking servitor who uncovered them and swanned off.  The others, including our imperturbable host, smiled faintly but intently, like Etruscans at bull games.  They were in the know.  Silently, I sniffed the truffle aroma, nothing if not a decisive fragrance, but I lacked the right referent. The grassiness of the cream — cream had never smelled so grassy — called up woods and moon and dew.  The odor I might later describe as “earthy” and “musky” and many other things to do with cheese was then but deeply portentous.  An agreeable fright overtook me: it was Pan, I understood – it was Pan!  Beneath the cool weight of napery, my knees knocked slightly.  I shot Diarmuid a meaning glance, all but nudged him as he plied his knife and fork, and opened my mouth to receive a truffle. For was I not still a baby bird, the whole world’s pleasure to feed me? The saurian flicker of his cold pale eye should have warned me to desist, but it did not. 

And so, my first truffle. Tuber melanosporum, unearthed not a day earlier by a caveur who knew a secret place in the oak groves of Perigord, who had gone out after nightfall with his muzzled, truffle-ardent sow or his keenest bitch – for the female of the species is by far the better finder – and, kneeling where the unerring animal pressed its snout among the roots and panted and grunted and stamped, had angled his small trowel into the soil and sifted his way down to the prize.  My prize.  Oh, I could wish it had been fed me by an unbegrudging man, but that might only have crowded the sensation.

Not a sensation that I particularly had words for, either, looking back on the almost convent-bred purity of my food vocabulary that year.  Best just to liken it to the entrance into the room, naked, of that person whom you know will make all the difference.  Time passed — I’m not sure how much — and as I licked my lips and refocused on the table I saw that people — all but one — were smiling those faint, intent smiles not at the truffles but at me. 

Having been admitted, in any case, to the 4,000-year-old company of those who know the truffle firsthand, I was hardly astonished when, a few years later, a Parisian banker, discovering that his cook had served his only truffle to two of her friends, made television news by shooting her. The investigating magistrate refused to bring the banker to trial for what was “obviously a crime of passion, completely understandable and completely forgivable.”

Yes, I understood. And if, wedged among his dusty curios, Diarmuid caught the news and untenderly remembered me, then I spared a thought for him too.

It Started with Desert Truffles in the Axial Age

The Pharaoh Khufu, builder of the Great Pyramid, is one of the first truffle eaters whom history names, although truffles were prized still earlier in the palaces of ancient Mesopotamia, where their remains have been found in special baskets.  The Egyptians inventoried their edibles, making papyrus records of who ate them, but the Sumerians left recipes.  The truffles beloved of Khufu and the Sumerians, well known both to the writers of the Mishna and the Hadiths, and greedily imported by the Greeks and Romans, are not the same as T. melanosporum, however, but desert truffles, of the Terfezia and Tirmania genera, comprising about 30 varieties.  And, although they are in flavor terms if not in pedigree far humbler cousins, any consideration of the truffle must begin with them.

Terfezia taste nutty and delicate, with flesh that is white or creamy or even rosy in color, and they need cooking – either simmering in milk and honey or roasting in the embers of a fire. While T. melanosporum imparts unmistakable flavor to other foods, the mild Terfezia will take on the flavor of whatever it is cooked with. It can also be ground into flour for poultices, its cooking juices saved as a treatment for eye infections. In the Tirmidhi Hadith, No. 1127, Mohammed recommends the latter use. There is even an intriguing etymological case that the self-replenishing manna from heaven sustaining the Israelites in the Book of Exodus was in fact Tirmania nivea, the aristocrat of desert truffles.

Among nomadic peoples, folklore about the truffle abounds — it is a highly nutritious “found food” for which relish, gratitude and even awe are well demonstrated.  Singing to the truffles, Bedouin girls forage at dawn, when the first rays of light create telltale shadows on the still damp sand, and the truffles swell not far below the surface. Bedouins claim that truffles will grow where lightning strikes, appearing without seed or root, loosened from their beds by thunder. These beliefs go back thousands of years, at least as far back as Theophrastus, the favored pupil of Aristotle and father of taxonomy, who described truffles in the 3rd Century B.C.E. as “a natural phenomenon of great complexity, one of the strangest plants, without root, stem, fibre, branch, bud, leaf or flower.” Three hundred years later Pliny the Elder wrote that “among the most wonderful of all things is that anything can spring up and live without a root. These are called truffles.” The Babylonian Talmud, compiled in Iraq in the 5th Century C.E., records the rabbis concluding after discussion that truffles “emerge as they are in one night, wide and round like rounded cakes.”

In the desert as elsewhere, outlandish explanations for tuber growth have stubbornly attached to the truffle. But the necessary reciprocal relationship between truffle and host obtains in the desert as in the forest. Shrubs of the Helianthemum genus – relatives of the North American rock rose – can be a tip-off to desert truffle presence, for Terfezia and Helianthemum are symbionts.  Filaments of the truffle penetrate the roots of the shrub, obtaining nourishment from it, in turn producing a substance that inhibits the growth of competing plants. In the absence of Helianthemum, the desert truffle can make do with other shrubs.  It’s all a bit mysterious, as desert truffles grow in locations that are closely guarded secrets, and they utterly resist cultivation.

Usually no more than a few centimeters across but occasionally the size of a fist, desert truffles are found in the spring and sold in the souk, from North Africa to the Negev to easternmost Iraq.  A good truffle year depends on adequate rainfall in the autumn – about 8 to 10 inches.  In a middling year, desert truffles can cost about $100 a kilo, the price fluctuating wildly with supply. 

In the past few years, European interest in desert truffles has increased along with the size of Europe’s Middle Eastern population. Traditional European fanciers of T. melanosporum and its lordly white Italian counterpart, T. magnatum, are also looking to Africa and the Middle East for truffles, the supply of their most highly prized indigenous ones being egregiously threatened, down twentyfold from 100 years ago, rarer and pricier and more sought after with every passing season. A good time, in short, to take after the Romans and import Terfezia from Africa, thereby nabbing — it is surely hoped — some of that same old razzle-dazzle if not the peerless and shocking taste.

Food of the Devil, Fit Only for Saints and Popes

If one of the defining characteristics of Late Antiquity was its excessive devotion to banqueting, with the inclusion in banqueting protocol of emetics and special chambers – vomitoria – where diners would rid themselves of surfeit the better to take on still more surfeit, then with the Fall of Rome the elaborate truffle dishes of the era would go the way of the stewed cygnet’s tongues, leopard’s marrow cooked in goat’s milk, almond-fed geese, and conger eels fattened with live slave-meat fetishized by the later, briefer Roman emperors. The Middle Ages were dark indeed for the abused and maligned truffle, whether because, with the rise of Christian Europe the devil was presumed afoot in the kitchen as he never was in less sober times, or because food preparation to some end beyond sustenance – cuisine, that is — took centuries to regain sway after being made repulsive by decadence and impracticable by the breakdown of trade routes.

In these years, there occurred also a shift in the thinking about exactly what a truffle was, and where it came from.  It was the devil’s own food, and it was black.  Though occasionally it was white, tasting of honey and garlic, a Manichean reading of this difference would never obtain.  Any way you sliced it in the Dark Ages, a truffle was a degenerate thing, and it came not from Africa but from secret pockets of Europe.  T. melanosporum and T. magnatum had been found, and found to be potent aphrodisiacs, conferring unholy sexual prowess on their eaters.  And so they were banned from kitchens – most kitchens, that is.

Ambrose, the famously ascetic 4th Century Bishop of Milan who became after death a saint, received a gift of truffles from the Bishop of Trevi.  No one can say whether he ate them, but he certainly recorded his gratitude for them.  Pope Gregory IV, who reigned in the mid-9th Century, let it be known that he positively needed truffles “to strengthen him in the battle against the Saracens.” Around this time there was philosophical speculation as to whether the truffle was truly a plant.  Folk wisdom still held that it was a fusion of water, heat and lightning, but deeper thinkers asked whether it might not be some kind of animal.  One of the salient mysteries enshrouding all love foods began to pertain to the truffle — in particular, the question of how food that debauches the weak-willed and the sinful serves yet to fortify the strong-willed and the saintly, nourishing them towards victory in their fitting and strenuous tasks.

By the late 14th Century, however, the truffle had made a comeback from the demonic hypothesis.  Petrarch dedicated a sweet sonnet to it, and its ungodly reputation burned off like ground fog in the clear light of more rational times.

Back with Bells On, This Time for Women

During the Renaissance, the absence of truffles from the tables of the mighty would have been an inadmissible embarrassment, and their chefs were under relentless pressure to present them with ingenuity and élan. The custom of the truffle tribute arose.  In 1502, the nobles of the Marchigian region of Aquamagna made a gift of stupendous black truffles to Lucrezia Borgia, the daughter of Pope Alexander VI.  The redoubtable Lucrezia, for whose golden tresses long curly pastas were named, was very well pleased indeed, and lost no time incorporating the truffles into her beauty routine – history does not say exactly how.

But it was Catherine de Medici who outdid all other comers in securing the hold of the truffle on the European imagination.  A great-granddaughter of Lorenzo the Magnificent, Catherine was the child bride of the future Henry II of France.  Forsaking Florence for grim cold Paris could not have delighted the 13-year-old royal girl, and she brought with her cooks, and forks, and artichokes, and truffles and high heels, thinking to subdue the gaucheries of the French.  That would become the gayest achievement of Catherine, who for lack of love grew into a dour and grasping queen, not averse to poisoning her political rivals. By the time she died in 1589, however, the French court was used to the sight of ladies of high birth openly eating love foods such as artichokes and truffles. This was unexampled in the Florence of her distant youth, so full of gorgeous perks for men only.  It is worth remembering that until Catherine de Medici became Queen of France, aphrodisiacs were the prerogative of men, at least officially. The truffle tribute received by Lucrezia Borgia would not have been intended for her to eat – as perhaps she did not – but to serve to male guests to good effect.

A century and a half later, things had become ever so much more relaxed. Madame de Pompadour chatted freely with her maid about amatory matters.  Hoping to hold onto the affections of the king, Louis XV, she lived for days at a time on an aphrodisiac regime of vanilla and celery and truffles. “My dearest,” she confided to her maid, “the fact is I am very cold by nature.  I thought I might warm myself up, if I went on a diet to heat the blood, and now I’m taking this elixir which does seem to be doing me good.”

Sipping at truffle juice, Pompadour had no call to give the king heirs; when, one evening, she and Louis XV sat down to a dinner of truffled ram’s testicles, they were unbothered by thoughts of the succession.

Read more »

For some countries, America’s popular culture is resistible

Tyler Cowen in the International Herald Tribune:

An Indian Muslim might listen to religious Qawwali music to set himself apart from local Hindus, or a native of Calcutta might favor songs from Bengali cinema. The Indian music market is 96 percent domestic in origin, in part because India is such a large and multifaceted society. Omar Lizardo, an assistant professor of sociology at the University of Notre Dame, explains this logic in his recent paper “Globalization and Culture: A Sociological Perspective.”

Today, economic growth is booming in countries where American popular culture does not dominate, namely India and China. Population growth is strong in many Islamic countries, which typically prefer local music and get their news from sources like the satellite broadcaster Al-Jazeera.

The combination of these trends means that American entertainment, for largely economic reasons, will lose relative standing in the global marketplace. In fact, Western culture often creates its own rivals by bringing creative technologies like the recording studio or the printing press to foreign lands.

More here. [Photo shows legendary Pakistani qawwali singer Nusrat Fateh Ali Khan with his brother, Farrukh Fateh Ali Khan.]

Post-Putin

Steven Lee Myers in the New York Times Magazine:

Ivanov, who is 54, is a leading contender to become only the third elected president in Russia’s history, replacing Vladimir V. Putin, the steely, steady president who, according to the country’s adolescent Constitution, must step down early in 2008 after two full terms in office. At least he is presumed to be a contender, just as there is presumed to be an election, scheduled for March 2, 2008.

Ivanov has never expressed the desire to be president — neither in public nor, as far as anyone who knows will tell, in private. Neither has Dmitri A. Medvedev, the other first deputy prime minister and the other presumed-to-be-leading candidate. Nor have Valentina I. Matviyenko, the energetic governor of St. Petersburg; Vladimir I. Yakunin, another former K.G.B. agent who heads the state-owned Russian Railways; Sergei S. Sobyanin, a former governor and the president’s chief of staff; Dmitri N. Kozak, the presidential envoy to the turbulent Caucasus; Boris V. Gryzlov, the speaker of the lower house of Parliament; Sergei M. Mironov, the chairman of the upper house; or Sergei V. Chemizov, director of the state arms-marketing monopoly who served as an intelligence officer with Putin in East Germany.

More here.

Prisoner of Hollywood

Walter Kirn in the New York Times Book Review:

Why most Hollywood movies stink is a big question, but why we go on eagerly inhaling them is a bigger one. David Mamet thinks he knows the answer. In “Bambi vs. Godzilla,” a collection of tough-minded essays about the film business, the award-winning playwright turned screenwriter and director posits a “repressive mechanism” to account for our appetite for dramas that have ceased to be dramatic and entertainments that barely entertain. “The very vacuousness of these films is reassuring,” he writes, comparing them to the expensive weapons systems whose presence makes us feel secure in other ways. These filmed extravaganzas send the message that “you are a member of a country, a part of a system capable of wasting $200 million on an hour and a half of garbage. You must be somebody.”

More here.

Why I refused to blog for Edwards

“Before Amanda Marcotte’s short-lived tenure as blogger for the John Edwards campaign, I was offered the job. Here’s why I said no.”

3QD friend, and well known blogger, Lindsay Beyerstein in Salon:

“I’m probably not … the person you want,” I said, finally. “I mean, I’m on the record saying that abortion is good and that all drugs should be legalized, including heroin. Don’t you think that might be a little embarrassing for the campaign?”

Bob assured me that my controversial posts weren’t a problem as far as the campaign was concerned. They were familiar with my work. And Bob did seem to know my writing. I didn’t get the impression he was a daily reader, but it was obvious he had been reading the blog for a while.

“That’s you, that’s not John Edwards,” he said.

Bob was confident that people would understand the difference. I wasn’t so sure.

“So, it’s not a problem that I’m an outspoken atheist?” I asked.

Every blogger says controversial things from time to time, Bob assured me. He admitted that he’d drawn some fire for a tasteless joke on his own site a while back. It hadn’t been a big deal.

More here.

best of pulp

Vanity Fair editor Graydon Carter in Good:

The essential strength of a magazine is its ability to amplify. An idea, or an image, or a story, set within the pages of a magazine and assembled by the right hands, can become the grist of breakfast chatter, dinner-party conversation, or elective body debate around the world. Until recently, with the advent of USA Today and the national editions of The New York Times and The Wall Street Journal, newspapers were by and large local endeavors. Magazines were national, and as they became international, their power of amplification grew exponentially. A woman photographs a dam. Nothing noteworthy in this, except that the woman is Margaret Bourke-White and the structure is the Fort Peck Dam. A photograph from that shoot appears on the cover of the first issue of Life, and the dam becomes one of the best-known feats of human engineering in the world. That is amplification.

A magazine—like the smart, charming gazette you hold in your hands—is, even in this age of electronic everything everywhere, a marvelous invention. In America, Ben Franklin is credited with conceiving of the first such publication, in 1741. (It was called The General Magazine, and it began a trend that exists to this day—within six months it had closed its doors.) Another essential difference between newspapers and magazines is this: Newspapers tell you about the world; magazines tell you about their world—and by association, your world. Writers, photographers, editors, and designers bundle the slice of the world they have chosen to explore and deliver it to you in a singularly affordable, transportable, lendable, replaceable, disposable, recyclable package. You can buy a magazine almost anywhere. Publishers will even deliver it to your door, for less than the cost of going out into the hurried street to find and purchase one.

More here.

Thanks to Lauren Shaw.

The Revolutionary Struggle in Second Life

A month ago there was a riot in the virtual world of Second Life, specifically in front of the Second Life virtual offices of the proto-fascist Front National. The pictures tell the story.


Now there is a power struggle for control of the virtual world waged by the Second Life Liberation Army (SLLA). Its demands echo Rudolf Meidner’s plan for a wage earners’ fund that would buy out capital in Sweden and thereby socialize the economy. (Somewhere in a letter to Weydemeyer, Marx jokes that in England perhaps the workers could buy out the owners of capital. I don’t know if the joke was an inspiration.) The SLLA’s demands?

The establishment of basic ‘rights’ for Second Life Players. Having consulted widely we now believe the best vehicle for this is for Linden Labs to offer public shares in the company. We propose that each player is able to buy one share for a set price. This would serve both the development of the world and provide the beginnings of representation for avatars in Second Life.

The struggle for, er, a stock market people’s democracy includes virtual terrorist attacks. What this says about the way people view terrorist violence (much as 24 says something about the way pop culture views torture) is unsettling, though the 24 torture issue seems far more so. In Techtree.com:

Imagine a wildly popular virtual destination such as Second Life in the throes of a power struggle!

According to an AFP (Agence France Presse) report, the last six months or so have seen the rise and rise of a group which calls itself the Second Life Liberation Army (SLLA), and which aims to replace what it perceives as the rule of Linden Labs (creator of Second Life) with a government of, by, and for the four million-odd residents of Second Life.

Claiming to be nothing less than an ‘in-world military wing of a national liberation movement’, the SLLA has been busy setting off virtual atomic bomb explosions in Second Life.

The bombs explode in hazy white balls, blotting out portions of the screen, and more often than not blasting nearby avatars, which are essentially animated virtual world proxies of residents of Second Life.

Linden says the blasts are brief, and not serious enough to cause lasting damage in Second Life. Linden even views the bombings as a sort of ‘mock terrorism’ intended to spur debate on the power structure within Second Life.