david salle does Sistine


I’D SEEN THE SISTINE CHAPEL a couple of times and been awed—along with hundreds of others craning their necks—but I’d never really studied the paintings. After doing some reading on their iconography, I began to see images that had a metaphorical quality I thought I could deal with. The idea of the commission was not to repaint the ceiling, but to make some kind of contemporary reference to it. Together with Carlo, I picked the three themes of the Creation, the Flood, and the Last Judgment as being representative of the whole. The first painting I worked on represents the Creation. Rather than take the most famous image from that cycle, in which God touches Adam’s hand, I used the image of God as a purple-clad protean actor flying around, building, making stuff happen. This seemed the most compelling and straightforward image to use, because God was so identifiable and, in Michelangelo’s mind, linked to the idea of the artist-creator.

more from Artforum here.

a poem for warren zevon

I want you to tell me if, on Grammy night, you didn’t get one hell of a kick
out of all those bling-it-ons in their bullet-proof broughams,
all those line-managers who couldn’t manage a line of coke,

all those Barmecides offering beakers of barm –
if you didn’t get a kick out of being as incongruous
there as John Donne at a Junior Prom.

Two graves must hide, Warren, thine and mine corse
who, on the day we met, happened
also to meet an individual dragging a full-length cross

along 42nd Street and kept mum, each earning extra Brownie points
for letting that cup pass. The alcoholic
knows that to enter in these bonds

is to be free, yeah right.

the poem continues at the TLS here.

Inspiring Evolutionary Thought, and a New Title, by Turning Genetics Into Prose

From The New York Times:

Thirty years ago, a young biologist set out to explain some new ideas in evolutionary biology to a wider audience. But he ended up restating Darwinian theory in such a broad and forceful way that his book has influenced specialists as well. “Richard Dawkins: How a Scientist Changed the Way We Think” is a collection of essays about Dr. Dawkins’s book “The Selfish Gene” and its impact. Contributors to the book, edited by Alan Grafen and Matt Ridley, are mostly biologists but include the novelist Philip Pullman, author of “His Dark Materials,” and the bishop of Oxford, Richard Harries.

The biologists have copious praise for Dr. Dawkins’s work of synthesis, while the writers remark on his graceful and vivid style. It is quite surprising for anyone to be commended from such opposite quarters, but “The Selfish Gene,” published in 1976, was unusual. Written in clear and approachable language, it worked its way so logically into the core of Darwinian theory that even evolutionary biologists were seduced into embracing Dr. Dawkins’s view of their world.

Dr. Dawkins’s starting point was the idea that the gene, not the individual, is the basic unit on which natural selection acts. The gene’s behavior is most easily understood by assuming its interest is to get itself replicated as much as possible — hence the “selfish” gene of the title.

More here.

Scientists get inside look at viruses

From MSNBC:

Exactly 25 years ago, in the body of the world’s first diagnosed AIDS case, the full capabilities and mysterious workings of a virus unfolded. Three years later, in 1984, Luc Montagnier of the Pasteur Institute of Paris and Robert Gallo, then of the National Cancer Institute, announced their discovery of HIV, the virus that infects the human immune system and causes AIDS.

Even though the smallest viruses are only about one-millionth of an inch long, they live up to their Latin namesake — poison. They are capable of infecting and hijacking a human body, creating health hazards as minor as the common flu and as disastrous as the AIDS epidemic.

Viruses are neatly organized, petite packages of genetic material, shaped like rods, filaments, harpoons, or spheres. Proteins surround the package, which is called a capsid. Some viruses have an added layer of lipids that coat the capsid. Little extensions on the virus are called antigens, which help the virus hunt down the target host cell.

More here.

THE HORNINESS GENE

Maggie Wittlin in Seed Magazine:

Are you unhappy with your ability to function sexually? Do you lack interest in sex or find it difficult to become aroused? Are you unsatisfied with your orgasms? If so, you may be genetically predisposed to have a moderate to low sex drive.

Israeli researchers published a study online in the April 18th issue of Molecular Psychiatry suggesting a link between a dopamine receptor gene and human sexual desire, arousal and function. They conclude that one gene variant found in about 60% of the population may lead to a more subdued sex drive while another, found in about 30% of the population, contributes to higher sexual desire, arousal and function.

More here.

SEPTUAGENARIAN SEX

Virginia Ironside reviews Unaccompanied Women by Jane Juska, in the Literary Review:

As this is a book about a book, in order to get through this one, you need to have waded through the first one: Jane Juska’s A Round-Heeled Woman: My Late-Life Adventures in Sex and Romance. In this, the author recounted what happened after she’d placed an advertisement in the New York Review of Books which read: ‘Before I turn 67 – next March – I would like to have a lot of sex with a man I like. If you want to talk first, Trollope works for me.’ Billed as a strike for sexual freedom for the mature (actually very mature) woman, it came across as a tragic wail from someone who was young in the Fifties but who clearly wished she’d been young in the Sixties.

As a result of the ad, Jane managed to get quite a few orgasms under her belt but oh, what a price she had to pay! Eighty-two-year-old Jonah, for example, insisted she talked dirty the first night and, on the second, announced that he didn’t desire her – ‘Get yourself some KY jelly. You get dry before I can get in, and I can’t keep it up long enough for you to get wet,’ he said, brutally, before fleeing with the two champagne flutes that she’d brought to drink from, not to mention the trousers of her red silk jim-jams. Then she met Robert. He was a member of AA and already had a girlfriend, whom he rang repeatedly, in order to tell her he loved her. He had also started drinking again. The following lovers were equally, if not more, unappetising (one of them sucked boiled sweets when they had sex) and finally she bumped into the much younger Graham, whom she adored because he pompously uttered this smug and well-worn cliché, which it appears she had never heard before in her life: ‘The greatest pleasure for me in making love is giving the other person pleasure.’

In her latest book, Juska tells us what happened next.

More here.

In West Bank, a First Hint of Agriculture: Figs

John Noble Wilford in the New York Times:

In the ruins of a prehistoric village near Jericho, in the West Bank, scientists have found remains of figs that they say appear to be the earliest known cultivated fruit crop, perhaps the first evidence anywhere of domesticated food production at the dawn of agriculture. The figs were grown some 11,400 years ago.

Presumably that was well after Adam and Eve tried on the new look in fig leaves, in which case the fig must have grown wild in Eden.

Two botanists and an archaeologist, who describe the discovery in today’s issue of the journal Science, said the figs came from cultivated trees that grew about 1,000 years before such staples as wheat, barley and chickpeas were widely domesticated in the Middle East. These grain and legume crops had been considered the first steps in agriculture.

More here.

Below the Fold: Forget the Sheepskin, and Follow the Money, or Please Don’t Ask What a University is For…

Garbed in cap and gown and subjected, probably for the first time in their lives, to quaint Latin orations, three quarters of a million students, sheepskin in hand, will bound forth into the national economy, hungry for jobs, economic security, and social advancement. They exit a higher education economy that looks and works more and more like the national economy they now enter. The ivory tower has become the office block, and its professors highly paid workers in a $317 billion business.

Some of this is, of course, old news. From the Berkeley 1964 Free Speech movement onward, the corporate vision of American universities as factories of knowledge production and consumption bureaucratically organized as the late Clark Kerr’s “multiversities,” has been contested, but has largely come to pass.

But even to this insider (confession of interest: I am now completing my 20th year before the university masthead), there are new lows to which my world is sinking. They amount to the transformation of American universities into entrepreneurial firms, and in some cases, multinational corporations.

Most of you by now are used to the fact that universities are big business. The press never stops talking about the $26 billion Harvard endowment, or how the rest of the Ivy League and Stanford are scheming to be nipping at old John Harvard’s much-touched toes. But many non-elite schools are joining the race for big money and to become big businesses. Twenty-two universities now have billion dollar fund-raising campaigns underway. After talking with a colleague from the University of Iowa on another matter, I went to the university web page to discover that Iowa has raised over a billion dollars in a major campaign since 1999 – not bad when you recall that the state itself only has 3 million residents. Even my university, the City University of New York, the ur-urban ladder to social mobility for generations of immigrants and poor, has announced that it is embarking on a billion-dollar crusade.

In addition to billion-dollar endowments, there is revenue to consider. You might be surprised at all of the billion-dollar universities in neighborhoods near you. All it really takes to put a university over the billion-dollar revenue mark is a hospital. Iowa, for instance, is a half billion a year all-purpose education shop; add its medical school and hospital system, and its revenue quadruples. A big-enrollment urban school like Temple does a billion dollars of health care business in Philadelphia, easily surpassing its educational budget of $660 million. These university budgets often depend as much on the rates of Medicare and Medicaid reimbursement as they do on tuition from their various educational activities.

Tuitions are no small matter, of course, for those who pay them. The elite schools have recently crossed the $40,000 a year threshold, but the perhaps more important and less noticed change in higher education finances is that states are passing more of the burden for public college and university education directly onto the students themselves. The publics enroll three quarters of the nation’s students. As late as the 1980s, according to Katharine Lyall in a January 2006 article in Change, states paid about half of the cost of their education; now the proportion has dropped to 30%. For instance, only 18% of the University of Michigan’s bills are paid by the state; for the University of Virginia, state support drops to 8%. Baby-boomers on that six-year plan at “old state,” where they paid in the hundreds for their semester drinking and drug privileges, find themselves now paying an average tuition of $5,500 a year for their kids. When you add in room and board, a year at “old state” now costs an average of $15,500, a figure that is 35% of the median income for a U.S. family of four.

So under-funded are important state universities that they are resorting to tax-like surcharges to make up for chronic state neglect. The University of Illinois, for example, is adding an annual $500 “assessment” on student bills for what the university president Joseph White, as quoted by Dave Newbart in the April 7 Chicago Sun-Times, describes as deferred maintenance. “The roofs are leaking and the caulking is crumbling and the concrete is disintegrating,” President White says. Next year it will cost $17,650 to go to Champaign-Urbana. The state will cover only 25% of Illinois’ costs.

Illinois’ President White may be a bit old-school, and perhaps has lagged back of the pack of higher education industrial leaders. He should get smart. Instead of milking the kids on a per capita basis and incurring undying consumer wrath (after all, the plaster was cracked way before I got there, I can hear a student voice or two saying), White should join his peers in a little financial manipulation. What do big firms with billions in assets and large revenue flows do? They sell bonds! So much money, so little interest. And with principal due after a succession of presidents has become so many oil portraits in the board room, so little personal and professional exposure. With the increasingly short tenure of university presidents, even Groucho Marx’s Quincy Adams Wagstaff could get out in time.

American universities have made a very big bet on their future prosperity. They have issued over $33 billion in bonds, according to the May 18 Economist. For the multinationals like Harvard, this is sound money management. To raise working capital, rather than sell some of its stock portfolio at a less than optimal moment or sell the 265-acre arboretum near my house, which would diminish the university endowment, Harvard can use its assets as guarantees. The university’s credit is AAA, interest rates are still historically fairly low, and the bonds’ tax-exempt status makes them attractive investment choices. Harvard can deploy the money in new projects, or re-invest it in higher-yielding instruments and pocket the difference tax-free.

The entrepreneurial universities, that is, those not internationally branded and not elite, are trying to gain a competitive edge. They borrow through bonds to build dormitories and student unions, and to beautify their campuses. Many are borrowing money they don’t have or can’t easily repay. As the saying goes, they are becoming “highly leveraged.” A turn around a town with more than a few universities will likely reveal how it’s raining dorm rooms locally. Here in Boston, it has afflicted universities on both sides of the Charles. Even an avowedly commuter campus like the University of Massachusetts-Boston is building dorms to create that market-defined “campus” feel. Bonds pay for the dorms, and the students, through higher rents, pay them off.

The educative value of dorm living, smart remarks aside, is rather problematic. Talking with an old friend who heads an academic department at a Boston university, I have begun to understand, however, the business logic at work. His bosses have explained the situation thus: the last of the baby boomer progeny are passing through the system, and a trough lies behind them. The children of baby-boomers, alas, prefer the reproductive freedoms of their parents, and are having children late as well. International students, full-tuition payers and once the source of great profit, are increasingly choosing non-American universities, for a variety of reasons, some related to our closed-door policy after 9/11. Add income difficulties among the American middle class, and the entrepreneurial universities calculate that they must improve their marketability and take business from others. Expand market share, create new markets (new diplomas, new student populations), or fight to keep even, they reason. Or face decline, now perhaps even a bit steeper since they are in hock for millions of dollars in bond repayments. The “high yield” customer is the traditional customer, a late adolescent of parents with deepish pockets. So dorms, fitness gyms, and student unions it is, and the faculty is mum.

In the great expand-or-die moment occurring among America’s entrepreneurial universities, you would think faculty would be making out, but they aren’t. Let us set aside for another time comment on the elite schools’ highly limited American Idol-style star searches and the entrepreneurs’ somewhat desperate casting about for rainmakers and high-profile individuals who can help in creating a distinctive brand for their paymasters. College and university faculty salaries as a whole have stagnated since 1970, the U.S. Department of Education reports. Part of the reason is that although the number of faculty has risen 88% since 1975, the actual number of tenured faculty has increased by only 24%, and their proportion of the total has dropped from 37% in 1975 to 24% in 2003. Full-time non-tenure-track and part-time faculty are being used to meet increased demand. Universities are succeeding in gradually eliminating tenure as a condition of future faculty employment.

Forty-three years after Kerr presented his concept of the “multiversity,” the facts conform in many respects to his vision. American universities are massive producers of knowledge commanded by technocrats who guide their experts toward new domains of experiment and scientific discovery. They possess a virtual monopoly on postsecondary education, having adapted over the past half century to provide even the majority share of the nation’s technical and applied professional training.

But swimming with instead of against the stream of American capitalism over the past half century has cost American universities what few degrees of freedom they possessed. They have become captives of corporate capitalism and have adopted its business model. They are reducing faculty to itinerant instructors. Bloated with marketeers, fund-raisers, finance experts, and layers of customer service representatives, they are complicated and expensive to run, and risky to maintain when the demographic clock winds down or competition intensifies. Moreover, as Harry Lewis, a Harvard College dean pushed out by the outgoing President Larry Summers, put it rather archly in the May 27 Boston Globe, students whose parents are paying more than $40,000 a year “expect the university to treat them like customers, not like acolytes in some temple they are privileged to enter.”

As a priest in the temple, it hurts to note how much further down the road we have gone in reducing teaching and learning to a simple commodity. However, in demanding to be treated as customers, students and their parents are simply revealing the huckster we have put behind the veil. Their demands cannot change the course of American universities for the better, but they tell those of us still inside where we stand, and where we must begin anew our struggle.

Random Walks: Band of Brothers

While a large part of mainstream America was blissfully enjoying their long Memorial Day weekend, fans of the Ultimate Fighting Championship franchise were glued to their Pay-Per-View TV sets, watching the end of an era. In the pinnacle event of UFC-60, the reigning welterweight champion, Matt Hughes, faced off against UFC legend Royce Gracie — and won, by technical knockout, when the referee stopped the fight about 4 minutes and 30 seconds into the very first round.

To fully appreciate the significance of Hughes’ achievement, one must know a bit about the UFC’s 12-1/2-year history. The enterprise was founded in 1993 by Royce’s older brother, Rorion Gracie, as a means of proving the brutal effectiveness of his family’s signature style of jujitsu. The concept was simple, yet brilliant: invite fighters from every conceivable style of martial art to compete against each other in a full-contact, no-holds-barred martial arts tournament, with no weight classes, no time limits, and very few taboos. No biting, no fish-hooks to the nostrils or mouth, no eye gouging, and no throat strikes. Everything else was fair game, including groin strikes.

(Admittedly, the fighters tended to honor an unspoken “gentlemen’s agreement” not to make use of groin strikes. That’s why karate master Keith Hackney stirred up such a controversy in UFC-III when he broke that agreement in his match against sumo wrestler Emmanuel Yarbrough and repeatedly pounded on Yarbrough’s groin to escape a hold. I personally never had a problem with Hackney’s decision. He was seriously out-sized, and if you’re going to enter a no-holds-barred tournament, you should expect your opponent to be a little ruthless in a pinch. But the universe meted out its own form of justice: Hackney beat Yarbrough but broke his hand and had to drop out of the tournament.)

The first UFC was an eight-man, single-elimination tournament, with each man fighting three times — defeating each opponent while still remaining healthy enough to continue — to win the title. Since no state athletic commission would ever consider sanctioning such a brutal event, the UFC was semi-underground, finding its home in places like Denver, Colorado, which had very few regulations in place to monitor full-contact sports. Think Bloodsport, without the deaths, but plenty of blood and broken bones, and a generous sampling of testosterone-induced cheese. (Bikini-clad ring girls, anyone?)

Rorion chose his younger brother, Royce, to defend the family honor because Royce was tall and slim (6’1″, 180 pounds) and not very intimidating in demeanor. He didn’t look like a fighter, not in the least, and with no weight classes, frequently found himself paired against powerful opponents with bulging pecs and biceps who outweighed him by a good 50 pounds or more. And Royce kicked ass, time and again, winning three of the first four UFC events. (In UFC-III, he won his first match against the much-larger Kimo, but the injuries he sustained in the process were sufficient to force him to drop out of the tournament.)

He beat shootfighter Ken Shamrock (who later moved to the more lucrative pro-wrestling circuit) not once, but twice, despite his size disadvantage. His technique was just too damned good. Among other things, he knew how to maximize leverage so that he didn’t need to exert nearly as much force to defeat his opponents. Shamrock has said that Gracie might be lacking in strength, “but he’s very hard to get a hold of, and the way he moves his legs and arms, he always is in a position to sweep or go for a submission.”

UFC fans soon got used to the familiar sight of the pre-fight “Gracie Train”: When his name was announced, Royce would walk to the Octagon, accompanied by a long line of all his brothers, cousins, hell, probably a few uncles and distant cousins just for good measure, each with his hands on the shoulders of the man in front of him as a show of family solidarity and strength. And of course, looking on and beaming with pride, was his revered father, Helio Gracie (now 93), who founded the style as a young man — and then made sure he sired enough sons to carry on the dynasty.

Royce’s crowning achievement arguably occurred in 1994, when, in UFC-IV’s final match, he defeated champion wrestler Dan “The Beast” Severn. Many fight fans consider the fight among the greatest in sports history, and not just because Severn, at 6’2″ and 262 pounds, outweighed Royce by nearly 100 pounds. Technique-wise, the two men were very well-matched, and for over 20 minutes, Severn actually had Royce pinned on his shoulders against the mesh wall of the Octagon. Nobody expected Royce to get out of that predicament, but instead, he pulled off a completely unexpected triangle choke with his legs, forcing Severn to tap out.

For all his swaggering machismo, Royce was one of my heroes in those early days, mostly because I had just started training in a different style of jujitsu (strongly oriented toward self-defense), at a tiny storefront school in Brooklyn called Bay Ridge Dojo. True, it was a much more humble, amateur environment than the world of the UFC, but Royce gave me hope. I trained in a heavy-contact, predominantly male dojo, and at 5’7″ and 125 pounds, was frequently outsized by my classmates. My favorite quote by Royce: “I never worry about the size of a man, or his strength. You can’t pick your opponents. If you’re 180 pounds and a guy 250 pounds comes up to you on the street, you can’t tell him you need a weight class and a time limit. You have to defend yourself. If you know the technique, you can defend yourself against anyone, of any size.” And he proved it, time and again.

For smaller mere mortals like me, with less developed technique, size definitely mattered. The stark reality of this was burned into my memory the first time one of the guys kicked me so hard, he knocked me into the wall. Needless to say, there was a heavy physical toll: the occasional bloody nose, odd sprain, broken bone, a dislocated wrist, and a spectacular head injury resulting from a missed block that required 14 stitches. (I still proudly bear a faint, jagged two-inch scar across my forehead. And I never made that mistake again.) I didn’t let any of it faze me. I worked doggedly on improving my technique and hired a personal trainer, packing on an extra 30 pounds of muscle over the course of two years. Not very feminine, I admit: I looked like a beefier version of Xena, Warrior Princess. At least I could take the abuse a little better. In October 2000, I became only the second woman in my system’s history to earn a black belt.

I learned a lot over that seven-year journey. Most importantly, I learned that Royce was right: good technique can compensate for a size and strength disadvantage. It’s just that the greater the size differential, the better your technique has to be, because there is that much less margin for error. And if your opponent is equally skilled — well, that’s when the trouble can start, even for a champion like Royce.

After those early, spectacular victories, Royce faded from the UFC spotlight for a while, focusing his efforts on the burgeoning Gracie industry: there is now a Gracie jujitsu school in almost every major US city. He’d proved his point, repeatedly, and it’s always wise to quit while you’re at the top. But every now and then he’d re-emerge, just to prove he still had the chops to be a contender. As recently as December 2004, he defeated the 6’8″, 483-pound (!) Chad Rowan in two minutes, 13 seconds, with a simple wrist lock. (“Either submit, or have it broken,” he supposedly said. Rowan wisely submitted.)

The very fact of Royce’s success inevitably caused the sport to change. Fighters were forced to learn groundfighting skills. Back when the UFC was all about martial arts style versus style, many fighters in more traditional disciplines — karate, tae kwon do, kickboxing — had never really learned how to fight effectively on the ground. The moniker changed from No-Holds-Barred to Mixed Martial Arts — a far more accurate designation these days. Today, the UFC has time limits (with occasional restarts to please the fans, who get bored watching a lengthy stalemate between two world-class grapplers), and even more rules: no hair-pulling, and no breaking fingers and toes. The formula is commercially successful — UFC events typically garner Nielsen ratings on a par with NBA and NHL games on cable television — but these are not conditions that favor the Gracie style. Eventual defeat was practically inevitable.

And so it came to pass over Memorial Day weekend. The UFC torch has passed to Hughes. But Royce’s legacy is incontrovertible. He changed the face of the sport forever by dominating so completely that he forced everyone else to adapt to him. That’s why he was one of the first three fighters to be inducted into the UFC Hall of Fame (along with Shamrock and Severn). Royce Gracie will always be a legend.

When not taking random walks at 3 Quarks Daily, Jennifer Ouellette muses about physics and culture at her own blog, Cocktail Party Physics.

Talking Pints: 1896, 1932, 1980 and 2008–What Kenny Rogers Can Teach the Democrats

by Mark Blyth

“You got to know when to hold ‘em, know when to fold ‘em, know when to walk away, and know when to run.”

Kenny Rogers may seem an unlikely choice for the post of Democratic party strategist, but the advice of ‘the Gambler’ may in fact be the single best strategy that the Democrats can embrace when considering how, and who, to run in 2008. Although we are still a long way from the next US Presidential election, the wheels seem to have truly come off the Republicans’ electoral wagon. The ‘political capital’ Bush claimed after his reelection was used up in the failed attempt to privatize Social Security and in the continuing failure to stabilize Iraq. Sensing this, Congressional Republicans (and fellow travelers) increasingly distance themselves from Bush, claiming, in the manner of small furry passengers who have decided that the cruise was not to their liking after all, that the Bushies (and/or the Congressional Republicans) have betrayed the Reagan legacy, that Iraq was a really bad idea all along, and that when it’s all going to pot you might as well grab what you can in tax cuts for yourselves and head for the exits.

Such an uncharacteristic implosion from the usually well-oiled Republican machine might lead one to expect the Democrats to make real political inroads for the first time in years. Yet, as the line attributed to Abba Eban about the Palestinians goes, the Democrats “never miss an opportunity to miss an opportunity.” This lack of Democratic political bite, when seen against the backdrop of an already lame-duck second-term President, is remarkable. For example, leading Democrats cannot get a break. Joe Biden makes a ‘major’ policy speech on Iraq, and outside of the New York Times-reading ‘chattering classes’ it is roundly ignored. While some Democrats argue for a troop pull-out in Iraq, others in the same party urge ‘stay the course’, thereby ‘mixing the message’ ever further. Even populist rabble-rouser Howard Dean, now head of the Democratic National Committee, has all but disappeared from view.

Yet should we be surprised by this? Perhaps the Democrats are a party so used to offering ‘GOP-lite’ that they really have no independent identity. Just as Canada has no identity without reference to the USA (sorry Canada, but you know it’s true), so the Democrats have no identity without defining themselves against the GOP. But to be against something is not to be for anything. Given that the Republicans are clearly for something, the ‘fact-free politics’ of ‘freedom’, ‘prosperity’, ‘lower taxes’, ‘individual initiative’, and other feel-good buzzwords, the Democrats seem to have no one, and no big ideas, to take them forward, except perhaps one person – Hillary Clinton.

It’s pretty obvious that she wants the job. Much of the media has decided that she already has the Democratic nomination in the bag, but is split on whether she can actually win. To resolve this issue, we need the help of an expert, and this is where I would like to call in Kenny Rogers. Mr. Rogers’ advice is that you have to know when to hold, fold, walk, or run. I would like to suggest that the best thing that the Democratic Party can do is to realize that this next Presidential election is exactly the time to do some serious running; as far away from the White House as possible. I would like to propose the following electoral strategy for the Democrats:

  1. Hillary Clinton must run in 2008. She will lose. This is a good thing.

  2. If the Democrats lose in 2008, they might well win the following three elections.

  3. If the Democrats nominate anyone other than Hillary they might actually win in 2008, and this would be a disaster.

OK, how can losing the next election be a good thing for the Democrats? The answer lies in how some elections act as ‘critical junctures’, moments of singular political importance where, because an election went in one direction rather than the other, the next several elections went that way too. 1896 was such an election for the Republicans, as was 1932 for the Democrats, when they overturned Republican control and began their own long period of political dominance into the 1970s. Indeed, it is worth remembering that the Democratic party used to be the majority party in the US, and that the institutions and policies they set up in the 1930s and 1940s, from Social Security to Fannie Mae, are as popular as ever. Indeed, one might add that only one of nine post-WW2 recessions occurred when the Democrats were in power. How then did the Democrats become the weak and voiceless party that they are now? The answer was Ronald Reagan and the critical election of 1980.

Reagan did something that no Democratic politician ever did before: he (or at least those around him) really didn’t give a damn about the federal budget. Reagan managed to combine tax cuts, huge defense expenditure increases, super-high interest rates, and welfare cuts into a single policy package. Despite the supposed ability of voters to see through such scams and recognize that tax cuts now mean tax increases later, Reagan managed to blow a huge hole in federal finances and still be rewarded for it at the ballot box. Despite their fiscal effects, this tax-cutting ‘thing’ became extremely popular, and the Democrats had to find an issue of their own to argue against them. That new issue was the so-called ‘twin deficits’ that Reagan’s policies created, and the policy response was deficit reduction.

Under Reagan (and Bush the elder) the US ran increasingly large deficits both in the federal budget and in the current account. The Democrats of the day seized on these facts and banged on and on about them for a decade as if the very lives of America’s children depended on resolving them. The problem, however, was that as the world’s largest economy with the deepest capital markets, so long as foreigners were willing to hold US dollar-denominated assets, no one had to pay for these deficits with a consumption loss. The US economy became the equivalent of a giant visa card where the monthly bill was only ever the minimum payment due. Take the fact that no one ever refused US bonds, and add in that most voters would have a hard time explaining what the budget deficit was, let alone why it was this terrible thing that had to be corrected with tax increases, and you have a political weapon as sharp as a doughnut. By arguing for a decade that the twin deficits were real and dangerous, and that tax increases and consumption losses (pain) were the only way forward, the Democrats went from being the party of ‘tax and spend’ to being the party of tax increases and balanced budgets, which simply played into Republican hands.

Which brings us to why the election of the other Clinton (Bill) in 1992 was not a critical turning point away from Republican politics in the way that 1932 was. Having banged on about how terrible the deficits were, once in power the Democrats had to do something about them. Being boxed into a fiscal corner, Bill Clinton’s proposals for a budget stimulus and universal healthcare collapsed, and all that was left was (the very worthy) EITC and (the very painful) deficit reduction. Cleaning up the fiscal mess that the Republicans had made became Clinton’s main job, and this helped ensure that by 1996 Clinton was seen as a lame duck President who hadn’t really done anything. His unexpected win in 1996 confirmed this insofar as it resulted in no significant policy initiatives except the act of a Democrat ‘ending welfare as we know it.’ The asset bubble of the late 1990s may have made the economy roar, and Clinton’s reduction of the deficit may have helped in this regard, but the bottom line was that the Democrats were now playing Herbert Hoover to the Republicans’ Daddy Warbucks.

So Bush the younger was elected and he continued the same tax-cutting agenda, but coupled this to huge post-9/11 military expenditures and the Iraqi adventure. As a result of these policies the US carries current account and federal deficits that would make Reagan blush, the Republicans have a splintering party and support base, and the country as a whole is mired in Iraq with a very unpopular President at the helm. Surely then 2008 can be a new 1932 in a way that 1992 wasn’t? The inauguration of a new era of Democratic dominance? Possibly…but only if the Democrats lose the 2008 election rather than win it. To see why, let us examine what might happen if the Democrats did win the next election with Hillary Clinton at the helm.

In terms of security politics it’s far more likely that Iraq will go from bad to worse than from worse to better over the next few years. It’s a mess regardless of who is in charge, and the choices at this point seem to be ‘stay forever’ or ‘get out now.’ If the Republicans sense that they are going to lose in 2008 the smart thing to do would be to keep the troops in Iraq so that the Democrats would be the ones who would have to withdraw US forces. When that happens, Iraq goes to hell in a hand-basket, and the Democrats get blamed for ‘losing Iraq’ and ‘worsening US security.’ If on the other hand Bush pulls US forces out before 2008 and the Democrats win, the local and global security situation worsens, and the probability that ‘something really bad’ happens on the Democrats’ watch rises, which they then get the blame for.

In terms of economic policy the structural problems of the US economy are only going to get worse over time. Since Bush came into office the dollar has lost over a third of its value against the Euro and around 20 percent against other currencies. This means higher import costs, which, along with higher oil prices, suggest future inflation and larger deficits. Given that the US relies on foreigners holding US debt, any future inflation and larger deficits would have to be offset with higher interest rates. This would negatively impact the already over-inflated US housing market, perhaps bursting the bubble and causing a deep recession. So, regardless of who is in office in 2008, the economy is likely to be in worse shape then than it is now. If the Democrats are in power and the economy tanks, they will get the blame for these outcomes regardless of the policies that actually brought the recession about.

In terms of cultural politics, social issues are likely to come to a head with the new Roberts Court finding its feet. It is probably safe to say that there will be an abortion case before the Court during the 2008-2012 cycle, if not before. This is usually treated as the clinching argument for why the Democrats must win the next election rather than lose it. Again, I disagree. Precisely because the only people who still think Bush is doing a good job are conservatives with strong social policy concerns, you can bet they will mobilize to get this policy through even if the rest of the world is crashing about their ears. I say let them have it. The sad truth is that if Roe v Wade is overturned rich white women will still get abortions if they need them, and poor women will not be much worse off since they don’t get access to abortion in most of the country as it is. But more positively, if the Republicans go for this, anyone who says “I’m a moderate Republican” or “I’m socially liberal but believe in low taxes” etc., has to confront an awkward fact: that they self-identify with an extremely conservative social agenda, one that treats women’s bodies in particular, and sexual issues in general, as objects of government regulation. If this comes to pass then the Democrats have a chance to split the Republican base in two, isolate moderates in the party, and turn the Republicans into a permanent far-right minority party.

Finally, in terms of electoral politics, the Democrats have to face up to an internal problem – Hillary Clinton really is unelectable. While she may be smart, experienced, popular in the party, and have a shit-load of money behind her, the very appearance of her on television seems to result in the instant emasculation of around 30 million American men. Indeed, 33 percent of the public polled today say that they would definitely vote against her, and this at a time when Bush’s numbers are the worst of any President in two generations. It may be easy to forget how much of a hate figure Hillary Clinton was in the 1990s. One way to remember is to simply search amazon.com for books about Hillary Clinton and see how the ‘hate book’ industry that dogged the Clintons all through the 1990s is moving back into production with a new slew of ‘why she is evil incarnate’ titles. But Hillary Clinton is not just a hate figure for the extreme right. After a decade of mud slinging (that is about to go into high gear again) she is simply too damaged to win. There is a bright side to all this. Hillary Clinton is a huge figure in the Democratic party in terms of fundraising, profile, and ambition. The only way she will get out of the way and allow new figures in the party to come forward who might actually win is by her losing; so let her lose.

In sum, ‘knowing when to walk away, and when to run’ is a lesson the Democrats need to learn, and losing in 2008 would be ‘the Gambler’s’ recommendation. First, making the Republicans clean up their own mess would not only be pleasing to the eye, it would be electorally advantageous. Forcing the Republicans to accept ownership of the mess that they have made makes their ability to ‘pass the buck’ onto the Democrats, as happened to Bill Clinton, null and void. Clearly, from the point of view of Democratic voters the probable consequences of a third Republican victory have serious short-term costs associated with them, but it is also the case that the possible long-term benefits of delegitimating their policies, watching their base shatter, and not having to clean up their mess and get blamed for it, could be greater still. Second, if the Democrats do win, then all the problems of Iraq, the declining dollar, the federal and trade deficits, higher interest rates, a popping of the housing bubble, a possible deep recession, and being blamed for the end of ‘the visa card economy’, become identified with the Democrats. They come in, get blamed for ending the party, clean up the mess, and get punished for it at the next election. Seriously, why do this? Third, if it is the case that Hillary Clinton will indeed get the nomination, then let her have it. She cannot win, so why not kill two birds with one stone? Nominate Hillary, run hard, and lose. That way Hillary cannot get nominated again, new blood comes into the party, and the Republicans have to clean up their own mess. Do this, and 2012 really might be 1932 all over again.

Mark Blyth’s other Talking Pints columns can be seen here.

NOTE: This essay is posted by Abbas Raza due to a problem with Mark’s computer.

Monday Musing: Susan Sontag, Part I

In an essay about the Polish writer Adam Zagajewski, Sontag writes that as Zagajewski matured he managed to find “the right openness, the right calmness, the right inwardness (he says he can only write when he feels happy, peaceful.) Exaltation—and who can gainsay this judgment from a member of the generation of ’68—is viewed with a skeptical eye.” She’s writing about what Zagajewski was able to achieve but she is also, of course, writing about herself.

Sontag was also a member of the generation of ’68, if a slightly older one. She too achieved an openness, calm, and inwardness as she matured, though it came with regrets and the sense that the pleasure of a literary life is an ongoing battle against a world that is predisposed to betray that pleasure.

Writing about Zagajewski again, she explains that his temperament was forged in the fires of an age of heroism, an ethical rigor made sharp by the demands of history. These men and women spent decades trying to write themselves out of totalitarianism, or they were trying to salvage something of their selves from what Sontag does not hesitate to call a “flagrantly evil public world”. And then suddenly, in 1989, it was all over. The balloon popped, the Wall came down. Wonderful events, no doubt, but with the end of that era came the end of the literary heroism made possible by its constraints. Sontag says, “how to negotiate a soft landing onto the new lowland of diminished moral expectations and shabby artistic standards is the problem of all the Central European writers whose tenacities were forged in the bad old days.”

Sontag also managed to come in softly after scaling the heights of a more exuberant time. In Sontag’s case, she wasn’t returning to earth after the struggle against a failing totalitarianism, she was coming down from the Sixties. But that is one of the most remarkable things about her. Not everyone was able to achieve such a soft landing after the turbulence and utopian yearnings of those years.

Sontag’s early writings are shot through with a sense of utopian exaltation, an exaltation so often associated with the Sixties. In her most ostensibly political work, “Trip to Hanoi”, she talks specifically about her mood in those days. As always, she is careful not to overstate things. “I came back from Hanoi considerably chastened,” she says. But then she goes on, heating up. “To describe what is promising, it’s perhaps imprudent to invoke the promiscuous ideal of revolution. Still, it would be a mistake to underestimate the amount of diffuse yearning for radical change pulsing through this society. Increasing numbers of people do realize that we must have a more generous, more humane way of being with each other; and great, probably convulsive social changes are needed to create these psychic changes.”

You won’t find Sontag in a more exalted state than that. Rarely, indeed, does she allow herself to become so agitated and unguarded, especially in the realm of the outwardly political. But that is exactly where one must interpret Sontag’s politics, and exaltation, extremely carefully.

Sontag’s political instincts gravitate toward the individual, in exactly the same way that she reverses the standard quasi-Marxian directions of causality in the above quote. Marxists generally want to transform consciousness as the necessary first step toward changing the world. In contrast, Sontag wants the world to change so that we can get a little more pleasure out of consciousness. Convulsive social changes, for Sontag, are but extreme measures for effecting a transformation that terminates in psychic changes. Politics means nothing if it obscures the solid core of the individual self. Her commitment to this idea gives all of her writing a Stoic ring even though she never puts forward a theory of the self or a formal ethics. It is the focus on her particular brand of pleasure that provides the key. Pleasure and the Self are so deeply intertwined in Sontag’s writing that one cannot even be conceived without the other.

Writing years later, in 1982, about Roland Barthes, Sontag spoke again of pleasure and the individual self. Barthes’ great freedom as a writer was, for Sontag, tied up with his ability to assert himself in individual acts of understanding. Continuing a French tradition that goes back at least to Montaigne (a man not unaware of the Stoics), she argues that Barthes’ writing “construes the self as the locus of all possibilities, avid, unafraid of contradiction (nothing need be lost, everything may be gained), and the exercise of consciousness as a life’s highest aim, because only through becoming fully conscious may one be free.” She speaks about the life of the mind as a “life of desire, of full intelligence and pleasure.”

A human mind, i.e., an individual mind, will, at its best, be ‘more generous’ and ‘more humane’. But for Sontag, it is what humans have access to in the world of ideas, as individual thinking agents, that marks out the highest arena of accomplishment.

“Of course, I could live in Vietnam,” she writes in A Trip to Hanoi, “or an ethical society like this one—but not without the loss of a big part of myself. Though I believe incorporation into such a society will greatly improve the lives of most people in the world (and therefore support the advent of such societies), I imagine it will in many ways impoverish mine. I live in an unethical society that coarsens the sensibilities and thwarts the capacities for goodness of most people but makes available for minority consumption an astonishing array of intellectual and aesthetic pleasures. Those who don’t enjoy (in both senses) my pleasures have every right, from their side, to regard my consciousness as spoiled, corrupt, decadent. I, from my side, can’t deny the immense richness of these pleasures, or my addiction to them.”

Sontag’s political thinking is driven by the idea that what is otherwise ethical is often thereby sequestered from what is great, and what is otherwise great is often mired in the unethical. She never stopped worrying about this problem and she ended her life as conflicted about it as ever. It was a complication that, in the end, she embraced as one of the interesting, if troubling, things about the world.

But for a few brief moments, as the Sixties ratcheted themselves up year after year, she indulged herself in considering the possibility that the conflict between ethics and greatness could be resolved into a greater unity. She thought a little bit about revolution and totality. She got excited, exalted. Summing up thoughts about one of her favorite essays, Kleist’s “On the Puppet Theater,” Sontag leaves the door open for a quasi-Hegelian form of historical transcendence. She says, “We have no choice but to go to the end of thought, there (perhaps), in total self-consciousness, to recover grace and innocence.” Notice the parentheses around ‘perhaps’. She’s aware that she (and Kleist) are stretching things by saying so, but she can’t help allowing for the possibility of ‘total self-consciousness’. Often, when Sontag uses parentheses she is allowing us a glimpse into her speculative, utopian side.

In “The Aesthetics of Silence” (1967), for instance, she equates the modern function of art with spirituality. She defines this spirituality (putting the entire sentence in parentheses): “(Spirituality = plans, terminologies, ideas of deportment aimed at resolving the painful structural contradictions inherent in the human situation, at the completion of human consciousness, at transcendence.)”

***

In the amazing, brilliant essays that make up the volume Against Interpretation it is possible to discover more about the utopian side of Sontag’s thinking. Drawing inspiration from Walter Benjamin, whose own ideas on art explored its radically transformative, even messianic potential, Sontag muses that, “What we are witnessing is not so much a conflict of cultures as the creation of a new (potentially unitary) kind of sensibility. This new sensibility is rooted, as it must be, in our experience, experiences which are new in the history of humanity…”

Again with the parenthesis. It is as if, like Socrates, she always had a daimon on her shoulder warning her about pushing her speculations too far. But the talk of unity is an indication of the degree to which she was inspired by the events of the time, or perhaps more than the specific events of the time, by the mood and feel of the time. Her sense that there was an “opening up” of experience, sensibility, and consciousness drove Sontag to attack certain distinctions and dichotomies she saw as moribund. Again following closely in the footsteps of Walter Benjamin and his influential “The Work of Art in the Age of Mechanical Reproduction” she writes, “Art, which arose in human society as a magical-religious operation, and passed over into a technique for depicting and commenting on secular reality, has in our own time arrogated to itself a new function…. Art today is a new kind of instrument, an instrument for modifying consciousness and organizing new modes of sensibility.” This led her to a central thesis, a thesis that drove her thinking throughout the Sixties, a thesis that is nestled into every essay that makes up Against Interpretation. She sums it up thusly:

“All kinds of conventionally accepted boundaries have thereby been challenged: not just the one between the ‘scientific’ and the ‘literary-artistic’ cultures, or the one between ‘art’ and ‘non-art’; but also many established distinctions within the world of culture itself—that between form and content, the frivolous and the serious, and (a favorite of literary intellectuals) ‘high’ and ‘low’ culture.”

Sontag’s famous “Notes on ‘Camp’” is simply a sustained attempt to follow that thesis through. Her defense of camp is a defense of the idea that worth can be found in areas normally, at least back in the Sixties, relegated to the realm of the unserious. The new unity was going to raise everything into the realm of the intellectually interesting, and pleasurable.

Yet, Sontag is not trying to abolish all distinctions. It isn’t a leveling instinct. Even in her youngest days, Sontag was suspicious of the radically democratic impulses that would, say, collapse art and entertainment. Sontag is doing something different. She is trying to show that the arena for aesthetic pleasure should be vastly expanded, but never diluted. She wants the new critical eye to stay sharp and hard. Sontag’s version of pleasure is an exacting one. It is relentless and crystalline. It is an effort.

“Another way of characterizing the present cultural situation, in its most creative aspects, would be to speak of a new attitude toward pleasure. . . . Having one’s sensorium challenged or stretched hurts. . . . And the new languages which the interesting art of our time speaks are frustrating to the sensibilities of most educated people.”

In this, there was always an element of the pedagogue in Sontag. She was trying to teach a generation how to tackle that frustration in the name of aesthetic pleasure. She was driven by her amazing, insatiable greed for greater pleasure. She wanted us to be able to see how many interesting and challenging things there are in her world of art, a world vaster and richer than the one surveyed by the standard critical eye of her time. And at least in the Sixties, her passion for greatness and its pleasures spilled over into a yearning for a societal transformation that would make that passion and pleasure universal…

to be continued…

Richard Wagner: Orpheus Ascending

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

A reassessment of Wagner and Wagnerism

The following (Part 1 June, Part 2 July, Part 3 August) is excerpted from a talk originally given at the Goethe Institut in Sydney on April 18, 1999 and subsequently published in London by the Wagner Society of the United Kingdom in 2001.

Part 1:

There are several versions of the ancient Greek myth of Orpheus. In the best known of these Orpheus goes down to the Underworld to seek the return of his wife Eurydice who had been killed by the bite of a snake. The lord of the Underworld agrees on the condition that Orpheus should not turn round and look at Eurydice until they reach the Upper World. The great singer and musician who could charm trees, animals and even stones could not survive this final and most perilous of temptations. He turns to look on his beloved wife and she is lost to him forever. Another version of the myth tells of Orpheus being torn to pieces by the Thracian women or Maenads; his severed head floated, singing, to Lesbos.

Wagner’s dismembered head continues to sing, unheard by many and misunderstood by most. That beautiful yet volatile singing head with its Janus face, enigmatically poised between black holes and galaxies; that is the head we still find puzzling. And because our civilisation does not like puzzles, and wishes to rationalise whatever has provoked it to think or feel, our best critical efforts have reduced one of the greatest creative and cultural phenomena of Western culture to manageable proportions.

Well, Wagner continues to ascend, leaving Wagnerism behind to do battle on any number of fronts, whether at Bayreuth with its interminable family squabbling, or in the raft of prose that has followed in the wake of the German gigantomane, or through those cliches that Wagnerian ideology has left us with as the Valkyries ride their helicopters across a Vietnamese apocalypse or another wedding is inaugurated to the strains of the bridal chorus from Lohengrin.

That is how we now manage the Wagnerian cosmos. Cliché helps us to feel comfortable near this unquiet grave with its all-too-human disturbing element. Humour helps us, and it’s necessary—we haven’t the fortitude, the talent, the persistence, or, indeed, the genius, to bring into being imperishable works of art. We like to laugh when Anna Russell tells us, ‘I’m not making this up you know’. But of course that is exactly what Wagner did do; he made up an entire aesthetic and cultural world that we still have not been able to come to terms with. We try from time to time to make sense of the life and the work, but never without those attendant twins, partisanship and antagonism.

So Wagner is ascending, like Orpheus, to his place in the cultural imperium, alienated from the world’s embrace, a lonely figure, so lonely in his own life, and lonely still. Liszt saw all too clearly what Wagner would have to accustom himself to, and in a letter to his friend, when Wagner intimated thoughts of ending it all, advised, ‘You want to go into the wide world to live, to enjoy, to luxuriate. I should be only too glad if you could, but do you not feel that the sting and the wound you have in your own heart will leave you nowhere and can never be cured? Your greatness is your misery; both are inseparably connected, and must pain and torture you until you kneel down and let both be merged in faith!’ [Hueffer. Correspondence of Wagner and Liszt, Vienna House, 1973, Vol One, p 273]

The world may celebrate his work, technology may bring his music to every part of the planet, yet another monograph may be published; Wagner turns to look for his audience, and at once he loses that audience. The bloodlust that can be unleashed by a good Wagner performance, the obsessions notoriously associated with Wagnerism, the strident approbation and denunciation—these are not the cultural signifiers of classicism freely given to Shakespeare, Mozart or Goethe. Wagner evades classicism still; yet that is his predestined end. We are still too close to the psychic firestorm of his imagination, and we are still too disturbed by the misuse of his art, for that classicism to show any signs of emerging. Even as passionate a Wagnerian as Michael Tanner cannot bring himself to fully equate Shakespeare and Wagner, an equation that cultural history proposes but which we are not yet up to accepting.

Recently a German said to me, ‘You English have your Shakespeare; we have our Wagner.’ Granted my passion for Wagner and my lack of cultural chauvinism, it was perhaps odd that I was so shocked by her remark. I didn’t want to say, ‘But Shakespeare is greater than Wagner.’ But I did feel a strong urge to protest about Wagner’s being seen as part of the cultural landscape in the same way that Shakespeare is (even for Germans, thanks to Tieck and Schlegel). [Michael Tanner, Wagner. HarperCollins, 1996, p 211]

Wagner’s uncertain cultural status reaches beyond our historical moment. Perhaps a Hegelian analogy is best: thesis, antithesis, synthesis. The life, 1813-1883, represents the thesis—and what a proposition it is. The twentieth century represents the antithesis, replete with reductionism, antagonism, equally disreputable fanaticism and hatred. It remains for the future to offer the synthesis. And when that synthesis occurs, then Wagner will have ascended to the Upper World; his audience will not flinch from looking at him directly. Shakespeare was lucky not to have left much biographical debris behind. When the biographers and critics got to work, the focus of their studies was necessarily on the plays and poems themselves. The lacerated spirit that gave birth to the murderous rampage of a Macbeth, the suicidal melancholy of a Hamlet or the self-hatred and disgust of a Lear was easily accommodated to textual analysis and theorising because biographical motive was missing. The hunt for the Dark Lady of the Sonnets was a pastime for some but, on the whole, scholars were prepared to indulge Shakespeare’s evident greatness. Only recently have they come around to asking why Shakespeare’s younger daughter couldn’t write. No such luck for Wagner. There is enough biographical material laid on the line to keep critics in clover until the end of time—letters, autobiographies, diaries, pamphlets, theoretical writings. And that’s just the primary material. Has any scholar yet read all of it? Then there is the secondary material, and we know that it is now beyond the ability of anyone to read it, let alone make sense of it. This deluge of material shows no sign of abating. Are we now any closer to understanding the phenomenon of Wagner? Wagnerism seems to be one of the chief ways in which we seek to cope with what is now considered to be the ‘problem’ of Wagner.

Thus two mutually antagonistic modes of thinking fail to reach any accommodation with one another. It seems that Wagnerian historiography must advance, not by the slow accumulation of historical and cultural detail, but always explosively, so that an apparent understanding of events is wrenched apart by either previously unknown factual details or fresh polishing of a facet of the Wagnerian rough diamond.

[Parts 2 and 3 of Orpheus Ascending can be read here and here.]

The Long Interrogation

“When Edgegayehu Taye took a job in an Atlanta hotel, she never expected the service elevator doors to open one day and reveal the man who tortured her years before in Ethiopia. Nor could she have predicted what it would take to see justice done.”

Andrew Rice in the New York Times Magazine:

Six months after he arrived in America, Kelbessa applied for political asylum, saying he had been persecuted and imprisoned by Ethiopia’s military dictatorship. It was the Reagan era, and Ethiopia was Communist; the application was quickly approved. Kelbessa then set about achieving his next goal: saving enough money to send for his three children, who were still stuck in Ethiopia. (He and his wife were divorced.) He worked the graveyard shift at a convenience store and took a second job, washing dishes at the Colony Square Hotel. Later, he was promoted to bellhop.

One afternoon, Kelbessa was outside the employee locker room, waiting for the service elevator. The elevator doors opened, and another Ethiopian walked out, a young woman in a waitress’s uniform.

More here.

The rebirth of electric-shock treatment

From The Economist:

Electricity has long been used to treat medical disorders. As early as the second century AD, Galen, a Greek physician, recommended the use of electric eels for treating headaches and facial pain. In the 1930s Ugo Cerletti and Lucio Bini, two Italian psychiatrists, used electroconvulsive therapy to treat schizophrenia. These days, such rigorous techniques are practised less widely. But researchers are still investigating how a gentler electric therapy appears to treat depression.

Vagus-nerve stimulation, to give it its proper name, was originally developed to treat severe epilepsy. It requires a pacemaker-like device to be implanted in a patient’s chest and wires from it threaded up to the vagus nerve on the left side of his neck. In the normal course of events, this provides an electrical pulse to the vagus nerve for 30 seconds every five minutes.

This treatment does not always work, but in some cases where it failed (the number of epileptic seizures experienced by a patient remaining the same), that patient nevertheless reported feeling much better after receiving the implant. This secondary effect led to trials for treating depression and, in 2005, America’s Food and Drug Administration approved the therapy for depression that fails to respond to all conventional treatments, including drugs and psychotherapy.

More here.

Reaching Out to Iran

David Ignatius in the Washington Post:

America’s opening to China had its ping-pong diplomacy. Detente with the Soviet Union featured the Bolshoi Ballet. Perhaps in the new diplomatic dance between the United States and Iran, a similar people-to-people role will be played by an immunologist named David Haines and his project to study Iranian victims of Iraqi chemical weapons.

Haines first told me his unlikely story several months ago, as he was seeking U.S. government approval for his effort to bring an Iranian scientist to join him in his work at the University of Connecticut. The urgency of his project became obvious after Secretary of State Condoleezza Rice announced Wednesday that the United States is willing to join direct talks with Iran for the first time in nearly three decades. Perhaps Haines’s project can be a model for broader educational and scientific contacts if a U.S.-Iran dialogue can begin.

Haines’s tale features many of the strands that are knotted together in the current Middle East crisis: weapons of mass destruction; the aftershocks of Saddam Hussein’s brutal regime; the legacy of the Sept. 11, 2001, attacks; the need to prepare for future WMD attacks by terrorist groups. You may doubt that all those themes could converge in the work of one scientist, but read on.

More here.  [Thanks to Samad Khan.]

World of Warcrack

Joichi Ito on the World of Warcraft MMORPG, in Wired:

On November 23, 2004, Rob Pardo and his team at Blizzard Entertainment wrapped up four years of development on World of Warcraft. It quickly became the most popular massively multiplayer online game ever, with more than 6 million subscribers each paying up to $15 a month to access its fantastic realms. (At the peak of its popularity, EverQuest had only about half a million subs.)

I started playing a year ago and have become custodian of We Know, a guild of about 250 people worldwide: medics, CEOs, bartenders, mothers, soldiers, students. We assemble in-game to mount epic six-hour raids that require some members to wake at 4 am and others to stay up all night. Outside the game, we stay in touch using online forums, a wiki, blogs, and a mailing list – plus a group voice chat, which I’ve connected to my home stereo so I can hear the guild’s banter while I’m cooking dinner. I have never been this addicted to anything before. My other hobbies are gone. My daily blogging regimen has taken a hit. And my social life revolves more and more around friends in the game.

More here.

History’s Age of Hatred

Tristram Hunt on The War of the World: History’s Age of Hatred by Niall Ferguson, in The Guardian:

His thesis is clear: what makes the 20th century remarkable is its exceptional violence. “The hundred years after 1900 were without question the bloodiest century in history, far more violent in relative as well as absolute terms than any previous era.” Why? Well, not for the old textbook explanations of economic crises, class warfare, nationalism or ideological fervour. Rather, in good historical fashion, for three new reasons.

According to Ferguson, the 20th-century bloodbath was down to the dreadful concatenation of ethnic conflict, economic volatility and empires in decline. Despite genetic advances that revealed man’s essential biological similarities, the 1900s saw wave upon wave of ethnic strife thanks (pace Richard Dawkins) to a race “meme” entering public discourse. Across the world, the idea of biologically distinct races took hold of the 20th century mindset to deadly effect.

More here.

The problems with animal testing

Arthur Allen in Slate:

Every year, in the name of medical progress, scientists breed and nurture hundreds of millions of mice, rats, and other subordinate mammals. Then they expose the critters to substances that could become the next Zocors, Prozacs, and Avastins. Since the alternative is to experiment on people, most everyone other than hardcore animal lovers accepts animal testing. Periodically, however, a spectacular failure raises new questions about the enterprise—not for ethical reasons, but scientific ones.

In March, London clinicians injected six volunteers with tiny doses of TGN1412, an experimental therapy for rheumatoid arthritis and multiple sclerosis that had previously been given, with no obvious ill effects, to mice, rats, rabbits, and monkeys. Within minutes, the human test subjects were writhing on the floor in agony. The compound was designed to dampen the immune response but it had supercharged theirs, unleashing a cascade of chemicals that sent all six to the hospital. Several of the men suffered permanent organ damage, and one man’s head swelled up so horribly that British tabloids refer to the case as the “elephant man trial.”

Animal rights activists in Britain pounced, declaring the uselessness of animal experimentation in the development of human drugs.

More here.