Too Many Grandpas


A “grandfather boom” is rippling through the world’s population as it heads for the nine billion mark by 2050, putting pressure on health care and pension systems, international population experts will hear this week.

“In most Western countries, 2005 marks a new demographic shock: the grandfather boom will introduce a delicate balance between the working and non-working,” said Catherine Rollet, president of the organising committee of the conference of the International Union for the Scientific Study of Population (IUSSP).

More here.

For your listening pleasure

If you’ve ever wanted to own a copy of Apollo 13’s famous “Houston, we have a problem,” or maybe Martin Luther King’s “I have a dream” speech, AudioVille is the place for you.

The service, which positions itself as “iTunes for the spoken word,” offers content that ranges from historic speeches to comedies and from fine poetry to science:

“The site currently boasts licensing deals with the Economist, offering the business magazine’s quarterly technology report in spoken form, and BBC Worldwide, which provides comedies such as Little Britain and Fawlty Towers.

Audio books from authors such as HG Wells and Beatrix Potter can also be downloaded alongside historic speeches, such as Kennedy’s “Ich bin ein Berliner” address and Apollo 13’s “Houston, we have a problem” relay.

AudioVille also offers its own content, such as city guides, and plans to let users upload their own content, which, if popular, it will retail on the site.”

More here (registration needed).

Cosma Shalizi looks at looking at how science works

In keeping with the theme of Abbas’s Monday Musing on how science proceeds, here is an interesting post on the sociology of scientific knowledge by Cosma Shalizi.

“One of the best books I’ve read on how science actually works is Stephen Toulmin’s Human Understanding: The Collective Use and Evolution of Concepts. (It is, of course, long out of print.) The core of it is a set of ideas about how the social mechanisms of working scientific disciplines actually implement the intellectual goals of learning about the world, and rationally changing our minds, through an evolutionary process. (And Toulmin actually understands evolution in a sensible, blind variation plus selection, way, rather than some useless idea about progress or trends.) A lot of the argument is summed up in two of his aphorisms, which he admitted he exaggerated a bit for effect: ‘Every concept is an intellectual micro-institution’ (p. 166), consisting of the people who accept the concept, and the practices by which they use and transmit it; and conversely, ‘Institutions are macro-concepts’ (p. 353).

The natural question is whether one can say which institutions correspond to which concepts, and vice versa. This is a very tricky question, but an excellent beginning has been made by two papers by Camille Roth and Paul Bourgine, which I’ve been meaning to post about for quite a while.”

(Hat tip: Dan Balis)

The Story Behind the New Battlestar Galactica

I’m a big fan of the new Battlestar Galactica, the SciFi Channel original series which re-imagines the old 1970s TV show.  It explores war, terrorism, and religion, while remaining subtle and thoughtful.  From The New York Times Magazine:


“As in the original show, the humans of the Galactica and its fleet are relentlessly pursued by evil robots called Cylons. But in the current version, conceived by Ronald D. Moore and David Eick, most of the evil Cylons look like people and have found God. Ruthlessly principled and deeply religious, the Cylons have been compared by fans and critics both to Al Qaeda and to the evangelical right. And the humans they are relentlessly pursuing are fallible and complex. Their shirts are not clingy or color-coded; the men of space wear neckties. They are led by Edward James Olmos as the Galactica’s commander and Mary McDonnell as the president of the humans, and their stories revolve as much around the tensions within — between the military and civil leadership of the fleet — as they do around the Cylon threat. As Eick described the show to me last month with evident, subversive pleasure, ‘The bad guys are all beautiful and believe in God, and the good guys all [expletive] each other over.’ Moore, who is also the show’s head writer, put it more simply: ‘They are us.’

It is sometimes jarring to watch ‘Battlestar Galactica,’ for it is not like any science-fiction show on television today. Science fiction is a genre that, for all its imaginative expansiveness, tends also to be very conservative; its fans sometimes defend its cliches fiercely. ‘Battlestar Galactica’ upends sci-fi cliches.”

Lessons learned from monkeying with history

From MSNBC:

Over the weekend, the 6,000 or so residents of Dayton, Tenn., put on a play, the same play they have put on every year about this time. It retells the story that put Dayton on the map 80 years ago. Townsfolk prominent and not so prominent dressed up in the styles of the Roaring ’20s and assembled outside the Rhea County Courthouse to recite the proceedings of the real Trial of the Century: the prosecution in 1925 of John T. Scopes for teaching his students the theory of evolution.

The picture that emerged, especially in the hyperventilating prose of the iconoclastic Baltimore journalist H.L. Mencken and later in the play and movie “Inherit the Wind,” was of a town full of “Christian pro-creation” believers who were “uneducated, dimwitted people who came to town barefoot and married their cousin,” said historian John Perry, co-author of a new book, “Monkey Business: The True Story of the Scopes Trial.” He and co-author Marvin Olasky recount the trial and argue for teaching the hypothesis that an intelligent designer shaped the course of human development. 

More here.

If it’s male, attack it; if female, mate with it.

From The New York Times:

Last month researchers reported on the role of genes in the sexual behavior of both voles and fruit flies. One gene was long known to promote faithful pair bonding and good parental behavior in the male prairie vole. Researchers discovered how the gene is naturally modulated in a population of voles so as to produce a spectrum of behaviors from monogamy to polygamy, each of which may be advantageous in different ecological circumstances. The second gene, much studied by fruit fly biologists, is known to be involved in the male’s elaborate suite of courtship behaviors. New research has established that a special feature of the gene, one that works differently in males and females, is all that is needed to induce the male’s complex behavior.

The male mouse’s rule for dealing with strangers is simple – if it’s male, attack it; if female, mate with it. But male mice that are genetically engineered to block the scent-detecting vomeronasal cells try to mate rather than attack invading males.

More here.


James Surowiecki in The New Yorker:

In 1985, when Bob Geldof organized the rock spectacular Live Aid to fight poverty in Africa, he kept things simple. “Give us your fucking money” was his famous (if apocryphal) command to an affluent Western audience—words that embodied Geldof’s conviction that charity alone could save Africa. He had no patience for complexity: we were rich, they were poor, let’s fix it. As he once said to a luckless official in the Sudan, after seeing a starving person, “I’m not interested in the bloody system! Why has he no food?”

Whatever Live Aid accomplished, it did not save Africa. Twenty years later, most of the continent is still mired in poverty. So when, earlier this month, Geldof put together Live 8, another rock spectacular, the utopian rhetoric was ditched. In its place was talk about the sort of stuff that Geldof once despised—debt-cancellation schemes and the need for “accountability and transparency” on the part of African governments—and, instead of fund-raising, a call for the leaders of the G-8 economies to step up their commitment to Africa.

More here.

Dirty knees and frocks

Elizabeth Cooney in the Worcester Telegram and Gazette:

The personal and professional merge in Dr. Azra Raza’s life, sometimes painfully so.

Both a cancer researcher and a cancer doctor, she learned firsthand how vast a gulf there is between the laboratory and the patient when she felt “the infinite helplessness of being on the other side of the bed” when her husband, Dr. Harvey D. Preisler, died of the disease he had dedicated his life to curing.

He inspired her in life and in death to narrow that chasm between the promise of basic research and the reality of current cancer treatments. Chief of hematology/oncology at the University of Massachusetts Medical School, Dr. Raza believes the current convergence of basic research, scientific technology and clinical practice will lead to unparalleled progress in preventing, detecting and treating cancer.

“This is the time when things are coming together for us,” she said. “Some of what we have been striving for for 20 years is finally materializing in terms of improved outcomes for patients. I never dreamt I would be able to see this day, when I would have patients sitting in my clinic saying, ‘Dr. Raza, I didn’t even know how badly I felt until I feel better.’ ”

How she arrived at that point began with her early curiosity about nature while growing up in Pakistan.

She remembers being 4 years old and crawling after ants, following them to their holes and getting bitten, upsetting her mother with her dirty knees and frocks…

“I grew up in a family in which the definition of a bum was anyone over 18 not going to medical school,” she joked.

Her sister Dr. Sughra Raza, director of the women’s imaging program at Brigham & Women’s Hospital in Boston, said that wasn’t quite true. Engineers were accepted, too, she said, but more important was the absolute equality between the sons and daughters.

More here (subscription required*).  [As most of you know, Azra Raza is a 3QD editor. I am extremely proud to say that she is also my sister, as is Sughra Raza, my equally accomplished youngest sister who is also mentioned above. Sughra is also normally a 3QD editor, but is on sabbatical at the moment.]

*If you don’t happen to have a subscription to the Worcester Telegram and Gazette, click here, then click “open”.

Update: I just realized there’s more. In another story in the same paper:

Radhey Khanna felt he was almost out of time when he first met Dr. Azra Raza.

She gave him hope for the future; now he has pledged to do the same for her.

An electrical engineer turned real estate investor who lives in New Hampshire, Mr. Khanna has pledged $1 million to support Dr. Raza’s research on myelodysplastic syndrome, a disorder in which patients’ blood cell counts fall dangerously low. Many of them go on to develop leukemia.

Two and a half years ago, doctors told Mr. Khanna he had MDS but there was no treatment or cure. They thought he might be helped by a new drug once it was approved by federal drug regulators.

At the time he just felt fatigued, but his condition grew worse. Eventually he needed frequent blood transfusions and was unable to walk. Then he felt he could no longer wait.

“I was willing to spend any amount of money, I was willing to travel anywhere,” he said recently. “It’s a pretty sad situation when nobody can do anything at all.”

A friend who was a researcher at Dana-Farber Cancer Institute mentioned that Dr. Raza had moved her MDS Center to UMass Memorial Medical Center from Chicago. Maybe she could help him get the drug, he suggested.

After their first meeting nine months ago, Dr. Raza was encouraging. While not able to get him the Revlimid he was waiting for, she did enroll him in an experimental trial using thalidomide, the drug that caused birth defects 50 years ago but has been revived to treat leprosy and multiple myeloma. Revlimid, still not approved, is an improved version of thalidomide, lacking its side effects but targeting a similar cellular process that goes awry in MDS.

The thalidomide treatment worked right away for Mr. Khanna, who turned 60 last month.

For more, click here, then click “open”.

‘The Framing Wars’ and ‘Iranian Lessons’

There are two good articles in the New York Times Magazine this week. First, Matt Bai writes:

Do Republicans win elections because they know how to turn issues into stories? Can Democrats learn the same trick? And can they find the magic words to win the coming battle over the Supreme Court?

More about that here. Then, there is an article by Michael Ignatieff:

Invited to Tehran during the recent presidential election to lecture on human rights, the author learned that those who don’t yet have liberty have a lot to teach to those who do.

More of that here.  [Thanks to Syed Tasnim Raza.]

Jean-Paul Sartre

Kevin Jackson in Prospect:

Confessions of a teenage existentialist: back in the early 1970s, when my mates and I were all revving up for A-levels, Jean-Paul Sartre was, simply, the most famous of all living philosophers, and just about the most famous of all proper, serious writers. He was inevitable, compulsory, ubiquitous. You didn’t even have to be a swot to have a fairly good idea of who he was, since BBC2 had just devoted 13 solid hours of prime-time viewing to its dramatisation of the Roads to Freedom trilogy. (Thinkable nowadays?) The Monty Python gang performed a Sartre sketch and for weeks afterwards, schoolyards echoed to imitations of Mrs Premise’s high-pitched telephone query to Sartre’s (fictitious) wife: “Quand sera-t-il libre?” (“When will he be free?”) Pay-off: “She says he’s spent the last 60 years trying to work that one out!” Oh, we did laugh.

More here.

Storied Theory

Roald Hoffmann (Nobel, Chemistry) in American Scientist:

One might think that experiments are more sympathetic than theories to storytelling, because an experiment has a natural chronology and an overcoming of obstacles (see my article, “Narrative,” in the July-August 2000 American Scientist). However, I think that narrative is indivisibly fused with the theoretical enterprise, for several reasons.

One, scientific theories are inherently explanatory. In mathematics it’s fine to trace the consequences of changing assumptions just for the fun of it. In physics or chemistry, by contrast, one often constructs a theoretical framework to explain a strange experimental finding. In the act of explaining something, we shape a story. So C exists because A leads to B leads to C—and not D.

Two, theory is inventive. This statement is certainly true for chemistry, which today is more about synthesis than analysis and more about creation than discovery. As Anne Poduska, a graduate student in my group, pointed out to me, “theory has a greater opportunity to be fanciful, because you can make up molecules that don’t (yet) exist.”

Three, theory often provides a single account of how the world works—which is what a story is. In general, theoretical papers do not lay out several hypotheses. They take one and, using a set of mathematical mappings and proof techniques, trace out the consequences. Theories are world-making.

Finally, comparing theory with experiment provides a natural ending. There is a beginning to any theory—some facts, some hypotheses. After setting the stage, developing the readers’ interest, engaging them in the fundamental conflict, there is the moment of (often experimental) truth: Will it work? And if that test of truth is not at hand, perhaps the future holds it.

The theorist who restates a problem without touching on an experimental result of some consequence, or who throws out too many unverifiable predictions, will lose credibility and, like a long-winded raconteur, the attention of his or her audience. Coming back to real ground after soaring on mathematical wings gives theory a narrative flow.

Let me analyze a theoretical paper to show how this storytelling imperative works. Not just any paper, but a classic appropriate to the centennial of Albert Einstein’s great 1905 papers…

More here.

Monday, July 18, 2005

Critical Digressions: Literary Fashion

Ladies and gentlemen,

On an overcast Sunday afternoon in Karachi, we donned a kurta pajama and Kolhapuris and headed towards Chundrigar Road. Every week the streets outside the Arts Council and the Hindu Gymkhana are cordoned off for a book bazaar (which till a year ago was held in the gardens of the Frere Hall). There we surveyed the stalls for books that we might include in our summer reading and picked up Ellis’s American Psycho, Pierre’s Vernon God Little, and Martel’s Life of Pi – admittedly, a random selection, determined by the amount of rupees in our pocket and also by the contrarian in us who does not have faith in the proverb, you can’t judge a book by its cover.

Whether or not book covers betray the substance of a book might be a matter of drunken debate, but you might judge a book otherwise: by the quality of the author’s prose – whether it’s ornate, dense, muscular, Spartan – by character development, by the narrative voice, narrative structure, storytelling, the pathos the narrative generates, or perhaps by the way a book ends (and so on). Since its inception, not only has the novel evolved, but so has the critical infrastructure that determines its “value.” Over time, different writers and critics have assigned different values to different components of the novel.

As in art, the ambition of fiction has changed since the time of the horrid eighteenth-century novel (Richardson’s Pamela and Aphra Behn’s Love Letters Between a Nobleman and His Sister immediately come to mind). Joyce and Nabokov had different ambitions, agendas. They conceived of their novels as constructions, not representations. Moreover, the respective oeuvres of Pynchon, Rushdie and Kundera exemplify that prose has become increasingly self-conscious over the span of the last century.

At the same time, critical consensus has marginalized writers who once populated the Pantheon of literary greats. Hemingway’s Spartan style was novel and immensely influential but now seems somewhat dated (especially because a whole generation of writers has interpreted and reinterpreted his variety of minimalism). Once hailed by Sartre as “the greatest living writer of our time,” John Dos Passos – Hemingway’s contemporary and brother-in-arms in the Spanish Civil War – has fallen off the map. His cinematic prose and didacticism no longer fashionable, Dos Passos’s books are neither bought nor taught. There are many others: John O’Hara, Theodore Dreiser, Robert Musil, that third leg of the modernist enterprise (or something like that).

Sensibilities are changing again. Contemporary criticism abhors stylistic pyrotechnics and self-consciousness. The thoroughly entertaining but famously venomous critic Dale Peck declaims, “I will say it once and for all, straight out: it all went wrong with James Joyce…Ulysses is nothing more than a hoax upon literature…” In one sentence, Peck excises “most of Joyce, half of Faulkner and Nabokov, nearly all of Gaddis, Pynchon, and DeLillo” from the canon. Another critic – B.R. Myers, unknown before the publication of “A Reader’s Manifesto” in the Atlantic Monthly – attacks others: Cormac McCarthy, Annie Proulx and Don DeLillo. He finds their prose “repetitive…elementary in its syntax, and…numbing in its overuse of wordplay.” And James Wood – probably the finest contemporary literary critic (along with Michiko Kakutani) – harkens back to Henry James. He likes Monica Ali and Naipaul but doesn’t care for Zadie Smith and John Updike. These critics may have influenced the PEN/Faulkner committee, which has awarded Ha Jin prizes for War Trash and Waiting – two brilliant novels in the tradition of Russian realism, featuring Spartan prose, rich pathos and pathology.

Ultimately, however, critics – no matter how comprehensive their analysis – are the sums of their likes and dislikes, like everybody else. And ultimately, we enjoy critics whose sensibilities cohere with ours.

So which book is worth our while? Considering that high style comes in and out of fashion, like art, like clothes, maybe only good storytelling endures (Gogol’s “The Overcoat” and Manto’s “Toba Tek Singh” immediately come to mind). In that case, we may adorn our shelf with our new acquisitions and return to the book fair next week to find some Coetzee, who ranks high on our List of Literature’s Latest and Greatest. This evening we may just watch Bale as Bateman.

What explains the appeal of radical Islam to some of Europe’s Muslims?

The Economist looks at some psychological and sociological explanations of the appeal of Islamism to some of Europe’s Muslims.

“[A]lthough paths to extremism vary widely, they tend to follow certain social and psychological patterns. Frequently, a young Muslim man falls out of mainstream society, becoming alienated both from his parents and from the ‘stuffy’ Islamic culture in which he was brought up. He may become more devout, but the reverse is more likely. He turns to drink, drugs and petty crime before seeing a ‘solution’ to his problems—and the world’s—in radical Islam. . .

Another French ‘Islamologue’, Antoine Sfeir, has identified relations between the sexes as a big factor in the re-Islamisation of second-generation Muslims in Europe. Because young Muslim women often do better than men at adapting to the host society (they tend to do better at school, for example), old patriarchal structures are upset and young men acquire a strong incentive to reassert the old order.”

Reporter Guy

David Remnick on Stephen Colbert’s upcoming fake news show, in The New Yorker:

Since Bill Murray’s departure for the movies, no one has done fatuous like Colbert does fatuous: the serious-reporter-guy ability to cock a brow with bogus knowing, his way of tilting his head to indicate sincerity worthy of an Airedale. The key is not listening, missing the point. During the 2004 Presidential campaign, “The Daily Show” interviewed the Democratic candidates, none more vividly than the Reverend Al Sharpton:

Colbert: In street lingo, are you running to stick it to the Man?
Sharpton: I don’t know on what street you got that language.
Colbert: The urban street. The mean streets.
Sharpton: I’m sticking up for a lot of people that have felt that no one has stuck up for them. But I’m not trying to stick it to anyone.
Colbert: Not even . . . the Man?
Sharpton: Who’s the Man?
Colbert: Let’s pretend for a moment that I’m the Man. Now stick it to me.
Sharpton: I’m not sticking it to anyone.
Colbert: Not even the Man? He’s very stickable.

More here.

The Biggest Starquake Ever

Michael Schirber writes:

The biggest starquake ever recorded resulted in oscillations in the X-ray emission from the shaking neutron star.  Astronomers hope these oscillations will crack the mystery of what neutron stars are made of.

On December 27, 2004, several satellites and telescopes from around the world detected an explosion on the surface of SGR 1806-20, a neutron star 50,000 light years away.  The resulting flash of energy — which lasted only a tenth of a second — released more energy than the Sun emits in 150,000 years.
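As a rough sanity check on that comparison (a sketch only, assuming the standard solar luminosity of about 3.846 × 10²⁶ W), the Sun's output over 150,000 years works out to roughly 2 × 10³⁹ joules, which matches the energy scale usually quoted for this giant flare:

```python
# Energy the Sun radiates in 150,000 years, assuming the standard
# solar luminosity L_sun ~ 3.846e26 W and ~3.156e7 seconds per year.
L_SUN = 3.846e26   # solar luminosity, watts (assumed value)
YEAR = 3.156e7     # seconds in a year, approximate

energy_joules = L_SUN * 150_000 * YEAR
print(f"{energy_joules:.2e} J")  # ~1.82e39 J
```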

Combing through data from NASA’s Rossi X-ray Timing Explorer, a team of astronomers has identified oscillations in the X-ray emission of SGR 1806-20.  These rapid fluctuations, which began 3 minutes after the starquake and trailed off 10 minutes later, had a frequency of 94.5 Hertz.

“This is near the frequency of the 22nd key of a piano, F sharp,” said Tomaso Belloni from Italy’s National Institute of Astrophysics.
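Belloni's piano comparison is easy to verify with the equal-temperament key formula (an assumption here: a standard 88-key piano with A4, key 49, tuned to 440 Hz). Key 22 is F♯2, at about 92.5 Hz, indeed close to the observed 94.5 Hz:

```python
# Frequency of the n-th key on a standard 88-key piano, assuming
# equal temperament with A4 (key 49) tuned to 440 Hz.
def piano_key_frequency(n: int) -> float:
    return 440.0 * 2 ** ((n - 49) / 12)

freq = piano_key_frequency(22)   # the "22nd key" (F sharp) quoted above
print(f"Key 22: {freq:.1f} Hz")  # ~92.5 Hz, near the measured 94.5 Hz
```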

Just as geologists study the Earth’s interior using seismic waves after an earthquake, astrophysicists can use the X-ray oscillations to probe this distant neutron star.

More here.

New Blog: Cosmic Variance

There have been signs in recent days, but the new science blog Cosmic Variance will come as a pleasant surprise to many.  Founded by a friend and supporter of 3QD, Sean Carroll of Preposterous Universe, and his colleagues (Mark Trodden of Orange Quark, JoAnne Hewett, Risa Wechsler, and Clifford Johnson), Cosmic Variance:

“is a group blog constructed by some idiosyncratic human beings who also happen to be physicists. Sometimes we’ll talk about science, other times it will be food or literature or whatever moves us — I know I have some incisive things to say about Brad Pitt and Angelina Jolie, for one thing. We’re not a representative collection of scientists, just some engaged individuals curious about our world.”

Check it out.

Marrying Maps to Data for a New Web Service

From The New York Times:

David Gelernter, a computer scientist at Yale, proposed using software to create a computer simulation of the physical world, making it possible to map everything from traffic flow and building layouts to sales and currency data on a computer screen. Mr. Gelernter’s idea came a step closer to reality in the last few weeks when both Google and Yahoo published documentation making it significantly easier for programmers to link virtually any kind of Internet data to Web-based maps and, in Google’s case, satellite imagery.

Since the Google and Yahoo tools were released, their uses have been demonstrated in dozens of ways by hobbyists and companies, including an annotated map guide to the California wineries and restaurants that appeared in the movie “Sideways” and instant maps showing the locations of the recent bombing attacks in London.

More here.

Analysis Identifies Common Genetic Core for Trio of Parasites

From Scientific American:

Scientists have successfully sequenced the genomes of three deadly parasites that together threaten half a billion people annually around the globe. According to reports published in the current issue of the journal Science, the parasites responsible for African sleeping sickness, Chagas disease and leishmaniasis–illnesses with very different symptoms–share a core of a few thousand genes. Scientists hope that the results will prove useful for identifying novel drug or vaccine targets.

More here.

Dispatches: On Ethnic Food and People of Color

One of the most shortsighted commonsense expressions in use today must be ‘ethnic food,’ as in ‘What are you in the mood for tonight?’  ‘Something ethnic?’  As a shorthand for classifying cuisines, it’s pretty incoherent, lumping together the foods of whichever nations or cultures are considered to be non-standard.  This consensus is, of course, temporary: as so many histories of American culture point out, today’s natives are yesterday’s immigrants (you could see Walter Benn Michaels’ Our America for an informative account).  As the most significant recent immigration to this country has been from Asia, ethnic food today might include Chinese, Thai, Vietnamese, Indian.

But this is poor thinking and dangerous ideology.  The Italian and Irish arrivants of a century ago have not only had their cuisines domesticated (and, in the process, modified).  Ironically, they also reintroduced to U.S. diets foods that originated here: potatoes, tomatoes, chili peppers, and corn are all native to the Americas and did not reach Europe until their conquest.  The us-and-them belief structure underlying ‘ethnic food,’ known as nativism, conceals the truly global nature of food culture underneath a phantom authenticity, as though lasagna should be regarded as more American than pho in any but the most momentary sense.

The British have been particularly good at transforming the foreign into the (as they say) homely, as in the case of tea (even the opium trade with China was begun to offset the massive trade deficit incurred by tea imports).  A more complex example is Worcestershire sauce, which two hundred years ago incorporated the unfamiliar fruits of colonial expansion, among them tamarind, cloves, and chili peppers.  These far-fetched tastes were sweetened using a colonial by-product, molasses, and then combined and fermented, thereby domesticating them for the timid palate: it’s a kind of Orientalism in a bottle.  Even the availability of that most common staple, white sugar, was ensured by a global system of slave labor and plantation colonies, as Sidney Mintz points out in the excellent Sweetness and Power.

I mention this culinary false consciousness as a benign but persistent example of a frightening tendency: the projection of the false and pernicious image of a pure, unsullied ‘homeland’ threatened by foreign infiltrators, which infects fundamentalisms worldwide.  Clearly, the contemporary right traffics in this kind of thinking constantly.  Even on the academic left, however, the appellation ‘people of color’ conflates groups whose experience is radically different (the racism experienced by African-Americans in the U.S., for instance, is of a completely different kind and degree than that of other minority groups).  I don’t question the honorable intention of the term–to generate solidarity among people who suffer oppression–but in practice it prolongs the ideological falsehood that the ur-citizen is a white male Protestant, even while attempting to critique just that.

‘People of color’ also depends on and reinforces the illusion that there is one group of white men really in control of what is American.  If the mistake of ‘ethnic food’ is the unstated assumption that the ‘normal’ food has no ethnicity, then the mistake of ‘people of color’ is the sense that white is not a color.  This has no doubt been assumed all too often in American culture, but to define resistance purely in opposition to it presumes that the U.S. notion of who is white and who isn’t is universal, when in fact it is occasional and subject to change.  That’s way too much power to ascribe to the opposition.  We should never be afraid to emphasize the basis of liberty: that our differences have nothing to do with our belonging.

Last Dispatch: Aesthetics of Impermanence

Monday Musing: Francis Crick’s Beautiful Mistake

Many scientists don’t know what they are doing. That is, they are so immersed in science that they often do not step outside it for a wider philosophical perspective on what it is they do, while remaining convinced that science is somehow more correct than other ways of doing things. For example, a scientist might argue that she can treat malaria better than a witch doctor can. The witch doctor, of course, will say the opposite. If you ask the scientist why she thinks she is right, she will say that she can demonstrate her efficacy with an experiment: a large sample of cases of malaria treated by her method as well as by the witch doctor’s (and maybe even a control group), after which she will perform a sophisticated statistical analysis on the data she collects on all these cases, thus showing that her method is better. Now suppose you object that her reasoning is circular (after all, she has just used the scientific method to show that the scientific method is correct, thereby only really showing that the scientific method is self-consistent), and you don’t allow her to use science to prove science right (if the scientific method of proving something right were already acceptable to you, you wouldn’t be questioning her in the first place). She will tend to get desperate and appeal to common sense, or even question your sanity (“Are you crazy? It’s obvious that witch doctor is a thieving fraud, taking people’s money and pretending to help them with his wacky chants,” etc.). And she will have a lingering suspicion that you have somehow tricked her with some sneaky rhetorical sophistry; she will continue to think that of course science is right, just look at what it can do!

So what’s going on here? I am not claiming that witch doctors (or astrologists, or parapsychologists, or faith-healers, or Uri Geller, or Deepak Chopra, or other charlatans) are just as good as scientists, or even that they are right about anything at all (they are not); what I am saying is that there is no neutral ground on which to stand and, from the outside, as it were, proclaim the supremacy of science as the best avenue to truth. One must learn to live without such an absolute grounding. Even as clear-headed and careful a thinker as Richard Dawkins can sometimes get confused about this. At the end of an otherwise fascinating and inventive essay entitled “Viruses of the Mind” (Dawkins’s contribution to the volume Dennett and His Critics), in which he uses viruses as a metaphor for the various bad ideas (or memes) that “infect” brains in a culture (particularly the “virus” of religion), and also makes a parallel analogy with computer viruses, Dawkins asks if science itself might be a kind of virus in this sense. He then answers his own question:

No. Not unless all computer programs are viruses. Good, useful programs spread because people evaluate them, recommend them and pass them on. Computer viruses spread solely because they embody the coded instructions: ‘Spread me.’ Scientific ideas, like all memes, are subject to a kind of natural selection, and this might look superficially virus-like. But the selective forces that scrutinize scientific ideas are not arbitrary or capricious. They are exacting, well-honed rules, and . . . they favour the virtues laid out in textbooks of standard methodology: testability, evidential support, precision, . . . and so on.

Daniel Dennett spares me the need to respond to this very uncharacteristic bit of wishful silliness from Dawkins by doing so himself (and far better than I could):

When you examine the reasons for the spread of scientific memes, Dawkins assures us, “you find they are good ones.” This, the standard, official position of science, is undeniable in its own terms, but question-begging to the mullah and the nun–and to [Richard] Rorty, who would quite appropriately ask Dawkins: “Where is your demonstration that these ‘virtues’ are good virtues? You note that people evaluate these memes and pass them on–but if Dennett is right, people (persons with fully-fledged selves) are themselves in large measure the creation of memes–something implied by the passage from Dennett you use as your epigram. How clever of some memes to team together to create meme-evaluators that favor them! Where, then, is the Archimedean point from which you can deliver your benediction on science?”

[The epigram Dawkins uses and Dennett mentions above is this:

The haven all memes depend on reaching is the human mind, but a human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication: native Chinese minds differ dramatically from native French minds, and literate minds differ from illiterate minds. What memes provide in return to the organisms in which they reside is an incalculable store of advantages — with some Trojan horses thrown in for good measure. . .

Daniel Dennett, Consciousness Explained

Below, Dennett continues his response to Dawkins…]

There is none. About this, I agree wholeheartedly with Rorty. But that does not mean (nor should Rorty be held to imply) that we may not judge the virtue of memes. We certainly may. And who are we? The people created by the memes of Western rationalism. It does mean, as Dawkins would insist, that certain memes go together well in families. The family of memes that compose Western rationalism (including natural science) is incompatible with the memes of all but the most pastel versions of religious faith. This is commonly denied, but Dawkins has the courage to insist upon it, and I stand beside him. It is seldom pointed out that the homilies of religious tolerance are tacitly but firmly limited: we are under no moral obligation to tolerate faiths that permit slavery or infanticide or that advocate the killing of the unfaithful, for instance. Such faiths are out of bounds. Out of whose bounds? Out of the bounds of Western rationalism that are presupposed, I am sure, by every author in this volume. But Rorty wants to move beyond such parochial platforms of judgment, and urges me to follow. I won’t, not because there isn’t good work for a philosopher in that rarefied atmosphere, but because there is still so much good philosophical work to be done closer to the ground.

Now I happen to agree more with Rorty on this, but that is not the point. What is important is that Rorty, Dennett, and I all agree that there is no neutral place (for Archimedes to stand with his lever) from which we can make absolute judgments about science (the way Dawkins is doing), or anything else. We must jump into the nitty-gritty of things and be pragmatists, and give up the hope of knowing with logical certainty that we are right.

So how do scientists go about their business then? How do they know when they are onto something? These are questions that many sociologists, anthropologists, psychologists, philosophers of science, and scientists themselves have tried to answer, and the answers have filled many books. One thing comes up again and again, however, and especially when scientists themselves talk about what they do and how they do it: the importance of beauty. Scientists don’t just sit there dreaming up random hypotheses and then testing them to see if they are true. There are too many possible hypotheses to work this way. Instead, they try to think of beautiful things. This intrusion of the aesthetic into the hard, cold, austere realm of science is unexpected to many people, but it is surprisingly consistent. When Albert Einstein was asked what he would do if the measurements of bending starlight at the 1919 eclipse contradicted his general theory of relativity, he famously replied, “Then I would feel sorry for the good Lord. The theory is correct.” What he meant was that the theory is far too beautiful to be wrong. How do you tell when something is beautiful? That, I’m afraid, is a question too big for me. (Though if that kind of thing interests you, you may wish to have a look at this recent Monday Musing essay by Morgan Meis and the ensuing discussion in the comments area.) For now, we’ll have to make do with some you-know-it-when-you-see-it notion of beauty. (Kurt Vonnegut once said that to know if a painting is good, all you have to do is look at a million paintings. I can only mimic him and say that if you want to know what is beautiful in science, all you have to do is look at a lot of science.)

Yes, yes, I am slowly coming to my subject. (Hey, it’s my Monday Musing and I’m allowed to ramble on a bit!) We are now approaching the first anniversary of 3 Quarks Daily. The very first day that 3QD went online, July 31, 2004, I posted the sad news of Francis Crick’s death. Crick, of course, along with James Watson (and Rosalind Franklin, and Maurice Wilkins), was the co-discoverer of the molecular structure of DNA. (In possibly the most coy understatement ever published in the history of science, at the end of the momentous paper in which Watson and Crick detailed their discovery of the double helix–which can be unwound, each strand then re-pairing with other bases to form a new double helix identical to the original–thereby solving the problem of DNA replication, they wrote: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”) Crick won a Nobel for this work, but this is not all he did. He spent the latter part of his life as a distinguished neuroscientist, publishing much in this new field, including the book The Astonishing Hypothesis.

The years following the discovery of the structure of DNA were busy ones, not just for molecular biologists, but also for physicists and mathematicians (Crick himself had come to biology after obtaining a degree in physics), and specialists in codes, because the code instantiated in the double helix took some time to understand. George Gamow made significant contributions, and other physicists also took a crack at the problem, including a young Richard Feynman, and even Edward Teller proposed a wacky scheme.

Let me now, finally, attempt to deliver on the promise of my title. At some point in time, this much was clear: the molecular code consisted of four bases, A, T, C, and G. These form the alphabet of the code. Somehow, they encode the sequences of amino acids which specify each protein. There are twenty amino acids, but only four bases, so you need more than one base to specify each amino acid. Two bases will still not be enough, because there are only 4², or 16, possible combinations. A sequence of three bases, however, has 4³, or 64, possible combinations, enough to encode the twenty amino acids and still have 44 combinations left over. Such a triplet of bases specifying an amino acid is known as a codon. So how exactly is it done? What combinations stand for which amino acid? Nature is seldom wasteful, so people wondered why a combinatorial scheme which allows 64 possibilities would be used to specify a set of only 20 amino acids. Francis Crick had a beautiful answer. As we will see, it was also wrong.
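(The arithmetic above is easy to check for yourself. Here is a quick sketch in Python; the code and names are mine, not part of the original history:)

```python
from itertools import product

BASES = "ATCG"

# Every possible two-base and three-base combination over the four bases.
pairs = ["".join(p) for p in product(BASES, repeat=2)]
triplets = ["".join(t) for t in product(BASES, repeat=3)]

print(len(pairs))     # 16: too few to encode 20 amino acids
print(len(triplets))  # 64: enough, with 64 - 20 = 44 combinations to spare
```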

What Crick thought was something like this: suppose you have a sequence of 15 bases (or 5 codons) which specifies some protein (remember, each codon specifies an amino acid), like GAATCGAACTAGAGT. This means the codon GAA (or physically, whatever amino acid that stands for), followed by the codon TCG, followed by AAC, and so on. But there are no commas or spaces to mark the boundaries of codons, so if you started reading this sequence after the first letter, you might think that it is the codon AAT, followed by CGA, followed by ACT, and so on. It is as if in English, if we had no spaces and only three-letter words, you might read the first word in the string PATENT as PAT, or if by mistake (this would be easy to do if you had whole books filled with three-letter words without spaces in between) you started reading at the second letter, as ATE, or starting at the third letter, as TEN, etc. Do you see the difficulty? This is known as the frame-shift problem. Now Crick thought: what if only a subset of the 64 possible codons were valid, and the rest were nonsense? Then, it would be possible that the code works in such a way that if you shift the reading frame in the sequence over by one or two places, what results are nonsense codons, which are not translated into protein or anything else. Again, let me try to explain by example: in the earlier English case, suppose you banned the words ATE and TEN (but allowed ENT to mean something), then PATENT could be deciphered easily, because if you start reading at the wrong place you just end up with meaningless words, and you can adjust your frame to the right or left. In other words, it would work like this: if ATG and GCA are meaningful codons, then TGG and GGC cannot be valid codons, because we could frame-shift ATGGCA and get those. Similarly, if we combine the two valid codons above in the other order, we get GCAATG, which if shifted gives CAA and AAT, which also must be eliminated as nonsense.
This kind of scheme is known as a comma-free code, as it allows sense to be made of strings without the use of delimiters such as commas.
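The boundary-straddling check Crick had in mind can be written down directly. The sketch below is my own Python (the function name `is_comma_free` is made up for illustration); it tests whether a candidate set of codons has the comma-free property:

```python
def is_comma_free(code):
    """Check the comma-free property: for any two codewords x and y
    (including x == y), neither of the two frame-shifted triplets
    straddling the boundary of x + y may itself be a codeword."""
    code = set(code)
    for x in code:
        for y in code:
            joined = x + y  # e.g. "ATG" + "GCA" -> "ATGGCA"
            if joined[1:4] in code or joined[2:5] in code:
                return False
    return True

# The example from the text: with ATG and GCA in the code, the shifted
# readings TGG, GGC (from ATGGCA) and CAA, AAT (from GCAATG) must be
# excluded; admitting any of them breaks the comma-free property.
print(is_comma_free({"ATG", "GCA"}))         # True
print(is_comma_free({"ATG", "GCA", "TGG"}))  # False
```

Note that a periodic triplet like AAA can never belong to a comma-free code: two copies in a row frame-shift back onto AAA itself.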

Now, Crick worked out the combinatorial math (I won’t bore you with the details, Josh) and found that with triplets of 4 possible bases, one has to eliminate 44 of the 64 possibilities as nonsense codons to make a comma-free code. Voila! That leaves 20 valid codons for the 20 amino acids, saving parsimonious Nature from any sinful profligacy! This is what beauty in science is all about. Now, Crick had no evidence that this is indeed how the genetic code works, but the beauty of the idea convinced him that it must be true. In fact, the elegance of this scheme was such that for many years afterward, attempts at actually figuring out the genetic code tried to remain compatible with it. Alas, it turned out to be wrong.
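(Where does the 20 come from? A comma-free code can contain no periodic triplet like AAA, since two copies in a row frame-shift onto AAA itself, and it can keep at most one triplet from each cyclic-rotation class, since frame-shifting a repeated codon yields its rotations. The counting, in my own Python sketch:)

```python
from itertools import product

BASES = "ATCG"
triplets = {"".join(t) for t in product(BASES, repeat=3)}

def rotations(word):
    """All cyclic rotations of a string."""
    return {word[i:] + word[:i] for i in range(len(word))}

# Periodic triplets rotate onto themselves; no comma-free code can
# contain them.
periodic = {t for t in triplets if len(rotations(t)) == 1}
print(sorted(periodic))  # ['AAA', 'CCC', 'GGG', 'TTT']

# The remaining 60 triplets fall into cyclic classes of three (e.g.
# ATG, TGA, GAT), and at most one member per class can be a codon.
classes = {frozenset(rotations(t)) for t in triplets - periodic}
print(len(classes))      # 20 -- Crick's upper bound
```

That at most one codon per class survives gives the upper bound of 20; Crick’s further achievement was showing that 20 is actually attainable.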

In the 1960s, when the actual genetic code was finally worked out in labs where people managed to perform protein synthesis outside the cell using strings of RNA, it turned out that there are real codons which the comma-free theory would have eliminated (indeed, the very first codon deciphered, UUU for phenylalanine, is exactly the kind of periodic triplet a comma-free code must exclude), and this nailed the coffin of Crick’s lovely idea shut forever. In fact, more than one codon sometimes codes for the same amino acid, while other codons are start and stop markers, acting as punctuation in the sentences of genetic sequences. It is now understood that nature is not being prodigal after all: the redundancy serves as an error-correction measure. Computer simulations show that the actual code is nearly optimal when this error correction is taken into account. So it is quite beautiful, after all. Still, why did so many scientists think for so long that Crick must be right? Because in science, as in life, beauty is hard to resist.
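(To see the redundancy and punctuation at work, here is a small fragment of the standard genetic code written as a lookup table, using RNA codons; the `translate` helper and its names are my own sketch, not a real bioinformatics API:)

```python
# A fragment of the standard genetic code (RNA codons), enough to
# illustrate the redundancy Crick's scheme would have forbidden.
CODON_TABLE = {
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu",
    "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",     # six codons, one amino acid
    "AUG": "Met",                                 # also the start signal
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",  # the punctuation
}

def translate(rna):
    """Read an RNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

# Two different messages, one protein: the degeneracy at work.
print(translate("AUGUUAUAA"))  # ['Met', 'Leu']
print(translate("AUGCUGUAG"))  # ['Met', 'Leu']
```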

Have a good week!

My other recent Monday Musings:
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka’s President