David Gelernter, a computer scientist at Yale, proposed using software to create a computer simulation of the physical world, making it possible to map everything from traffic flow and building layouts to sales and currency data on a computer screen. Mr. Gelernter’s idea came a step closer to reality in the last few weeks when both Google and Yahoo published documentation making it significantly easier for programmers to link virtually any kind of Internet data to Web-based maps and, in Google’s case, satellite imagery.
Since the Google and Yahoo tools were released, their uses have been demonstrated in dozens of ways by hobbyists and companies, including an annotated map guide to the California wineries and restaurants that appeared in the movie “Sideways” and instant maps showing the locations of the recent bombing attacks in London.
Scientists have successfully sequenced the genomes of three deadly parasites that together threaten half a billion people annually around the globe. According to reports published in the current issue of the journal Science, the parasites responsible for African sleeping sickness, Chagas disease and leishmaniasis–illnesses with very different symptoms–share a core of a few thousand genes. Scientists hope that the results will prove useful for identifying novel drug or vaccine targets.
One of the most shortsighted commonsense expressions in use today must be ‘ethnic food,’ as in ‘What are you in the mood for tonight?’ ‘Something ethnic?’ As a shorthand for classifying cuisines, it’s pretty incoherent, lumping together the foods of whichever nations or cultures are considered to be non-standard. This consensus is, of course, temporary: as so many histories of American culture point out, today’s natives are yesterday’s immigrants (you could see Walter Benn Michaels’ Our America for an informative account). As the most significant recent immigration to this country has been from Asia, ethnic food today might include Chinese, Thai, Vietnamese, Indian.
But this is poor thinking and dangerous ideology. The Italian and Irish arrivants of a century ago have not only had their cuisines domesticated (and, in the process, modified); ironically, they also reintroduced to U.S. diets foods that originated here: potatoes, tomatoes, chili peppers, and corn are all native to the Americas and did not reach Europe until the conquest of the New World. The us-and-them belief structure underlying ‘ethnic food,’ known as nativism, conceals the truly global nature of food culture underneath a phantom authenticity, as though lasagna should be regarded as more American than pho in any but the most momentary sense.
The British have been particularly good at transforming the foreign into the (as they say) homely, as in the case of tea (even the opium trade with China was begun to offset the massive trade deficit incurred by tea imports). A more complex example is Worcestershire sauce, which two hundred years ago incorporated the unfamiliar fruits of colonial expansion, among them tamarind, cloves, and chili peppers. These far-fetched tastes were sweetened using a colonial by-product, molasses, and then combined and fermented, thereby domesticating them for the timid palate: it’s a kind of Orientalism in a bottle. Even the availability of that most common staple, white sugar, was ensured by a global system of slave labor and plantation colonies, as Sidney Mintz points out in the excellent Sweetness and Power.
I mention this culinary false consciousness as a benign but persistent example of a frightening tendency: the projection of the false and pernicious image of a pure, unsullied ‘homeland’ threatened by foreign infiltrators, which infects fundamentalisms worldwide. Clearly, the contemporary right traffics in this kind of thinking constantly. Even on the academic left, however, the appellation ‘people of color’ conflates groups whose experience is radically different (the racism experienced by African-Americans in the U.S., for instance, is of a completely different kind and degree than that of other minority groups). I don’t question the honorable intention of the term–to generate solidarity among people who suffer oppression–but in practice it prolongs the ideological falsehood that the ur-citizen is a white male Protestant, even while attempting to critique just that.
‘People of color’ also depends on and reinforces the illusion that there is one group of white men really in control of what is American. If the mistake of ‘ethnic food’ is the unstated assumption that the ‘normal’ food has no ethnicity, then the mistake of ‘people of color’ is the sense that white is not a color. This has no doubt been assumed all too often in American culture, but to define resistance purely in opposition to it presumes that the U.S. notion of who is white and who isn’t is universal, when in fact it is occasional and subject to change. That’s way too much power to ascribe to the opposition. We should never be afraid to emphasize the basis of liberty: that our differences have nothing to do with our belonging.
Many scientists don’t know what they are doing. That is, they are so immersed in science that they rarely step outside it for a wider philosophical perspective on what it is they do, while remaining convinced that science is somehow more correct than other ways of doing things. For example, a scientist might argue that she can treat malaria better than a witch doctor can. The witch doctor, of course, will say the opposite. If you ask the scientist why she thinks she is right, she will say that she can demonstrate her efficacy with an experiment: take a large sample of malaria cases, treat some with her method and some with the witch doctor’s (and maybe keep a control group), then perform a sophisticated statistical analysis on the data collected from all these cases, thus showing that her method is better. Now suppose you object that her reasoning is circular: after all, she has just used the scientific method to show that the scientific method is correct, thereby only really showing that the scientific method is self-consistent. And suppose you don’t allow her to use science to prove science right (if the scientific method of proving something right were already acceptable to you, you wouldn’t be questioning her in the first place). She will tend to get desperate and appeal to common sense, or even question your sanity (“Are you crazy? It’s obvious that witch doctor is a thieving fraud, taking people’s money and pretending to help them with his wacky chants,” etc.). And she will harbor a lingering suspicion that you have somehow tricked her with some sneaky rhetorical sophistry; she will continue to think that of course science is right: just look at what it can do!
So what’s going on here? I am not claiming that witch doctors (or astrologers, or parapsychologists, or faith-healers, or Uri Geller, or Deepak Chopra, or other charlatans) are just as good as scientists, or even that they are right about anything at all (they are not); what I am saying is that there is no neutral ground on which to stand and, from the outside as it were, proclaim the supremacy of science as the best avenue to truth. One must learn to live without such an absolute grounding. Even as clear-headed and careful a thinker as Richard Dawkins can sometimes get confused about this. At the end of an otherwise fascinating and inventive essay entitled “Viruses of the Mind” (Dawkins’s contribution to the volume Dennett and His Critics), in which he uses viruses as a metaphor for the various bad ideas (or memes) that “infect” brains in a culture (particularly the “virus” of religion), and also draws a parallel analogy with computer viruses, Dawkins asks if science itself might be a kind of virus in this sense. He then answers his own question:
No. Not unless all computer programs are viruses. Good, useful programs spread because people evaluate them, recommend them and pass them on. Computer viruses spread solely because they embody the coded instructions: ‘Spread me.’ Scientific ideas, like all memes, are subject to a kind of natural selection, and this might look superficially virus-like. But the selective forces that scrutinize scientific ideas are not arbitrary or capricious. They are exacting, well-honed rules, and . . . they favour the virtues laid out in textbooks of standard methodology: testability, evidential support, precision, . . . and so on.
Daniel Dennett spares me the need to respond to this very uncharacteristic bit of wishful silliness from Dawkins by doing so himself (and far better than I could):
When you examine the reasons for the spread of scientific memes, Dawkins assures us, “you find they are good ones.” This, the standard, official position of science, is undeniable in its own terms, but question-begging to the mullah and the nun–and to [Richard] Rorty, who would quite appropriately ask Dawkins: “Where is your demonstration that these ‘virtues’ are good virtues? You note that people evaluate these memes and pass them on–but if Dennett is right, people (persons with fully-fledged selves) are themselves in large measure the creation of memes–something implied by the passage from Dennett you use as your epigram. How clever of some memes to team together to create meme-evaluators that favor them! Where, then, is the Archimedean point from which you can deliver your benediction on science?”
[The epigram Dawkins uses and Dennett mentions above is this:
The haven all memes depend on reaching is the human mind, but a human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication: native Chinese minds differ dramatically from native French minds, and literate minds differ from illiterate minds. What memes provide in return to the organisms in which they reside is an incalculable store of advantages — with some Trojan horses thrown in for good measure. . .
Daniel Dennett, Consciousness Explained
Below, Dennett continues his response to Dawkins…]
There is none. About this, I agree wholeheartedly with Rorty. But that does not mean (nor should Rorty be held to imply) that we may not judge the virtue of memes. We certainly may. And who are we? The people created by the memes of Western rationalism. It does mean, as Dawkins would insist, that certain memes go together well in families. The family of memes that compose Western rationalism (including natural science) is incompatible with the memes of all but the most pastel versions of religious faith. This is commonly denied, but Dawkins has the courage to insist upon it, and I stand beside him. It is seldom pointed out that the homilies of religious tolerance are tacitly but firmly limited: we are under no moral obligation to tolerate faiths that permit slavery or infanticide or that advocate the killing of the unfaithful, for instance. Such faiths are out of bounds. Out of whose bounds? Out of the bounds of Western rationalism that are presupposed, I am sure, by every author in this volume. But Rorty wants to move beyond such parochial platforms of judgment, and urges me to follow. I won’t, not because there isn’t good work for a philosopher in that rarefied atmosphere, but because there is still so much good philosophical work to be done closer to the ground.
Now I happen to agree more with Rorty on this, but that is not the point. What is important is that Rorty, Dennett, and I all agree that there is no neutral place (for Archimedes to stand with his lever) from which we can make absolute judgments about science (the way Dawkins is doing), or anything else. We must jump into the nitty-gritty of things, be pragmatists, and give up the hope of knowing with logical certainty that we are right.
So how do scientists go about their business then? How do they know when they are onto something? These are questions that many sociologists, anthropologists, psychologists, philosophers of science, and scientists themselves have tried to answer, and the answers have filled many books. One thing comes up again and again, however, especially when scientists themselves talk about what they do and how they do it: the importance of beauty. Scientists don’t just sit there dreaming up random hypotheses and then testing them to see if they are true; there are too many possible hypotheses to work this way. Instead, they try to think of beautiful things. This intrusion of the aesthetic into the hard, cold, austere realm of science is unexpected to many people, but it is surprisingly consistent. When Albert Einstein was asked what he would have done if the measurements of starlight bending during the 1919 eclipse had contradicted his general theory of relativity, he famously replied, “Then I would feel sorry for the good Lord. The theory is correct.” What he meant was that the theory is far too beautiful to be wrong. How do you tell when something is beautiful? That, I’m afraid, is a question too big for me. (Though if that kind of thing interests you, you may wish to have a look at this recent Monday Musing essay by Morgan Meis and the ensuing discussion in the comments area.) For now, we’ll have to make do with some you-know-it-when-you-see-it notion of beauty. (Kurt Vonnegut once said that to know if a painting is good, all you have to do is look at a million paintings. I can only mimic him and say that if you want to know what is beautiful in science, all you have to do is look at a lot of science.)
Yes, yes, I am slowly coming to my subject. (Hey, it’s my Monday Musing and I’m allowed to ramble on a bit!) We are now approaching the first anniversary of 3 Quarks Daily. The very first day that 3QD went online, July 31, 2004, I posted the sad news of Francis Crick’s death. Crick, of course, along with James Watson (and Rosalind Franklin, and Maurice Wilkins), was the co-discoverer of the molecular structure of DNA. (In possibly the most coy understatement ever published in the history of science, at the end of the momentous paper in which Watson and Crick detailed their discovery of the double helix–which can be unwound, each strand then pairing with new complementary bases to form a double helix identical to the original–thereby solving the problem of DNA replication, they wrote: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”) Crick won a Nobel Prize for this work, but that is not all he did. He spent the latter part of his life as a distinguished neuroscientist, publishing much in that new field, including the book The Astonishing Hypothesis.
The years following the discovery of the structure of DNA were busy ones, not just for molecular biologists, but also for physicists and mathematicians (Crick himself had come to biology after obtaining a degree in physics), and specialists in codes, because the code instantiated in the double helix took some time to understand. George Gamow made significant contributions, and other physicists also took a crack at the problem, including a young Richard Feynman, and even Edward Teller proposed a wacky scheme.
Let me now, finally, attempt to deliver on the promise of my title. At some point in time, this much was clear: the molecular code consisted of four bases, A, T, C, and G. These form the alphabet of the code. Somehow, they encode the sequences of amino acids which specify each protein. There are twenty amino acids but only four bases, so you need more than one base to specify each amino acid. Two bases will still not be enough, because there are only 4², or 16, possible combinations. A sequence of three bases, however, has 4³, or 64, possible combinations: enough to encode the twenty amino acids and still have 44 combinations left over. Such a triplet of bases specifying an amino acid is known as a codon. So how exactly is it done? Which combinations stand for which amino acids? Nature is seldom wasteful, so people wondered why a combinatorial scheme that allows 64 possibilities would be used to specify a set of only 20 amino acids. Francis Crick had a beautiful answer. As we will see, it was also wrong.
What Crick thought was something like this: suppose you have a sequence of 15 bases (or 5 codons) which specifies some protein (remember, each codon specifies an amino acid), like GAATCGAACTAGAGT. This means the codon GAA (or, physically, whatever amino acid that stands for), followed by the codon TCG, followed by AAC, and so on. But there are no commas or spaces to mark the boundaries of codons, so if you started reading this sequence after the first letter, you might think it is the codon AAT, followed by CGA, followed by ACT, and so on. Imagine English written with no spaces and only three-letter words: you might read the first word in the string PATENT as PAT; or, if by mistake you started at the second letter (easy to do if you had whole books filled with three-letter words and no spaces in between), as ATE; or, starting at the third letter, as TEN. Do you see the difficulty? This is known as the frame-shift problem. Now Crick thought: what if only a subset of the 64 possible codons is valid, and the rest are nonsense? Then the code could work in such a way that if you shift the reading frame in the sequence by one or two places, what results are nonsense codons, which are not translated into protein or anything else. Again, let me explain by example: in the English case, suppose you banned the words ATE and TEN (but allowed ENT to mean something); then PATENT could be deciphered easily, because if you start reading at the wrong place you just end up with meaningless words, and can adjust your frame to the right or left. In other words, it would work like this: if ATG and GCA are meaningful codons, then TGG and GGC cannot be valid codons, because we could frame-shift ATGGCA and get those. Similarly, if we combine the two valid codons in the other order, we get GCAATG, which if shifted gives CAA and AAT, which must also be eliminated as nonsense.
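The frame-shift problem is easy to see mechanically. Here is a minimal sketch in Python, using the sequence and offsets from the example above (the function name is just for illustration):

```python
seq = "GAATCGAACTAGAGT"  # 15 bases, i.e. 5 codons in the intended frame

def read_codons(seq, offset):
    """Split a base sequence into triplets starting at the given offset."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

print(read_codons(seq, 0))  # intended frame: ['GAA', 'TCG', 'AAC', 'TAG', 'AGT']
print(read_codons(seq, 1))  # shifted by one: ['AAT', 'CGA', 'ACT', 'AGA']
```

Nothing in the sequence itself tells the reading machinery which of the three possible frames is the right one; that is the problem Crick’s scheme was meant to solve.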
This kind of scheme is known as a comma-free code, as it allows sense to be made of strings without the use of delimiters such as commas.
Now, Crick worked out the combinatorial math (I won’t bore you with the details, Josh) and found that with triplets drawn from 4 possible bases, one has to eliminate 44 of the 64 possibilities as nonsense codons to make a comma-free code. Voila! That leaves 20 valid codons for the 20 amino acids, saving parsimonious Nature from any sinful profligacy! This is what beauty in science is all about. Crick had no evidence that this is indeed how the genetic code works, but the beauty of the idea convinced him that it must be true. Indeed, such was the elegance of this scheme that for many years afterward, all attempts at actually working out the genetic coding scheme tried to remain compatible with it. Alas, it turned out to be wrong.
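Crick’s count of 20 can be checked with a short computation. The key facts: a constant codon such as AAA can never belong to a comma-free code (AAAAAA read in a shifted frame still yields AAA), and of any cyclic class {XYZ, YZX, ZXY} at most one member can be valid, since the string XYZXYZ contains the other two as frame shifts. A sketch in Python:

```python
from itertools import product

bases = "ATCG"
codons = ["".join(p) for p in product(bases, repeat=3)]  # all 64 triplets

def rotations(c):
    """The cyclic rotations of a codon, e.g. ATG -> {ATG, TGA, GAT}."""
    return {c, c[1:] + c[0], c[2:] + c[:2]}

# Constant codons (AAA, TTT, CCC, GGG) rotate to themselves and can
# never belong to a comma-free code.
constant = [c for c in codons if len(rotations(c)) == 1]

# The remaining 60 codons fall into cyclic classes of three, and a
# comma-free code can contain at most one codon from each class.
classes = {frozenset(rotations(c)) for c in codons if len(rotations(c)) > 1}

print(len(constant), len(classes))  # 4 constant codons, 20 classes
```

Four constant codons plus twenty classes of three account for all 64 triplets, so a comma-free code can contain at most (64 − 4) / 3 = 20 codons: exactly the number of amino acids.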
In the 1960s, when the actual genetic coding schemes were finally worked out in labs where people managed to perform protein synthesis outside the cell using strings of RNA, it turned out that there are real codons which the comma-free code theory would have eliminated, and this nailed the coffin of Crick’s lovely idea shut forever. In fact, more than one codon sometimes codes for the same amino acid, while other codons are start and stop markers, acting as punctuation in the sentences of genetic sequences. It is now understood that nature is not prodigal after all: the redundancy serves as an error-correction measure, and computer simulations show that the actual code is nearly optimal when this error correction is taken into account. So it is quite beautiful, after all. Still, why did so many scientists think for so long that Crick must be right? Because in science, as in life, beauty is hard to resist.
John Tierney has some extreme ideas about how to punish hackers who write viruses and worms and damage computers around the world. He draws on Steven Landsburg’s cost-benefit analysis of executing murderers, which estimates up to $100 million in social benefits per execution. Referring to Landsburg’s views on punishing hackers, Tierney writes:
“The benefits of executing a hacker would be greater, he argues, because the social costs of hacking are estimated to be so much higher: $50 billion per year. Deterring a mere one-fifth of 1 percent of those crimes – one in 500 hackers – would save society $100 million. And Professor Landsburg believes that a lot more than one in 500 hackers would be deterred by the sight of a colleague on death row.
I see his logic, but I also see practical difficulties. For one thing, many hackers live in places where capital punishment is illegal. For another, most of them are teenage boys, a group that has never been known for fearing death. They’re probably more afraid of going five years without computer games.”
The charges were announced by Judge Raed Juhi, chief investigative judge of the tribunal. They are connected with a 1982 series of detentions and executions after an assassination attempt against Saddam in Dujayl. No trial date was announced, but under Iraqi law Saddam could stand trial as early as September, because of a minimum 45-day period following referral for trial. On July 8, 1982, a convoy carrying Saddam traveled through the town of Dujayl, a Shiite village north of Baghdad, and was attacked by a small band of residents. A series of detentions and executions in the town followed the incident. According to the tribunal, 15 people were summarily executed and some 1,500 others spent years in prison with no charges and no trial date. Ultimately, another 143 were put on “show trials” and executed, according to the tribunal.
Saddam has been in custody since December 2003, when he was captured by U.S. troops.
No wonder Irshad Manji has received death threats since appearing on British television: she is a lipstick lesbian, a Muslim and scourge of Islamic leaders, whom she accuses of making excuses about the terror attacks on London. Oh, and she tells ordinary Muslims to “crawl out of their narcissistic shell”. Ouch. Manji is a glamorous Canadian television presenter whose book, The Trouble with Islam, has made her so famous in America that she won something called the Oprah Winfrey Chutzpah award.
The underlying problem with Islam, observes Manji, is that far from spiritualising Arabia, it has been infected with the reactionary prejudices of the Middle East: “Colonialism is not the preserve of people with pink skin. What about Islamic imperialism? Eighty per cent of Muslims live outside the Arab world yet all Muslims must bow to Mecca.” Fresh thinking, she contends, is suppressed by ignorant imams; you can see why she has been dubbed “Osama’s worst nightmare”.
The science of complexity is perhaps the greatest challenge of all, Astronomer Royal Sir Martin Rees believes. The biggest conundrum is humanity and how we came to be. One man set on unravelling the complexity of life, how we are made up and how we came to be, in order to understand our future, is Craig Venter. He was one of the masterminds behind the sequencing of the human genome, the genetic code that creates life. His next big challenge is to create living, artificial organisms from a kit of genes, and he is well on his way: he says an artificial single-celled organism is possible within two years.
To unravel the complexity of life on our planet in order to understand more about where humans come from, Dr Venter embarked on a round the world ocean voyage to take samples of seawater every 200 miles. At every stop they found new species. At one location, one barrelful contained 1.3 million new genes and 50,000 new species. One certainty in an uncertain world is clear to Prof Rees: “Whatever happens in this uniquely crucial century will resonate in the remote future and perhaps far beyond the Earth.”
WatchingAmerica.com is a web site that tracks online newspapers from around the world. It focuses on how the US is viewed and reported on abroad. Side by side, the stories paint a diverse, contradictory, disturbing, and rich image of how we’re seen and understood.
I remember reading about the exceptionally long and healthy lives that natives of the mountain regions of Pakistan and Turkey enjoy. The two common features of their lifestyles turned out to be drinking water thousands of times richer in calcium, and spending at least eight hours in the sun every day. The following story in News Target explains why they live longer:
Taking a daily 10 to 15 minute walk in the sun not only clears your head, relieves stress and increases circulation – it could also cut your risk of breast cancer in half. At least that’s what Esther John, an epidemiologist at the Northern California Cancer Center, recommends. In The Breast Cancer Prevention Diet, Dr. Robert Arnot claims that national rates of breast cancer inversely correlate to solar radiation exposure. In other words, breast cancer occurs at a much higher rate in colder, cloudier northern regions than in sunnier southern regions.
How does this work? There is in fact a scientific answer. The sun stimulates production of a hormone in your skin. Vitamin D3 isn’t exactly a vitamin, but rather a type of steroid hormone that can drastically improve your immune system function. Vitamin D3 also controls cellular growth and helps you absorb calcium from your digestive tract. Most importantly, this hormone/vitamin inhibits the growth of cancer cells.
When I learned that after more than 30 years in business, the Oscar Wilde Bookshop in New York — which claimed to be the world’s first gay and lesbian bookshop — was supposed to close its doors, the news provoked a pang of nostalgia. In 1983, I worked there for exactly one day. I was six months out of college, wanted to be a writer, had recently come out, and needed a part-time job. The Oscar Wilde seemed like a good fit.
Once it was revolutionary to publish a gay novel, or open a gay bookshop, but now the time may be upon us when the revolutionary thing to do is to retire the category altogether. I’m for stepping into the post-gay future — which is why, every time I go into a Borders, I move a few books from the gay fiction shelf to the general fiction section, restoring them to their rightful place in the alphabetical and promiscuous flow of literature.
Confused young men, torn between cultures, are easy prey for preachers of hatred. Britons must bind their own wounds and be more aware of the impact of their government’s policies – on Iraq, Palestine etc – on Muslims everywhere. But Pakistanis must tackle their own problems. We live in one world: anyone who cares about what happens in Rochdale or Leeds needs to worry about Rawalpindi and Lahore as well.
Dr Badawi has visited the US several times, most recently in 2003. He was given an honorary knighthood, and in 2003 was a guest of the Queen at a state banquet for the US president, George Bush. Earlier this week, Dr Badawi joined other British religious leaders, including the Archbishop of Canterbury, Dr Rowan Williams, and the Chief Rabbi, Sir Jonathan Sacks, in publicly condemning the London bomb blasts, which killed at least 54 people. […]
The US Customs and Border Protection office said Dr Badawi had been refused entry to the country based on information indicating that he was “inadmissible”.
As the unofficial spiritual leader of Britain’s Muslims, the 82-year-old Badawi has a spiritual stature comparable to that of the Archbishop of Canterbury. He is also a vocal opponent of Islamic extremism:
When Bin Laden issued a fatwa on Americans, he dismissed it as being without religious authority and declared acerbically: “Fatwas have become a cheap business. Since Ayatollah Khomeini issued his against Salman Rushdie, everyone has opened a fatwa shop.”
Following censorship in the 1980s and scathing criticism, Richard Serra has defied the odds with a mammoth installation entitled “The Matter of Time” at the Bilbao Guggenheim. Part of the museum’s permanent collection, the installation consists of five Torques and three other pieces: Snake, Between the Torus and Sphere, and Blind Spot Reversed. This suite of eight sculptures features coiling, undulating lines of convex and concave surfaces that somehow move the space within and around the gallery. “The Matter of Time,” an appropriately weighty title for such a massive work, has the feel of a magnum opus: it marks the culmination of ideas that Serra has been working on for the past twelve years.
On the release of Harry Potter and the Half-Blood Prince, a book-review race is on in the blogosphere:
“Harry Potter and the Half-Blood Prince will finally be released to the muggle world at one minute past midnight tonight. . .
And so Culture Vulture will be covering it, in the muggle form of Arts editor Andrew Dickson and me. We’ll be joining the over-excited ankle-biters in our local branches of Waterstone’s – Notting Hill and Brighton – to report on the atmosphere in the bookshops as the frenzied hordes of youngsters up well past their bedtimes and their long-suffering parents queue to get their sticky mitts on the first copies of the book.
Then we will be speedreading the book through the night – blogging as we go – to produce the first review of the book anywhere in the world (we hope. If we can stay awake).”
(Hat tip: Maeve Adams)
Michael LaBossiere, at The Philosophers’ Magazine Online, looks at the morality of eating foie gras.
“The debate over the morality of mistreating animals and eating them is clearly philosophically interesting. However, this situation also raises another matter of concern: this debate has clearly revealed that philosophical ignorance is rather widespread among those discussing the matter. This ignorance, one may safely assume, probably extends beyond this issue. A May 2, 2005 article, ‘A Flap Over Foie Gras,’ in Newsweek nicely reveals the nature of the ignorance; all quotes below are taken from that article (page 58).
First, consider the position of American-French chef Rick Tramonto. In response to chef Charlie Trotter’s decision to stop serving foie gras (but to keep serving other meat dishes), chef Tramonto said ‘Either you eat animals or you don’t eat animals.’ While this is a good example of a tautology (a claim that is true in virtue of its logical structure), it also nicely expresses the fallacy known as false dilemma. The idea is that a person presents two alternatives, rejects one, and then asserts that the remaining one must be correct. This reasoning is fallacious when there are, in fact, more than two alternatives; both of the presented alternatives could be incorrect/false, while a third (or twentieth) alternative is correct/true.
While it is true that one either does or does not eat animals, there certainly are many alternatives lying between not eating animals at all and eating any animal.”
Cass Sunstein in The American Prospect on the problem of having a Supreme Court justice whose opinions are entirely predictable:
“Right-wing activists have made it all too clear that they want President George W. Bush to appoint Supreme Court justices who are ‘predictable.’ The longtime refrain of ‘No more David Souters’ has been joined by ‘No more Anthony Kennedys.’ Some groups demand a nominee who does not believe that the Constitution protects abortion or gay rights or even privacy; others insist that the next justice should reliably protect economic interests of which they approve. The activists, and according to some reports the White House itself, do not want surprises.
In the law, predictability is usually important. People need to know the rules, and they cannot plan their lives unless they know the law in advance. We expect predictability from our trial court judges, who are meant to follow the law far more than to make it. And of course we want to be able to predict that Supreme Court justices will not ignore the Constitution, or refuse to protect free speech, or permit racial segregation. But in the hard cases that come to the Supreme Court, complete predictability is terrible, because it compromises judicial independence.”
Maya Jasanoff reviews Gautam Chakravarty’s new book on the Great Indian Mutiny of 1857 and how it was woven into the British imagination, in The London Review of Books.
“From the outset, British writers infused the mutiny with ideological and emotive significance. East India Company administrators made a point of stressing its military origins, pointing the finger at the army. Officers, in turn, sought to blame administrators for enacting policies that led to wider discontent, such as the unpopular annexation of Awadh in 1856. Many British commentators condemned the company, continuing a long tradition of Whig criticism; while the Muslim reformer Syed Ahmad Khan, in his 1858 Causes of the Indian Revolt, attributed the rebellion to the company’s unwillingness to incorporate Indian voices in its legislative council.
Apportioning blame for what had happened was one thing. Describing what happened was another.”
“This paper presents an evolutionary argument for the role of dreams in the development of human cognitive processes. While a theory by Revonsuo proposes that dreams allow for threat rehearsal and therefore provide an evolutionary advantage, the goal of this paper is to extend this argument by commenting on other fitness-enhancing aspects of dreams. Rather than a simple threat rehearsal mechanism, it is argued that dreams reflect a more general virtual rehearsal mechanism that is likely to play an important role in the development of human cognitive capacities. This paper draws on current work in cognitive neuroscience and philosophy of mind in developing the argument.”
“The new research strategy presented in this paper, Evolutionary Social Science, is designed to bridge the gap between evolutionary psychology that operates from the evolutionary past and social science that is bounded by recent history. Its core assumptions are (1) that modern societies owe their character to an interaction of hunter-gatherer adaptations with the modern environment; (2) that changes in societies may reflect change in individuals; (3) that historical changes and cross-societal differences are due to the same adaptational mechanisms, and (4) that different social contexts (e.g., social status) modify psychological development through adaptive mechanisms. Preliminary research is reviewed concerning historical, societal, and cross-national variation in single parenthood as an illustration of the potential usefulness of this new approach. Its success at synthesizing the evidence demonstrates that the time frames of evolutionary explanation and recent history can be bridged.”