It’s not often that research results look this good. An elegant new way to visualize individual brain cells not only provides a major boost to scientists trying to understand how the brain works, but has also won one of its developers a major prize in science photography. The method — described by neuroscientists at Harvard University in Cambridge, Massachusetts, in today’s Nature — allows researchers to see more clearly how individual neurons connect with each other by colouring each one from a palette of about 90 shades. In this way they will be able to build up a detailed diagram of the brain’s wiring, which will help them study how the brain computes.
More than a century ago, neuroscientists developed the first method of staining individual neurons — with silver chromate. Work with this technique was the basis of the Nobel Prize in Physiology or Medicine in 1906. But it could stain neurons with only one colour. Only in the last decade have scientists improved on this technique, using genetic engineering to transfer genes for fluorescent proteins into mice such that they are expressed in neurons. But until now they could transfer no more than two fluorescent-protein genes at a time, lighting up the brain with two colours. “It was clear that two colours were not enough to map connections efficiently in the brain’s complex tangle of neurons,” says Joshua Sanes, one of the paper’s senior scientists.
Despite the promises of the international community, the camp [Shatila] was later desecrated. Using oral histories, Naji and the audience are taken into the world of Shatila refugee camp, where it is estimated that between 2,000 and 3,500 people were murdered in 1982. Here, we learn of the testimonies of a population displaced not once, but several times, who endure hardship within a hostile environment.
The play [Sunlight at Midnight] is thought-provoking, bringing into question many themes such as identity formation, exile and the power of memory, while simultaneously highlighting the failures of the international community and the need to commemorate this tragedy.
As we are introduced to different characters throughout the play, we gain an insight into selective memory and historical narrative. The process of exile has affected each person differently, with the sense of belonging to a Palestinian heritage stronger within the camp. History adopts two meanings in the two worlds we enter: one is associated with power, knowledge and purity, and the other with something threatening or irrelevant.
This picture – that our minds were formed by processes of evolutionary adaptation, and that the environment they are adapted to isn’t the one that we now inhabit – has had, of late, an extraordinarily favourable press. Darwinism has always been good copy because it has seemed closer to our core than most other branches of science: botany, say, or astronomy or hydrodynamics. But if this new line of thought is anywhere near right, it is closer than we had realised. What used to rile Darwin’s critics most was his account of the phylogeny of our species. They didn’t like our being just one branch among many in the evolutionary tree; and they liked still less having baboons among their family relations. The story of the consequent fracas is legendary, but that argument is over now. Except, perhaps, in remote backwaters of the American Midwest, the Darwinian account of our species’ history is common ground in all civilised discussions, and so it should be. The evidence really is overwhelming.
But Darwin’s theory of evolution has two parts. One is its familiar historical account of our phylogeny; the other is the theory of natural selection, which purports to characterise the mechanism not just of the formation of species, but of all evolutionary changes in the innate properties of organisms. According to selection theory, a creature’s ‘phenotype’ – the inventory of its heritable traits, including, notably, its heritable mental traits – is an adaptation to the demands of its ecological situation. Adaptation is a name for the process by which environmental variables select among the creatures in a population the ones whose heritable properties are most fit for survival and reproduction. So environmental selection for fitness is (perhaps plus or minus a bit) the process par excellence that prunes the evolutionary tree.
More often than not, both halves of the Darwinian synthesis are uttered in the same breath; but it’s important to see that the phylogeny could be true even if the adaptationism isn’t.
Fifty years ago, New American Library published the Mentor Philosophers series, each with a title beginning The Age of . . . Belief, Ideology, Reason, and so on; the 20th-century selections bore the title The Age of Analysis. Had the series continued to the end of that century and into this, the volume should no doubt be The Age of Apology. Our postmodern ethos seems to hold that if anything can be proved to have happened, then surely someone needs to apologize for it.
We live amid a veritable tsunami of apology. The Catholic Church, which, of course, has much to apologize for, has, of late, offered mea culpas to Galileo, the Jews, the gypsies, Jan Hus, whom it burned at the stake in 1415, even to Constantinople (now Istanbul) for its sacking 800 years ago by the knights of the Fourth Crusade, an event for which the late John Paul II expressed “deep regret.” No wonder that a group in England, claiming descent from the medieval Knights Templar, is asking the Vatican to apologize for the violent suppression of the order and for torturing to death its Grand Master Jacques de Molay in 1314, an apology timed to commemorate the 700th anniversary of that fell deed.
Thousands of Cubans and foreigners have been flocking to a mausoleum in central Cuba to commemorate the 40th anniversary of Che Guevara’s death. For 10 years, the Cuban government has been telling the world that the body inside the mausoleum is that of the famous guerrilla.
It’s a lie designed to bamboozle the population into worshiping the Argentine-born revolutionary as if he were a saint–and the Cuban Revolution as if it were a religion. A brilliant investigation by French journalist Bertrand de la Grange, recently published in Spain’s El Pais newspaper, demolishes the official version.
In 1995, Bolivian Gen. Mario Vargas, who had fought Che’s guerrillas in the 1960s, revealed that the revolutionary’s body was buried a few meters from the airport runway in Vallegrande, a town close to La Higuera, the village in eastern Bolivia where Guevara was killed on Oct. 9, 1967. (Guevara had been executed after the Bolivian president ordered the soldiers who ambushed and captured him to get rid of him.) Cuba sent a forensic, diplomatic and legal team to Vallegrande. On June 28, 1997, they claimed to have found the body, which was brought to Cuba a few weeks before the 30th anniversary of the guerrilla’s death.
The meaning of Kahlo’s art comes across in reproductions, but not its full dynamic, which involves brooding subtleties of surface and color. The reproduced images are shiny and bright. The paintings are matte and grayish, drinking and withholding light. (Their display calls for intense illumination—that of the Mexican sun, say. They should not be hung on white walls, as they are at the Walker, where the contrast makes them look like holes in a snowbank.) The lovely, highly varied, blushing colors (even Kahlo’s browns and greens blush) don’t radiate. Fused with represented flesh, foliage, fabrics, and, yes, ribbons and jewelry, they turn their backs to us. The payoff of this reticence is an absorption in the artist’s touch. It’s easy to fantasize that Kahlo’s brushes were fingertips, able to mold her own more than familiar features in the dark. The tactility of certain self-portraits is, among other things, staggeringly sexy. In “Me and My Parrots” (1941), it combines with sharp tonal contrasts of warm color to convey invisible moistness, as of a summertime, full-body, delicate sweat. Elsewhere, the felt oneness of sight and touch stirs harrowing empathy, as in “The Broken Column” (1944). Kahlo’s nude body is split open to reveal a crumbling pillar, nails penetrating her flesh everywhere. Tears flow from her eyes, but her face is dispassionate, as always. Her pain is not her. It just won’t let her mind stray to anything else, for the moment. The work belongs to a category of images with which Kahlo confronted and endured episodes of agony, including heartbreak and rage. (Most piercing are laments of her disastrous pregnancies; she longed for children but physically could not bring a baby to term.) They aren’t great art, but they are moving testaments of a great artist.
“There was no Herodotus before Herodotus.” This little pearl, courtesy of the historical polymath Arnaldo Momigliano (1908–1987), belongs to the class of truly illuminating tautologies. When Herodotus, in the middle of the 5th century B.C.E., composed his “history” of the Persian wars, there was simply no one around to tell him how it was done. The result, as anyone who has lost the thread amid one of Herodotus’s labyrinthine geographic detours knows, is anything but a “history” in the familiar sense of the term — that is, scrupulous, meticulous, and humorless. The project is better understood as an “inquiry” — a more accurate translation of the Greek word anyhow — into the shape of the known world, almost as if such an inquiry were necessary to understand, as Herodotus put it in his preface to the work, “the reason why the Greeks and barbarians fought one another.”
The cultural understanding of mountains seems so bound up with the aesthetics of the sublime and the advent of Romanticism that it is hard to understand exactly what mountains meant before the eighteenth century. Were they simply seen as blanks, deserts, wild places?
It would be wrong to propose that there’s no refined mountain perception prior to Romanticism. You only need to look at someone like Leonardo da Vinci. He’s making extraordinary sketches of mountain phenomena in the Italian Alps: they’re beautiful, and attentive both to meteorology and geology. In so many ways—as he always does—da Vinci anticipates what’s to come by several centuries. There’s also a biblical tradition of revelation at height: Moses, obviously, on Sinai, or on Mount Pisgah, looking down into the promised land. So there are visionary traditions that precede the late eighteenth century. Petrarch claims to have climbed Mount Ventoux in April 1336; doubts have been voiced about whether he actually made the ascent, but the falsifiability of the account doesn’t really matter, because he gives us one of the first expedition journals (the book of Exodus would be another of these), and one of the first mountain descriptions in which mountain and text, or mountain and representation, become blurred almost to the point of interchangeability. So, in one sense, you can construct a tradition of the visionary and the beautiful for mountains which precedes Romanticism, going as far back as you want to go. But on the other hand, it’s quite possible to argue that mountains existed as little more than wallpaper, by and large, through the Medieval and Early Modern periods.
Kroo believes the way we fly planes may change. Currently all commercial airliners cruise at speeds of around Mach 0.85 (85 percent of the speed of sound). In the future, he thinks, planes may slow down, say from Mach 0.85 to Mach 0.75.
He also believes planes could fly at lower altitudes because of concerns that contrails affect the atmosphere. Other environmental impacts would also be reduced. Nitrogen oxide emissions, unburned hydrocarbons and water vapor all have an impact strongly related to how long they stay in the atmosphere: lower altitudes help reduce this.
“It’s uncertain, but people are actively planning flight paths at lower speeds and altitudes. The sky in the future may not be filled with white lines,” Kroo said.
This would mean very efficient airplanes flying at slightly slower speeds — a small change in convenience but a profound reduction in environmental impact. A reduction in fuel burn of 50 percent is not out of the question, according to Kroo.
Harper Lee, the author of To Kill a Mockingbird, has been awarded the Presidential Medal of Freedom by George Bush. Whether or not one of the world’s most publicity-shy literary stars will relish being given America’s highest – and very public – award remains to be seen. According to the citation the reclusive author has been honoured for “an outstanding contribution to America’s literary tradition. At a critical moment in our history, her beautiful book, To Kill a Mockingbird, helped focus the nation on the turbulent struggle for equality.”
Lee was born in Monroeville in 1926, in the deep South, at a time of strict racial segregation. She was a voracious reader who moved to New York determined to become a writer, and succeeded with To Kill a Mockingbird. The book was an instant bestseller and won a Pulitzer prize. It was also made into a hit film starring Gregory Peck, which quickly gained classic status to match the book’s.
Fly envious Time, till thou run out thy race,
Call on the lazy leaden-stepping hours,
Whose speed is but the heavy Plummets pace;
And glut thy self with what thy womb devours,
Which is no more then what is false and vain,
And meerly mortal dross;
So little is our loss,
So little is thy gain.
Professor Stefano Mancuso knows it isn’t easy being green: He runs the world’s only laboratory dedicated to plant intelligence.
At the International Laboratory of Plant Neurobiology (LINV), about seven miles outside Florence, Italy, Mancuso and his team of nine work to debunk the myth that plants are low-life. Research at the modern building combines physiology, ecology and molecular biology.
“If you define intelligence as the capacity to solve problems, plants have a lot to teach us,” says Mancuso, dressed in harmonizing shades of his favorite color: green. “Not only are they ‘smart’ in how they grow, adapt and thrive, they do it without neuroses. Intelligence isn’t only about having a brain.”
Plants have never been given their due in the order of things; they’ve usually been dismissed as mere vegetables. But there’s a growing body of research showing that plants have a lot to contribute in fields as disparate as robotics and telecommunications. For instance, current projects at the LINV include a plant-inspired robot in development for the European Space Agency. The “plantoid” might be used to explore the Martian soil by dropping mechanical “pods” capable of communicating with a central “stem,” which would send data back to Earth.
Dartmouth researchers looked at the online encyclopedia Wikipedia to determine if the anonymous, infrequent contributors, the Good Samaritans, are as reliable as the people who update constantly and have a reputation to maintain.
The answer is, surprisingly, yes. The researchers discovered that Good Samaritans contribute high-quality content, as do the active, registered users. They examined Wikipedia authors and the quality of Wikipedia content as measured by how long and how much of it persisted before being changed or corrected.
“This finding was both novel and unexpected,” says Denise Anthony, associate professor of sociology. “In traditional laboratory studies of collective goods, we don’t include Good Samaritans, those people who just happen to pass by and contribute, because those carefully designed studies don’t allow for outside actors. It took a real-life situation for us to recognize and appreciate the contributions of Good Samaritans to web content.”
Paul Krugman and Brad DeLong discuss the southern strategy, the undoing of the New Deal coalition, and the future of America’s electoral terrain over at TPM Cafe. Krugman:
To give you a sense of just how little there is to be explained once you take this shift into account, here’s a statistic from Larry Bartels, my Princeton colleague. Everyone knows that white men have left the Democratic Party. But what everyone knows isn’t true, if you exclude the South. In 1952, 40 percent of non-Southern white males voted Democratic; in 2004, that was down to, um, 39 percent. (And no, the choice of years doesn’t matter – a fitted trend line tells the same story.)
Now, you could argue that the distinctiveness of the Southern vote isn’t about race. But during the rise of movement conservatism, conservative politicians clearly campaigned on race – that is, they behaved as if they thought that was what it was all about. Ronald Reagan – the real RR, not the latter-day saint – was best known in the 70s for his tales of welfare queens driving Cadillacs. He began his 1980 campaign with the infamous states’ rights speech at Philadelphia, Mississippi, where civil rights workers were murdered.
Back in the 1920s, you see, there were a lot of northern liberals who voted Republican because Lincoln had freed the slaves (they were called “Progressives”) and a lot of southern conservatives who voted Democratic because Lincoln had freed the slaves (“Dixiecrats”). The Great Crash and the Great Depression broke the allegiance of northern Republican liberals, so from 1933 on northern liberals vote Democratic. Southern conservatives, however, by and large continue to vote Democratic until the 1980s or so.
This means that from 1933 to 1994 the partisan balance of seats in the congress (and, to a much lesser extent, the presidency) is substantially to the left of where America is. From 1933 to 1960 or so the fact that southern conservative Democrats are long-serving and hold the committee chairs moderates the effects of the partisan balance. But by the 1980s the committee chairmanships are mostly held by northern liberals–pushing the balance of power in congress substantially to the left of the country. And in the 1990s the balance shifted back as southern conservatives stopped voting Democratic.
Russell Roberts over at Cafe Hayek and Robin Hanson over at Overcoming Bias argue about the value of statistical techniques. Roberts:
The nature of the analysis is such that neither side can convince the other that “their” analysis is reliable. That’s not always true. As I suggest in the podcast, Milton Friedman was able to convince the skeptics that inflation is everywhere and always a monetary phenomenon. Friedman won the debate. But how many other studies can you think of where someone staked out a controversial position and convinced the skeptics based on empirical analysis? I think it can be done, but it’s rare. And in today’s world, most of the interesting empirical claims are being made in cases where the data are too incomplete and the issue is so complex that we can’t move to a consensus. The empirical work doesn’t improve our understanding of what’s going on. It masks what’s going on. It gives a patina of science when in effect the numbers aren’t really informing the debate.
If Russ relies little on data to draw his conclusions, then on what does he rely? Perhaps he relies on theoretical arguments. But can’t we say the same thing about theory, that we mainly just search for theory arguments to support preconceived conclusions? If so, what is left, if we rely on neither data nor theory?
Try saying this out loud: “Neither the data nor theory I’ve come across much explain why I believe this conclusion, relative to my random whim, inherited personality, and early culture and indoctrination, and I have no good reasons to think these are much correlated with truth.” That does not seem a conclusion worth retaining.
My basic point was that when it comes to high-powered sophisticated statistical techniques, our biases as researchers and as consumers of that research often triumph over truth. The truth is elusive in complex systems with many things changing at once. It’s hard to isolate the independent effect of one particular variable. When scholars can run hundreds of multivariate regressions at very low cost, it’s easy to convince yourself that the results that confirm your prior beliefs are the “right” results. The ones that failed must be the “bad ones.”
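The worry about running hundreds of cheap regressions can be made concrete with a small simulation (not from the original exchange; the sample sizes and thresholds below are illustrative assumptions). Even when there is no true relationship at all, a conventional 5% significance threshold will hand a researcher a steady trickle of "findings" to pick from:

```python
# Pure-noise illustration of the many-regressions problem: test 200
# unrelated variable pairs and count how many clear a ~5% significance bar.
import random

random.seed(0)

def corr_t(xs, ys):
    """t-statistic for the sample correlation between two lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    r = sxy / (sxx * syy) ** 0.5
    return r * ((n - 2) / (1 - r * r)) ** 0.5

n_obs, n_regressions = 50, 200
false_hits = 0
for _ in range(n_regressions):
    xs = [random.gauss(0, 1) for _ in range(n_obs)]
    ys = [random.gauss(0, 1) for _ in range(n_obs)]  # no true relationship
    if abs(corr_t(xs, ys)) > 2.01:  # two-sided 5% cutoff for df = 48
        false_hits += 1

# Expect roughly 5% of the 200 regressions to look "significant" by chance.
print(false_hits)
```

A researcher who reports only the handful of "significant" runs, and quietly discards the rest as "bad ones," will appear to have empirical support while the numbers inform nothing.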
The people of Iran and Iranian advocates for freedom and democracy are experiencing difficult days. They need the moral support of the proponents of freedom throughout the world and effective intervention by the United Nations. We categorically reject a military attack on Iran. At the same time, we ask you and all of the world’s intellectuals and proponents of liberty and democracy to condemn the human rights violations of the Iranian state. We expect Your Excellency, in your capacity as the Secretary-General of the United Nations, to reprimand the Iranian government – in keeping with your legal duties – for its extensive violation of the articles of the Universal Declaration of Human Rights and other international human rights covenants and treaties.
Above all, we hope that with Your Excellency’s immediate intervention, all of Iran’s political prisoners, who are facing more deplorable conditions with every passing day, will soon be released. The people of Iran are asking themselves whether the UN Security Council is only decisive and effective when it comes to the suspension of the enrichment of uranium, and whether the lives of the Iranian people are unimportant as far as the Security Council is concerned.
In “The Manhattan Project” (Black Dog & Leventhal), published last month, Dr. Norris writes about the Manhattan Project’s Manhattan locations. He says the borough had at least 10 sites, all but one still standing. They include warehouses that held uranium, laboratories that split the atom, and the project’s first headquarters — a skyscraper hidden in plain sight right across from City Hall.
“It was supersecret,” Dr. Norris said in an interview. “At least 5,000 people were coming and going to work, knowing only enough to get the job done.”
Manhattan was central, according to Dr. Norris, because it had everything: lots of military units, piers for the import of precious ores, top physicists who had fled Europe and ranks of workers eager to aid the war effort. It even had spies who managed to steal some of the project’s top secrets.
“The story is so rich,” Dr. Norris enthused. “There’s layer upon layer of good stuff, interesting characters.”
Still, more than six decades after the project’s start, the Manhattan side of the atom bomb story seems to be a well-preserved secret.
As the second world war drew to a close, two women thought about applying to Harvard Law School.
The first was an African-American native of North Carolina, the granddaughter of a slave and the great-granddaughter of a slave-owner, who had moved North for college, survived the lean years of the Depression, befriended Eleanor Roosevelt, and sought unsuccessfully to do graduate work in sociology at the all-white University of North Carolina. When instead she finished Howard Law School at the top of her class, she sought the fellowship traditionally awarded to Howard’s best student: a year at Harvard to complete a master’s of law degree. But, wrote the admissions committee, “Your picture and the salutation on your college transcript indicate that you are not of the sex entitled to be admitted to Harvard Law School.”
The second woman who thought about applying to law school was a Midwesterner of Scottish descent, educated in Catholic schools paid for by her mother’s hard-earned wages. After politely defying her teachers by declining a full scholarship to a local Catholic college, this woman spent the war acing a full course load in political science at her state’s best university by day, and working eight-hour shifts testing firearms in a munitions factory by night. Upon her graduation, Columbia, Radcliffe, and Wellesley all offered her financial aid for graduate study; she chose correctly, and so impressed her Radcliffe professors that one offered to sponsor her application to law school. The steep cost of a legal education led her to decline, and she was off to Washington to seek a job in the federal government.
The first woman was Pauli Murray, whose remarkable contributions to American legal and women’s history are beginning to be recognized, thanks in large part to the voluminous personal papers she left to the Radcliffe Institute’s Schlesinger Library. The second woman is, quite decidedly and proudly, not a feminist icon. She is Phyllis Schlafly, A.M. ’45, who some say is more responsible than anyone for the rise of grassroots religious conservatism and the transformation of the Republican Party, and who all agree can take the lion’s share of credit or blame for the defeat of the Equal Rights Amendment and the rise of antifeminism as a force in American politics.