Thursday, October 16, 2014
Leo Carey in the New York Review of Books:
Nine hundred and thirty pages into Jan Swafford’s new biography of Beethoven, there is an interesting juxtaposition. After the composer died, in March 1827, his funeral was “one of the grandest Vienna ever put on for a commoner.” Schools were closed. Some 10,000 people crowded into the courtyard of the building where he had lived, then followed the coffin to the local parish church—not, as Swafford has it, to St. Stephen’s Cathedral. (Among the torchbearers was Franz Schubert.) Franz Grillparzer, the leading Viennese writer of the day, wrote a funeral oration. But later that year, when Beethoven’s effects were auctioned off, a lifetime’s worth of manuscripts and sketchbooks fetched prices that Swafford calls “pathetic.” Beethoven’s late masterpiece the Missa Solemnis went for just seven florins. By comparison, his old trousers and stockings sold for six florins.
Beethoven’s last years are rich in anecdotes of neglect. The late works were too abstruse for the public, and he told a visitor (exaggerating somewhat) that even earlier ones were out of fashion and never performed in Vienna. When Rossini, then Europe’s most popular composer, paid a visit, he was appalled at the squalor in which the great man was living and left in tears. He appealed to aristocratic contacts to do something, but they refused, considering Beethoven crazy and beyond help. Even Beethoven’s successes in these years were partial: the ecstatic reception of the Ninth Symphony is well known, but Swafford suspects that the audience at the premiere had come to cheer “the man and his legacy” rather than the music.
Nobel prizewinners May-Britt Moser and Edvard Moser have spent a career together near the Arctic Circle exploring how our brains know where we are.
Alison Abbott in Nature:
The fact that Edvard and May-Britt Moser have collaborated for 30 years — and been married for 28 — has done nothing to dull their passion for the brain. They talk about it at breakfast. They discuss its finer points at their morning lab meeting. And at a local restaurant on a recent summer evening, they are still deep into a back-and-forth about how their own brains know where they are and will guide them home. “Just to walk there, we have to understand where we are now, where we want to go, when to turn and when to stop,” says May-Britt. “It's incredible that we are not permanently lost.”
If anyone knows how we navigate home, it is the Mosers. They shot to fame in 2005 with their discovery of grid cells deep in the brains of rats. These intriguing cells, which are also present in humans, work much like the Global Positioning System, allowing animals to understand their location. The Mosers have since carved out a niche studying how grid cells interact with other specialized neurons to form what may be a complete navigation system that tells animals where they are going and where they have been. Studies of grid cells could help to explain how memories are formed, and why recalling events so often involves re-envisioning a place, such as a room, street or landscape.
While pursuing their studies, the two scientists have become a phenomenon. Tall and good-looking, they operate like a single brain in two athletic bodies in their generously funded lab in Trondheim, Norway — a remote corner of northern Europe just 350 kilometres south of the Arctic Circle. They publish together and receive prizes as a single unit — most recently, the Nobel Prize in Physiology or Medicine, which they won this week with their former supervisor, neuroscientist John O’Keefe at University College London.
Nathan Jurgenson in The New Inquiry (image in Franz Kline, Suspended, 1953):
Modernity has long been obsessed with, perhaps even defined by, its epistemic insecurity, its grasping toward big truths that ultimately disappoint as our world grows only less knowable. New knowledge and new ways of understanding simultaneously produce new forms of nonknowledge, new uncertainties and mysteries. The scientific method, based in deduction and falsifiability, is better at proliferating questions than it is at answering them. For instance, Einstein’s theories about the curvature of space and motion at the quantum level provide new knowledge and generate new unknowns that previously could not be pondered.
Since every theory destabilizes as much as it solidifies in our view of the world, the collective frenzy to generate knowledge creates at the same time a mounting sense of futility, a tension looking for catharsis — a moment in which we could feel, if only for an instant, that we know something for sure. In contemporary culture, Big Data promises this relief.
As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive databases can reveal a whole new set of truths because of the unprecedented quantity of information they contain. But the big in Big Data is also used to denote a qualitative difference — that aggregating a certain amount of information makes data pass over into Big Data, a “revolution in knowledge,” to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science’s simple accumulation of more information, Big Data is touted as a different sort of knowledge altogether, an Enlightenment for social life reckoned at the scale of masses.
As with the similarly inferential sciences like evolutionary psychology and pop-neuroscience, Big Data can be used to give any chosen hypothesis a veneer of science and the unearned authority of numbers. The data is big enough to entertain any story. Big Data has thus spawned an entire industry (“predictive analytics”) as well as reams of academic, corporate, and governmental research; it has also sparked the rise of “data journalism” like that of FiveThirtyEight, Vox, and the other multiplying explainer sites. It has shifted the center of gravity in these fields not merely because of its grand epistemological claims but also because it’s well-financed. Twitter, for example, recently announced that it is putting $10 million into a “social machines” Big Data laboratory.
Pankaj Mishra makes the case in The Guardian (An empty billboard site in São Paulo, Brazil. Billboard advertising has been banned there since 2007. Photograph: Tony de Marco):
As late as 2008, Fareed Zakaria declared in his much-cited book, The Post-American World, that “the rise of the rest is a consequence of American ideas and actions” and that “the world is going America’s way”, with countries “becoming more open, market-friendly and democratic”.
One event after another in recent months has cruelly exposed such facile narratives. China, though market-friendly, looks further from democracy than before. The experiment with free-market capitalism in Russia has entrenched a kleptocratic regime with a messianic belief in Russian supremacism. Authoritarian leaders, anti-democratic backlashes and rightwing extremism define the politics of even such ostensibly democratic countries as India, Israel, Sri Lanka, Thailand and Turkey.
The atrocities of this summer in particular have plunged political and media elites in the west into stunned bewilderment and some truly desperate cliches. The extraordinary hegemonic power of their ideas had helped them escape radical examination when the world could still be presented as going America’s way. But their preferred image of the west – the idealised one in which they sought to remake the rest of the world – has been consistently challenged by many critics, left or right, in the west as well as the east.
Herzen was already warning in the 19th century that “our classic ignorance of the western European will be productive of a great deal of harm; racial hatred and bloody collisions will develop from it.” Herzen was sceptical of those liberal “westernisers” who believed that Russia could progress only by diligently emulating western institutions and ideologies. Intimate experience and knowledge of Europe during his long exile there had convinced him that European dominance, arrived at after much fratricidal violence and underpinned by much intellectual deception and self-deception, did not amount to “progress”. Herzen, a believer in cultural pluralism, asked a question that rarely occurs to today’s westernisers: “Why should a nation that has developed in its own way, under completely different conditions from those of the west European states, with different elements in its life, live through the European past, and that, too, when it knows perfectly well what that past leads to?”
Lindsay Beyerstein reviews Katha Pollitt's Pro: Reclaiming Abortion Rights in The American Prospect:
A full-throated defense of abortion as a social good, Pro is a thorough debunking of anti-abortion pieties. But it’s not just anti-abortion activists on whom Pollitt, a columnist for The Nation and a noted poet, sets her sights: She also takes pro-choicers to task for what she calls “the awfulization” of abortion by paying lip service to the idea that abortion is always an agonizing decision that all women feel somewhat bad about.
Pollitt makes a compelling point: Why would that be universally true, unless all women inherently want babies all the time, or unless everyone believes there’s something at least a bit wrong with abortion? As Pollitt’s own reporting makes clear, regret and uncertainty are hardly universal experiences for those who have abortions.
Women’s reasons for abortion are dismissed as frivolous and selfish, Pollitt argues, because society doesn’t take women’s aspirations for a better life seriously. Furthermore, she writes, women are depicted as shallow—or worse—for wanting to have sex while avoiding pregnancy. This attitude rests on the conservative assumption that pregnancy is a natural “consequence” of sex, and that women who try to avoid it are shirkers.
John Quiggin in Crooked Timber:
I want to offer a very simple explanation of Piketty’s point. I’m aware that this may seem glaringly obvious to some readers, and remain opaque to others, but I hope there is a group in between who will benefit.
Suppose that you are a debtor, facing an interest rate r, and that your income grows at a rate g. Initially, think about the case when r=g. For concreteness, suppose you initially owe $400, your annual income is $100 and r=g is 5 per cent. So, your debt to income ratio is 4. Now suppose that your consumption expenditure (that is, expenditure excluding interest and principal repayments) is exactly equal to your income, so you don’t repay any principal and the debt compounds. Then, at the end of the year, you owe $420 (the initial debt + interest) and your income has risen to $105. The debt/income ratio is still 4. It’s easy to see that this will work regardless of the numerical values, provided r=g. To sum it up in words: when the growth rate and the interest rate are equal, and income equals consumption expenditure, the ratio of debt to income will remain stable.
On the other hand, if r>g, the ratio of debt to income can only be kept stable if you consume less than you earn. And conversely if r < g (for example in a situation of unanticipated inflation or booming growth), the debt-income ratio falls automatically provided you don’t consume in excess of your income.
Now think of an economy divided into two groups: capital owners and everyone else (both wage-earners and governments). The debt owed by everyone else is the wealth of the capital owners. If r>g, and if capital owners provide the net savings to allow everyone else to balance income and consumption, then the ratio of the capital stock to (non-capital) income must rise. My reading of Piketty is that, as we shift from the C20 situation of r ≤ g to one in which r>g, the ratio of capital stock to non-capital income is likely to rise from 4 (the value that used to be considered one of the constants of 20th century economics) to 6 (the value he estimates for the 19th century).
This in turn means that the ratio of capital income to non-capital income must rise, both because the capital stock is getting bigger in relative terms and because the rate of return, r, has increased as we move from r=g to r>g.
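Quiggin's arithmetic is easy to check numerically. The sketch below (the function name and numbers follow his example; the code itself is mine, not his) compounds debt at rate r and grows income at rate g, assuming consumption exactly equals income so no principal is ever repaid:

```python
def debt_income_ratio(debt, income, r, g, years):
    """Evolve debt at interest rate r and income at growth rate g,
    assuming consumption equals income (so the debt just compounds)."""
    for _ in range(years):
        debt *= 1 + r
        income *= 1 + g
    return debt / income

# r = g: the ratio stays at its initial value of 4, as in the text.
print(round(debt_income_ratio(400, 100, 0.05, 0.05, 10), 4))  # prints 4.0

# r > g: the ratio rises with no change in behaviour (here to about 4.85).
print(round(debt_income_ratio(400, 100, 0.05, 0.03, 10), 4))

# r < g: the ratio falls automatically, as in the inflation/boom case.
print(round(debt_income_ratio(400, 100, 0.03, 0.05, 10), 4))
```

Running it confirms the three cases in the passage: stability when r=g, automatic growth of the debt (or capital) ratio when r>g, and automatic erosion when r<g.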
Owen Bennett-Jones reviews two books on the AfPak situation in the London Review of Books:
The conflict in the Afghanistan-Pakistan borderlands has similarities with other contemporary struggles. From Timbuktu to Kandahar, jihadis, national governments, ethnic groups and, in some cases, tribes are fighting for supremacy. In each place there are complicating local factors: badly drawn international borders; the relative strength or weakness of non-violent Islamist movements; the presence or absence of foreign forces, whether Western or jihadi; and different historical experiences of colonialism. From the point of view of Western policymakers some of these conflicts seem to be more important than others. For the French, the potential fall of Mali to radical Islamist forces was unacceptable, so they intervened. In Somalia, by contrast, the problem has largely been ignored by the West and is mostly being dealt with by the African Union. It was said that al-Qaida must not be allowed to hold territory in Syria, but both an al-Qaida affiliate and Isis have been doing just that, and it wasn’t until earlier this month that Obama announced he’d strike Isis from the air.
It’s far from clear that these varied responses to jihadi activity are the result of rational decision-making. In Yemen, for example, al-Qaida supporters move about freely and plot attacks against the West. Yet although the US has used air power in Yemen it has for the most part left the fighting to the far from capable Yemeni armed forces. But the Pashtun areas of the Afghanistan-Pakistan borderlands are an exception to the mixed messages. There the West has used every tactic at its disposal to confront jihadis: boots on the ground, air strikes, drone attacks, bribes, social welfare programmes and infrastructure projects – the effort to control the Pashtuns hasn’t lacked commitment. There are, of course, important differences between Yemen and the Pashtun areas. Attacks organised in Pashtun areas – including 9/11 and 7/7 – have succeeded; even the most sophisticated plot to emerge from Yemen, in which bombs were disguised as printer cartridges, was foiled. And it isn’t just that the US was impelled to avenge 9/11. The outside world is interested in the Pashtuns’ poppy crop and their hosting of much of Pakistan’s nuclear arsenal. Over the last century and a half the intricacies of Pashtun politics have been discussed by politicians and their advisers in the capitals of all the Great Powers: it’s Washington that’s worrying today, but it used to be Moscow, and before that London.
Rachel Nuwer in Smithsonian:
Not long after the second nurse to contract Ebola was identified, it came out that she had flown on a commercial flight after being exposed to the disease. The presence of Ebola in the United States has inflamed fears fed by misinformation about the disease, poor understanding of epidemiology, lack of perspective and panic-mongering. It has led to some surprising and disturbing reactions from both officials and members of the public. Here are a few of the more egregious examples:
- In Texas, a community college just announced a new rule: “Navarro College is not accepting international students from countries with confirmed Ebola cases,” the Daily Beast reports. On these grounds, Navarro College rejected a well-qualified Nigerian applicant—despite the fact that the disease isn't affecting Nigeria.
- Officials in DeKalb County allegedly threatened to disconnect Emory University's sewer system when two Ebola patients were rushed to its hospital, the New York Times writes. Pizza delivery guys supposedly wouldn't serve the doctors, either. (These claims were later disputed, however).
- Some lady wore a full-on hazmat suit to catch her flight at Dulles International Airport in Washington, D.C.
- Protective gear like hazmat suits, Business Insider reports, is now "the hottest trade in the stock market." On Monday, shares of two companies that manufacture the gear were up 47 and 33 percent.
- The guy who owns "ebola.com" wants $150,000 for the domain name. As 1ClickNews reports, he hopes to find a buyer soon because "he is worried something may 'ameliorate' the outbreak, diminishing Ebola’s news value – and the worth of his domain."
- This year's "hot"—get it?—Halloween costume: Ebola!
The Halloween costume, at least, does get one thing right—Americans think Ebola is very scary.
Wednesday, October 15, 2014
Carl Zimmer in the New York Times:
News that a nurse in full protective gear had become infected with the Ebola virus raised some disturbing questions on Monday. Has the virus evolved into some kind of super-pathogen? Might it mutate into something even more terrifying in the months to come?
Evolutionary biologists who study viruses generally agree on the answers to those two questions: no, and probably not.
The Ebola viruses buffeting West Africa today are not fundamentally different from those in previous outbreaks, they say. And it is highly unlikely that natural selection will give the viruses the ability to spread more easily, particularly by becoming airborne.
“I’ve been dismayed by some of the nonsense speculation out there,” said Edward Holmes, a biologist at the University of Sydney in Australia. “I understand why people get nervous about this, but as scientists we need to be very careful we don’t scaremonger.”
Ebola is a mystery that invites speculation. The virus came to light only in 1976, with the first known outbreak. Nearly forty years later, scientists are just starting to answer some of the most important questions about it.
Just last month, for example, Derek J. Taylor, an evolutionary biologist at the University at Buffalo, and his colleagues published evidence that Ebola viruses are profoundly ancient, splitting off from other viral lineages at least 20 million years ago. Dr. Taylor’s research suggests that for most of that time, strains of Ebola infected rodents and other mammals.
In 1976, the virus spilled over into the human population from one of those animals, possibly bats. And every few years since then, a new outbreak has emerged in different parts of Central Africa.
Bill Gates in his blog:
A 700-page treatise on economics translated from French is not exactly a light summer read—even for someone with an admittedly high geek quotient. But this past July, I felt compelled to read Thomas Piketty’s Capital in the Twenty-First Century after reading several reviews and hearing about it from friends.
I’m glad I did. I encourage you to read it too, or at least a good summary, like this one from The Economist. Piketty was nice enough to talk with me about his work on a Skype call last month. As I told him, I agree with his most important conclusions, and I hope his work will draw more smart people into the study of wealth and income inequality—because the more we understand about the causes and cures, the better. I also said I have concerns about some elements of his analysis, which I’ll share below.
I very much agree with Piketty that:
- High levels of inequality are a problem—messing up economic incentives, tilting democracies in favor of powerful interests, and undercutting the ideal that all people are created equal.
- Capitalism does not self-correct toward greater equality—that is, excess wealth concentration can have a snowball effect if left unchecked.
- Governments can play a constructive role in offsetting the snowballing tendencies if and when they choose to do so.
To be clear, when I say that high levels of inequality are a problem, I don’t want to imply that the world is getting worse. In fact, thanks to the rise of the middle class in countries like China, Mexico, Colombia, Brazil, and Thailand, the world as a whole is actually becoming more egalitarian, and that positive global trend is likely to continue.
But extreme inequality should not be ignored—or worse, celebrated as a sign that we have a high-performing economy and healthy society. Yes, some level of inequality is built into capitalism.
At the Horse Pavilion
We lost you once,
at the Horse Pavilion, on a day
of snappy wind beating five flags
above that brilliant nightmare green
in the sun and beyond prayer but ready to
live on a diet of it for the rest of our days,
we ducked and ran among faces made blank or tender
by our terror, so that we understood for the first time
that this was the way the world was truly divided:
into those faces that could be startled into goodness,
and those that could not, but none of them worth
anything at all to us except for what
they could tell us as we kept calling out to them
the only words left to us, A little boy!, and the
colours of the clothes you were wearing, while the
polished horses kept mindlessly
clearing gates that were hardships,
but distant, whitewashed, the hardships of others,
and sounds mocked us too, in that whinnied
bright air--a ring of faint surf, the civil, evil
sound of horsemen's applause, and we ran into
each other and ran back and ran through the
stadium of stalls and sick straw-smell and ran out
into the sun of the Pavilion's mud plaza
and there you were, on the other side
of the soot track that led toward the weeping
green park, your eyes fixed without flinching
on the main doorway, waiting for us to come out
sometime before dark and we fled to you, crying
your name and I could see in your eyes
how hard you'd been standing your ground
against terror, how long you'd been forbidding
yourself to invent us, as if in inventing us you'd have
lost all chance to see us come out to you,
but how brilliant you seemed, having saved yourself
from harm, you didn't know it, you turned
your face to the taut thigh of my skirt,
not to cry, and we walked that way,
my hand holding your head to me while I
could have sworn I could feel you inhaling
what I was thinking through the skirt's grass-engraved
cotton: Until this moment I never knew what love is.
by Elisabeth Harvor
from Fortress of Chairs. Vehicule Press, 1992
Sandali in Himal Southasian:
Tara Books, a feminist publishing house located in Chennai, has been collaborating with ‘folk’ and ‘tribal’ artists for the last few years to produce illustrated books for both children and adults. Many of the art forms they have worked with are believed to have evolved from women’s creative expressions within the household, created for the purposes of ritual and decoration. On the occasion of International Women’s Day on 8 March this year, Tara Books inaugurated a photo exhibition titled ‘From Floor to Book: Women’s Everyday Art Traditions’ at their office, the Book Building. The exhibition, which ran until the end of July, traced the journey of select art traditions across the country, from their original contexts to newer canvases and spaces. A few years ago, Zubaan, another feminist publishing house located in Delhi, showcased artworks by rural women in a travelling exhibition titled ‘Painting Our World: Women’s Messages through Art’ in several cities across the country. As part of Zubaan’s larger project of mapping the women’s movement through visual material, the exhibition aimed to document rural women’s voices on issues ranging from violence, health, communalism and domestic work to marriage, livelihood and the environment, expressed through ‘folk’ and ‘tribal’ art and embroidery practices. While Zubaan’s exhibition captured the overtly political discursive articulations stemming from the women’s movement, Tara Books’ concern seems to lie in understanding meaning-making processes of women – the ways in which they comprehend gender and other social relations through the interplay of quotidian and critical consciousness.
The first of the three sections of Tara Books’s exhibition is titled ‘Everyday Art’. It showcases aspects of an art form difficult to categorise, lying perhaps somewhere at the interstices of craft, art, household labour, tradition and practice. The exhibition shows viewers that women’s everyday art is ‘created’ and ‘displayed’ in the context of the household, and is by nature ephemeral.
Richard Van Noorden in Nature:
This year’s Nobel Prize for Chemistry was awarded to three researchers who developed ways to capture images of living cells at nanoscale resolution — well below the 200 nanometres thought to be the best possible resolution for visible-light microscopes.
A fourth recently-developed super-resolution technique, called structured illumination microscopy (SIM), illuminates samples with stripes of light. A computer program analyses the interference patterns formed by the stripes (usually combining composite pictures with stripes in different orientations) to reconstruct a picture of a cell at about double the resolution limit of optical microscopy. This SIM image shows a three-dimensional view of a human bone cancer cell with actin in purple, DNA in blue, and mitochondria in yellow.
Tuesday, October 14, 2014
Howard Zinn (1922 – 2010) was an American historian, author, playwright, and social activist. The following is adapted from his acclaimed A People's History of the United States.
Howard Zinn in Jacobin:
Columbus would never have made it to Asia, which was thousands of miles farther away than he had calculated, imagining a smaller world. He would have been doomed by that great expanse of sea. But he was lucky. One-fourth of the way there he came upon an unknown, uncharted land that lay between Europe and Asia—the Americas. It was early October 1492, and thirty-three days since he and his crew had left the Canary Islands, off the Atlantic coast of Africa. Now they saw branches and sticks floating in the water. They saw flocks of birds.
These were signs of land. Then, on October 12, a sailor called Rodrigo saw the early morning moon shining on white sands, and cried out. It was an island in the Bahamas, the Caribbean Sea. The first man to sight land was supposed to get a yearly pension of 10,000 maravedis for life, but Rodrigo never got it. Columbus claimed he had seen a light the evening before. He got the reward.
So, approaching land, they were met by the Arawak Indians, who swam out to greet them. The Arawaks lived in village communes, had a developed agriculture of corn, yams, cassava. They could spin and weave, but they had no horses or work animals. They had no iron, but they wore tiny gold ornaments in their ears.
This was to have enormous consequences: it led Columbus to take some of them aboard ship as prisoners because he insisted that they guide him to the source of the gold. He then sailed to what is now Cuba, then to Hispaniola (the island which today consists of Haiti and the Dominican Republic). There, bits of visible gold in the rivers, and a gold mask presented to Columbus by a local Indian chief, led to wild visions of gold fields.
Hodgkinson and Bridle in The White Review:
There’s a marvellous lecture by Tim Berners-Lee (‘How the world wide web just happened’) in which he talks about how he came to create what turned out to be the world wide web. He describes growing up as the child of Computer Scientist parents who had worked at Bletchley Park, and building his first circuits from bits of wire, wrapped around nails, hammered into a piece of wood. Once he’d got the hang of that, he was just in time for the invention of the transistor, and then the integrated circuit. As the components available to him got ever smaller, the complexity of the machines they could power increased exponentially. As Berners-Lee tells it – with some modesty – it was a simple, natural progression from crystal radio, to building his own computer, to putting in place the fundamental transfer protocol that most of us use to access the internet. As things shrank, they also became more powerful, more networked, leading inevitably to an almost total, sublime connectivity.
...This book, For the Motherboard: The Rubáiyát of Omar Khayyám, takes into account some of the limitations of working at scale across digital devices. The first of these is the display size: in an era of ‘retina screens’, where the pixel density of our displays begins to surpass in definition the clusters of rods and cones in the human eye, we are still limited in print by the apertures of our ink nozzles. This is not a new problem: For the Motherboard… is set in Bell Centennial, a font commissioned by AT&T from the designer Matthew Carter in 1975, to replace Bell Gothic, which it had been using in its phone directories since 1938. Between those years, the number of telephones in the United States alone grew from some twenty million to around 140 million. Carter’s Bell Centennial typeface exists because of explosive, networked growth, addressing both this increased technological density, by condensing the character width, and the limitations of contemporary printing, by adding ‘ink traps’ to the letters, minute nicks in the letterforms to absorb and counter the ink spread caused by rapid printing on newsprint. When printed at sufficient size on coated paper, these traps remain visible, tiny reminders of previous technological limitations.
Christopher Bray in Spiked:
One evening in December 1966, the great American writer and critic Edmund Wilson had Sir Isaiah Berlin over for dinner. And a good time they doubtless had of it, but later that night Wilson recorded in his diary that he found Berlin prone to ‘violent, sometimes irrational prejudice against people’. On the evening in question the object of Berlin’s ire was the philosopher and political theorist Hannah Arendt, whose book about the trial of the Nazi officer Adolf Eichmann, Eichmann in Jerusalem, he excoriated without, Wilson claimed, his ever having troubled to read it.
On that last point at least, Wilson seems to have been wrong. Granted the evidence marshalled in David Caute’s Isaac & Isaiah: The Covert Punishment of a Cold War Heretic, it is fair to conclude that Berlin had not only read Arendt’s bestseller, but had also likely arranged for his close friend John Sparrow, then warden of All Souls College at Oxford, to give the book a kicking in the pages of the Times Literary Supplement. Since TLS reviews were printed without bylines back then, why didn’t Berlin write about the book himself? Because, Caute argues, he had for some reason ‘always avoided referring to Arendt in print’. Privately, though, he was happy to rubbish her work. A few years earlier, he had written Faber & Faber a report on Arendt’s The Human Condition. It opened by telling them he ‘could recommend no publisher to buy the UK rights of this book. There are two objections to it: it won’t sell, and it is no good.’
Fans of Berlin’s waspish wit will relish those last two clauses (invert them, as the logic of the sentence dictates, and the wit is gone), but did Arendt’s most considered work really merit such a stinging rebuke?
Modern critics would probably hail the up-and-coming rock artists that once inhabited Indonesia. About a hundred caves outside Maros, a town in the tropical forests of Sulawesi, were once lined with hand stencils and vibrant murals of abstract pigs and dwarf buffalo. Today only fragments of the artwork remain, and the mysterious artists are long gone.
For now, all we know is when the caves were painted—or at least ballpark dates—and the finding suggests that the practice of lining cave walls with pictures of natural life was common 40,000 years ago. A study published today in Nature suggests that paintings in the Maros-Pangkep caves range from 17,400 to 39,900 years old, close to the age of similar artwork found on the walls of caves in Europe.
“It provides a new view about modern human origins, about when we became cognitively modern,” says Maxime Aubert, an archaeologist at Griffith University in Australia. “It changes the when and the where of our species becoming self-aware and starting to think abstractly, to paint and to carve figurines.”
Walter Kempowski’s writing career began on a winter evening in 1950, nineteen years before he published his first novel. Then 21, he was serving time for espionage in an East German prison at Bautzen. For two years, he had passed the time by going from bunk to bunk and plying his fellow prisoners with questions about their lives. He met a glassblower from the Vogtland, a businessman who had worked in Persia, a bank president. He discovered Auschwitz survivors sleeping above former camp commandants, Americans alongside Finns and Brits, and a Frenchman who had been stationed at Dien Bien Phu. One evening, as Kempowski trudged through the yard for his nightly exercise, he found himself thinking how painful it was that the conversations going on throughout the prison at that moment should be lost, like the choir of voices swirling around the Tower of Babel. The guard on duty told Kempowski to pay attention. “Those are your comrades in the cells,” he said. “They’re telling you something.”
By the time of his death in 2007, Kempowski had earned an international reputation as Germany’s premier chronicler—a quirky old uncle spending time in his attic, surrounded by faded photographs and dusty junk.
Carl Zimmer in the New York Times:
Invasive species are both a fact of life and a scientific puzzle. Humans transport animals and plants thousands of miles from where they first evolved — sometimes accidentally, sometimes intentionally. Many of those species die off in their new homes. Some barely eke out an existence.
But some become ecological nightmares. In the Northeast, emerald ash borers are destroying ash trees, while Japanese barberry is blanketing forest floors, outcompeting native plants. Scientists aren’t certain why species like these are proving superior so far from home.
“If natives are adapted to their environment and exotics are from somewhere else, why are they able to invade?” asked Dov F. Sax, an ecologist at Brown University.
A big part of the answer may be found in the habitats in which invasive species evolve. Many alien species in the northeastern United States, including the emerald ash borer and Japanese barberry, invaded from East Asia. But the opposite is not true. Few species from the northeastern United States have become problems in East Asia.
In a new study published in the journal Global Ecology and Biogeography, Dr. Sax and Jason D. Fridley, a biologist at Syracuse University, argue that this is not a coincidence. They offer evidence that some parts of the world have been evolutionary incubators, producing superior competitors primed to thrive in other environments.
Andrew O'Hehir in Salon:
Here’s a news flash: None of these heated public debates about atheism and religion, or about how Western “liberals” should think about Islam, ever reach a satisfactory conclusion. There are many reasons for this, including the fact that talk-show hosts and movie stars (just for instance) aren’t necessarily the best people to bring nuance or thoughtfulness or clarity to these conversations. An even bigger reason may be that religion in general, and fundamentalist religion in particular, is a major sore spot in Western culture, a source of tremendous vulnerability and anxiety.
One of the few propositions that Reza Aslan and Sam Harris might both agree with is that God’s return to the world-historical stage long after Nietzsche supposedly killed him off, as both an internal and external enemy of the Western secular-capitalist order, is a dangerous phenomenon for which our society has no clear answer. Our exaggerated response to ISIS is a dead giveaway: They may be a stateless desert army of bloodthirsty nutjobs, but they have something we lost a long time ago and can’t get back.
Fundamentalist Christianity appeared to be on a long, slow decline in the United States. Now right-wing Christians have mounted a vigorous counterattack against reproductive rights, largely by cloaking themselves (ingeniously, it must be said) in pseudo-liberal sheep’s clothing, as an oppressed and disenfranchised group entitled to legal protection. Similarly, fundamentalist Islam seemed to be on the run in the Middle East, although that required the expenditure of trillions of dollars, hundreds of thousands of human lives and the last reservoir of goodwill toward America in the Arab world. Then came the rollout of ISIS, with its genocidal mass killings and its beheading videos: an al-Qaida 2.0 for the YouTube age, with better graphics and an even more deranged vision.
SOMETIME AROUND 1989, Jacques Derrida must have agreed to give the opening keynote at the UCLA conference entitled “Nazism and the ‘Final Solution’: Probing the Limits of Representation.” Derrida must have agreed since, on 26 April 1990, in front of an undoubtedly sizable audience, he delivered that lecture, a reading of Walter Benjamin’s “Critique of Violence” that has since become one of Derrida’s most influential and most generative texts.
The academic equivalent of a star-studded event — in Los Angeles no less! — the conference had been explicitly and centrally organized as a defensive call to arms against those who might question, in the name of historical probity, the historical profession’s strenuous policing of Holocaust testimony, evidence, and representation. The conference singled out Hayden White, himself a historian, as representative of the risks — and negationist, even fascistic, inclinations, however unwitting — courted by “postmodernist” claims. White participated in the conference, and he was duly included, along with numerous detractors of his, in the published proceedings. Derrida was not.
Siddhartha Mukherjee in The New York Times:
ONE feature of the tragic case of Thomas Eric Duncan, the first traveler known to have carried the Ebola virus into the United States, rankles me as a physician: Even if every system in place to identify suspected carriers had been working perfectly, he might still have set off a mini-epidemic in Dallas. Mr. Duncan, recall, was screened before his flight and found to have a normal temperature. Asked specifically about exposures, he denied any contact with the ill. On Sept. 25, when he first presented to the emergency room with a fever, he was discharged. He returned three days later with fulminant infection. But the fact remains that even if Mr. Duncan had been identified and isolated on the first visit, it may have been too late. He had probably been exuding the virus for days. The news that a nurse who helped treat Mr. Duncan has now tested positive for the disease, evidently because of a breach of safety protocols, adds to the picture of disorder.
In the wake of the Duncan case, three strategies to contain the entry and spread of Ebola in the United States have been proposed. The first suggests drastic restrictions on travel from Ebola-affected nations. The second involves screening travelers from Ebola-affected areas with a thermometer, which the federal government is beginning to do at selected airports. The third proposes the isolation of all suspected symptomatic patients and monitoring or quarantining everyone who came into contact with them.

Yet all these strategies have crucial flaws. In the absence of any established anti-viral treatment, we may need to rethink the concept of quarantine itself. "Quarantine" sounds like a medieval concept because it is. Invented in the mid-1300s to stop the bubonic plague, the word derives from the Italian for "40 days," the time used to isolate potential carriers. Although the practice of quarantine has been reformed over the centuries, pitfalls remain. They are especially evident during this epidemic.