Saturday, January 31, 2015
Andrea Woodhouse in Guernica (Image by Charles Pertwee):
The root of the word catastrophe is Greek, katastréphō, and means to overturn. Ten years ago, on December 26, 2004, in the Indonesian province of Aceh, the Indian Ocean tsunami brought tens of thousands of people to their deaths, with waves so wild and vast they seemed to overturn the sea.
I was at home in Indonesia’s capital, Jakarta, when images of bodies and debris began to appear on television. They were shown in slow, fragmented loops, like dreams. In layers they gathered force. On screen, people’s bodies clogged up rivers, their limbs and heads bulbous like rag dolls. My mother asked how anyone could celebrate the new year with death so close.
The images disturbed me. When I learned that a friend of mine, Monica Tanuhandaru, was in Aceh working on the relief effort with the Indonesian government, I joined her.
In Aceh, I met a man who suffered terrible nightmares. Nek Beng and his wife lived in the ruins of the only house to survive the waves in Calang, a town down Aceh’s west coast. Their neighborhood was once a lively town center of about a thousand people, but I was told that only seventeen had survived. In colonial times, Dutch merchants drove through the streets smoking cigars, and later, when Nek Beng was growing up, the town was full of traders, fishermen, and thieves from Java and other parts of Sumatra.
When I visited, nothing of the original town was left but rubble and asbestos. Mounds of earth dotted a field nearby. They were burial sites, some the size of children, marked with white pebbles and faded scraps of cloth on sticks. Dry grass blew silently among the graves. And yet, life persisted. Disaster agencies had put up tents and wooden shacks, and tsunami survivors from elsewhere had moved in. The front part of Nek Beng’s house had been turned into a restaurant. Every evening it was full of people laughing and smoking cigarettes. Curries were piled up in large bowls by the window, and people sat eating noodles and gossiping under a bright electric light.
I wanted to know more about this—about life returning when the waters had scarcely retreated—and went to visit one afternoon with a student from the local university. The restaurant was closed, so we picked our way round to the back. Inside, a large set of stairs led up to the second floor. The banisters had been destroyed, leaving only rough cement steps; above them, a light bulb swayed from a wire. The waves had punched large holes in the walls, through which I could see palm trees and blue tarpaulin.
Gayil Nalls interviews Anjan Chatterjee in Nautilus:
In The Aesthetic Brain, Chatterjee tells us that our ability to have aesthetic experiences has its origins deep in our brains, in the orbitofrontal cortex and the nucleus accumbens, and is aided by neurotransmitters such as dopamine, opiates, and cannabinoids, which control emotional responses. These responses developed, Chatterjee says, because they were useful for survival. But what happens to our aesthetic sense when the demands of survival are removed?
We caught up with Chatterjee last November at his office in the University of Pennsylvania’s Pennsylvania Hospital, where he is the new head of Neurology...
You’ve suggested that aesthetics stems from prioritizing liking over wanting. How?
When people talk about rewarding experiences in the brain, these tend to be carried out within deep structures of the brain—so parts of our emotional system, parts of the brain that are called the ventral striatum, or the frontal cortex, and the amygdala, and parts of the insula. So there’s this kind of network of brain structures within the brain that seem to be important for how we evaluate things and our reward systems.
Within that, there is a division of two systems that a neuroscientist at [the University of] Michigan named Kent Berridge has referred to as the “wanting and the liking” system. And the general idea there is that we typically like things we want and we want things we like, so these two systems operate in concert, as they should. But they have slightly different neurochemical bases and their anatomy is somewhat different. So one thing tends to be driven by dopamine. As an important neurochemical for our reward system, it’s important for learning, it’s important for movement—so people, for example, with Parkinson’s disease have a deficient state in dopamine and so their movements get constricted; they’re very slow. But [dopamine] is also important for learning and it’s important for our drive to get to things that we desire. Distinct from that is a separate system that is what Berridge calls the “liking” system and this is about the system that is purely the hedonic experience of something and that is mediated by cannabinoid and opioid receptors in the brain. So when people ingest cannabinoids or opioids, the kind of high you get from that is an exaggerated version of how these systems work in the normal brain.
So again, both systems tend to work together in most of our experiences—we want what we like and we like what we want—however, they can get dissociated. One example of a dissociation where you can have wanting without as much liking is in addictive states. So as people progress and become quite addicted, they crave their fix, so this desire to get their fix is extremely exaggerated. But it’s not clear that they enjoy the experience in the same way that they did. So that seems to be a dissociation that moves in one direction.
Reihan Salam in Slate:
We often hear about the political muscle of the ultrarich. Billionaires like the libertarians Charles and David Koch and Tom Steyer, the California environmentalist who’s been waging a one-man jihad against the Keystone XL pipeline, have become bogeymen for the left and right respectively. The influence of these machers is considerable, no doubt. Yet the upper middle class collectively wields far more influence. These are households with enough money to make modest political contributions, enough time to email their elected officials and to sign petitions, and enough influence to sway their neighbors. Upper-middle-class Americans vote at substantially higher rates than those less well-off, and though their turnout levels aren’t quite as high as those even richer than they are, there are far more upper-middle-class people than there are rich people. One can easily turn the Kochs or the Steyers of the world into a big fat political target. It’s harder to do the same to the lawyers, doctors, and management consultants who populate the tonier precincts of our cities and suburbs.
Another thing that separates the upper middle class from the truly wealthy is that even though they’re comfortable, they’re less able to take the threat of tax increases or benefit cuts in stride. Take away the mortgage interest deduction from a Koch brother and he’ll barely notice. Take it away from a two-earner couple living in an expensive suburb and you’ll have a fight on your hands. So the upper middle class often uses its political muscle to foil the fondest wishes of egalitarian liberals. This week offered a particularly vivid reminder of how that works. In the windup to his State of the Union address, Barack Obama released a proposal to curb the tax benefits associated with 529 college savings plans, which primarily benefit upper-middle-class families, to help finance the expansion of a separate tax credit that would primarily benefit lower-middle- and middle-middle-class families. Only 3 percent of households actually make use of these accounts, and 70 percent of the tax benefits go to households earning more than $200,000, so you can see why Obama might have thought no one would get too worked up about the proposal.
Humeira Kazmi in The Nation:
Let’s play a little game here that I was taught in school as part of our language arts class. We would be handed a photograph and asked to analyze it.
Take a look at this picture.
What does the picture say to you? What do you see? A soldier in a war zone? What is the soldier thinking? What are you thinking as you look at his picture?
I’ll tell you what I see.
A soldier. But not in a war zone. In a school in Peshawar where 132 children were butchered by the Taliban to exact vengeance upon Pak Army. I think the soldier is thinking – this can’t be true. This cannot be a school in my country, this cannot be the blood of my children whom I vowed to protect. He’s trying to understand how this could happen. He’s looking at the debris of paper and mortar and bullet holes, staring at the blood splattered all over the walls, the floor and ceiling. Blood. So much blood, it soaks everything. He’s trying to make sense of the enemy’s motives. And he’s not sure of what he’ll do except bring the murderers to justice. I believe that about him. I trust that about him.
More here. [Thanks to Fawzia Naqvi.]
Cynthia Haven in The Book Haven:
My sole face-to-face encounter with Susan Sontag occurred at Stanford, when she was a visiting star sometime in the 1990s. She was dressed in the slightly dowdy “prison matron” threads that were her trademark, alleviated with a colorful scarf, another trademark. I had expected her to be physically towering; she was not. Obviously, that was the impression her books left on my psyche. I’m pretty certain she would say that had been the real encounter.
Steve Wasserman, editor at large for Yale University Press (and my former editor at the Los Angeles Times Book Review), got the double exposure of her books and her friendship. He recounted both yesterday in Berlin, in his keynote address, “Susan Sontag: Critic and Crusader,” at a symposium at the Institute for Cultural Inquiry (Steve called it a “secular monastery”). He spoke to a standing-room-only crowd at the “Susan Sontag Revisited” symposium honoring the legendary cultural critic and author ten years after her death. It was a knockout address – one that should become the defining retrospective on the impact Susan Sontag has had on an entire generation.
His comments on her writing:
“Sontag’s style is her subject. For it is the way she thinks, how she goes about it, how she offers her readers the chance, as it were, to eavesdrop on a mind thinking as hard and as nimbly as it can that is most compelling about her work. Or, to put it another way, it is not so much her opinions that matter—though of course they do—but rather how she goes about arriving at them, how she renders them, the very warp and woof of her sentences.”
George Dvorsky in io9:
Scientists from Stanford Medical Center have devised a technique for extending the length of human telomeres. It's a breakthrough that could eventually result in therapies to treat a host of age-related diseases, including heart disease and diabetes. It could also result in longer, healthier lives.
Telomeres are those critical protective caps located on the tips of chromosomes. Think of them as those protective pieces of plastic on the ends of your shoelaces. Without them, the tips basically fall apart, which is kind of what happens with chromosomes over time; human telomeres, which are about 8,000 to 10,000 nucleotides long, get shorter and shorter with each cell division. Over time, they reach a critical length, and the cell stops dividing, or simply dies. At the macroscale, we experience this as senescence, or aging, as your body's cells progressively lose the ability to replenish.
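The attrition arithmetic described above can be sketched in a few lines. The starting length matches the article's 8,000–10,000-nucleotide range, but the per-division loss and the critical length here are illustrative placeholders, not figures from the piece:

```python
def divisions_until_senescence(start_len, loss_per_division, critical_len):
    """Count cell divisions before a telomere shortens to the critical length."""
    divisions = 0
    length = start_len
    # Each division trims the telomere; once another division would
    # drop it below the critical length, the cell stops dividing.
    while length - loss_per_division >= critical_len:
        length -= loss_per_division
        divisions += 1
    return divisions

# Illustrative numbers: an 8,000-nt telomere losing 100 nt per
# division, with senescence assumed to set in at 5,000 nt.
print(divisions_until_senescence(8000, 100, 5000))  # → 30
```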
So you can see why this breakthrough, in which the Stanford scientists rapidly and efficiently increased the length of human telomeres, is big news.
To do it, a research team led by Helen Blau delivered a bioengineered version of messenger RNA that encodes a telomere-extending protein to cultured human cells. In this case, the RNA contained a coding sequence called TERT, which is the active component of telomerase.
Ken Roth in the Human Rights Watch World Report 2015:
The world has not seen this much tumult for a generation. The once-heralded Arab Spring has given way almost everywhere to conflict and repression. Islamist extremists commit mass atrocities and threaten civilians throughout the Middle East and parts of Asia and Africa. Cold War-type tensions have revived over Ukraine, with even a civilian jetliner shot out of the sky. Sometimes it can seem as if the world is unraveling.
Many governments have responded to the turmoil by downplaying or abandoning human rights. Governments directly affected by the ferment are often eager for an excuse to suppress popular pressure for democratic change. Other influential governments are frequently more comfortable falling back on familiar relationships with autocrats than contending with the uncertainty of popular rule. Some of these governments continue to raise human rights concerns, but many appear to have concluded that today’s serious security threats must take precedence over human rights. In this difficult moment, they seem to argue, human rights must be put on the back burner, a luxury for less trying times.
That subordination of human rights is not only wrong, but also shortsighted and counterproductive. Human rights violations played a major role in spawning or aggravating most of today’s crises. Protecting human rights and enabling people to have a say in how their governments address the crises will be key to their resolution. Particularly in periods of challenges and difficult choices, human rights are an essential compass for political action.
Sarah Chayes in Salon:
Abd al-Rahman Atiya, killed in a drone strike in 2011, conceded that the 9/11 attacks had been launched because of hatred for some aspects of Western culture, but the main rationale was the U.S. role enabling Arab kleptocracies.
Yes we hate the corruptive financial lifestyle that does not please God . . . But . . . the more important reason is their . . . appointing collaborative regimes for them in our countries. Then they support these regimes and corruptive governments against their people, who demand freedom and want to abide by Islam.
This Western support for Middle Eastern kleptocracies was “the real reason that pushed the mujahideen to carry out these blessed attacks.” Atiya went on to blast the aspects of Western culture he deemed most objectionable—excesses that have seemed especially pronounced since the 1990s: “It is a corrupt, wayward, and unjust system . . . based on beastly behavior, and seven principles: greed, gluttony, injustice, selfishness, extreme materialism, abandonment of religion.” In this context—and recalling the history of Dutch Protestants ransacking the physical manifestations of Catholic kleptocracy—the choice of the Pentagon and the Twin Towers, near Wall Street, as the target of the 9/11 attacks may take on an enhanced meaning. Perhaps Al Qaeda’s main intent was not to kill large numbers of Americans so much as to visit a spectacular symbolic punishment upon the manifestations of what it saw as a criminal kleptocracy that controlled the most powerful instruments of force on earth. Perhaps Al Qaeda was in fact committing an act of iconoclasm: replicating the kind of sentences that the 1566 Protestants executed on churches and articles of devotion the length and breadth of the Low Countries. These roughly comparable instances, from different centuries and religions, exemplify a persistent relationship between corruption and religious extremism.
In periods of acute, self-serving behavior on the part of public leaders, Christians and Muslims alike have often sought a corrective in strict codes of personal behavior derived from the precepts of puritanical religion. And they have imposed it, if necessary, by force. Those who object to this remedy should look for other ways to cure the cause.
Mark Danner in The New York Times:
On or about Sept. 11, 2001, American character changed. What Americans had proudly flaunted as “our highest values” were now judged to be luxuries that in a new time of peril the country could ill afford. Justice, and its cardinal principle of innocent until proven guilty, became a risk, its indulgence a weakness. Asked recently about an innocent man who had been tortured to death in an American “black site” in Afghanistan, former Vice President Dick Cheney did not hesitate. “I’m more concerned,” he said, “with bad guys who got out and released than I am with a few that, in fact, were innocent.” In this new era in which all would be sacrificed to protect the country, torture and even murder of the innocent must be counted simply “collateral damage.”
“Guantánamo Diary” is the most profound account yet written of what it is like to be that collateral damage. One fall day 13 years ago Mohamedou Ould Slahi, a 30-year-old electrical engineer and telecommunications specialist, received a visit at his house in Nouakchott, Mauritania, from two officers summoning him to come answer questions at the country’s intelligence ministry. “Take your car,” one of the men told him, as Slahi stood in front of his house with his mother and his aunt. “We hope you can come back today.” Listening to these words, Slahi’s mother fixed her eyes on her son. “It is the taste of helplessness,” he writes, “when you see your beloved fading away like a dream and you cannot help him. . . . I would watch both my mom and my aunt praying in my rearview mirror until we took the first turn and I saw my beloved ones disappear.” That was Nov. 20, 2001. Slahi’s mother has since died. Her son has never returned.
Friday, January 30, 2015
David Z Morris in Aeon (Photo by Gallery Stock):
Though retailers and service providers adopted Bitcoin en masse in 2014, financial servicers have been much more cautious. What this means is that there are still very few ways for users to convert Bitcoin or other cryptocurrencies into traditional currencies or commodities.
Partly in order to work around this, the BitShares trading platform offers participants no ownership of the commodities they’re nominally taking positions on. Instead, Larimer describes it as a ‘predictive market’, in which participants set commodity prices by taking positions – or, put more directly, by placing bets. BitShares is, for now, a digital version of the 19th-century bucket shops where the working classes would go to bet on stock‑market moves. This form of speculation is illegal in most US states, including Washington, California and Mississippi. But that could change quickly. As digital cash becomes integrated with more and more services, DACs [Distributed Autonomous Corporations] such as BitShares will almost certainly find ways to interface with real-world markets for gold, dollars and goods of all kinds. If those connections expand, the possibilities of DACs will expand with them.
Imagine, for instance, a bike-rental system administered by a DAC hosted across hundreds or thousands of different computers in its home city. The DAC would handle the day-to-day management of bikes and payments, following parameters laid down by a group of founders. Those hosting the management programme would be paid in the system’s own cryptocurrency – let’s call it BikeCoin. That currency could be used to rent bikes – in fact, it would be required to, and would derive its value on exchanges such as BitShares from the demand for local bike rentals.
Guided by its management protocols, our bike DAC would use its revenue to pay for repairs and other upkeep. It could use online information to find the right people for various maintenance tasks, and to evaluate their performance. A sufficiently advanced system could choose locations for new stations based on analysis of traffic information, and then make the arrangements to have them built.
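The mechanics of that hypothetical bike DAC can be sketched as a toy ledger. Everything here is an assumption for illustration: the fee, the upkeep split, and the class name are invented "parameters laid down by the founders," with amounts kept in integer BikeCoin units:

```python
class BikeDAC:
    """Toy sketch of the hypothetical bike-rental DAC described above.
    Amounts are integer BikeCoin units; the fee and upkeep split are
    illustrative protocol parameters, not details from the article."""

    def __init__(self, rental_fee=20, upkeep_pct=30):
        self.rental_fee = rental_fee  # BikeCoin units charged per rental
        self.upkeep_pct = upkeep_pct  # percent of revenue reserved for repairs
        self.treasury = 0             # general funds (e.g. host payouts)
        self.upkeep_fund = 0          # earmarked for maintenance tasks

    def rent_bike(self):
        # Each rental splits revenue between the general treasury and the
        # repair fund, following the protocol's fixed management rules.
        upkeep = self.rental_fee * self.upkeep_pct // 100
        self.upkeep_fund += upkeep
        self.treasury += self.rental_fee - upkeep

dac = BikeDAC()
for _ in range(100):
    dac.rent_bike()
print(dac.upkeep_fund, dac.treasury)  # → 600 1400
```

The point of the sketch is the absence of a discretionary manager: the revenue split is fixed code, which is what distinguishes a DAC from an ordinary rental business.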
Michelle Goldberg in The Nation (Reuters/Kacper Pempel):
Unlike some of Chait’s critics, I think there is such a thing as renascent political correctness. (He quotes my own writing on the subject in the piece.) Most writers I know, including quite lefty ones, talk about it a lot in private. But as many others have pointed out, his examples don’t quite work, because he conflates several different things. First, there’s the genuine suppression of speech, as with Omar Mahmood, the University of Michigan student who was fired from his school newspaper, and whose apartment was vandalized, for running afoul of lefty sensitivities. Then there are the annoying rhetorical tropes of online discourse, which can make good-faith argument impossible. (Accusing every man who disagrees with you of “mansplaining” is one of these, even though mansplaining is a real phenomenon.) Finally, there is energetic debate. Telling these last two apart is pretty subjective. I think the response to Giraldi was the third. He probably thinks it was the second.
At one point, Chait describes a torrent of online derision directed at his friend Hanna Rosin under the hashtag #RIPpatriarchy. In Chait’s version, the hashtag is a reaction to her book, The End of Men, which, he writes, “argued that a confluence of social and economic changes left women in a better position going forward than men, who were struggling to adapt to a new postindustrial order.” In fact, the hashtag was spurred by a related Slate piece with the trollish headline, “The Patriarchy is Dead: Feminists, accept it.” The patriarchy not being dead, feminists did not accept it. That’s not stifling political correctness. It’s responding to speech with more speech.
Yet that’s not the end of the story. Sure, Rosin was wrong, and Giraldi wrong-headed. But the sheer volume of the rebukes, the loud public ostracism, probably felt hugely disproportionate. When you’re on the wrong end of one of these things, it can seem like your identity has been hijacked, rendering you a caricature of yourself. “Her response since then has been to avoid committing a provocation, especially on Twitter,” Chait writes of Rosin. He quotes her saying, “The price is too high; you feel like there might be banishment waiting for you.” Part of what Chait is describing as political correctness is the way social media has dramatically raised the psychic cost of voicing unpopular opinions, whether they have merit or not.
To which, I suspect, many on Twitter would reply boo-fucking-hoo. Indeed, the milieu Chait has imperfectly described has developed a whole lexicon to mock those who admit to feeling bruised by it: they have the sadz, they’re butthurt, they’re crying #maletears. For Twitter’s guardians of righteousness, if privileged journalists feel more inhibited about bucking lefty pieties, so much the better. If a certain sort of skeptical, contrarian liberal intellectual style is being endangered, they won’t mourn it.
Mark Blyth and Cornel Ban in Foreign Affairs (registration required) (image by Kostas Tsironis / Courtesy Reuters):
It may be odd to use a Roman metaphor to describe a Greek political event, but in this case, it’s apt. Just as Julius Caesar crossed the Rubicon river because he could, in spite of the warnings of the Roman Senate not to, so Alexis Tsipras, leader of the anti-austerity party, Syriza, has decided to try to end austerity in Greece, in spite of Europe’s leaders saying he shouldn’t. Whether Tsipras will succeed is still unclear, but whatever happens, his victory represents a crucial turning point for Europe—a signal that time has run out on austerity policies.
A “Tsipras” had to happen somewhere eventually, because there’s only so long you can ask people to vote for impoverishment today based on promises of a better tomorrow that never arrives. If voting for impoverishment brings only more impoverishment, eventually people will stop voting for it—and the timing of “eventually” will depend on when people’s assets run out. In the Greek case, backers of the incumbent New Democracy party and its austerity policies constitute that quarter of the electorate who still have assets (pensions, paper, and portfolios) after five years of depression and who want to preserve what they have. The 36 percent that voted for Syriza were the young, the asset-less, and the unemployed—people who either lost what they once had or never had much to begin with. Greece’s 1.9 percent growth last year means essentially nothing to a society that has lost nearly 30 percent of GDP in a little over half a decade; on the current course, it would take, by latest estimates, two generations for the country to get back above water.
Syriza’s victory presents two lessons for the rest of Europe. First, no one votes for a 15-year-long recession. Second, you can’t run a gold standard in a democracy. Either the gold standard goes, or democracy goes, and that is the choice Europe may face sooner than it thinks.
The Euro is the gold standard that pretends that it’s not one—and therein lies the rub. While Europe has a plethora of national parliaments and free and fair elections, as well as a European parliament and multiple institutions with delegated power to represent the interests of citizens, once a country is a member of the eurozone, certain things happen that bypass any possible democratic checks. On the upside, its credit history gets rewritten. Greece and Italy get to borrow like Germany (with predictable results). On the downside, when a eurozone country is hit with an economic shock, it cannot respond to it through the exchange rate (devaluation) or by using the printing press (inflation). It must choose between default, which is not allowed, and balancing its books through internal devaluation (austerity). And if that means a couple of constitutional coups d’état have to happen in the heart of democracy to get the policies through, as happened in Italy and Greece in 2011, then so be it.
Talking animal stories have their roots in a prehistory when, according to the literary scholar Egon Schwarz, professor emeritus at Washington University, consciousness had yet to distinguish between man and animal, ‘when people still believed in the possibility of slipping from one to the other, entirely according to desire or need’. And since then, talking animals have developed in a variety of rather amorphous ways to satisfy human desire; a kind of cipher for our own existential dilemmas.
Talking animals can provide us with joy and laughter. They can serve as a displacement for weaknesses and anger. And they can teach us very human lessons while simultaneously serving as a continual source of wonder. Think of Lewis Carroll’s menagerie of talking animals – from the White Rabbit to a Mouse offended by Alice’s bad manners – all of whom are animated by the wonder of childhood imagination. Or of Aesop’s animals designed to didactically instruct young minds toward the path of proper morality. Or of the canine narrator of Franz Kafka’s short story ‘Investigations of a Dog’ (1922), an ideal stand-in for the author’s alienated existence.
They can also serve to remind us of the idyllic pleasure of nature itself.
How did we reach a point where “nothing at all escapes technique today”? Ellul offers a long genealogy of technique, from primitive man to the Greeks and Romans, to Christianity, the early modern era, and lastly the Industrial Revolution, when technique finally came into ascendancy. Ellul’s attention to social changes — technological, economic, legal, administrative, institutional — makes it a more earthy account of modern technical development than those frequently given that focus entirely on shifts in philosophical and religious outlooks. (This hints at the influence on Ellul of Marx, who famously rejected Hegel’s preoccupation with consciousness, rather than the material conditions of life, in understanding history.) While explanations from the history of ideas are not irrelevant for Ellul — although he probably dismisses them far too quickly — he considers them sorely lacking when it comes to explaining the rapid spread of technical development across Europe. A better explanation, he believes, can be found in the convergence of five phenomena in the nineteenth century: the availability of scientific knowledge amassed over centuries; population growth; an economy at once stable but adaptable; a clear intention on the part of the whole society to exploit technical possibilities in all areas; and perhaps most importantly social plasticity — that is, a society willing to surrender its religious and social taboos and to trade in the supremacy of traditional groups for that of the individual.
It may sound incomprehensible—senseless, Constance Garnett would have put it, as she did in her translation of The Brothers Karamazov—but while the rest of the world may dread the return of the prolonged hostile stare-down known in the last half of the last century as the Cold War, in some ways, I welcome the refreeze. It plunges me into nostalgia for my 1970s and 1980s childhood in Michigan, Indiana, and Oklahoma, when my professor parents threw incessant pirozhki-and-samovar parties for Russian Club students and for the peaceable, intellectual Soviet émigrés who were landing in American college towns in those years, bringing news from behind the Iron Curtain and beet-and-mayonnaise salads. I suspect that writers of James Bond-type thrillers feel much the same way I do, though for different reasons. Since the demise of the USSR—and the KGB—in 1991, it’s been a stretch for them to keep roping Soviet-era villains into their plots; now they can breathe easy. In the 1990s and well into the aughts, during the post-Soviet thaw, I sometimes wondered if my parents’ obsession with the culture and history of the Soviet Union had been a mistake, a generational fluke. But now that bare-chested, border-crashing Vladimir Putin has brought back the jangling tensions of the good-old bad-old days, I am feeling some vindication. So, I imagine, are the dozens of midwestern students who fell under the spell of my parents’ Slavophilia, getting doctorates in Russian just before Americans stopped caring about the “Evil Empire” and Russian-language enrollments plummeted.
Note: For Abbas. CRISPR is one of the four most beautiful scientific discoveries of the last 100 years in biology and has entirely changed the direction of our research.
Pallab Ghosh in BBC:
It will be the first concerted use of an emerging technique called Crispr to "snip out" specific disease genes in order to discover drugs. The technique is cheaper, faster and more accurate than current methods. The research will be carried out with four leading academic and industrial gene-research centres across the world.
...The human genome project determined that humans had about 24,000 genes. These are found along the DNA double helix in every cell in the body. The decoding of the human genome 15 years ago led to the hope that doctors would eventually identify faulty genes responsible for specific diseases and eventually develop medicines to treat them. The principle is simple - drug companies would "snip out" the gene responsible for the disease from the patient's DNA, then use it to test drugs to see if they could fix the problem.
At the time, US President Bill Clinton said: "Our children's children will only know cancer as a constellation of stars," and hailed the completion of the project after a 10-year race that cost billions. And Tony Blair, then UK Prime Minister, who joined Mr Clinton by satellite from Downing Street, added: "Every so often in the history of human endeavour, there comes a breakthrough that takes mankind across the frontier and into a new era."
Fifteen years on, one could wonder: "What new era?" There are only a handful of new medicines based on the human genome project, and, although Mr Clinton may eventually be proved right, cancer is still known as "cancer".
Progress has been hampered by two main factors. First, researchers soon began to realise that most common illnesses were caused by a combination of tens of genes. Second, the genetic techniques to snip out specific genes are expensive and take a long time. Researchers have to make what are in effect "genetic scissors" tailor-made to the gene they want to snip out. This process can take months for each and every gene.
But in recent years, scientists have developed a set of genetic scissors that can be quickly and cheaply tailored to cut out a specific gene. And this technique, called Crispr, will be the focus of the research programme.
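The "tailoring" of those genetic scissors can be illustrated with a toy target-finder. In the widely used Cas9 form of Crispr, a cut site is a 20-nucleotide sequence lying immediately upstream of an "NGG" motif (the PAM). The sketch below uses a made-up DNA string and, unlike real guide design, ignores the reverse strand and off-target scoring entirely:

```python
def find_cas9_sites(dna):
    """Return (position, protospacer) pairs for every 20-nt sequence
    immediately followed by an NGG PAM on the given strand."""
    sites = []
    for i in range(len(dna) - 22):
        pam = dna[i + 20:i + 23]
        if pam[1:] == "GG":  # NGG: any base, then two guanines
            sites.append((i, dna[i:i + 20]))
    return sites

# A made-up 40-base sequence, purely for illustration.
seq = "ATGCGTACCGTTAGCTAGGACTTCAAGGCCTAGGATTACA"
for pos, spacer in find_cas9_sites(seq):
    print(pos, spacer)
```

Each returned protospacer is, in principle, a candidate guide sequence telling Cas9 where to cut; the cheapness of swapping in a new 20-nt guide is what makes Crispr so much faster than building bespoke "scissors" for every gene.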
Thursday, January 29, 2015
Kenan Malik in Pandaemonium:
I recently published a transcript of a radio documentary I had made that explored the question of ‘Who owns culture?’. Perhaps the most fractious of recent debates around this question has been over ‘Kennewick Man’, an ancient skeleton found on the banks of the Columbia River in America’s Washington State. The 9,000-year-old skeleton became the focus for two major controversies: What is race? And who owns history? I tell the story of Kennewick Man in my book Strange Fruit: Why Both Sides are Wrong in the Race Debate. I am publishing here an extract that lays out part of that story, looking at the question of the ownership of culture and history and of the clash between scientific rationality and cultural identity. I will publish a second extract next week that delves into the debate about race posed by Kennewick Man.
Darold Treffert in Scientific American:
I met my first savant 52 years ago and have been intrigued with that remarkable condition ever since. One of the most striking and consistent things in the many savants I have seen is that they clearly know things they never learned.
Leslie Lemke is a musical virtuoso even though he has never had a music lesson in his life. Like “Blind Tom” Wiggins a century before him, his musical genius erupted so early and spontaneously as an infant that it could not possibly have been learned. It came ‘factory installed’. In both cases professional musicians witnessed and confirmed that Lemke and Wiggins somehow, even in the absence of formal training, had innate access to what can be called “the rules” or vast syntax of music.
Alonzo Clemons has never had an art lesson in his life. As an infant, after a head injury, he began to sculpt with whatever was handy – Crisco or whatever – and now is a celebrated sculptor who can mold a perfect specimen of any animal with clay in an hour or less after only a single glance at the animal itself, every muscle and tendon perfectly positioned. He has had no formal training.
To explain the savant, who has innate access to the vast syntax and rules of art, mathematics, music and even language, in the absence of any formal training and in the presence of major disability, “genetic memory,” it seems to me, must exist along with the more commonly recognized cognitive/semantic and procedural/habit memory circuits.
Genetic memory, simply put, is complex abilities and actual sophisticated knowledge inherited along with other more typical and commonly accepted physical and behavioral characteristics.
Andrew Sullivan in The Dish:
The second is that I am saturated in digital life and I want to return to the actual world again. I’m a human being before I am a writer; and a writer before I am a blogger, and although it’s been a joy and a privilege to have helped pioneer a genuinely new form of writing, I yearn for other, older forms. I want to read again, slowly, carefully. I want to absorb a difficult book and walk around in my own thoughts with it for a while. I want to have an idea and let it slowly take shape, rather than be instantly blogged. I want to write long essays that can answer more deeply and subtly the many questions that the Dish years have presented to me. I want to write a book.
I want to spend some real time with my parents, while I still have them, with my husband, who is too often a ‘blog-widow’, my sister and brother, my niece and nephews, and rekindle the friendships that I have simply had to let wither because I’m always tied to the blog. And I want to stay healthy. I’ve had increasing health challenges these past few years. They’re not HIV-related; my doctor tells me they’re simply a result of fifteen years of daily, hourly, always-on-deadline stress. These past few weeks were particularly rough – and finally forced me to get real.
More here. And I should say that we here at 3QD send him off into the real world with a special, heartfelt, blogger's salute.
This extraordinary book, a huge dictionary of philosophical terms from many languages, is a translation of Vocabulaire européen des philosophies: Dictionnaire des intraduisibles, originally published in 2004, the brainchild of the French philosopher Barbara Cassin. If the original project was paradoxical, then the present version is doubly so: not just a dictionary of untranslatable words, but a translation of that dictionary. Rather than despair at the self-undermining self-referentiality of the whole idea, the editors rejoice in it. Indeed, moving the word “untranslatable” to the beginning of the English title proudly asserts the paradox even more forcefully than the original French title does, and forms what the English-language editor Emily Apter calls “an organising principle of the entire project”.
In her preface, Apter comments (apparently without irony) that “the extent of our translation task became clear only when we realised that a straightforward conversion of the French edition into English simply would not work”. She is right, of course: translation is almost never a straightforward conversion. This is why it is such a fertile subject for philosophy. Like so much in philosophy, theorizing about translation (and, of course, about the related concept of meaning) lurches between two unappealing extremes.
IT IS OBVIOUS BY NOW that Paul Thomas Anderson isn’t making individual movies so much as building an oeuvre block by block—the sturdiest, most resilient body of work by a big-time American director since Stanley Kubrick died and Martin Scorsese ran out of steam.
Big, ambitious, and American are the operative words. Boogie Nights (1997) and Magnolia (1999) were sprawling ensemble pieces that challenged Scorsese and Robert Altman on their own turf; in their concern with self-invented American Übermenschen and up-front eccentricity, There Will Be Blood (2007) and The Master (2012) engaged Orson Welles. Anderson’s smaller films, Hard Eight (1996) and Punch-Drunk Love (2002), pondered more marginal if equally echt-American types, and his latest movie, Inherent Vice, which stars Joaquin Phoenix as Thomas Pynchon’s hippie private eye Doc Sportello, falls into this category. A panoramic actor fest, it is also an extremely credible adaptation of the closest thing to an easy read by the writer whom some consider America’s greatest living novelist.
Structurally, Inherent Vice is pure School of Chandler, with Doc suckered into the plot by an old girlfriend, Shasta Fay Hepworth (Katherine Waterston), whose problems with her sugar daddy, scumbag developer Mickey Wolfmann (Eric Roberts), illuminate a classically Los Angeles real-estate scam . . . for starters. Behind it all is an “Indo-Chinese” drug cartel, a stand-in for the Vietnam War and ultimately a front for whatever cosmic antiplan you like—Doc Sportello being a sort of acidhead Don Quixote complete with intermittent sidekick, maritime lawyer Sauncho Smilax (Benicio Del Toro).
Just occasionally in Blake’s engravings there are pictures within pictures, and we get a glimpse of the life he thought images might lead in a better world. The most moving of these visions is Plate 20 of Blake’s Illustrations of the Book of Job. Job has survived his doubts and torments, and is telling the story to his daughters – in an earlier watercolour, they hold the instruments of Poetry, Painting and Music. No doubt the young women are taking their father’s narrative to heart, and in due course will rephrase it in terms appropriate to their arts: the lute and lyre are in the margins of the plate, ready to be strummed. But the first form of the story is visual: Job sits in a circular room – or maybe it is ten- or twelve-sided – and points towards two frescoed roundels on the walls left and right. Neither is unequivocally an episode from Job’s life – they could be analogous scenes from the story of the Fall – but the square panel over his head must be a version of ‘Then the Lord answered Job out of the Whirlwind.’ (It combines and condenses elements of Blake’s previous engraving of the subject.) As so often in Blake, the balance between positive and negative in the scene as a whole is precarious: Job is central and patriarchal (‘their Father gave them Inheritance among their Brethren’), and there is more than a touch of the baleful, exhausted God-the-Father to him, heavy lids, pointing fingers and all. But there cannot be any doubt that the basic form and function of the room, with its echoes of the early 19th-century diorama (it is important that the plate was engraved in 1825), were meant to strike the viewer as wonderful – all-enveloping. Here were images at work.