Friday, October 31, 2014
Sam Anderson in the NYT Magazine (photo Dolly Faibyshev for The New York Times):
For more than 100 years, Punta Gorda has claimed to have the Fountain of Youth: an artesian well that once drew such long lines of tourists that, according to National Geographic, the fountain’s handle had to be replaced every six months. I walked there as soon as I woke up. I knew I was getting close when I started to see kitschy images of Ponce de León everywhere: murals on the sides of restaurants, fake motorized galleons parked at an Oktoberfest carnival. Ponce: his bulging armor, his pointy beard, the cockatoo crest of his helmet plume. He always seemed to be gesturing at something. “Go over there,” he seemed to be saying. “The important things are just out of the frame.” It was hot; after only a few minutes of walking, my face was pouring sweat. My plan, while I was in Florida, was to drink exclusively out of self-described Fountains of Youth, which meant I was already very thirsty. When I reached the spot where the fountain was supposed to be, it was nowhere. There was just an empty small-town intersection — restaurant, bank, chiropractor, stop sign. No special plaque, no burbling fountain, no crowds of elderly people leaping out of wheelchairs and dancing with joy. I worried, for a minute, that the trip had been a waste.
Then I saw it, and I laughed out loud. The Fountain of Youth was tiny, shabby and neglected: a blocky little drinking fountain, not much bigger than the garbage can it stood next to, covered in green tile that must have been decorative 90 years ago but was now cracked and stained. Today nothing identified it as the Fountain of Youth. In fact, the only sign on it was a warning from the Florida Department of Health: “Use Water at Your Own Risk: The water from this well exceeds the maximum contaminate levels for radioactivity as determined by the United States Environmental Protection Agency under the Safe Drinking Water Act.”
I turned the little spigot, and sure enough, water sputtered out. It smelled sulfurous. I bent down and drank. It was not refreshing, not at all. It tasted exactly like hard-boiled eggs. But I was thirsty, so I kept drinking. It seemed to have a little more body than regular water — maybe the high mineral content thickened it, I thought, or the radiation was already warping the nerves on the inside of my cheeks. Every mouthful felt like swallowing a single, liquid hard-boiled egg. I started to feel ill. But I had come all this way, and it was hot, and there was a long day of driving ahead of me, so I kept gulping it down. I filled a few plastic bottles to get me to the next fountain.
Thursday, October 30, 2014
Geoffrey Pullum in the Chronicle of Higher Education:
On the panel show A Good Read (Radio 4, October 17, 2014), each guest recommends a book, which the other guests also read and discuss. And Pinker’s recommendation for a good read was … The Elements of Style!
It was like hearing Warren Buffett endorsing junk bonds. It was like learning that Stanley Kubrick called Plan 9 From Outer Space high-quality cinematography. It was like seeing Chet Atkins… (Never mind. I am too dispirited to go on with this potentially entertaining game of analogy-making.)
You see, Pinker’s own new book, The Sense of Style (Viking, 2014), which of course the ethos of the radio program would not permit him to pick, has solved a problem I’ve had for years. People keep asking me what, given my low opinion of The Elements of Style, I would recommend instead; and I have had little to say except that I wished there were an answer. Today there is an answer: For a sensible guide to what makes good writing good, buy Pinker’s book.
From The Baffler:
Moderators: You both appear to think that the prevailing economic and financial system has run its course, and cannot endure much longer in its present form. I would like to ask each of you to explain why.
Thomas Piketty: I am not sure that we are on the eve of a collapse of the system, at least not from a purely economic viewpoint. A lot depends on political reactions and on the ability of the elites to persuade the rest of the population that the present situation is acceptable. If an effective apparatus of persuasion is in place, there is no reason why the system should not continue to exist as it is. I do not believe that strictly economic factors can precipitate its fall.
Karl Marx thought that the falling rate of profit would inevitably bring about the fall of the capitalist system. In a sense, I am more pessimistic than Marx, because even given a stable rate of return on capital, say around 5 percent on average, and steady growth, wealth would continue to concentrate, and the rate of accumulation of inherited wealth would go on increasing.
But, in itself, this does not mean an economic collapse will occur. My thesis is thus different from Marx’s, and also from David Graeber’s. An explosion of debt, especially American debt, is certainly happening, as we have all observed, but at the same time there is a vast increase in capital—an increase far greater than that of total debt.
The creation of net wealth is thus positive, because capital growth surpasses even the increase in debt. I am not saying that this is necessarily a good thing. I am saying that there is no purely economic justification for claiming that this phenomenon entails the collapse of the system.
Based on real data (latitude, longitude and height) from the University of Amsterdam, the animation initially shows the tracks of 12 birds, but then concentrates on a pair - male and female - as they migrate south in autumn 2010 from the Veluwe forest in the Netherlands to warmer weather on the African coast (Liberia, Ghana and Cameroon). After wintering in Africa, in spring 2011 the birds fly back. But en route we see the female lose her way - possibly due to unfavourable winds. After a long journey the male arrives back in the Veluwe forest and waits for her.
Wudan Yan in Hippo Reads:
You may have read recent media stories stating that a cure for Type I Diabetes is “imminent” and wondered what the buzz was about—is a cure indeed imminent and, if so, what does this mean for modern medicine?
Yes, scientists at Harvard University have recently made a huge breakthrough in the treatment possibilities for Type I Diabetes, an inherited condition affecting over three million Americans that causes the body’s immune system to malfunction. Type I Diabetes destroys the pancreatic beta cells in the body that manufacture insulin, a hormone critical for processing sugars. Under current medical practice, people with Type I Diabetes must regularly check their blood sugar levels and inject themselves with insulin to keep levels in check, an imperfect process disruptive to routine life. For decades, researchers have tried to generate pancreatic beta cells that could be used to provide insulin for Type I patients.
Now, thanks to a research group led by Doug Melton, a stem-cell researcher at Harvard Medical School, they may in fact be closer to that goal: Melton’s talented team of scientists have generated functional human pancreatic beta cells from stem cells in large quantities (the paper reporting these findings was published in Cell on October 9, 2014). Hippo Reads’s Science Correspondent Wudan Yan spoke with Felicia Pagliuca, a postdoc in Melton’s lab, about the work that went into this landmark study, the importance of collaboration, and where diabetes research will go from here.
WY: Thanks for chatting with Hippo Reads! We’re interested to know: how did you first get involved in this research?
Felicia Pagliuca: I had been doing my PhD at Cambridge University in the UK at the Gurdon Institute. I was conducting research in cancer biology at the time but [Doug] Melton came to Cambridge to give a seminar. By the end of that seminar, I was just blown away—completely inspired by his vision and what you could do with stem cells in the field of regenerative biology and the impact that could have on patients. I reached out to Doug and told him about my interest and background. We hit it off and I was fortunate enough to be offered an opportunity to work in his laboratory.
On a pillow
The Sutra on
On my cushion
Hiding from fears
To my old mantra:
Full of grace...
by Mark J. Mitchell
from The Buddhist Poetry Review
Craig Lambert in Harvard Magazine:
In the spring of 2012, Brown University hosted an extraordinary academic conference. “Being Nobody?” honored the thirtieth anniversary of the publication of Slavery and Social Death by Orlando Patterson, Harvard’s Cowles professor of sociology. Giving a birthday party for a scholarly book is a rarity in itself. Even more unusual, the symposium’s 11 presenters were not sociologists. They were classicists and historians who gave papers on slavery in ancient Rome, the neo-Assyrian empire, the Ottoman Middle East, the early Han empire, West Africa in the nineteenth century, medieval Europe, and eighteenth-century Brazil, among other topics. “I’m not aware of another academic conference held by historians to celebrate the influence of a seminal work by a social scientist writing for a different discipline,” says John Bodel, professor of classics and history at Brown, one of the organizers.
But Patterson is no ordinary academician. “Orlando is one of a kind—the sheer scope and ambition of his work set him apart from 99 percent of social scientists,” says Loic Wacquant, JF ’94, professor of sociology at Berkeley. “In an era when social scientists specialize in ever-smaller objects, he is a Renaissance scholar who takes the time to tackle huge questions across multiple continents and multiple centuries. There was another scholar like this in the early twentieth century, named Max Weber. Orlando is in that category.”
Viv Groskop in The Guardian:
Azar Nafisi, 58, is an Iranian writer and professor of English literature. She lives in Washington DC and became an American citizen in 2008. In 1995 she quit her job as a university lecturer in Tehran and taught a small group of students at home, discussing works considered controversial in Iran at the time, such as Lolita and Madame Bovary. Her 2003 book based on this experience, Reading Lolita in Tehran, was on the New York Times bestseller list for 117 weeks and won a string of literary awards. Nafisi’s latest non-fiction book, The Republic of Imagination (Viking), is described as “a passionate tribute to literature’s place in a free and enlightened society”.
What motivated the latest book?
In the last chapter of Reading Lolita in Tehran I talk about how my students were uncritically in love with this world they could not connect to physically – the west. I wanted them to know that this was an illusion. That there were serious critiques of any system, no matter how wonderful. When I came here [to the US], I realised how the ideal of freedom is being eroded. One canary in the mine is the denigration of ideas.
What do you mean by this? What are the signs?
The inequalities of the education system [in the US]. You are also experiencing this in Britain. Where public schools [ie state schools] are virtually being dismantled. Where children are deprived of music, art and fiction more and more. And where all the privilege goes to the private schools. This is not the America I want my children to grow up in.
Why is fiction in particular important in solving all this?
The importance of ideas and the imagination is that they really defy borders and limitations. Books are representative of the most democratic way of living. There’s a James Baldwin quote about feeling all alone and isolated until you read Dostoevsky and you discover that someone who lived a hundred years ago connects to you – and you don’t feel lonely any more.
The premise of this book is that “to deny literature is to deny pain and the dilemma that is called life”. In what way can fiction help us with this dilemma?
Fiction confronts a great many things that we cannot fully confront in real life. Fiction is the ability to be multi-vocal and to speak through the mind and the heart of even the villain. In doing that, it forces us to face the pain of being human and being transient. It’s what Nabokov talks about: “The conclusive evidence of having lived.”
Joanna Scutts in Lapham's Quarterly:
In The Burning of the World, his recently discovered memoir of the first few weeks of World War I, the Hungarian artist, officer, and man about town Béla Zombory-Moldován writes frequently about his attachment to his watch. When he’s wounded in the confusion of battle in the forests of Galicia, he finds the watch unscathed during an agonizing evacuation of the area, and exalts the survival of “my trusty companion, sharer of my fate, the comrade that connected me to my former life.” Much more than a watch, it’s almost a miracle: “Not just an object, but a true and staunch friend. I held it in my left hand and marveled at it as it measured off the seconds.”
How to tell time was a matter of survival and strategy during the Great War, a war in which communication technologies had to advance rapidly to keep pace with the new instruments of battle. The war was a crucible of innovation in destruction, in which chlorine gas, tanks, and heavy artillery choked, crushed, and obliterated human bodies in new ways. Vast armies dug in opposite each other across unprecedented distances—the Western Front alone stretched well over four hundred miles, from the Swiss border to the North Sea. Because much of the infantry went underground, it was no longer possible simply to holler or sound a hunting horn as a signal to attack, nor for regiments to advance proudly, and visibly, together on horseback. Instead, it became necessary to coordinate time and to tell it accurately; the practice, and the phrase “synchronize watches,” were born from this need during the war. Officers in crowded trenches watched for second hands to tick down before blowing the whistle and rallying their men, who scrambled up ladders into the awaiting gunfire. The term zero hour, the moment of no return, was first recorded in the New York Times in November 1915: “At 5:05 a.m. September 25 a message came to the dugout that the ‘zero’ hour, that is, the time the gas was to be started, would be at 5:50 a.m.” The irony of ascribing a precise time for an attack as uncontrollable and weather-dependent as gas goes unmentioned.
Paul Fussell, in his influential 1975 study The Great War and Modern Memory, notes that sunrise and sunset dominated soldiers’ trench lives and their understanding of the passage of time. These periods of “stand-to” were times of heightened tension and observation, when men would keep watch on the raised fire step and strain their eyes through field glasses for movement. When it wasn’t raining, the skies above the flat, endless fields would burst into color as they waited—an unforgettable combination of beauty and terror. (Fussell writes that “dawn has never recovered from what the Great War did to it.”) Dawn and dusk were unavoidable natural markers of time, both ordinary and mystical. “The darkness crumbles away./It is the same old druid Time as ever,” as Isaac Rosenberg puts it in his 1916 poem “Break of Day in the Trenches.” Dawn is relentless, and soldiers are powerless to hide from it, speed it up or slow it down. A watch then gives the illusion of controlling time, a sustaining fantasy of life at the front. As Zombory-Moldován suggests, the watch is something more than practical; it’s a link back to a world where a man was free to make his own appointments, to run his own life.
Tim Martin in Aeon (Illustration by Lee Moyer):
Alan Moore is waiting when I get off the train in Northampton, a majestically bearded figure in a hoodie, scanning the crowd that pushes through the turnstiles with a look of fearsome intent. When I wave, the glare becomes a beaming smile. ‘How are you, mate?’ he booms. ‘Splendid, splendid. I thought we’d go for a bit of a walk, so I can show you around and we can work up an appetite.’
Off we go up the hill. Moore swings his stick – a wooden snake coiled around the handle to symbolise his enthusiastic worship of Glycon, a second-century Macedonian snake god – and keeps up a constant flow of arcane local chatter. This station car park, he tells me, used to be King John’s castle, where the First Crusade began. That charmless glass-and-steel building was once a Saxon banqueting hall. Over there was a pub where, ‘if you’d come along here on a Sunday afternoon in the 1920s or ’30s, you’d have found a zebra tied up outside it.’
Before long, tramping through the riverside mud under a railway bridge, we’ve moved on to grander concerns. Moore has embarked on a potted summary of eternalism, the philosophical concept of time that ran through Kurt Vonnegut’s novel Slaughterhouse-Five (1969), played a part in his own revolutionary superhero comic Watchmen (1986-87), and is the central conceit behind ‘Jerusalem’, the million-word mega-novel the first draft of which he has now, after more than a decade, shepherded to its conclusion.
In essence, eternalism proposes that space-time forms a block – ‘imagine it as a big glass football’, Moore suggests – where past and future are endlessly, immutably fixed, and where human lives are ‘like tiny filaments, embedded in that gigantic vast egg’. He gestures around him at the rubbish-strewn path, his patriarch’s beard waving in the wind. ‘What it’s saying is, everything is eternal,’ he tells me. ‘Every person, every dog turd, every flattened beer can – there’s usually some hypodermics and condoms and a couple of ripped-open handbags along here as well – nothing is lost. No person, no speck or molecule is lost. No event. It’s all there for ever. And if everywhere is eternal, then even the most benighted slum neighbourhood is the eternal city, isn’t it? William Blake’s eternal fourfold city. All of these damned and deprived areas, they are Jerusalem, and everybody in them is an eternal being, worthy of respect.’
If this mixture of local history, cosmological speculation and messianic mysticism sounds bewildering, then perhaps you haven’t been reading enough Alan Moore lately.
Michael Collins in In These Times:
Set in the present day, the film follows the lives of five black people at the fictitious Ivy League college Winchester as they navigate race, love and ever-shifting personal identities. Broken into a series of blithely titled chapters, the film is billed as “a satire about being a black face in a white place.”
The film, however, is less a satire in the sense of using “wit to expose stupidity” than a mockumentary whose humor comes from its earnestness, in the vein of films like Best in Show. Perhaps this is because, as the title suggests, the work is narrowly pointed at white America. Or, more specifically, the type of liberal white America that prefaces racist statements with “I’m not racist, but…” and when challenged responds, “But my best friend is a black!” For those who already know that all black people aren’t the same (we have different names for a reason!), and that race, class and sexuality are complex parts of a greater whole, the film will have little critical edge. But for those who haven’t taken Race in America 101, the film may yet be productive.
Through a series of occasionally disjointed chapters, we are presented with a host of college archetypes: the charismatic jock played by the astonishingly beautiful Brandon Bell; the black militant played by Tessa Thompson; the pushover nerd played convincingly by Tyler Williams of Everybody Hates Chris fame; the society queen with a terrible secret (and an amazing wardrobe of pearl necklaces and backless dresses) played by Teyonah Parris; and the incorrigible dean played by Dennis Haysbert. Throughout, the film adds various layers to these one-dimensional caricatures by highlighting their “performance of blackness.”
For those who slept through critical race theory, it’s now taken for granted that there is no essential black experience. Rather, blackness is a social, political and economic construct that individuals engage with as society, the economy or our personal desires dictate. The film revels in multiplicity of identity, internal contradictions and the general sense of confusion and misidentification that characterize public discussions of race.
Wednesday, October 29, 2014
This was a very low period for Waugh. There was an urgent necessity for him to find a way of making a living, and eventually, with deep foreboding, he took a post as a teacher at Arnold House Preparatory School on the north coast of Wales. This grotesque establishment was the model for the hilariously awful Llanabba Castle in Decline and Fall. He did not stay there for long, and found another teaching job at a more nearly normal school in Buckinghamshire, from which eventually he was sacked, apparently for drunkenness. Waugh was not cut out to be a teacher.
He did not really know what he was cut out to be. He had started to write, and some short stories had been published, but he had not yet given up hope of being a painter. He also spent a brief, happy few months taking carpentry lessons with a view to embarking on a career as a cabinetmaker. He did some journalistic work, and began his first book, a life of the Pre-Raphaelite painter Dante Gabriel Rossetti, but the most important event of these years was his meeting Evelyn Gardner on April 7, 1927. (They would come to be known to their friends as He-Evelyn and She-Evelyn.)
It is not guaranteed, they say, that a successful vaccine against Ebola can be “developed, produced, and distributed” in time, and in large enough amounts, to throw a fence of containment around the disease.
If not, they warn, it is possible that the rest of the world’s reaction could trigger the next global financial crisis.
As someone who covers epidemics professionally (HIV, the anthrax attacks, SARS, H5N1, H1N1, lots of smaller outbreaks), I reluctantly have to conclude: Lanard and Sandman are not being alarmist here. Imagine that Ebola cannot be contained; think back to the events of this weekend; and then imagine that reaction multiplied thousands of times. It isn’t a big leap to the suspicion, disruption and expense that will then be triggered in response to any travelers from the region. From there, it isn’t much of a further leap to closed borders, curbs on international movement, disruption in global trade, cuts in productivity, even civil unrest and the opportunities that unrest offers to extremist movements. None of that is far-fetched, if Ebola is not controlled.
The protest failed because it relied on falsehoods: the opera is not anti-Semitic, nor does it glorify terrorism. Granted, Adams and his librettist, Alice Goodman, do not advertise their intentions in neon. The story of the Achille Lauro hijacking is told in oblique, circuitous monologues, delivered by a variety of self-involved narrators, with interpolated choruses in rich, dense poetic language. The terrorists are allowed ecstatic flights, private musings, self-justifications. But none of this should surprise a public accustomed to dark, ambiguous TV shows like “Homeland.” The most specious arguments against “Klinghoffer” elide the terrorists’ bigotry with the attitudes of the creators. By the same logic, one could call Steven Spielberg an anti-Semite because the commandant in “Schindler’s List” compares Jewish women to a virus.
In the opera, the opposed groups follow divergent trajectories. The terrorists tend to lapse from poetry into brutality, whereas Leon Klinghoffer and his wife, Marilyn, remain robustly earthbound, caught up in the pleasures and pains of daily life, hopeful even as death hovers. Those trajectories are already implicit in the paired opening numbers, the Chorus of Exiled Palestinians and the Chorus of Exiled Jews. The former splinters into polyrhythmic violence, ending on the words “break his teeth”; the latter keeps shifting from plaintive minor to sumptuous major, ending on the words “stories of our love.”
Howard Jacobson in New Statesman:
If I were to give this essay a title, it would be “Waiting for Calvin”. Not John Calvin the theologian, nor Calvin Klein the fashion designer, but Calvin, a Navajo baby whose first laugh I travelled to Arizona in 1995 to film as part of a series of television programmes I was making about comedy. It’s a nerve-racking business waiting for a baby to laugh, particularly if you have a camera crew standing by in another state, but Calvin’s laugh was as important to my film as it was to his family and community. The Navajo celebrate a baby’s first laugh as a rite of passage, a moment in which the baby laughs himself, as it were, out of inchoate babydom and into conscious humanity. It’s a wonderful concept and grants a primacy to laughter that we, who probably laugh too automatically and certainly far too much, would do well to think about. If it’s laughter that makes us human, or at least kick-starts the process of our becoming human, what does that say about what being human is?
It is sometimes argued that laughter is what distinguishes us from animals, but not everyone would agree that we have laughter to ourselves. Thomas Mann, for example, wrote an essay about his dog Bashan in which he made a claim for Bashan’s demonstrating many of the signs of mirth. And that’s before we get on to the tricky question of internal laughter – that appreciation of ironical mishap or absurd situation that even in human beings doesn’t always issue in a smile, never mind a laugh. Laughter, we can say, is an act of comprehension – whether immediate or arising out of rumination – but which of us can know for sure how much animals comprehend of what they see and how long they go on thinking about it?
From recognizing speech to identifying unusual stars, new discoveries often begin with comparison of data streams to find connections and spot outliers. But simply feeding raw data into a data-analysis algorithm is unlikely to produce meaningful results, say the authors of a new Cornell study. That’s because most data comparison algorithms today have one major weakness: somewhere, they rely on a human expert to specify what aspects of the data are relevant for comparison, and what aspects aren’t. But these experts can’t keep up with the growing amounts and complexities of big data. So the Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and even without access to the data sources.
How ‘data smashing’ works
Data smashing is based on a new way to compare data streams. The process involves two steps.
- The data streams are algorithmically “smashed” to “annihilate” the information in each other.
- The process measures what information remains after the collision. The more information remains, the less likely the streams originated in the same source.
Data-smashing principles could open the door to understanding increasingly complex observations, especially when experts don’t know what to look for, according to the researchers.
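The annihilation circuit itself is intricate, but the spirit of feature-free stream comparison can be sketched with a well-known stand-in: normalized compression distance (NCD), which likewise estimates similarity between raw data streams without any expert choosing features. To be clear, this is an illustrative analogue, not the Cornell authors' data-smashing algorithm, and the streams `a`, `b`, and `c` below are made-up examples:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two raw byte streams.
    Values near 0 suggest shared structure; values near 1 suggest none."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two streams generated by the same simple "source" (a repeating pattern,
# one phase-shifted relative to the other) ...
a = b"abcabcabc" * 200
b = b"bcabcabca" * 200
# ... and one from an unrelated deterministic source.
c = bytes((i * 7919) % 251 for i in range(1800))

print(ncd(a, b))  # small: the shared structure compresses away
print(ncd(a, c))  # larger: little structure is shared
```

Like data smashing, NCD needs no human to declare which aspects of the data matter; unlike data smashing, it leans on an off-the-shelf compressor rather than colliding a stream with its "anti-stream" and measuring the residue.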
A Ball Rolls on a Point
The whole ball
of who we are
the green baize
of a single tiny
spot. An aural
track of crackle
betrays our passage
it's hot and
spring out of it.
The pressure is
intense and the
sense that we've
As though bringing
too much to bear
too locally were
Victoria Law in Jacobin (image “Prison Blueprints.” Remeike Forbes/Jacobin):
Casting policing and prisons as the solution to domestic violence both justifies increases to police and prison budgets and diverts attention from the cuts to programs that enable survivors to escape, such as shelters, public housing, and welfare. And finally, positioning police and prisons as the principal antidote discourages seeking other responses, including community interventions and long-term organizing.
How did we get to this point? In previous decades, police frequently responded to domestic violence calls by telling the abuser to cool off, then leaving. In the 1970s and 1980s, feminist activists filed lawsuits against police departments for their lack of response. In New York, Oakland, and Connecticut, lawsuits resulted in substantial changes to how the police handled domestic violence calls, including limiting officers’ discretion not to arrest.
Included in the Violent Crime Control and Law Enforcement Act, the largest crime bill in US history, VAWA was an extension of these previous efforts. The $30 billion legislation provided funding for one hundred thousand new police officers and $9.7 billion for prisons. When second-wave feminists proclaimed “the personal is the political,” they redefined private spheres like the household as legitimate objects of political debate. But VAWA signaled that this potentially radical proposition had taken on a carceral hue.
At the same time, politicians and many others who pushed for VAWA ignored the economic limitations that prevented scores of women from leaving violent relationships.
Keith Doubt on Eric Gordy's Guilt, Responsibility, and Denial: The Past at Stake in Post-Milošević Serbia, in Berfrois (Belgrade, Serbia. Photograph by Jamie Silva):
The intellectual integrity of cultural anthropology is based largely on its commitment to cultural relativism as a principled notion. Cultural relativism is the principle from which the discipline achieves its sense of empirical objectivity. Cultural differences are cherished as just that, cultural differences. No difference is stipulated as superior or inferior, better or worse. The commitment guards against ethnocentric judgments, colonizing prejudices, and, worst of all, grand theorizing with metaphysical pretense. This ethos in the discipline of cultural anthropology guides Eric Gordy’s recent book, Guilt, Responsibility, and Denial: The Past at Stake in Post-Milošević Serbia.
While cultural initiatives rarely investigate and never sentence, they offer some of the keys to understanding that have been missing from political legal projects: the ability to hear and identify with the lived experiences of individuals, a route to engagement that participants in the public can understand, and openness to interpretation that constitutes an invitation to dialogue. (p. 179)
There is a contrasting notion in the social sciences to the principle of cultural relativism, namely, the assumption that social science has a valid knowledge-base and ethical responsibility from which to demonstrate how some societies are healthier than others and how some social structures are better for community life. Social science depicts certain normative orientations and collective sentiments as more functional for the vitality of human life and sociability. For example, human rights scholars assume that a genuine respect for the principle of human rights is good: good for people in society, good for their communities, and good for their governments. Gordy understands this perspective but recognizes its unintended consequences, given his political knowledge of what Max Weber calls the ethical irrationality of the world in his famous lecture, “Politics as a Vocation.” In politics, it is necessary to employ force in realizing one’s values. When, however, force is employed, no matter how good the intentions behind the use of force, bad results follow or evil consequences occur. Weber calls this the ethical irrationality of the world which is the reason for the sense of disenchantment that characterizes the spirit of the modern world. In politics, actions whose motives are seemingly good can lead to bad results. The reverse is also true; actions whose motives are seemingly bad can lead to good results. Weber calls this the paradox of consequences, an ever-repeating empirical and historical pattern, and Gordy understands this matter well. There is a hubris that informs the forceful use of law and legal process at both the national and the international level, and Gordy wants to debunk this hubris that guides international interventions in societies experiencing conflict and social violence.
To introduce the structure of his book, Gordy writes, “the ordering of the chapters is meant to lead readers through the logic that brought the study from apparently clear and relatively simple moral questions to greater complexity and uncertainty, and to an insistence on the importance of the cultural and social context” (p. xv). After relatively simple moral questions implode upon themselves when confronted with empirical scrutiny and historical accounts, the significance of cultural variables within their own milieu and within their own historical context assumes its rightful place.
Over at The Physics arXiv Blog:
Taleb and co begin by making a clear distinction between risks with consequences that are local and those with consequences that have the potential to cause global ruin. When global harm is possible, an action must be avoided unless there is scientific near-certainty that it is safe. This approach is known as the precautionary principle.
The question, of course, is when the precautionary principle should be applied. Taleb and co begin by saying that their aim is to place the precautionary principle within a formal statistical structure that is grounded in probability theory and the properties of complex systems. “Our aim is to allow decision-makers to discern which circumstances require the use of the precautionary principle and in which cases evoking the precautionary principle is inappropriate.”
Their argument begins by dividing potential harm into two types. The first is localised and non-spreading. The second is propagating harm that results in irreversible and widespread damage. Taleb and co say that traditional decision-making strategies focus on the first type of risk where the harm is localised and the risk is easy to calculate from past data.
Even in this case, it is always possible to make a mistake when reasoning about risk. The crucial point is that when the harm is localised, the potential danger from a miscalculation is bounded.
By contrast, harm that is able to propagate on a global scale is entirely different. “The possibility of irreversible and widespread damage raises different questions about the nature of decision-making and what risks can be reasonably taken,” say Taleb and co. In this case, the potential danger from a miscalculation can be essentially infinite. It is in this category of total ruin problems that the precautionary principle comes into play, they say.
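The asymmetry Taleb and co describe can be made concrete with a toy calculation (this sketch is illustrative only; the functions and numbers are my assumptions, not figures from the paper). When harm is localised, the worst-case cumulative loss over many decisions is simply bounded by a per-decision maximum. When a decision carries even a small probability of total ruin, repeated exposure compounds that probability toward certainty:

```python
# Illustrative sketch (not from Taleb et al.'s paper): bounded local harm
# versus compounding ruin risk. Both the loss cap and the per-decision
# ruin probability below are hypothetical numbers chosen for illustration.

def worst_case_local_loss(n_decisions, max_loss_per_decision=1.0):
    """Localised harm: the worst possible cumulative loss is bounded,
    growing at most linearly in the number of decisions."""
    return n_decisions * max_loss_per_decision

def survival_probability(n_decisions, ruin_prob_per_decision=0.01):
    """Global harm: the chance of avoiding ruin across repeated,
    independent exposures shrinks geometrically."""
    return (1.0 - ruin_prob_per_decision) ** n_decisions

# A 1% chance of ruin per decision seems tolerable once, but over
# 1,000 decisions survival becomes vanishingly unlikely (~4e-5),
# while the bounded local loss stays capped at 1,000 units.
print(worst_case_local_loss(1000))
print(survival_probability(1000))
```

This is the sense in which a miscalculation about localised risk is recoverable while a miscalculation about systemic risk is not: no number of bounded losses adds up to ruin, but any nonzero ruin probability, repeated, eventually dominates.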
Tuesday, October 28, 2014
Jonathan Rée in Prospect:
Every October for the past 13 years, the Oxford Lieder Festival has been bringing classy performances of classical art songs to what used to be a rather unmusical town. I love it: great art taken seriously, mostly in the glorious intimacy of the Holywell Music Room, and without any pomp, artifice or unnecessary formality. But I must say I was rather dismayed when I heard what was planned for this year’s festival, which started a week ago: a complete survey of all the songs that Schubert ever wrote.
Schubert was, of course, the inventor of the classical Liederabend or song recital: the extraordinary musical institution that features nothing but a singer and a pianist, achieving, when all goes well, a thrillingly direct communication with their audience. And apart from inventing the institution, Schubert wrote the classics against which all subsequent efforts are measured—notably Winterreise and Schöne Müllerin, whose depth, variety, drama, animation and melancholy place them amongst the bare necessities of any possible desert island. But given that Schubert died at the age of 31 (in 1828) and that, apart from inventing the Liederabend, he composed in practically every other genre of classical music, you might think that he could not have written terribly many songs.
Actually he wrote more than 600, so the idea of a three-week festival featuring every single one is perhaps even crazier than you might have thought. A suitable event for anoraks, pub-quizzers and musical train-spotters, perhaps, but why should anyone who cares for the art of singing want to scrape the barrel for hundreds of minor works, rather than remaining with the tried and tested pre-loved masterpieces?
More here. [Thanks to Brooks Riley.]
Morgan Meis in The Smart Set:
In the year 1905, Henri Matisse painted a portrait of his wife wearing a rather extraordinary hat. The painting was displayed at the Salon d’Automne in Paris that same year. Much shock and controversy followed. To many, the hat looked like a giant lump of randomly chosen colors sitting atop the poor woman’s head. What, also, was the point of all the green on the woman’s face? People and hats don’t look like that. The world doesn’t look like that.
By 1905, this game of looking at contemporary painting and expressing shock and dismay had been going on for some time. A generation had already passed since Impressionism first scandalized right-thinking art aficionados. In the years just after Impressionism, artists like Gauguin and Van Gogh fully dispatched the idea that color in painting had to correspond to color as we see it in the real world. In 1905, the public should have been ready for Matisse. But something about that portrait by Matisse was extra upsetting, even to a public that was now used to being scandalized by art. The color wasn’t just unexpected; it was jarring, verging on ugly. Critics dubbed Matisse and other painters in the show Fauvists. The word means, literally, wild beasts.
You’d expect the wild beast who painted "Woman with a Hat" (1905) to go even further into brutality and ugliness. Shocking the bourgeoisie is hard work. You’re constantly forced to up the ante.
That’s not what happened with Matisse. He continued to experiment radically with color. But his pictures became gentler as time went on. Matisse brought into his paintings a sense of balance, poise, beauty. By the end of his life, Matisse was making art that was downright pretty. There is no more damning adjective to an avant-garde artist than “pretty.”
Neuroscientists John Donoghue and Sheila Nirenberg, computer scientist Michel Maharbiz, and psychologist Gary Marcus discuss the cutting edge of brain-machine interactions
The matter of the legacy of Dietrich Bonhoeffer is at once straightforward and immensely complicated. About the man there is no question. Whatever Bonhoeffer’s flaws—and Charles Marsh’s masterly and comprehensive new biography Strange Glory reveals that there were more than is commonly supposed—the witness of his breathtakingly courageous opposition to Adolf Hitler’s Third Reich leaves criticism disarmed. In the one great challenge of his life, he was magnificent. He behaved the way that the rest of us, in our most hopeful moments, like to imagine we would.
But Bonhoeffer is known to history not simply as a victim of Nazi horror but as a theologian of note. His appeal is startlingly ecumenical: He finds adherents across the Christian spectrum from conservative evangelicals to Lutherans (of various stripes) to liberal Protestants to celebrants of the death of God. Bonhoeffer himself was sympathetic to Catholicism—Karl Barth worried about his “nostalgia for Rome”—and he even came to insist, in Marsh’s words, on “equivalence before God of the church and the synagogue, between the body of Christ and the chosen people of Israel.”
But from such extravagant pluralism, can there be any coherence?