Marc Hirsh on one of his music listening rituals, over at NPR:
But for all the romanticizing of the first time we hear an album or a song, that's almost never the moment of its crucial impact. That's not really how music works, not if it can actually hold up beyond that first listen. Unlike books, movies or plays (and television, to a lesser extent), recorded music is consumed repetitively. It's usually anywhere between the second and fifth listen that fragments that maybe weren't evident on first glance suddenly come at you or your brain makes a connection that could only have been made indirectly. That's when a song starts to mean something to you.
Of course, there's something to be said about hearing a song and instantly connecting to it; that experience is just as valid as any, and it's certainly happened to me countless times. But that's precisely an experience, a one-off. The songs that are important to us are more like objects or possessions. They aren't bound by any one moment but instead continue to exist as time trundles ahead.
MANOHLA DARGIS On one level the allure of comic book movies is obvious, because, among other attractions, they tap into deeply rooted national myths, including that of American Eden (Superman’s Smallville); the Western hero (who’s separate from the world and also its savior); and American exceptionalism (that this country is different from all others because of its mission to make “the world safe for democracy,” as Woodrow Wilson and, I believe, Iron Man, both put it). Both Depression babies, Superman and Batman were initially hard-boiled types, and it’s worth remembering that the DC in DC Comics was for Detective Comics. Since then the suits have largely remained the same even as the figures wearing them have changed with their times. Every age has the superhero it wants, needs or deserves.
Comic book movies are also fun (except when they’re not) and often easy viewing (except when they make your head hurt). They’re also blunt: A guy in a unitard pummels another guy — pow! — and saves the day, the girl and the studio. I like some comic-book movies very much, dislike others. But as a film lover I am frustrated by how the current system of flooding theaters with the same handful of titles limits my choices. (According to boxofficemojo.com, “The Avengers” opened on 4,349 screens in the United States and Canada, close to 1 in 10.) The success of these movies also shores up a false market rationale that’s used to justify blockbusters in general: that is, these movies make money, therefore people like them; people like them, therefore these movies are made.
SCOTT And yet these stories do have some appeal, beyond the familiarity of the characters and the relentlessness of the marketing campaigns. As you suggest, they strike mythic, archetypal chords, and cater to a persistent hunger for large-scale, accessible narratives of good and evil.
It’s telling that Hollywood placed a big bet on superheroes at a time when two of its traditional heroic genres — the western and the war movie — were in eclipse, partly because they seemed ideologically out of kilter with the times.
No doubt you have been disappointed in some of us. Some of us are very disappointing. No doubt you have found that justice in the United States goes only with a pure heart and a right purpose, as it does everywhere else in the world. No doubt what you have found here did not seem touched for you, after all, with the complete beauty of the ideal which you had conceived beforehand. But remember this: If we had grown at all poor in the ideal, you brought some of it with you. … And if some of us have forgotten what America believed in, you, at any rate, imported in your own hearts a renewal of the belief. That is the reason that I, for one, make you welcome. … You dreamed dreams of what America was to be, and I hope you brought the dreams with you. No man that does not see visions will ever realize any high hope or undertake any high enterprise. Just because you brought dreams with you, America is more likely to realize dreams such as you brought. You are enriching us if you came expecting us to be better than we are.
—Woodrow Wilson, to 4,000 newly naturalized citizens, Philadelphia, May 10, 1915
The story begins in 1933, when Depression-fueled unemployment rates hit an all-time high of 25 percent. Progressive reformers, including Wisconsin’s influential husband-and-wife reformers Elizabeth and Paul Raushenbush, were desperately casting about for a constitutional basis for national unemployment insurance. Action at the state level was paralyzed because no one state seemed able to adopt an expensive insurance plan without driving employers into neighboring states. But action at the federal level seemed impossible, too, because the conservative Supreme Court seemed unlikely to allow the Congress to enact a comprehensive unemployment system as a regulation of interstate commerce.
That’s where Brandeis comes in. Elizabeth was the justice’s daughter, and when she and her husband visited with him in his summer cottage in Massachusetts, Brandeis suggested a novel solution to the constitutional dilemma: the tax power, he told them, would offer a constitutionally sound footing for the vast social insurance system they were contemplating.
Four years later, Brandeis was a decisive vote in the sharply divided 5-4 decision in Steward Machine Co. v. Davis, upholding the unemployment insurance provisions of the Social Security Act over the dissent of the four conservative justices, who were known collectively as the “Four Horsemen of the Apocalypse.” Brandeis’s tax theory had become the foundation of the new American social insurance state.
In 1934, Thomas Hart Benton, purveyor of muscular scenes of American life, was the country’s most famous painter and one of the very few ever to have his picture on the cover of Time. In 1949, Jackson Pollock, painter of abstract drips and swirls, appeared in a four-page spread in Life teasingly headlined “Is He the Greatest Living Painter in the United States?” Yes or no didn’t really matter: he was the nation’s new art star. What changed in the 15 years that separated the public elevation of these two artists and their radically different art? The world changed, for one thing, moving out of the Great Depression, through World War II, and into a bomb-haunted cold war. America changed from a mighty fortress to an outreaching global imperium. And American art, including Pollock’s, changed from illustrating provincial sagas to dramatizing universal myths.
Or again, imagine if the literary folk suddenly tired of it all, realized how unhelpful it all was; if the critics and academics wearied of untangling torment for a living (I see you haven’t got any better, Beckett’s old analyst responded after the author sent him a copy of Watt). Imagine if the publishers—let’s call them the Second Arrow Publishing Corporation—informed all their great authors, all the masters of the mercilessly talkative consciousness, that they are winding up their affairs; they have seen the light, they will no longer publish elaborations of tortured consciousness, lost love, frustrated ambition, however ingenious or witty. Imagine! All the great sufferers saved by Buddhism, declining the second arrow: quietness where there was Roth, serenity where there was McCarthy, well-being where there was David Foster Wallace? Do we want that? I suspect not.
Marilynne Robinson, the Pulitzer-winning novelist, is a confounding writer in today’s political alignment. Her new essay collection, “When I Was a Child I Read Books,” is — despite the sentimentality of its title — fundamentally a leftist political manifesto and lament for America’s loss of faith in government. Yet it grants a central argument of many religious conservatives — that America’s virtues are indeed steeped in biblical thought. “When I Was a Child” is a broadside defense of literature and classical liberalism that demands we include the unfashionable Old Testament as a foundation of both. Through rigorous citation and deep personal reflection, Robinson builds an excellent case. New Atheists like Sam Harris and medieval nostalgists like Rick Santorum would each find occasions for garment-rending in this collection.
Over at Rationally Speaking, Leonard Finkelman on the Richard Dawkins-E.O. Wilson debate about levels of natural selection:
The so-called “selfish gene” theory, technically known as gene selection, is an elaboration of work done by W.D. Hamilton and G.C. Williams on a phenomenon known as “kin selection.” Kin selection is predicated on the idea that the impulse I feel to care for my nephew is stronger than the impulse I feel to care for (say) my neighbor’s nephew. I know that my nephew is my sister’s son, and that my sister and I were born of the same parents; I therefore know that he carries 50% of my sister’s genetic alleles, and that there’s a 50% chance that any one of my sister’s alleles is one that I also carry. For any one of my nephew’s alleles, then, there’s a 25% chance that I also carry that allele. If I care for my nephew, then my genes have a one in four chance of helping themselves; if I care for my neighbor’s nephew, the odds are much, much lower. Gene selectionists therefore argue that genes are the individuals who benefit in the process of natural selection. Hence Dawkins’ famous claim that organisms are “gigantic lumbering robots” for carrying genes around: I have an impulse to care for my nephew because it helps (some of) my genes, even though it hurts me as a whole.
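The relatedness arithmetic in the passage above can be made concrete with a short numerical sketch. This is an illustration I've added, not from the excerpt; the constant and function names are mine, and the threshold comparison is the standard statement of Hamilton's rule (helping pays, gene-wise, when r × B > C):

```python
# Coefficient of relatedness: the probability that a given allele in me
# is also carried by a relative, via shared ancestry. Each step in the
# pedigree halves the probability, as the excerpt walks through.

R_SIBLING = 0.5       # chance my sister carries a given allele of mine
R_PARENT_CHILD = 0.5  # chance her son inherited a given allele of hers

# Chance my nephew carries a given allele of mine: 0.5 * 0.5 = 0.25,
# the "one in four chance of helping themselves" in the text.
r_nephew = R_SIBLING * R_PARENT_CHILD

def altruism_pays(r, benefit, cost):
    """Hamilton's rule: altruism can spread when r * benefit > cost."""
    return r * benefit > cost

print(r_nephew)                       # 0.25
print(altruism_pays(r_nephew, 5, 1))  # True: benefit > 4x my cost
print(altruism_pays(0.0, 5, 1))       # False: a stranger, r ~ 0
```

The point of the gene's-eye view falls out of the last two lines: the same act of care "pays" for my genes when directed at my nephew but not at my neighbor's nephew, which is exactly the asymmetry kin selection is invoked to explain.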
In 2010, E.O. Wilson and two collaborators wrote an article in Nature attacking the viability of kin selection. We won’t get into the details of their mathematical argument; the bottom line is that things rarely work out so neatly as “my nephew has half of my sister’s genetic alleles and she has half of mine,” and the complexities ultimately call into question the idea that gene selection can explain altruistic behavior. In his newest book and a recent New York Times “Stone” column (interestingly, a philosophy blog!), Wilson proposes an alternative that he calls “multi-level selection.” His account is so called because Wilson believes that nature sometimes selects genes, sometimes selects organisms, and sometimes selects groups—and that the latter option is the one that explains altruism. It was this claim that prompted Dawkins’ scathing review of Wilson’s book, linked in the first paragraph. Undermining the very foundation of Dawkins’ account of selection probably had something to do with it, too.
[E]ven if the M-theory hypothesis is correct, does it in fact answer the question of “Why is there something rather than nothing?” It would certainly account for the existence of the world. But would it not raise a fresh question: “Where did M-theory come from? What is responsible for its existence?”
This brings us up against what one suspects is a fundamental limitation of the scientific enterprise. The job of science is to describe the world we find ourselves in — what it consists of, and how it operates. But it appears to fall short of explaining why we are presented with this kind of world rather than some other — or why there should be a world at all.
Indeed, there is cause to wonder whether science even gets as far as describing the world. For instance, what is the world made of? One might answer in terms of the electrons, protons, and neutrons that make up atoms. But what are electrons, protons and neutrons? Quantum physics shows how they are observed to behave like waves as they move about. But on reaching their destination and giving up their energy and momentum they behave like tiny particles. But how can something be both a spread out wave with humps and troughs, and at the same time be a tiny localized particle? This is the famous wave/particle paradox. It afflicts everything, including light.
The solution given by the Danish physicist Niels Bohr was that one has to stop trying to explain what something, such as an electron, is. Instead, we are confined to explaining how something behaves in the context of a certain kind of observation being made on it — whether we are observing it moving from one place to another (in which case the language of waves is appropriate), or alternatively observing it interacting on reaching its destination (requiring the language of particles).
There's a popular student story about Martha Nussbaum giving a talk in a small living room of the Episcopal Church's chaplaincy centre on the leafy campus of the University of Chicago. As she was holding forth, a bird flew down the chimney and started to flutter around the room, bashing into the walls and generally panicking, as trapped birds do. The students were immediately busy opening windows and trying to shoo the poor creature to freedom. All their attention was taken up with the bird. But in the midst of all the excitement, Nussbaum didn't break her intellectual stride. She just carried on delivering the lecture as if nothing whatsoever was going on. She emanates detached academic cool – fully in command of herself and her material. From someone who has spent a distinguished academic career emphasising the riskiness and vulnerability of the human condition, all this slightly frosty control comes as something of a surprise.
Why, she once asked in a brilliant essay entitled “Love's Knowledge”, do the gods of the ancient world often fall in love with human beings? Why would they prefer mortals to immortals? It is precisely because human beings are able to fail, she argues, that they are able to manifest so many attractive qualities. Take courage. What place can courage have in the world of immortal gods? How could an immortal god risk everything for another if their own welfare were always guaranteed in advance? And what sort of parent would an immortal parent be to an immortal child? Certainly not one that is up half the night worrying. Risk and vulnerability are intrinsic to being human. And that is what makes us attractive, sometimes heroic.
Romano was a literary critic with The Philadelphia Inquirer for a quarter of a century and has also been a professor of philosophy. He presumably enjoyed this latter job, because he writes that today’s America is the best place to do philosophy that there has ever been, surpassing even the Athens of those ingenious and polite men Socrates, Plato and Aristotle. In one fit of enthusiastic chauvinism he goes yet further, and announces that it is the “perfectly designed environment” to ply his trade, as if no greater intellectual paradise could be imagined. This news will not provide much comfort to declinists who feel the political and economic hegemony of the United States to be fading fast. But perhaps it will help a little. Let deficits grow, good jobs disappear and China loom — hang it all, America will always have world-beating epistemology and metaphysics up its sleeve. Well, maybe that isn’t quite fair to Romano, because his claim depends on redefining the term “philosophy,” giving it a nebulous meaning that embraces far more than is taught under that name in universities. (More later about this revisionist wordplay.) Also, one part of his case is convincing, and oddly still worth making: America is not nearly so dumbed down as its detractors at home like to say.
“Idiot America: How Stupidity Became a Virtue in the Land of the Free,” “Unscientific America: How Scientific Illiteracy Threatens Our Future” and “The Age of American Unreason” are just three of the books from American writers in the past five years that belabor religious fundamentalism, conservative talk shows, scientific illiteracy or the many available flavors of junk food for thought. The fallacy of such books, as Romano argues, is that they take some rotten parts for the largely nutritious whole. It’s not so much that they compare American apples with foreign oranges, but that they fail to acknowledge that the United States is an enormous fruit bowl. Everything is to be found in it, usually in abundance, including a vibrant intellectual life. Rather like that of India — which has over a third of the planet’s illiterate adults but also one of the largest university systems in the world — the intellectual stature of America eludes simple generalizations.
Andrew Cohen weighing in as part of The Atlantic's continuing debate on work-life balance:
Anne-Marie Slaughter's remarkable article “Why Women Still Can't Have It All” clearly has meant different things to different people since it was published and posted. To me, first, it is further evidence of what I have come to believe after 46 years on this planet: most women are not just smarter than most men but braver and more aspirational, too. There is the noble, ancient striving to “have it all.” And then there is the earnest and thought-provoking debate, largely between and among women if I am not mistaken, over exactly what that phrase means and whether the quest to achieve it is even worth it.
Men? Please. Such an earnest public conversation on this topic between and among men is impossible to imagine (no matter how hard The Atlantic tries). That's why so many of us diplomatically stayed on the sideline last week. And haven't men as a group largely given up hope of “having it all” anyway? Did we ever have such hope to begin with? I don't remember ever getting a memo on that. Without any statistics to back me up — how typical of a man, right? — I humbly suggest that a great many of us long ago decided in any event to focus upon lesser, more obtainable mottoes, like “doing the best I can” or “hanging in there,” as we try to juggle work, family, and a life.
I was traveling yesterday, Rousseau's 300th, and did not get a chance to post this piece by Laurie Fendrich in The Chronicle of Higher Ed:
Today, June 28, is Jean-Jacques Rousseau’s 300th birthday. Although it’s hard to imagine philosophers as squalling newborns, in Rousseau’s case, it makes sense. His whole philosophy hinges on the idea that we humans are born good but, along the way of making civilization, we manage to destroy what’s good in ourselves. From the moment the umbilical cord is cut, Rousseau essentially says, we systematically obliterate our real nature, which is one of benevolent beings happily living a simple existence.
But for someone living in any complex society since the Industrial Revolution, Rousseau’s philosophy is not only difficult to believe (aren’t education, exposure to the arts, technological progress inarguably good things?), but inconvenient to practice—even in small instances, such as bringing up his ideas for discussion in a 21st-century college class. None of this has prevented me from loving Rousseau’s complex, contradictory, and exhilaratingly exasperating philosophy ever since first encountering it as a sophomore, in a college course in political philosophy.
Why would a young college student who was just discovering the solitary joys of painting pictures become obsessed with the one and only Enlightenment thinker who ferociously attacked the very value of art (and science as well)? And why would that young college student never manage to break with the almost ubiquitously maligned Rousseau, never manage to put him to the side and forget him? Or, if she was going to stay with him, why couldn’t she have found a way to concentrate on his sweeter side—the side expressed in, for example, his Reveries, where he walks in a “lonely meditation on nature”?
What is missing from our health care debate—even as conducted by our most insightful and radical critics of the dysfunctional American health care system—is a recognition of what, underneath it all, drives the system. It is Americans' insatiable lust for health care. What Americans possess in overwhelming abundance is the urge to be treated for their maladies. Witness our massive formal addiction and mental health disease treatment and support system (as opposed to the informal community supports offered more readily around the world). And our most forward-thinking health care advocates can only imagine expanding this system exponentially (e.g., parity in health care coverage between physical and emotional illness).
American health care costs are driving America into the ground. These costs run at two to three times those of other nations (like the UK), and the chasm is widening, since virtually all other nations have stabilized these costs, while we are only beginning to tackle the rate at which they increase. But Republicans can still run on simply resuming lock, stock and barrel the same old private care system, Americans in general dislike Obamacare, and Obamacare itself is built primarily around expanding coverage without controlling costs. This is because any effort to rein in such costs is met by accusations like “death panels” or “rationing,” which immediately kills them like glassy-eyed dead fish floating on the surface of the stagnant pond that is our care system.
WE CAN reasonably conclude that the verdict is not yet in on Egypt’s future. Popular empowerment has so far been a thorn in the side of those trying to destroy the revolution. And it is hard to imagine that the millions who have thrust themselves so decisively onto the center stage of their own history could be dismissed so easily. Romanticism aside, however, one must realize that revolution is an ugly business. Those with vested interests in authoritarian rule will not simply step aside under social pressure, nor will they wither away over time. Their total suppression and defeat is of the essence to any true revolution. As long as Egyptians find this course distasteful—preferring instead conciliatory solutions and wishing that sporadic pressure from below along with clustering around the Muslim Brothers (as a revolutionary movement by proxy) can somehow convince the military and security elite to “do the right thing”—little can be done. And as long as revolutionaries cannot organize their ranks and encourage their fellow citizens to make difficult choices, take risks, and accept short-term instability, then there is little hope that the people themselves will be able to turn their gallant uprising into a complete revolution. Reflecting back on the Iranian case in The Making of the Islamic Revolution, Mohsen M. Milani rightly noted, “Theorizing about revolution sounds romantic, but winning it is no romantic enterprise. The verdict on those who refuse to treat revolution as a furious war has been unequivocally clear: oblivion or death… Revolutions are like wars.” And the key to winning wars is organization.
LATELY THE POSTHUMOUS CORPUS of Roland Barthes has been growing at a rate that rivals Tupac Shakur’s. (Can a hologram Barthes be far behind?) Recent years have witnessed the publication of lecture notes from his last seminars at the Collège de France (Preparation of the Novel) as well as the journals he kept following the death of his mother (Mourning Diary). The latest addition to his English catalogue is Travels in China, a translation of his notebooks from a three-week trip there in 1974 with a delegation from the French literary review Tel Quel. In France, the publication of Barthes’s private notebooks and journals (Carnets du voyage en Chine and Journal de deuil both appeared in 2009) spurred a round of contentious debate about the ethics of looting a dead writer’s archives. (Somewhere, no doubt, Max Brod is sighing with sympathy.) It’s not hard to attribute the spate of posthumous publications to the mercenary incentive to squeeze every last drop out of an author with any degree of fame. If we’re feeling a little more charitable, we might also see them as testaments to the desire for more of a distinctive voice and a singular intelligence. Each death of a major intellectual figure seems to prompt a flurry of new publications of old material, much of it scraps, all of it suggesting an inability to accept that no more words will issue from that pen, a kind of disbelief that the author is, at last, really and truly dead.
more from Dora Zhang at the LA Review of Books here.
Calvino professed to be fascinated by the world of adolescence – that in-between time, McLaughlin writes, where “a sense of failed initiation hangs over everything, a sense of thresholds not crossed”. The author regarded Into the War as a “polemic against the habitual image of adolescence in literature”, and all three stories attest to the potentially magical, transformative space of adolescence, however thwarted by the environment of war and Fascism. Calvino’s note to the trilogy points out that his “entry into life” and the Italian “entry into war” coincided. Throughout the book, the hyper-aware narrator senses the incoming storm, but he is too preoccupied with girls and peer pressure, too distracted by the circus atmosphere of Fascist politics, to confront this reality directly. After all, Calvino was no D’Annunzio, the Italian poet who led a group of Legionnaires in laying siege to the city of Fiume in the First World War; he was more the heir of Baudelaire, a flâneur thrust into a Fascist Youth uniform.
Artificial intelligence began with an ambitious research agenda: To endow machines with some of the traits we value most highly in ourselves—the faculty of reason, skill in solving problems, creativity, the capacity to learn from experience. Early results were promising. Computers were programmed to play checkers and chess, to prove theorems in geometry, to solve analogy puzzles from IQ tests, to recognize letters of the alphabet. Marvin Minsky, one of the pioneers, declared in 1961: “We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines.”
Fifty years later, problem-solving machines are a familiar presence in daily life. Computer programs suggest the best route through cross-town traffic, recommend movies you might like to see, recognize faces in photographs, transcribe your voicemail messages and translate documents from one language to another. As for checkers and chess, computers are not merely good players; they are unbeatable. Even on the television quiz show Jeopardy, the best human contestants were trounced by a computer.
In spite of these achievements, the status of artificial intelligence remains unsettled. We have many clever gadgets, but it’s not at all clear they add up to a “thinking machine.” Their methods and inner mechanisms seem nothing like human mental processes. Perhaps we should not be bragging about how smart our machines have become; rather, we should marvel at how much those machines accomplish without any genuine intelligence.
More here. [Photo shows IBM's chess-playing computer Deep Blue.]