May 31, 2012
Carlin Romano's America the Philosophical
William Giraldi in the Los Angeles Review of Books:
You've probably heard the news: We Americans are a mob of dipshits. In our nation's emporium of "ideas," the madcap and maniacal sell like batteries in a blackout. We can't help it, apparently. We've been dullards since our inception — Boobus americanus in H.L. Mencken's unkind coinage — and so relish our pop-pundits and their orangutan ilk in Washington, our searing rabblement of the religious, our creationists, cranks, crackpots, or any wide-eyed witch in the street. In the slothful spirit of fairness, we like to give the scientist and the voodoo priestess equal measure, and then applaud the voodoo. That we are also a sub-literate breed is probably obvious, and probably the problem in the first place, since quality reading builds antibodies against bullshit. Mention Fernando Pessoa to a Portuguese — any Portuguese — and prepare yourself for an afternoon's colloquy. Toss a pebble into a crowd of Germans and the first person it touches will be pleased to pontificate on the importance of Goethe. Now go say "Walt Whitman" to the next American you run into and you'll be confronted with the vacant countenance of the over-medicated.
But forget the poor plebe — even some of last century's distinguished scholars and writers held American literature to be an anemic enterprise unworthy of serious account. Van Wyck Brooks enjoyed exclaiming the calumny that American artists and intellectuals had no "tradition" to build upon (then he let posterity know precisely who he was when he dubbed Mark Twain a fraud). Mencken, in an uncharacteristic break with discernment, thought Emerson an oaf with no influence, despite the fact that Mencken couldn't look on anything without wearing Nietzsche's eyeglasses; he must have missed those parts in Nietzsche — in the letters, journals, and Twilight of the Idols — extolling Emerson's genius. If you'd like to dine at a banquet of boorish inanity, see Theodore Dreiser's essay "Life, Art and America," in which he castigates our nation for a famine of consequential writers and poets while inexplicably forgetting the existences of Dickinson, Thoreau, Hawthorne, Melville, and Henry James. And Mr. James didn't help, either, when in his biography of Hawthorne he claimed that American air didn't have enough oxygen to let big ideas breathe properly. He sailed for England as soon as he could, and a generation later some of the best minds born on American soil — Eliot, Stein, and Pound for starters — followed in his huffy wake.
From Bauhaus to Bollywood
Aditya Dev Sood in Design! Public:
I spent Sunday morning at the Barbican, a curious London cultural institution that dates from the 1970s. Its heavy and brutalist architecture could have been featured in A Clockwork Orange. The Barbican was hosting a widely acclaimed exhibition on the Bauhaus. I went in there with my friend Sarah not expecting much — what was there about the Bauhaus, I wondered, that I had left to learn?
But the exhibition was a comprehensive curation, not only of the themes and preoccupations of the Bauhaus at various stages of its development and peripatetic movement around Germany to increasingly large urban centers, but also of its historical development and shifting, evolving priorities: now arts and crafts, now total-art-work, now industrial support, now architecture. There was even a brief section on the future legacy of the Bauhaus, which documented the movement of different students and teachers from the school to centers in other parts of Germany and the United States. I was surprised to learn that the Ulm School of Design, of which we have heard so much from M. P. Ranjan in the last couple of Design Public events, was set up by a Bauhaus student after the war, in 1953.
I had spent my college years in thrall to the lost but resilient legacy of the Bauhaus, studying its personalities from the point of view of painting, sculpture, theater — and even design pedagogy. Like that of all architects and designers, my foundational education included a kind of recreation of the Bauhaus, and I too was therefore steeped in its lore. When I looked up from the art books, posters, and gelatin prints through which Bauhaus culture continues to be transmitted, I found the rest of the world odd and strange.
Nerves of Steel
Freaks, Geeks and Microsoft: How Kinect Spawned a Commercial Ecosystem
Rob Walker in the NYT Magazine:
At the International Consumer Electronics Show earlier this year, Steve Ballmer, Microsoft’s chief executive, used his keynote presentation to announce that the company would release a version [of Kinect] specifically meant for use outside the Xbox context and to indicate that the company would lay down formal rules permitting commercial uses for the device. A result has been a fresh wave of Kinect-centric experiments aimed squarely at the marketplace: helping Bloomingdale’s shoppers find the right size of clothing; enabling a “smart” shopping cart to scan Whole Foods customers’ purchases in real time; making you better at parallel parking.
An object that spawns its own commercial ecosystem is a thing to take seriously. Think of what Apple’s app store did for the iPhone, or for that matter how software continuously expanded the possibilities of the personal computer. Patent-watching sites report that in recent months, Sony, Apple and Google have all registered plans for gesture-control technologies like the Kinect. But there is disagreement about exactly how the Kinect evolved into an object with such potential. Did Microsoft intentionally create a versatile platform analogous to the app store? Or did outsider tech-artists and hobbyists take what the company thought of as a gaming device and redefine its potential?
This clash of theories illustrates a larger debate about the nature of innovation in the 21st century, and the even larger question of who, exactly, decides what any given object is really for. Does progress flow from a corporate entity’s offering a whiz-bang breakthrough embraced by the masses? Or does techno-thing success now depend on the company’s acquiescing to the crowd’s input? Which vision of an object’s meaning wins? The Kinect does not neatly conform to either theory. But in this instance, maybe it’s not about whose vision wins; maybe it’s about the contest.
Emily Nussbaum on Community, Doctor Who, and fan cults, in the New Yorker (h/t: Amanda Marcotte):
The NBC series “Community” was created by Dan Harmon, a mad scientist of sitcoms—so divisive a figure that he was just run out of town by his own studio. (The show was re-upped for a fourth season, but Harmon was replaced with new showrunners.) Even amid the brutality of network TV production, this was a pretty shocking event, since “Community” is Dan Harmon, the way “Mad Men” is Matt Weiner. Set at a community college that is really a stage for wildly inventive genre experiments, it’s a comedy that’s at once alienating and warm, a sitcom lover’s sitcom that attracts the kind of fans that the media scholar Henry Jenkins once described, with admiration, as “frighteningly ‘out of control,’ undisciplined and unrepentant, rogue readers.”
In other words, not everyone. So perhaps it’s no coincidence that “Community” ’s excellent third season, which ended two weeks ago, featured a season-long meditation on the pains and pleasures of cult fanhood, structured around an homage to one of the greatest science-fiction shows: “Doctor Who.” The key to this exploration was the character of Abed Nadir, played by Danny Pudi with the gaze of an amused basilisk. Abed, who has Asperger’s syndrome and dreams of making documentaries, is in one sense a familiar sitcom character, the gentle alien observer—like Latka, in “Taxi.” But with each season he has drifted closer to the show’s center, replacing its ostensible hero, the smart-ass Jeff, and injecting “Community” with his super-fan enthusiasms, which range from Batman to “My Dinner with André.”
As Abed emerged, “Community” became a bit of a science-fiction show itself, the kind of series in which, in the season’s signature moment, a tossed die splits a dinner party into six alternate realities. In another plot this season, Abed and his best friend, Troy, constructed a Holodeck-like space in their apartment, which they called the Dreamatorium. Inside that green-and-yellow grid, Abed and Troy played out imaginary plots of their favorite show, “Inspector Spacetime,” which stars an “infinity knight” in a bowler hat, and his associate, Constable Reginald (Reggie) Wigglesworth. “Inspector Spacetime” is, of course, an affectionate tribute to “Doctor Who,” the long-running series that helped create our modern breed of Abeds and Dan Harmons—the sort of difficult obsessives who make original things and then get fired. “Doctor Who” débuted on the BBC in 1963, three years before “Star Trek” (and the day after Kennedy was assassinated). The show’s eponymous hero was (and is) a Time Lord, capable of jumping through time and space. He does so in the whirling TARDIS, which looks like a bright-blue phone booth but is as large as a mansion once you step inside. When near death, he generates a new body, conveniently played by a new actor (something NBC surely wishes were a tradition for showrunners). There have also been many “companions,” often plucky females—most famously Sarah Jane Smith (Elisabeth Sladen)—as well as enemies, like those Nazi-ish pepper pots the Daleks. The show used the shabbiest possible effects, plus a fly-by-night attitude toward narrative logic, although its low budget was as much a feature as a bug: it made something out of nothing, much the way Abed and Troy constructed their Dreamatorium engine out of cardboard tubes and a funnel.
Reality Hunger: On Lena Dunham's "Girls"
Jane Hu in the LA Review of Books:
In the promotional trailer for the series, Dunham's character Hannah Horvath sits before her parents and proclaims: "I think I may be the voice of my generation," only to retreat instantly behind the modification: "or at least a voice … of a generation." This line, tagged as the catchphrase of Girls in the lead-up to its pilot, was received almost as a dare. Someone, finally, was going to take on the challenge of speaking the real and raw truth for recession-era youth! For all its overwhelming narcissism, though, the line also anticipates the mix of recklessness and reluctance that the show cultivates. Girls wants to have it both ways: it wants to be both brash and unsure of itself, universal and specific, speaking (when it wants to) for a generation but reserving the right not to specify which one.
Based on the internet chatter, there seems to be a voracious desire to find oneself in Girls, implying an urgency to locate a voice for this generation, a generation that understands itself to be diverse. As The Hairpin's Jenna Wortham says about these girls: "They are us but they are not us. They are me but they are not me." The show's representations of race, class, and gender have generated an expansive range of reactions, not least because of the show's monolithic middle-class whiteness. It seems like the one thing anyone can agree on is that, unlike Hannah Horvath, they don't eat cupcakes in the bathtub.
But if we're looking for what's truly universal in Dunham's depiction of young, white, upper-middle-class life in New York City, then maybe the cupcake isn't such a bad place to start. Eating is, after all, about as universal as it gets. The overwhelming excitement about and immediate backlash to Dunham's show both seem to suggest a profound hunger on the part of its audience for something nourishing, sustaining, and nutritious, prepared especially for them. This is fitting, because hunger, in all its manifestations, drives Girls. As with all lost generations, there seems to be a profound sense of lack among Hannah's friends. Hannah showcases her appetite for attention, sex, and food, none of which prove exclusive to one another.
Red Plenty Seminar
Crooked Timber is hosting a seminar on Francis Spufford’s novel about the socialist calculation debate, Red Plenty, with posts by Carl Caldwell, Antoaneta Dimitrova, Felix Gilman, Kim Stanley Robinson, George Scialabba, Cosma Shalizi, and Rich Yeselson. (Cosma's Yakov Smirnoff-titled entry, "In Soviet Union, Optimization Problem Solves You," made me laugh out loud.) Antoaneta Dimitrova:
Red Plenty is a book for social scientists in more ways than one. First, because it draws on history, using a great amount of documentary material from the economic and social history of the Soviet Union to tell the story of the communist dream of abundance for all. And second, and perhaps more important, because its evidence-driven narrative aims to answer several typical social science questions, especially for a social scientist interested in communism’s rise and fall. How could the Soviet planned economy be so successful in producing serious economic growth in the 1950s and 1960s, and how could the Soviet system produce the science and innovation that led to space exploration and many other scientific achievements? And why did it then fail to continue doing so, to keep up the pace of economic growth and scientific discovery?
Among Spufford’s many achievements in this book is that he provides some direct and some indirect answers to these questions. Even though he leads us to the answers by telling the stories of characters who are convincing and fully capable of engaging the reader’s interest in their destiny, he manages somehow to explore mechanisms that are structural rather than personal. Despite the attention given to Khrushchev and other historical figures from the Soviet Union, the personal vignettes are embedded in a narrative in which science, even more so than the idea of plenty, is the hero. This is perhaps best represented by the prominent and fairly convincing character, and the fate, of the mathematician and economist Kantorovich. Other Red Plenty characters, like the planner Maksim Mokhov, remain ‘a confabulated embodiment of (the) institution’ (p. 395).
In contrast to many other books written about the Soviet period, and especially about Stalinism, Spufford’s account is not emotional, grim, and dramatic; it does not aim to show the suffering of ordinary people or their disillusionment with the system, which has already been done with unrivalled mastery in the classic works of Solzhenitsyn, Pasternak, or Bulgakov, to name but a few. Instead, he shows the various characters influenced not so much by the cruel decisions as by the dreams of the communist leaders, leaders who, in accordance with Marxist dogma, pretended (Stalin) or hoped (Khrushchev) that they were social scientists and who, in Spufford’s interpretation, harbored dreams of achieving abundance for all: Red Plenty.
The genome that keeps on giving
Here’s more of the latest news about genetic research:
- Jack and Jill went on The Pill: Now that Scottish scientists have identified a gene that’s critical for sperm production, the chances look better that we’ll someday have a male birth control pill.
- Bad influences: A team of researchers at Imperial College London found that a woman's risk of developing breast cancer doubled if her genes had been changed by exposure to smoke, alcohol, pollution, and other factors.
- When mice age better than cheese: For the first time, Spanish scientists have been able to use gene therapy to lengthen the lives of adult mice. In the past, this has been done only with mouse embryos.
- Head games: Should high school kids be tested to see if they have an Alzheimer’s gene before they’re allowed to play football? Two scientists who study both Alzheimer’s and traumatic brain injuries to football players have raised that pointed question in the journal Science Translational Medicine.
- Forget about his feet, send his hair: Researchers at Oxford University have put out a call to anyone holding Bigfoot hair or other samples from the creature. They promise to do genetic testing on anything that comes their way.
Video bonus: Richard Resnick is CEO of a company called GenomeQuest so he definitely has a point of view about how big a role genome sequencing will play in our lives. But he does make a good case in this TED talk.
Smells Like Old Spirit
Older folks give off a characteristic scent that's independent of race, creed, or diet. The Japanese even have a name for it: kareishu. Most people say they find the smell disagreeable, typically describing it as "stinky-sweet." But in a new study, participants in a "blind sniff test" found the body odor of older people less intense and more pleasant than that of the young or middle-aged. Sensory neuroscientist Johan Lundström has been familiar with old-person scent since his childhood in Sweden, where he sometimes accompanied his mother to her job at a nursing home. Decades later, as the head of his own lab at the Monell Chemical Senses Center in Philadelphia, Pennsylvania, he gave a talk at another nursing home. "The same smell hit me again," he says. Lundström wondered if there really are specific age-related odors that the human sense of smell can detect. Although research shows that animals can distinguish the ages of other animals based on their odor, no comparable studies had been done in humans.
So Lundström and colleagues recruited 20 men and 21 women between the ages of 20 and 30 to be sniffers. All were healthy nonsmokers who didn't take drugs or medications. Meanwhile, a group of "donors" who were young (20 to 30 years old), middle-aged (45 to 55 years old), and old (75 to 95 years old) went to bed for five consecutive nights wearing T-shirts with absorbent pads sewn into the armpits. To make sure they gave off only their natural scent, the donors washed their hair and bodies with odorless shampoos and soap before going to bed each night. They also refrained from smoking, drinking alcohol, or eating spicy food. The volunteers sniffed the pads worn by the variously aged donors and grouped the smells by age. They classified the smells of the older donors with 12% greater accuracy than would be expected by chance, compared with 8% better than chance for the younger and middle-aged donors, the researchers report online today in PLoS ONE. According to Lundström, the real surprise came when the sniffers were asked to rate the smells by intensity and unpleasantness. Even though the volunteers compared the smell of old people to stale water or old basements, when they encountered the smell amid those of the other age groups, they consistently rated the old-person odor as the least intense and least unpleasant of the three.
A Swan from Prague
I was making my way in halfsteps across a bridge
In that city of bridges, and met coming my way,
Looking head-on like a fat white ham with wings,
A swan in flight, waist high, at the bridge crest.
I was inching along as the swan with its yard-long neck
Towed its floating midriff in air speeding past.
Lost, it wanted back to the city’s river,
A river with two names in opposing tongues.
I looked ahead and saw some police laughing
At the wings going mad and the paddle-feet tucked.
I could not remember not being in pain,
Not being a man with bone spurs gouging his hip.
In that city of memorials, among memorials
Of immolation and metamorphosis,
I thought about this place in history—
I’d seen the altered road signs from ’68,
I’d seen the thugs in videos of ’89—
And knew for this span of time there was no place.
The police saw me leaning and halting
And turned to watch the swan, as I did,
All of us grateful to be distracted.
And I was sure that they, the laughing police,
Imagined that whatever my trouble was—drunkenness,
Disability—it would take care of itself,
And that the bird would come to rest again
On the river, the river of clashing names.
I told my wife this story, and as a memento
She gave me a solid bubble of Czech crystal,
A lovely blue-headed swan which rides
Now on a shifting river of paper.
by Mark Jarman
from Blackbird, Fall 2011, Vol. 10 No. 2
no frumpy old bird woman
Imagine being the kind of person who finds everything provocative. All you have to do is set out on a walk through city streets, a Rolleiflex hanging from a strap around your neck, and your heart starts pounding in anticipation. In a world that never fails to startle, it is up to you to find the perfect angle of vision and make use of the available light to illuminate thrilling juxtapositions. You have the power to create extraordinary images out of ordinary scenes, such as two women crossing the street, minks hanging listlessly down the backs of their matching black jackets; or a white man dropping a coin in a black man’s cup while a white dog on a leash looks away, as if in embarrassment; or a stout old woman braced in protest, gripping the hands of a policeman; or three women waiting at a bus stop, lips set in grim response to the affront represented by your camera, their expressions saying “go away” despite the sign behind them announcing, “Welcome to Chicago.” Welcome to this crowded stage of a city, where everyone is an actor—the poor, the rich, the policemen and street vendors, the nuns and nannies. Even a leaf, a balloon, a puddle, the corpse of a cat or horse can play a starring role. And you are there, too, as involved in the action of this vibrant theater as anyone else, caught in passing at just the right time, your self-portraits turned to vaporous mirages in store windows, outlined in the silhouettes of shadows and reflected in mirrors that you find in unexpected places.
more from Joanna Scott at The Nation here.
The House That Doe Built
A concrete mansion sits empty on the edge of Zwedru, the capital of Liberia’s Grand Gedeh county. Its tall balustrades are unpainted; its window frames lack glass. Liberia is strewn with buildings abandoned in the course of its wars, but this one was never finished. The mansion was scheduled for completion in the summer of ’91, after it was commissioned by Samuel Doe, the young army sergeant from Grand Gedeh who wrested power from William Tolbert in the 1980 coup. For his thirty-ninth birthday, President Doe planned a party at his new home–dinner in the blue room followed by dancing around the pool, lined with a mosaic of the Liberian flag. Four months before the celebrations, Doe was captured and hacked to death–ear by ear, limb by limb–in a grisly show of violence orchestrated by the former rebel chief Prince Johnson, a candidate in October’s presidential race and a former ally of the ex-President Charles Taylor, who was convicted on 11 counts of war crimes at The Hague last month.
more from Kate Grace Thomas at Guernica here.
New Spanish Finance Horrors
Spain is an unhappy federal structure held together by subsidies and crooked accounting. The drive of Catalans and Basques and others for independence has been checked by a system in which the regions of the country have gained more and more fiscal and policy autonomy. That worked pretty well when Spain was booming, the markets were as bubbly as a glass of champagne, money was cheap and credit was good. But now the music has stopped. Spain’s new European paymasters want the country to march in lockstep. They want the central government to sign austerity agreements that will bind the Catalans, the Basques, the Galicians and everybody else. Essentially, they are demanding that Spain recentralize, and that the national government set out tight national budgets that tell the ‘autonomous’ provinces what they can and can’t do. This may not work at all, and it cannot work for long. Spain is a democracy. People vote. Sometimes they vote for regional parties, sometimes they vote for the big national ones. If the central government is imposing tough fiscal limits on the provinces, it’s likely that over time — and not much of it — support will shift away from the national parties to the provincial ones. The Catalans will be sure that they are getting cheated by the poorer provinces; others will also believe that they aren’t getting their ‘fair share’.
more from Walter Russell Mead at The American Interest here.
May 30, 2012
Guilty, but Not Responsible?
Rosalind English in The Guardian:
The US neuroscientist Sam Harris claims in a new book that free will is such a misleading illusion that we need to rethink our criminal justice system on the basis of discoveries coming from the neurological wards and MRI scans of the human brain in action.
The physiologist Benjamin Libet famously demonstrated in the 1980s that activity in the brain's motor regions can be detected some 300 milliseconds before a person feels that he has decided to move. Subjects were hooked up to an EEG machine and were asked to move their left or right hand at a time of their choosing. They watched a specially designed clock to notice what time it was when they were finally committed to moving their left or right hand. Libet measured the electrical potentials of their brains and discovered that nearly half a second before they were aware of what they were going to do, he was aware of their intentions. Libet's findings have been borne out more recently in direct recordings of the cortex from neurological patients. With contemporary brain scanning technology, other scientists in 2008 were able to predict with 60% accuracy whether subjects would press a button with their left or right hand up to 10 seconds before the subject became aware of having made that choice (long before the preparatory motor activity detected by Libet).
Clearly, findings of this kind are difficult to reconcile with the sense that one is the conscious source of one's actions. The discovery that humans possess a determined will has profound implications for moral responsibility. Indeed, Harris is even critical of the idea that free will is "intuitive": he says careful introspection can cast doubt on free will. In an earlier book on morality, Harris argues:
Thoughts simply arise in the brain. What else could they do? The truth about us is even stranger than we may suppose: The illusion of free will is itself an illusion. (The Moral Landscape)
But a belief in free will forms the foundation and underpinning of our enduring commitment to retributive justice. The US supreme court has called free will a "universal and persistent" foundation for our entire system of law.
On Julian Assange's The World Tomorrow, Slavoj Zizek and David Horowitz, Believe It or Not
Warning: watching it in one go may make your head explode:
Climate Armageddon: How the World's Weather Could Quickly Run Amok
Fred Guterl in Scientific American:
The true gloomsters are scientists who look at climate through the lens of "dynamical systems," a mathematics that describes things that tend to change suddenly and are difficult to predict. It is the mathematics of the tipping point—the moment at which a "system" that has been changing slowly and predictably will suddenly "flip." The colloquial example is the straw that breaks the camel's back. You can also think of it as a ship that is stable until it tips too far in one direction and then capsizes. In this view, Earth's climate is, or could soon be, ready to capsize, causing sudden, perhaps catastrophic, changes. And once it capsizes, it could be next to impossible to right it again.
The idea that climate behaves like a dynamical system addresses some of the key shortcomings of the conventional view of climate change—the view that looks at the planet as a whole, in terms of averages. A dynamical systems approach, by contrast, considers climate as a sum of many different parts, each with its own properties, all of them interdependent in ways that are hard to predict.
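To make the "flip" concrete, here is a minimal toy sketch in Python (an editor's illustration under simple assumptions, not a model from the article): a one-variable bistable system, dx/dt = x - x^3 + f, whose state hugs one stable equilibrium while the forcing f ramps up slowly, then jumps abruptly once f crosses the critical value 2/(3*sqrt(3)), about 0.385. It is the capsizing ship in miniature.

# Toy tipping-point model (illustrative only, not from the article).
# dx/dt = x - x^3 + f has two stable states for small f; the lower one
# vanishes in a saddle-node bifurcation near f = 0.385.
dt, steps = 0.01, 200_000
x = -1.0                         # start in the lower stable state
for i in range(steps):
    f = 0.5 * i / steps          # forcing ramps slowly from 0 to 0.5
    x += dt * (x - x**3 + f)     # forward-Euler step of the dynamics
    if i % 20_000 == 0:
        print(f"f = {f:.3f}   x = {x:+.3f}")

The printout shows x barely drifting for most of the run, then lurching to the upper branch just past the threshold. And once it has flipped, easing f back below 0.385 does not undo the jump; that hysteresis is the "next to impossible to right it again" of the passage.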
Daniel Kahneman: Thinking That We Know
Andrew C. Revkin in the New York Times:
The National Academy of Sciences did a great service to science early this week by holding a conference on “The Science of Science Communication.” A centerpiece of the two-day meeting was a lecture titled “Thinking That We Know,” delivered by Daniel Kahneman, the extraordinary behavioral scientist who was awarded a Nobel Prize in economics despite never having taken an economics class.
The talk is extraordinary for the clarity (and humor) with which he repeatedly illustrates the powerful ways in which the mind filters and shapes what we call information. He discusses how this relates to the challenge of communicating science in a way that might stick.
Please carve out the time to watch his slide-free, but image-rich, talk. It’s a shorthand route to some of the insights described in Kahneman’s remarkable book, “Thinking, Fast and Slow” (I’m a third of the way through).
Here’s the video of the talk (which is “below the fold” because it’s set up to play automatically):
The descent of Edward Wilson
Richard Dawkins in Prospect Magazine:
When he received the manuscript of The Origin of Species, John Murray, the publisher, sent it to a referee who suggested that Darwin should jettison all that evolution stuff and concentrate on pigeons. It’s funny in the same way as the spoof review of Lady Chatterley’s Lover, which praised its interesting “passages on pheasant raising, the apprehending of poachers, ways of controlling vermin, and other chores and duties of the professional gamekeeper” but added: “Unfortunately one is obliged to wade through many pages of extraneous material in order to discover and savour these sidelights on the management of a Midland shooting estate, and in this reviewer’s opinion this book can not take the place of JR Miller’s Practical Gamekeeping.”
I am not being funny when I say of Edward Wilson’s latest book that there are interesting and informative chapters on human evolution, and on the ways of social insects (which he knows better than any man alive), and it was a good idea to write a book comparing these two pinnacles of social evolution, but unfortunately one is obliged to wade through many pages of erroneous and downright perverse misunderstandings of evolutionary theory. In particular, Wilson now rejects “kin selection” (I shall explain this below) and replaces it with a revival of “group selection”—the poorly defined and incoherent view that evolution is driven by the differential survival of whole groups of organisms.
My body is a palimpsest
under your hands,
a papyrus scroll
unfurled beneath you,
waiting for your mark.
I clean my skin,
scrape it back to
a pale parchment,
so that your touch
can sink as deep
as the tattooist’s ink,
and leave its tracery
over the erased lines
of other men.
You are all that’s
written on my body
by Nuala Ní Chonchúir
from Tattoo : Tatú
Publisher: Arlen House, Galway, 2007
In the original after the jump
faoi do lámha,
ag tnúth le do rian.
Glanaim mo chraiceann,
sciúraim siar é
go pár báiteach
ionas go bpúchfaidh
do lámh mar
ag líníocht thar
gach fir eile.
Níl faic ach tusa
scrábáilte ar mo chorp.
by Nuala Ní Chonchúir
Teenager reportedly finds solution to 350 year old math and physics problem
In Isaac Newton's Principia Mathematica, published in 1687, the man many consider the most brilliant mathematician of all time used a mathematical formula to describe the path taken by an object thrown through the air from one point to another, i.e. an arc determined by factors such as the launch angle, the velocity, and so on. Newton explained at the time that to get it completely right, air resistance would need to be taken into account, though he could not figure out himself how to factor that in. Now, it appears a 16-year-old immigrant to Germany has done just that, and to top off his work, he's also apparently come up with an equation that describes the motion of an object when it strikes an immobile surface, such as a wall, and bounces back.
Shouryya Ray, a modest student who just four years ago was living in Calcutta, has been on an accelerated learning course and is taking his Abitur exams two years early. His math equations won him first place in a state science competition and second place in the Math and IT section at the national finals. He's told the press that figuring out his formulas was due more to schoolboy naivety than to the genius the German press has been suggesting. Ray moved with his family to Germany when his father landed a job as a research assistant at the Technical University of Freiburg. He has shown great aptitude for math from an early age, learning calculus from his dad when he was just six years old. He says he got the idea of trying to develop the two formulas after a field trip to Dresden University, where he was told that no one had yet been able to come up with equations describing the two dynamics problems.
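Ray's closed-form solution was not reproduced in the news reports, so any formula written here would be guesswork. The problem itself, though, is easy to state and to explore numerically. The short Python sketch below (the drag constant and launch values are illustrative assumptions) integrates the standard quadratic-drag model that Newton could pose but not solve analytically:

import math

g = 9.81      # gravity, m/s^2
k = 0.05      # drag coefficient per unit mass, 1/m (assumed value)
dt = 0.001    # integration time step, s

# launch at 45 degrees with speed 30 m/s
v0, theta = 30.0, math.radians(45)
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
x = y = 0.0

while y >= 0.0:                   # step forward until the projectile lands
    v = math.hypot(vx, vy)        # current speed
    vx -= k * v * vx * dt         # quadratic drag opposes the velocity
    vy -= (g + k * v * vy) * dt   # gravity plus the drag's vertical part
    x += vx * dt
    y += vy * dt

print(f"range with drag: {x:.1f} m")
print(f"vacuum range:    {v0**2 * math.sin(2 * theta) / g:.1f} m")

The gap between the two printed ranges is the correction Newton flagged; what made Ray's work newsworthy is the claim of an exact analytic expression for the drag-afflicted trajectory, rather than a stepwise approximation like this one.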
Himmler’s brain is called Heydrich
If HHhH nonetheless doesn’t feel like a postmodern novel, it is because Binet does not revel in the freedom and indeterminacy of fiction. On the contrary, because he is writing about real historical events, whose gravity he himself feels very deeply, Binet is always trying to close the gap between invention and truth. This is clear from the very first sentence of the book: “Gabcik—that’s his name—really did exist.” Jozef Gabcik and Jan Kubis, we learn soon enough, were the secret agents parachuted into Czechoslovakia by the British to carry out the assassination of Heydrich. The whole motive for writing HHhH, Binet explains, is to honor these men, their courage and sacrifice: “So, Gabcik existed. … His story is as true as it is extraordinary. He and his comrades are, in my eyes, the authors of one of the greatest acts of resistance in human history, and without doubt the greatest of the Second World War. For a long time I have wanted to pay tribute to him.” The inspiration of HHhH is not ironic, then, but deeply earnest. And in this context, the novelist’s power to shape and invent feels less like a privilege than a curse. For every time Binet makes something up, it is a reminder that he doesn’t know all the facts. “My story has as many holes in it as a novel,” he writes, “but in an ordinary novel, it is the novelist who decides where these holes should occur.”
more from Adam Kirsch at Tablet here.
norman manea and "the terror which rules our moral situation"
For most of the writers we love and admire, it is possible to say something comprehensive. One reader says of Saul Bellow that “throughout his life” he searched “for some ultimate and invisible spiritual reality,” and we think, yes, that is true, that is one good way of conferring upon a life like Bellow’s a sort of splendid coherence. Or we agree that the Austrian writer Thomas Bernhard sought, in everything he wrote, to “be misunderstood,” reviled, alienated, the better to exempt himself from the judgment he directed at a world he considered stupid and meaningless. But what comprehensive statement will we dare to make about Norman Manea? For one thing, we who know his writing only in English translation, and thus have not read many of the titles included in the collected Romanian edition of his work, are somewhat reluctant to sum him up as if we were fully equipped to do so. And yet we have more than enough to proceed, to begin at least. Consulting what is already out there we find, inevitably, that the established line on this writer is at once useful and misleading. Ought we to think of him as a writer defined by the exercise of “conscience”? That is one of those misleading suggestions you can read even on the dust jackets of his books. Is he, in the end, one of the many gifted contributors to what is called “the literature of totalitarianism”? Or is he, as has been said, one of “the great poets of catastrophe” and thus fit to stand alongside predecessors like Kafka or Bruno Schulz, or even Paul Celan?
more from Robert Boyers at Threepenny Review here.
Calgary looks ever forward and often moves as fast as a prairie storm; its official motto, adopted in 1884, is a single propulsive word: “Onward.” It can seem, at a glance, like a place with no past at all. By world standards, and even by Canadian ones, this isn’t much of an overstatement. To say that it is a young city is accurate demographically — its median age, 35.8, is the lowest in Canada, and its population has grown faster than any other in the country since 2001, as legions of young job seekers poured in by the tens of thousands from Regina and Mississauga and St. John’s — but it is equally true on a historical scale. In 1882, the year Sir John A. Macdonald founded the Albany Club in Toronto, Calgary was a collection of tents and shacks in the shadow of a North West Mounted Police outpost, still waiting on the arrival of the Canadian Pacific Railway. Montreal built its first skyscraper, the New York Life Building, fifteen years before Calgary got its first telephone. At the end of World War I, Winnipeg was a booming industrial city of 165,000; Calgary would not reach that benchmark until ten years after World War II ended.
more from Chris Turner at The Walrus here.
Sean Carroll to Judge 4th Annual 3QD Science Prize
UPDATE 6/25/12: The winners have been announced here.
UPDATE 6/18/12: The finalists have been announced here.
UPDATE 6/17/12: The semifinalists have been announced here.
UPDATE 6/11/12: Voting round is now open. Click here to see full list of nominees and vote.
Dear Readers, Writers, Bloggers,
We are very honored and pleased to announce that Sean M. Carroll has agreed to be the final judge for our 4th annual prize for the best blog and online writing in the category of science. (Details of the previous science prizes can be seen by clicking on the names of their respective judges here: Steven Pinker, Richard Dawkins, and Lisa Randall).
I have to admit that I was especially and extraordinarily pleased when Sean agreed to judge this prize for a number of reasons:
- Sean is a practicing scientist at the forefront of his field, which is physics.
- Sean is also one of the foremost science communicators of our time (I very highly recommend his most recent book, From Eternity to Here); he was one of the early science bloggers with Preposterous Universe and has continued with the ever-excellent Cosmic Variance.
- Sean was an early supporter of 3QD and drove much traffic to us in our early days when we were basically unknown. Thanks again, Sean! :-)
- I am honored and happy to count Sean and his very distinguished science-writer wife (and former 3QD columnist), Jennifer Ouellette, as friends.
- Sean is a past winner of a 3QD prize himself.
Sean, as many of you may already know, is a physicist at the California Institute of Technology. He received his Ph.D. in 1993 from Harvard University. His research focuses on theoretical physics and cosmology, especially the origin and constituents of the universe, and he has contributed to models of interactions between dark matter, dark energy, and ordinary matter; alternative theories of gravity; and violations of fundamental symmetries. Sean is the author of "From Eternity to Here: The Quest for the Ultimate Theory of Time," "Spacetime and Geometry: An Introduction to General Relativity," and the upcoming "The Particle at the End of the Universe." He blogs at Cosmic Variance, hosted by Discover magazine, and has been featured on television shows such as The Colbert Report and Through the Wormhole with Morgan Freeman. You may follow him on Twitter here.
As usual, this is the way it will work: the nominating period is now open, and will end at 11:59 pm EST on June 9, 2012. There will then be a round of voting by our readers which will narrow down the entries to the top twenty semi-finalists. After this, we will take these top twenty voted-for nominees, and the four main editors of 3 Quarks Daily (Abbas Raza, Robin Varghese, Morgan Meis, and Azra Raza) will select six finalists from these, plus they may also add up to three wildcard entries of their own choosing. The three winners will be chosen from these by Sean Carroll.
The first place award, called the "Top Quark," will include a cash prize of one thousand dollars; the second place prize, the "Strange Quark," will include a cash prize of three hundred dollars; and the third place winner will get the honor of winning the "Charm Quark," along with a two hundred dollar prize.
(Welcome to those coming here for the first time. Learn more about who we are and what we do here, and do check out the full site here. Bookmark us and come back regularly, or sign up for the RSS feed.)
May 30, 2012:
- Nominations are now open. Please nominate your favorite blog entry by placing the URL for the blog post (the permalink) in the comments section of this post. You may also add a brief comment describing the entry and saying why you think it should win. (Do NOT nominate a whole blog, just one individual blog post.)
- Blog posts longer than 4,000 words are strongly discouraged, but we might make an exception if there is something truly extraordinary.
- Each person can only nominate one blog post.
- Entries must be in English.
- The editors of 3QD reserve the right to reject entries that we feel are not appropriate.
- The blog entry may not be more than a year old. In other words, it must have been written after May 29, 2011.
- You may also nominate your own entry from your own or a group blog (and we encourage you to).
- Guest columnists at 3 Quarks Daily are also eligible to be nominated, and may also nominate themselves if they wish.
- Nominations are limited to the first 200 entries.
- Prize money must be claimed within a month of the announcement of winners.
June 9, 2012
- The nominating process will end at 11:59 PM (NYC time) of this date.
- The public voting will be opened soon afterwards.
June 16, 2012
- Public voting ends at 11:59 PM (NYC time).
June 25, 2012
- The winners are announced.
One Final and Important Request
If you have a blog or website, please help us spread the word about our prizes by linking to this post. Otherwise, post a link on your Facebook profile, Tweet it, or just email your friends and tell them about it! I really look forward to reading some very good material, and think this should be a lot of fun for all of us.
Best of luck and thanks for your attention!
May 29, 2012
Gandhi's Letter to Hitler
Over a month before the outbreak of WW2, Mahatma Gandhi writes his "dear friend" Adolf Hitler.
A Bookforum Conversation with Tom Bissell
With Morten Høi Jensen in Bookforum:
We’re fortunate to live in a time where a handful of enormously gifted writers are revitalizing the essay form. One example is Tom Bissell, whose new collection, Magic Hours: Essays on Creators and Creation, adds up to a kind of narrative of contemporary culture, weighing in on video games, underground literary movements, bad movies and the fates of great writers. Before his recent reading with his friend and fellow writer Gideon Lewis-Kraus at KGB Bar in New York, I spent an hour with Tom Bissell at his cousin’s apartment in Manhattan, where he and his girlfriend were staying while they were in town. Looking out on an unseasonably hot midtown afternoon, we drank scotch and chatted about the publishing industry, the resurgence of the essay form, and our mutual love for the Australian writer Clive James.
Bookforum: Do you miss New York?
Tom Bissell: Desperately. When I’ve lived in Portland and California and I wake up, Pacific Time just seems like the wrong time to me. Events in America happen on Eastern Standard Time, and knowing when you wake up at 9 in the morning—or, if you’re a writer, 9:30—that it’s already after lunch in the heartbeat of America—it’s just something I’ve never gotten used to.
BF: How long did you live in New York?
TB: I lived here from 1997 to 2006.
BF: I enjoyed reading in Magic Hours about your experience here as an editorial assistant. You called it a “thankless but intensely interesting job.” I was wondering if it influenced your early work as a journalist in any way.
TB: I think what it did for me was make me much less hostile to the editorial apparatus once I became a writer. I’ve always been way more willing to empathize with editors than my other writer friends who didn’t have that experience. Without the editorial experience, I would never have had so many myths about book publishing shattered before I even wrote a word. And I think the most insidious myth among writers is that publishers just get books lined up before them, pick the ones they want to sell, and then push them out the door. Now, in some sense of course they do that, but the really important thing that writers seem to forget is that just because publishers pick books that they want to expend resources on doesn’t guarantee that the books will succeed. Good publishers are the ones who, when something’s not working, are capable of redirecting their focus onto the stuff that is working, and choose to support the stuff they maybe initially thought didn’t have a good shot. Bad publishers are the ones who just double down on a bad choice and throw good money out the window. I’ve been lucky enough to work with good, smart publishers, and though I don’t claim to know how exactly publishing works, I do often get the heebie-jeebies when I hear my writer friends talk about book publishers in a needlessly hostile tone.
BF: That was the strength of your essay about the Underground Literary Alliance. You took them to task for that hostility—and not just them, because I think that hostility is actually quite common—and for not understanding that the majority of the people who work in the publishing industry hold literature just as sacred as they do. But that doesn’t automatically give them the resources or the privilege to publish everything.
TB: Right. And the other thing is that the publishing industry is much smaller than it was in, say, 1999, when I was a young editor, so I think the representative spectrum of taste is much smaller. It’s just so much harder to be a young writer right now, especially if you’re a fiction writer. I wouldn’t wish being a fiction writer right now on my worst enemy. I wouldn’t wish that on Osama Bin Laden’s children.
by Marge Piercy
from Colors Passing Through Us
publisher Knopf, 2003
The Music's Over
Prospero's obituary for Donna Summer and Robin Gibb, in The Economist:
As a genre, disco gets a rotten press. It tends to conjure up images of hairy chests and medallions, and the worst kind of dad-dancing: a roll of the hands and a finger thrust from the floor to the sky. It was, said Bethann Hardison, a black runway model in the 1970s, “created so that white people could dance”.
Such a caricature does it no justice. The beat might be the simplest 4/4, but the origins are more complex. To understand where disco came from, and why it should be considered culturally important, one must first place oneself in dysfunctional, dangerous 1970s New York. If punk rock, born of a similar time and place, and hip-hop, a little younger, are the musical styles that define that city’s disaffected youth, then they have a sibling in disco. “Disco was born, maggot-like, from the rotten remains of the Big Apple”, wrote Peter Shapiro in “Turn the Beat Around”, a history of the genre.
The release it gave was different, though. While punk was like a child throwing a tantrum and hip-hop was about fierce rhetoric, disco meant escaping reality. The outrageous clothes and ostentatious dance moves took the mind off the gang violence and unemployment. For the city’s gays, who were still striving for acceptance, it was particularly liberating.
The disco beat quickly spread around the world. By the time Donna Summer released “I Feel Love” in 1977, it was mainstream. Everyone was at it. Even the Rolling Stones released a lamentable disco attempt, “Hot Stuff”, in 1976. Nonetheless, “I Feel Love” was one of the most influential records of the decade. Produced by Giorgio Moroder, it layered Moog synthesiser tracks (until then the preserve of avant-garde electronica bands such as Kraftwerk) to create one of the most compelling dance tunes ever released. It is also the exact moment that disco sprouted the branch that evolved into house music.
Trees of Life: A Visual History of Evolution
Maria Popova in Brain Pickings:
Since the dawn of recorded history, humanity has been turning to the visual realm as a sensemaking tool for the world and our place in it, mapping and visualizing everything from the body to the brain to the universe to information itself. Trees of Life: A Visual History of Evolution (public library) catalogs 230 tree-like branching diagrams, culled from 450 years of mankind’s visual curiosity about the living world and our quest to understand the complex ecosystem we share with other organisms, from bacteria to birds, microbes to mammals.
Though the use of a tree as a metaphor for understanding the relationships between organisms is often attributed to Darwin, who articulated it in his Origin of Species by Means of Natural Selection in 1859, the concept, most recently appropriated in mapping systems and knowledge networks, is actually much older, predating the theory of evolution itself. The collection is thus at once a visual record of the evolution of science and of its opposite — the earliest examples, dating as far back as the sixteenth century, portray the mythic order in which God created Earth, and the diagrams’ development over the centuries is as much a progression of science as it is of culture, society, and paradigm.
How Markets Crowd Out Morals
A Boston Review forum on the arguments made by Michael Sandel in What Money Can’t Buy: The Moral Limits of Markets, with responses from Richard Sennett; Matt Welch; Anita L. Allen; Debra Satz; Herbert Gintis; Lew Daly; Samuel Bowles; Elizabeth Anderson; and John Tomasi. From Michael Sandel's lead piece:
We live in a time when almost anything can be bought and sold. Markets have come to govern our lives as never before. But are there some things that money should not be able to buy? Most people would say yes.
Consider friendship. Suppose you want more friends than you have. Would you try to buy some? Not likely. A moment’s reflection would lead you to realize that it wouldn’t work. A hired friend is not the same as a real one. You could hire people to do some of the things that friends typically do—picking up your mail when you’re out of town, looking after your children in a pinch, or, in the case of a therapist, listening to your woes and offering sympathetic advice. Until recently, you could even bolster your online popularity by hiring some good-looking “friends” for your Facebook page—for $0.99 per friend per month. (The phony-friend Web site was shut down after it emerged that the photos being used, mostly of models, were unauthorized.) Although all of these services can be bought, you can’t actually buy a friend. Somehow, the money that buys the friendship dissolves it, or turns it into something else.
This fairly obvious example offers a clue to the more challenging question that concerns us: Are there some things that money can buy but shouldn’t? Consider a good that can be bought but whose buying and selling is morally controversial—a human kidney, for example. Some people defend markets in organs for transplantation; others find such markets morally objectionable. If it’s wrong to buy a kidney, the problem is not that the money dissolves the good. The kidney will work (assuming a good match) regardless of the monetary payment. So to determine whether kidneys should or shouldn’t be up for sale, we have to engage in a moral inquiry. We have to examine the arguments for and against organ sales and determine which are more persuasive.
So it seems, at first glance, that there is a sharp distinction between two kinds of goods: the things (like friends) that money can’t buy, and the things (like kidneys) that money can buy but arguably shouldn’t. But this distinction is less clear than it first appears.
The Faster-Than-Light Telegraph That Wasn't
Physicists had long known that the two flavors of polarization—plane or circular—were intimately related. Plane-polarized light could be used to create circularly polarized light, and vice versa. For example, a beam of H-polarized light consisted of equal parts R- and L-polarized light, in a particular combination, just as a beam of R-polarized light could be broken down into equal parts H and V. Likewise for individual photons: a photon in state R, for example, could be represented as a special combination of states H and V. If one prepared a photon in state R but chose to measure plane rather than circular polarization, one would have an equal probability of finding H or V: a single-particle version of Schrödinger’s cat.
In Herbert's imagined set-up, one physicist, Alice ("Detector A" in the illustration), could choose to measure either plane or circular polarization of the photon headed her way. If she chose to measure plane polarization, she would measure H and V outcomes with equal probability. If she chose to measure circular polarization, she would find R and L outcomes with equal probability.
In addition, Alice knows that because of the nature of the source of photons, each photon she measures has an entangled twin moving toward her partner, Bob. Quantum entanglement means that the two photons behave like two sides of a coin: if one is measured to be in state R, then the other must be in state L; or if one is measured in state H, the other must be in state V. The kicker, according to Bell's theorem, is that Alice's choice of which type of polarization to measure (plane or circular) should instantly affect the other photon, streaming toward Bob. If she chose to measure plane polarization and happened to get the result H, then the entangled photon heading toward Bob would enter the state V instantaneously. If she had chosen instead to measure circular polarization and found the result R, then the entangled photon instantly would have entered the state L.
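The state arithmetic behind these claims is compact enough to check directly. Here is a minimal Python sketch (standard textbook quantum mechanics, not Herbert's scheme itself) that writes the plane-polarized states H and V as basis vectors, builds R and L as the combinations described above, and applies the Born rule for measurement probabilities:

import numpy as np

H = np.array([1, 0], dtype=complex)   # horizontal plane polarization
V = np.array([0, 1], dtype=complex)   # vertical plane polarization
R = (H + 1j * V) / np.sqrt(2)         # right-circular: equal parts H and V
L = (H - 1j * V) / np.sqrt(2)         # left-circular

def prob(outcome, state):
    """Born rule: probability of finding `outcome` given `state`."""
    return abs(np.vdot(outcome, state)) ** 2

# A photon prepared as R but measured in the plane basis gives H or V
# with equal probability: the "single-particle Schrodinger's cat".
print(prob(H, R), prob(V, R))         # 0.5 0.5
# And H decomposes into equal parts R and L, as the passage says.
print(prob(R, H), prob(L, H))         # 0.5 0.5

The same bookkeeping, applied to the entangled pair, already hints at the catch: whichever basis Alice measures in, Bob's photon on its own still yields 50/50 statistics, so her choice transmits nothing by itself.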
Next came Herbert's special twist.
How Bad Is It?
George Scialabba in New Inquiry:
Pretty bad. Here is a sample of factlets from surveys and studies conducted in the past twenty years. Seventy percent of Americans believe in the existence of angels. Fifty percent believe that the earth has been visited by UFOs; in another poll, 70 percent believed that the U.S. government is covering up the presence of space aliens on earth. Forty percent did not know whom the U.S. fought in World War II. Forty percent could not locate Japan on a world map. Fifteen percent could not locate the United States on a world map. Sixty percent of Americans have not read a book since leaving school. Only 6 percent now read even one book a year. According to a very familiar statistic that nonetheless cannot be repeated too often, the average American’s day includes six minutes playing sports, five minutes reading books, one minute making music, 30 seconds attending a play or concert, 25 seconds making or viewing art, and four hours watching television.
Among high-school seniors surveyed in the late 1990s, 50 percent had not heard of the Cold War. Sixty percent could not say how the United States came into existence. Fifty percent did not know in which century the Civil War occurred. Sixty percent could name each of the Three Stooges but not the three branches of the U.S. government. Sixty percent could not comprehend an editorial in a national or local newspaper.
Intellectual distinction isn’t everything, it’s true. But things are amiss in other areas as well: sociability and trust, for example. “During the last third of the twentieth century,” according to Robert Putnam in Bowling Alone, “all forms of social capital fell off precipitously.” Tens of thousands of community groups – church social and charitable groups, union halls, civic clubs, bridge clubs, and yes, bowling leagues — disappeared; by Putnam’s estimate, one-third of our social infrastructure vanished in these years. Frequency of having friends to dinner dropped by 45 percent; card parties declined 50 percent; Americans’ declared readiness to make new friends declined by 30 percent. Belief that most other people could be trusted dropped from 77 percent to 37 percent. Over a five-year period in the 1990s, reported incidents of aggressive driving rose by 50 percent — admittedly an odd, but probably not an insignificant, indicator of declining social capital.
Still, even if American education is spotty and the social fabric is fraying, the fact that the U.S. is the world’s richest nation must surely make a great difference to our quality of life?
Money and Morality
From The Guardian:
Something curious happened when I tried to potty train my two-year-old recently. To begin with, he was very keen on the idea. I'd read that the trick was to reward him with a chocolate button every time he used the potty, and for the first day or two it went like a breeze – until he cottoned on that the buttons were basically a bribe, and began to smell a rat. By day three he refused point-blank to go anywhere near the potty, and invoking the chocolate button prize only seemed to make him all the more implacable. Even to a toddler's mind, the logic of the transaction was evidently clear – if he had to be bribed, then the potty couldn't be a good idea – and within a week he had grown so suspicious and upset that we had to abandon the whole enterprise.
It's a pity I hadn't read What Money Can't Buy before embarking, because the folly of the chocolate button policy lies at the heart of Michael Sandel's new book. "We live at a time when almost everything can be bought and sold," the Harvard philosopher writes. "We have drifted from having a market economy, to being a market society," in which the solution to all manner of social and civic challenges is not a moral debate but the law of the market, on the assumption that cash incentives are always the appropriate mechanism by which good choices are made. Every application of human activity is priced and commodified, and all value judgments are replaced by the simple question: "How much?"
Sandel leads us through a dizzying array of examples, from schools paying children to read – $2 (£1.20) a book in Dallas – to commuters buying the right to drive solo in car pool lanes ($10 in many US cities), to lobbyists in Washington paying line-standers to hold their place in the queue for Congressional hearings; in effect, queue-jumping members of the public. Drug addicts in North Carolina can be paid $300 to be sterilised, immigrants can buy a green card for $500,000, best man's speeches are for sale on the internet, and even body parts are openly traded in a financial market for kidneys, blood and surrogate wombs. Even the space on your forehead can be up for sale. Air New Zealand has paid people to shave their heads and walk around wearing temporary tattoos advertising the airline.
‘What Is’ Meets ‘What if’: The Role of Speculation in Science
From The New York Times:
And yet speculation is an essential part of science. So how does it fit in? Two recent publications about the misty depths of canine and human history suggest some answers. In one, an international team of scientists concludes that we really don’t know when and where dogs were domesticated. Greger Larson of the University of Durham, in England, the first of 20 authors of that report, said of dog DNA, “It’s a mess.” In the other, Pat Shipman, an independent scientist and writer, suggests that dogs may have helped modern humans push the Neanderthals out of existence and might even have helped shape human evolution. Is one right and the other wrong? Are both efforts science — one a data-heavy reality check and the other freewheeling speculation? The research reported by Dr. Larson and his colleagues in The Proceedings of the National Academy of Sciences is solid science — easily judged by peers, at any rate. The essay by Dr. Shipman is not meant to come to any conclusion but to prompt thought and more research. It, too, will be judged by other scientists, and read by many nonscientists. But how is one to judge the value of speculation? The question readers ought to ask when confronting a “what-if” as opposed to “what-is” article is: Does the writer make it clear what is known, what is probable, and what is merely possible?
May 28, 2012
Are Millennials Less Green Than Their Parents?
A highly publicized Journal of Personality and Social Psychology study depicts Millennials as more egoistic than Baby Boomers and Generation Xers. The research is flawed. The psychologists fail to see that kids today face new problems that previously weren’t imaginable and are responding to them in ways that older generations misunderstand.
The psychological study seems persuasive largely because the conclusions are supported by massive data. Investigators examined two nationally representative databases (Monitoring the Future and American Freshman surveys) containing information provided by 9.2 million high school and college students between 1966 and 2009. Such far-reaching longitudinal analysis seems to offer a perfect snapshot of generational attitudes on core civic issues.
Comparison makes Millennials look bad. According to the study, they aren’t just primed to consume more electricity and pass on community leadership. Overall, they’re ethically deficient: concerned less with the environment and keeping up with political affairs, while driven more by extrinsic values (money, fame, image) than intrinsic ones (self-acceptance, community, and group affiliation). The media couldn’t wait to spin these characterizations into headlines, running pieces like “Millenial Generation’s Non-Negotiables: Money, Fame, and Image” and “Young People Not So ‘Green’ After All”.
Jean Twenge, the study’s lead author, seems entitled to sit back with a told-you-so look on her face. For some time, she’s contested portrayals of Millennials as “Generation We.” The new study updates her anti-entitlement manifesto, Generation Me: Why Today's Young Americans are More Confident, Assertive, Entitled – and More Miserable than Ever Before, and she presents more damning information in a recent Chronicle of Higher Education article accusing Millennials of declining empathy.
Unfortunately, when Twenge and her colleagues consider limitations that could constrain their study, they miss a big one. They acknowledge potential complications from students dropping out of school before surveys are administered, demographic shifts in college samples, self-reporting bias, attitudes changing as people age, and the recession having an indeterminate impact. They don’t consider the limits of longitudinal analysis.
If moral problems remained constant, psychologists could treat age as the decisive independent variable. Take the problem of environmentalism. Millennials, Boomers, and Xers all had to decide whether to save energy during the winter, or say sweaters be damned and crank up thermostats to maximize personal comfort. All things being equal, it makes sense to judge the conservationists more environmentally committed, and to apply the same logic to charitable donation, voting, and writing to public officials.
All things aren’t equal, though. Longitudinal studies designed decades ago are out of sync with the unique challenges Millennials face. They live in a world plagued by new problems that require a new global mindset to solve. To pick but one of many relevant dilemmas: if Millennials appear less concerned about personal energy consumption, it is because they are increasingly worried about climate change. This extraordinary problem wasn’t part of Boomers’ and Xers’ high school and college vocabularies, and it profoundly shifts the moral landscape. Climate change renders older, methodologically individualist approaches to responsible behavior obsolete.
Fundamentally, climate change must be solved at a global level because the earth’s atmosphere has a limited capacity to store greenhouse gas emissions. If the United States tries to do the right thing by cutting back on its CO2 emissions, and powerful regulation doesn’t bind nations together, it simply incentivizes China and India to emit more. The net effects would impact everyone and overall could make the planet worse off. National sacrifice would have the perverse consequence of making us suckers on the global stage.
This basic game theory problem holds for individuals, too. Without effective regulation, my choice to cool off with open windows over the summer quickly becomes your incentive to crank up the air conditioner. My Prius purchase encourages you to buy a gas-guzzling SUV. My Skype-conducted business meetings inspire you to fly to more overseas conferences. The road to hell would get paved with good but naïve intentions.
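To see the structure of this dilemma in miniature, consider a toy two-household game. (This sketch is illustrative only; the payoff numbers below are assumptions of ours, not data from the study.)

# A toy prisoner's dilemma for unilateral energy conservation.
# Payoffs are hypothetical utilities: conserving alone makes you the
# "sucker" while the other household free-rides on your restraint.

payoffs = {  # (row_choice, col_choice): (row_payoff, col_payoff)
    ("conserve", "conserve"): (3, 3),  # everyone cuts back: best joint outcome
    ("conserve", "consume"):  (0, 5),  # the conserver is the sucker
    ("consume",  "conserve"): (5, 0),
    ("consume",  "consume"):  (1, 1),  # mutual free-riding
}
choices = ["conserve", "consume"]

def best_response(opponent_choice, player):
    # The choice maximizing this player's payoff, holding the other fixed.
    if player == "row":
        return max(choices, key=lambda c: payoffs[(c, opponent_choice)][0])
    return max(choices, key=lambda c: payoffs[(opponent_choice, c)][1])

# A profile is a Nash equilibrium if each choice is a best response to the other.
for row in choices:
    for col in choices:
        if best_response(col, "row") == row and best_response(row, "col") == col:
            print("Equilibrium:", (row, col), "payoffs:", payoffs[(row, col)])

# Output: the only equilibrium is ('consume', 'consume'), even though mutual
# conservation pays both households more: without binding rules, conservation
# unravels exactly as described above.

Run it and the point of the paragraph falls out of the arithmetic: no matter what the neighbor does, consuming pays better, so both households end up worse off than if both had conserved.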
Of course, Millennials could behave like good Boomers. To prevent free riding, they could write to elected officials and ask them to make new laws mandating restrictive caps on the energy available to every residence in a county, city, or state – or even the entire country. But given the unpopular legacy of President Carter’s “Crisis of Confidence” speech, why should they assume this gesture would be anything but token participation?
While Millennials might be cynical about traditional political participation, they are open to other forms of civic engagement. As members of the socially networked, digital generation, they view posting, linking, blogging, and tweeting as moral acts that provide opportunities for participating in collective governance. Critics don't give them credit for this, viewing the mindset as the triumph of slacktivism over activism. But the issue is hotly debated, and its import, including the possibility that in at least some instances the Millennials are right, falls off the psychologists’ radar.
Now, it would be sensible to ask Millennials about their views on institutional responsibility, especially in cases where a positive outcome can be identified. For example, would they be willing to pay increased tuition to convert campus buildings into LEED-certified structures, particularly if doing so would advance a trend and make it advantageous for other colleges to follow suit? LEED certification came on the scene in 1998 and so post-dates the longitudinal framing. Consequently, this issue also falls outside the data that Twenge and collaborators scrutinize.
Given the growing importance of large-scale geoengineering projects, it also makes sense to ask Millennials how they feel about them. But, as debate has only recently started over whether geoengineering is a technocratic solution to a political problem, the surveys certainly don’t address such projects or their implications for attitudes towards globalized political participation. To continue with the game-theoretic concepts, some have argued that geoengineering projects will benefit developed nations at the expense of developing ones, with inequity engendering conflict.
In short, the psychologists didn’t prove that Millennials care less about the environment than their predecessors. By using longitudinal studies that are tone-deaf to change, they obscure the special issues at stake in being an environmentalist today—issues that are incommensurate with past problems, where singular actions and good intentions could make people feel good about themselves for discharging their environmental responsibilities.
Critics might reply that our analysis casts the Millennial vision in terms that are too sophisticated. They might ask: How many Millennials actually understand game theory? There is some truth to this rebuttal, but less than meets the eye.
In many cases, Millennials likely don’t understand game-theoretic reasoning in a rational, analytic way. However, their behavior demonstrates they grasp it heuristically. They will lower their thermostats when they are part of a social network that is doing it with them.
Prior generations exhibited similar behaviors that are consistent with our reasoning. According to what has been termed the “cul-de-sac effect,” homeowners will not install roof-mounted PV panels until they see one or more neighbors do it. Likewise, studies show people are less motivated to reduce heating and cooling use by financial incentives than by information about what their neighbors are doing. In short, people often wait for social cues before acting because they heuristically understand that without reassurance that they are acting in concert with others, attempts at moral action may be counter-productive. Millennials exhibit this type of interdependency to a greater extent than any prior generation because they are the most interconnected of all generations.
Evan Selinger is an associate professor of Philosophy at Rochester Institute of Technology.
Thomas Seager is an associate professor at the School of Sustainable Engineering and the Built Environment and a Lincoln fellow of ethics and sustainability at Arizona State University.
Jathan Sadowski is a research technician in the Lincoln Center for Applied Ethics at Arizona State University.
The authors were supported by the National Science Foundation-funded project “An Experiential Pedagogy for Sustainability Ethics” (#1134943). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
postcards from srinagar
by Vivek Menezes
Nigeen Lake 27/05/2012
I am writing this lakeside in Srinagar, at the end of a month-long stay in this amazing, ancient city, along with my wife and three young sons (12, 8, 4). This is high season in Kashmir – the authorities expect as many as two million tourists by the time winter sets in. But with the exception of Dal Lake – certainly one of the great marvels of the subcontinent – we’ve found ourselves just about the only “outsiders” almost everywhere we’ve gone. It has been quite a strange phenomenon, I think largely explained by the reluctance of most travel agents and tour operators to venture off a narrow beaten track that takes in Dal, the (vastly over-rated) Mughal Gardens, and day trips to trample snow in Gulmarg, etc. More and better information about Srinagar needs to be made available for travellers, and over time I hope to contribute some of it.
But right now, because connectivity is deeply intermittent here, I am going to quickly post a few images and scribble comments postcard-style.
Our first few days in Kashmir couldn’t have been more eye-opening. This is because we became immediately immersed in the fourth annual festival hosted by the Dara Shikoh Centre. The initiative of Jyotsna Singh, granddaughter of the last monarch of Jammu and Kashmir, the event was mostly held outdoors, and had a terrifically positive energy. There were art, writing and puppetry workshops, training sessions for teachers and counsellors, and terrific interactions between the overwhelmingly young audience and visiting resource people, most notably Gopal Gandhi – grandson of the Mahatma, senior bureaucrat and diplomat, and author of several books, including a play about Dara, the Sufi Prince. It was a remarkably inclusive event, with every possible viewpoint freely exchanged with an unusual spirit of acceptance. Here at the Dara Shikoh Centre, I realized that this is actually a bedrock Kashmiri virtue. This was particularly underlined during a spellbinding performance by one of the last surviving Bhand Pather (folk entertainers) groups of Kashmir, directed by M. K. Raina. It turned out that most of the almost entirely Kashmiri audience had never seen such a performance – big-shots, students, security guards, drivers, all screamed with delight together all through the show. No translations were needed for my kids either; they laughed along with everyone else.
Everyone knows about the elaborate meat dishes of the wazwan, the multi-course feast so beloved of the Kashmiris. Barring a wedding invitation, the best place for that is the venerable Ahdoo’s. But Srinagar’s extraordinary variety of breads is strangely unsung – it easily rivals that of my native Goa, and in fact there may be more types of bread easily available than back home: flat breads, sweet breads, flaky croissant-like breads, and these little rounds that are near-identical to bagels. Then, from Pahalgam – but available in many places in the city – comes the best European-style cheese I’ve ever eaten in India, truly excellent goudas made from impeccably sourced milk (see http://www.himalayancheese.com) – just perfect with those bagels! But most addictive of all is ‘tujj’ or ‘barbecue’, what locals call coal-roasted skewers of mutton that are served with several different chutneys (actually spicy pickles and cooling raitas). Everyone seems to think the stands proliferating opposite Khyber cinema are the best, but I vote for the slightly more expensive version at the oddly-named ‘Mummy Please’ in Lal Chowk.
Travelling with three kids in Srinagar has been much less challenging than we expected. They’ve found plenty to get excited about: shikara rides, magnificent countryside, ever-present birdlife. My sons loved being fitted for caps at Hilal Cap Shop near Jamia Masjid, fishing for trout in the Aru Valley, and love their houseboat so much that it is going to be hard to take them away next week.
We’ve come to Srinagar in what everyone says is a lull, so I really can’t comment much about what it is like at other times. But for me and the kids alike, the great highlight of this city has been its beguiling, multi-layered, ancient old city. We’ve walked into the dense mohallas at least a dozen times over the past few weeks, and each time we’ve been made to feel completely welcome, and glimpsed an extraordinary living culture that is simultaneously medieval and contemporary. There are too many highlights to list here on this crappy connection, as the wind off the lake grows increasingly inhospitable – but let me just say that ‘downtown’ Srinagar is one of the great wonders of the subcontinent.
Venice is the Srinagar of the West. That’s not a joke – this city is home to a waterborne, riparian culture that is fully contained and self-sustaining: floating vegetable gardens, a unique boat-borne economic eco-system. Every sector of this city’s society and business culture seems connected to the water, and to boats. These children power themselves around on fragile skiffs the way yours might aspire to manoeuvre skateboards – girls, women, old men, they’re all extraordinarily adept. It took only a few days before I commandeered a little ‘naav’ myself, and immediately the entire lakeside accepted me as one of their own. Each evening at sunset, I position myself for the best view of the surrounding peaks and listen to the azaan resound across the rippling surface. There is no place on earth like Srinagar. Hope you visit soon.
"In her photographs of clothing constructed from materials that could never realistically be worn, Yeonju Sung captures what she describes as a series of phantoms—temporal checkpoints depicting objects destined to decay, objects that fail in function what they seem to fulfill in appearance."
When the Fruit Ripens Seed Scatters: Notes towards a History of Motility
Quum fructus maturus semina dispergat. Linnæus, Philosophia Botanica, 1751
1. In The Beginning Was the Verb
In the beginning was the Verb, and the Verb was with God, and the Verb set all things in motion. More than just any Word (Latin verbum, word) the God who is, was, and shall be a Verb commuted motion of an Absolute form to Relative Motion. In the universe created of the Verb everything moves; absolutes have no meaning.
And some things rose and other things fell. Those which rose remained in constant motion until impeded, and of those which fell some acquired spontaneous motion. These self-moved movers, called motile, include some cells, spores, the quadrupeds, and the bipeds. The Philosopher studied the motile keenly, since the prime mover and all that had risen remained less accessible to knowledge. Since the self-moved require the unmoving for motion, they must themselves be, he concluded, composed of a series of both fixed and moving parts, at the seat of which is an unmoved mover – the animal soul. In this way the motile mimic the first mover.
Living things move, and they share this characteristic with every other thing; stasis, that is to say, can only ever be relative. Movement differs from motility in as much as the latter, in its most fully expressed form, is movement in which a purpose that goads, a desire that compels, and a body that advances converge.
2. Arise and Be Bipedal
Humans possess an unusual form of bipedality technically called walking. Walking emerged earlier than did a brain large enough to befuddle us regarding our destination or pensive enough to cogitate walking’s origins. It is the oldest of our peculiarities, and the process and its origins remain fruitfully perplexing. As engineer Tad McGeer, designer of passive walking machines, wrote more than a couple of decades ago: “Today we can build machines to travel beyond the other planets, yet we do not really understand how we move about on our own two legs.” But there is no shortage of bright ideas about the phenomenon. Like other bipedalisms (that, for instance, of dinosaurs, birds, lizards, kangaroos, ostriches, and even cockroaches when one provokes them appropriately) walking merits examination from an energetics perspective. Energy spent on slower movement (compared to running, that is) is reimbursed by the energetics of pendular action: a leg swings out from the hips, followed by the succeeding leg as the first performs an inverted pendular motion from heel to toe. All accompanied by arm swinging. Sporting a jaunty hat remains a human innovation. Thus a series of fixed and moving parts propels the animal along with relatively little energy wasted. All bipeds are Aristotelian, though for the most part unwittingly so.
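To put rough numbers on that pendular picture (a standard biomechanical estimate, not McGeer's own figures; the leg length of 0.9 m is an assumed value): treat the stance leg as an inverted pendulum of length L, so the body vaulting over the foot requires a centripetal acceleration v²/L, which gravity alone must supply. That caps walking speed at

\[
  \frac{v^{2}}{L} \le g
  \quad\Longrightarrow\quad
  v_{\max} = \sqrt{gL} \approx \sqrt{9.8\ \mathrm{m/s^2} \times 0.9\ \mathrm{m}} \approx 3\ \mathrm{m/s},
\]

and indeed people typically abandon the walking pendulum and break into a run at around 2 m/s, well short of that ceiling.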
Of certain squabbles it can be said that they are productive without being settled; of others, that they are unsettling without being productive. Questions concerning human origins remain both unsettled and unsettling. While considerations of energetic efficiency, especially over longer distances, point to a selective advantage for walking, there is little agreement on what the most parsimonious explanation might be. Walking frees up the hands for foraging and for carrying the children; it presents the tropical sun with a diminished target and thus may be thermodynamically recommended; and so forth.
Hominins have walked the earth for four million years or so. Four million years of ambulating with purpose. Since things did not come to us, we marched off to them. That is, human mobility, however it was achieved, and to whatever selective pressure it was a response, was always a walking to. Food goaded, human appetites compelled, and an erect body complied.
3. Let Them (foodstuffs) Come Unto Me
Though a person might well walk and chew gum at the same time, it’s unlikely that she will walk and write at the same time. Nietzsche’s aphorisms may be the closest we have to mobilography – writing born on the hoof. Writing may overcome space and time but it also, with consequences, impedes movement. History, therefore, is a report by the sedentary (Latin sedēre, to sit) written for the stationary. Not surprisingly, academic disquisitions prioritize fixity over mobility. Even the lives of nomads have typically been characterized as fanning out from an immobile sacred center.
Sedentarism is a plant’s revenge. The late Peter Wilson, the New Zealand anthropologist, in his now classic account of the origins of architecture, The Domestication of the Human Species, pointed out that while we were busy domesticating plants and animals, they were reciprocating by domesticating us. We fumbled around with their edible reproductive parts; they conferred upon us their rootedness. So, permanent architectural structures and the Neolithic revolution coincide in their origins. Both the domestication of creatures and the setting up of a domicile called for a settling down – a cessation of movement that, though not absolute, was decisive. Agnostic though one might be about the progressive nature of the agricultural revolution, nonetheless the implications are such that civilization can be seen as a pimple on that revolution’s ample rump. On the basis of an agricultural productivity beyond the threshold of mere subsistence, the accoutrements of civilization emerged: a high degree of occupational specialization, writing, the growth of cities and so on. We traded mobility in the larger landscape for access to a larder. And even though our scholarly sensibilities may rail against so simple a dichotomy as nomadic versus sedentary lifestyles (and the correlates attendant to each), nonetheless one must resist being so refined as to reject a real discontinuity when we stumble across it.
Humans and their domesticated plants and animals have their place. In fact they make their place. Place, as the human geographers have told us, is space made personal. Proust’s madeleine – ten thousand years of post-agricultural history clarified and made delicious – conjured up an instant and a place, and not merely space-time co-ordinates (though it did that too). If the primordial ecology of our species was fashioned by traversing to things, the reversal involved in agriculture was that we are now bound to things in a place.
4. Though I Scattered Them Among the Nations
The sound of dehiscence is a barely-audible pop. It is the process by which anthers, follicles, some fruits, spherules, pods and other biological capsules explode and release their mature contents. Less gloriously, the term is also reserved for the rupturing of a surgical wound, either superficially or completely, releasing the infected flesh from the strain of the suture. Whether the Great Dehiscence of the human population during the Age of Discovery should be considered a triumph or a calamity – the scattering of the matured human seed or a gangrenous discharge from an exploded wound – will, I suppose, depend on one’s perspective.
In the view of prehistorian Grahame Clark a distinctive attribute of humans is that they perceive the spatial and temporal dimensions of their environment more consciously and decisively than other animals. In freeing ourselves of some of our more immediate telluric constraints we extend a conception of space over progressively larger territory. Thus, Henry the Navigator (1394–1460), a Portuguese prince, exemplifies the esprit of early modern exploration. His achievements were more cerebral than swashbuckling. He recruited Arab scholars, Jewish merchants and mariners from around Europe to create maps that collated the most precise geographical information of the age. He encouraged changes in on-board instrumentation for calculating latitude. His fame, therefore, in some circles is more for his cerebrations concerning space than for his acumen in personally navigating it. Although he accumulated great wealth from West Africa for the Portuguese, he himself never joined an expedition there.
Less perfervidly, however, one might rename the Age of Discovery as the Age of Invasion, Conquest, and Occupation. Evaluated from this perspective Prince Henry appears more savage than savant. For example, he commissioned the design of the caravel, a vessel better equipped than the more traditional barca for traversing the treacherous waters of the West African coast. It was, of course, a craft perfectly suited to the task of plunder. The Portuguese made it as far as Cabo Branco (now, Ras Nouadhibou, Mauritania) in 1441. Within two years of this they were shipping back slaves to Portugal, a task for which the caravel was coincidentally well equipped. This was a defining early moment in the modern Atlantic slave trade.
The dehiscence of early modern Europe is thus a threshold event in the history of human motility. On the basis of the stored energy from domesticated plants and animals, and the subsequent accumulation of cultural ingenuity, social stratification, and the attrition of resources and landscapes, the merchant countries of Europe were ready by the 15th Century to teem across the globe.
Humans overcome the fear of being touched when they form a crowd, said Elias Canetti in Crowds and Power. An important moment in the genesis of a crowd comes when differences are discharged and all members are placed on an equal footing. But that happy moment is just an illusion – they are not equal. The thousands of years of human sedentary life were a lengthy gestation of the multitude, or a swarm. Now, in a bee swarm, the insects apparently take off for a new nest site with only a few individuals knowing its location, yet these few guide the swarm to their new home. So it is with humans. The human swarm in the days of European exploration represented the migration of the many at the behest of the few. In this manner, contemporary migrations differ strikingly from the peregrination of early bipedal hominins.
5. Take up your Gadgets Daily…
Three themes of contemporary life are the compression of space, the compression of time, and the miniaturization of the object. The agricultural revolution compressed space by bringing the necessities of life to our door – while also, it must be said, creating the door. The age of exploration and exploitation (which I term the European dehiscence) compressed time (and space) by making of our globe a more easily traversable marketplace. Finally, Steve Jobs compressed the object, making gadgets that can flit around the now tinier globe in our hip pockets. And when I say Steve Jobs here, I naturally mean to perch him on the shoulders of the giants of miniaturization.
The miniaturization of technology and the portability of objects are, according to the Italian-born architect Paolo Soleri, part of an evolutionary progression whereby complexity increases over time – a progression which, he thinks, should be linked to miniaturization. Arcology, Soleri’s name for his combination of architecture, urban planning, and ecology, is based upon the notion that large systems dissipate energy, but small ones conserve it. Arcosanti, the town being built (slowly, very slowly) according to Soleri’s designs, will occupy only two percent of the footprint of conventional towns of comparable size.
Miniaturization thus has two dominant flavors. One is consistent with environmental concerns, where we scale back some dimensions of the human enterprise. Since the global footprint of the 7 billion of us is now greater than the biocapacity of the globe (that is, we are living by drawing down natural capital), miniaturization is an ultimate objective of Soleri’s designs. The other trend provisions us with portable devices. If the physical plant is the symbol of industrial times, the iPod is the fruit of these…let’s call them post-industrial times – both terms have pleasing references to vegetation, the plant rooted, the pod prepared to dehisce and disperse.
Though one might think that the nanofication of devices gets us back to some sort of ur-technology – the tune-packed iPod as equivalent to the chipped flint in the hands of a hunter – the portable device is typically hiding its significant mass elsewhere (the entailments of production and waste). The conflicting trends in miniaturization can take us in two directions – the first is an environmentally motivated reduction that pulls us back within the limits of the planet, the second is a miniaturization that gets us off this planet. Interestingly, though, Elon Musk, a co-founder of SpaceX whose craft, the Dragon, just docked with the International Space Station, stresses environmental concerns in touting multiplanetary life as a plan for guaranteeing human survival.
In his book The Invisible Pyramid (1970), written right after the first biped stepped onto the moon, Loren Eiseley contemplated the inner and outer space of humanity. In a chapter called The Spore Bearers he compares us to the fungus Pilobolus, whose countless spores are hurtled away from the capsule in which they matured. Though the story of humans in space may not have progressed as rapidly as some in 1970 may have predicted, it may yet be the case that our most unbridled motility is just ahead of us.
All things move, some things are motile; motile humans rose up and peregrinated across Pliocene savannas; a complicity with plants ended our peripatetic ways, plant and man settled down; the relatively vast populations of the Old World dehisced and pullulated across the globe; contemporary humans conferred mobility on things that they formerly left behind; the human enterprise marched to the limits of the globe; some urge curtailment, while others watch optimistically as the SpaceX Dragon connects to the International Space Station
….and there shall be no night there; and they need no candle…
Photo Credit: The photograph of running legs is by Randall Honold. The editor generously donated the sperm. The idea for this piece came up during a conversation with my DePaul University Human Impacts on the Environment Class - those kids are the best!
Only Philosophers Go to Hell
by Scott F. Aikin and Robert B. Talisse
The Problem of Hell is familiar enough to many traditional theists. Roughly, it is this: How could a loving and just God create a place of endless misery? The Problem of Hell is a special version of the Problem of Evil, which is the general challenge that a just and loving God would not intentionally create a world with excessive misery, and yet we see the excesses all around us. Hell, on its face, seems like it is actually part of God’s plan, and moreover, the misery there far exceeds misery here. At least the misery here is finite; it ends when one dies. But in Hell, death is just the beginning. Those in Hell suffer for eternity. Hell, so described, seems less the product of a just and loving entity than a vicious and spiteful one. That’s a problem.
There are two standard lines in defense of Hell. The first is the retributivist line, and the second is the libertarian line. We think that if either succeeds, only philosophers could go to Hell. This is because only someone who understands exactly what she is doing in sinning or rejecting God could deserve such a fate as Hell, and only a philosophical education could provide that kind of understanding. So, it follows, only philosophers can go to Hell.
Retributivism with regard to Hell runs as follows: Those in Hell are sinners, and sin demands punishment. Therefore, Hell is necessary; it is the place where that punishment is delivered. This seems reasonable as far as it goes, and it does work as a nice counterpoint to the regular complaint that sometimes the wicked prosper in this life – they will suffer appropriately in the next. But retributivism about Hell ultimately seems problematic. Grant that sinners deserve punishment. Nonetheless, the amount of punishment being visited upon those in Hell is objectionable. Sinners can’t do infinite harm, no matter how bad they are. But they get an eternity of torment. Punishment is just only when it is proportionate to the wrongs committed by the guilty. So even if Hell’s express purpose is to enact retribution on those who are guilty of sin, and even if the guilty do get what’s coming to them in Hell, making that punishment eternal is moral overkill. Again, disproportionate punishment is morally wrong, and Hell is guaranteed to be exactly that for everyone there.
Take a moment to consider some moral wrong you’ve done. Perhaps you stole a piece of bubblegum from the corner store. That was wrong. You know that. Now imagine that you were caught in the act, and you were given a beating for doing that wrong. And we’re not talking just any beating – we’re talking about a real drubbing, one that ranges from your legs, up to your torso, and then to your face. And it doesn’t stop. The people who caught you keep hitting you. For a week. For a month. For a year. Now, for sure, you got punished for your moral error. The problem with the punishment is that it was out of proportion to the seriousness of the wrong you committed. You stole bubblegum, but you got a year-long beating in return. The beating was much worse than the moral harm done in stealing the bubblegum. Now consider: Every sin is only a finite harm, but punishment in Hell is eternal. No matter how bad the sins of sinners are, they will always be punished disproportionately in Hell. That’s unjust.
One response might contend that the sin of those in Hell isn’t in the temporal wrongs they have committed in sinning, but rather, the sinners in Hell commit the wrong of rejecting God, the greatest good. That is their infinite error. Consequently, the sin of those in Hell is infinite, and so they deserve eternal (hence proportionate) punishment.
Notice that in order to deserve the full measure of that punishment in Hell, a sinner who rejects God must know exactly what she’s doing. If, say, the person who rejects God does so because she did not understand Him properly or because she did not know what she was rejecting, then she cannot deserve the full punishment of Hell. She has made an error, but one rooted not in her character but in her failure to grasp the divine. She didn’t fully understand her actions. Only those who understand exactly what they are doing deserve proportionate retribution.
It seems clear that only someone with appropriate philosophical acumen could have that kind of understanding. Being familiar with a textual tradition is clearly insufficient, as the art of interpreting those texts is what’s required to take them appropriately. (No one takes Solomonic wisdom to consist in threatening to chop up anything in contention.) Philosophy is what constitutes those interpretive moves. So, on the retributive theory of Hell, only a philosopher could justly go there.
The other going justification for Hell is libertarianism, the view that one freely chooses Hell, embracing an eternity away from God. God made Hell as a place where those who want to be away from Him can go. As C. S. Lewis put it, “the doors of Hell are locked on the inside.”
Again, choosing is not simply a matter of what gets chosen, but it is also a matter of what the chooser thinks she’s choosing. A person who freely drinks a cup of petrol while believing it to be a cup of water does not really choose to drink petrol. Consequently, only those who know who and what God is can properly choose to be without Him. And only those with accurate philosophical understanding of God can be in this position. Again, only philosophers can go to Hell.
All this seems excellent news for non-philosophers. Socrates may have been right that the unexamined life is not worth living, but at least it keeps you out of Hell. But there’s some bad news, too. By way of the same kind of arguments presented above, we should hold that Heaven is reserved only for philosophers. If Heaven is our loving communion with God, it must be something we’ve knowingly chosen. God could not want us to enter into an eternity of loving communion with Him without our knowing what we are doing. And, again, only philosophers could understand what that choice amounts to. Only philosophers can go to Hell. And only philosophers can go to Heaven. Maybe that’s not such good news for non-philosophers. But perhaps there’s some comfort in the thought that non-philosophers might be able to avoid going anywhere for eternity.
Aikin and Talisse's Reasonable Atheism is available from Prometheus Books.
Boy in an Apple Tree Grappling
sun through leaves shadows
on his face
as on a dappled stallion
time was a tick, a heartbeat
long as the orbit of Uranus
84 to 1 of our years
heartbeat that sustains us
in a capsule with companions
in a memory
in a moment that contains us
thread of something through the raptures
of the changes of dominions that remains us
in our sky nearby a star affirms
holds feet to fire
a blistering gold medallion
by Jim Culleny
Gillian Wearing at the Whitechapel, London
by Sue Hubbard
“Happy families are all alike”, claimed Tolstoy, while “every unhappy family is unhappy in its own way.” The same could be said of individuals. Happiness, a sense of well-being, involves a feeling of rightness with the world, of belonging in one’s own skin, while unhappiness and dysfunction have their own infinite variety. The mind’s response to emotional pain is ever inventive. Self-destruction is a creative business. In many cases it turns out to be a life’s work, as those who give their true confessions to the artist Gillian Wearing attest.
In I’m OK, You’re OK (1969), Thomas A. Harris’s popularization of Eric Berne’s post-Freudian model of transactional analysis, the relationships between internal adult, parent and child are explored so that the maladaptations embedded in old childhood scripts can be confronted, freeing the individual of inappropriate emotions that are not a true reflection of the here-and-now. Because people decide their stories and their destinies, attitudes, it is argued, can be changed. That is the ideal anyway. Yet many of those who chose to answer a small ad placed in Time Out in 1994, which read: ‘Confess all on video. Don’t worry, you will be in disguise. Intrigued? Call Gillian’, may have felt that they had little choice when it came to addictive, sad or compulsive behaviour.
It was this act that set in motion the artist Gillian Wearing’s work with strangers. Whilst she explores cultural notions of production versus the finished work, such technical niceties are much less interesting than the stories that her sitters have to tell and the apparent compulsion that they have to share their pain, on record, with whoever happens to be listening. Wearing first began to use masks, along with joke shop wigs and false beards, in this 1994 video in which variously disguised figures speak straight into the camera. Confess All on Video… consists of ten voices edited into a continuous 30-minute piece. There is an array of confessions, from the admission of a first visit to a brothel to an incredibly sad narrative from a nervous man disguised as George Bush who tells of an incestuous relationship with his siblings that has quite literally ruined his life. Protected by their anonymity and free of any judgmental response, the participants are remarkably candid. This seems to connect back to the use of masks in ancient Greek drama. The mask, then, was a significant element in the worship of Dionysus and is known to have been used since the time of Aeschylus by members of the chorus, who were there to help the audience know what a character was thinking. Illustrations from the 5th century BC display helmet-like masks, covering the entire face and head of the actors, with holes for the eyes and a small aperture for the mouth, as well as an integrated wig. It is interesting to note that these ancient paintings never show actual masks on the actors in performance; they are mostly shown being handled by the actors before or after a performance, emphasising the liminal space between the audience and the stage, between myth and reality. The mask melted into the face, allowing the actor to vanish into a role. Research suggests that the mask served as a resonator for the head, enhancing vocal acoustics and altering their quality, leading to an increased energy and presence that allowed for the more complete metamorphosis of the actor into his character. Many of these aspects remain true in Gillian Wearing’s work.
The use of the video camera and the mask, which confers anonymity, occurs again in two later works. In Trauma (2000) the participant wears a mask that reflects the age at which they suffered their pertinent trauma. Often too small for the adult wearer’s face, the smooth mask barely covers a grey beard or wrinkled neck, poignantly reminding us that both child and adult are the same person. In Secrets and Lies (2009) the Alan Bennett-style ‘talking heads’ appear inside a specially made video screening box that evokes images of both the confessional and the police cell.
It is an early video work from 1995, Homage to the woman with the bandaged face who I saw yesterday down the Walworth Road, that provides evidence of Wearing’s early fascination with masks. After videoing a woman with a bandaged face she saw in the Walworth Road, Wearing decided to bandage her own face and go out into the street to record the reactions of those passing by with the aid of a hidden video camera. In so doing she set about subverting the relationship between the observer and the observed. The following year she made 10-16, which remains one of her most affecting projects. Here she recorded children between the ages of 10 and 16 talking about their lives, their fears and dreams. These voices were then lip-synched on video by adults, so that the adults appear to be speaking with children’s voices. The effect is disturbing, affecting and often very sad, reminding us, yet again, that trauma and dysfunction in childhood remain evident within the adult personality.
The adoption of different personae in order to explore aspects of the self is a familiar device from the work of artists such as Claude Cahun and Cindy Sherman. In her series Album (2003-6) Wearing takes on and inhabits the members of her family, including her parents and brother. In this apparently conventional set of family portraits she explores not only aspects of herself but also the dynamic social roles within the family group.
I will admit to a certain coolness on my part towards Wearing’s Turner prize entry, Sixty Minute Silence, 1996, in which actors were dressed as police, and towards her 1994 Dancing in Peckham, which seemed both self-conscious and contrived. But Wearing has matured as an artist, and this exhibition at the Whitechapel charts her progress from the clever one-liner to a body of work that explores the way in which we construct social roles and images of ourselves. Self-Portrait of me Now in a Mask (2011), her most recent self-portrait, shows her wearing a mask of her own face. In it she poses questions about the multiple personae that go to make up any one individual personality. For we all wear masks and construct versions of the self to fit different circumstances. Behind one mask there is often another. Wearing questions the assumption that there is such a thing as an ‘essential essence’. Hers is a landscape of shifting mirrors and partial truths.
Self Portrait at 17 Years Old, 2003 Framed c-type print, 115.5 x 92 cm
Dancing In Peckham, 1994 Colour video with sound, 25 min
Trauma, 2000. Colour video for monitor with sound, 30 min
All images copyright the artist, courtesy of Maureen Paley, London
At the Whitechapel until 12 June 2012; then at the Kunstsammlung Nordrhein-Westfalen, Düsseldorf, 8 September 2012 to 6 January 2013; and at the Pinakothek der Moderne, Munich, 1 March to 9 June 2013
Take The Skyway, Part 2
There wasn't a damn thing I could do or say
Up in the skyway
An empty downtown, with boarded-up shops and desolate sidewalks, is truly a sad sight to behold. It is also symptomatic of much larger forces, namely the flight from the urban core into the suburbs that wound up decimating the vitality of American cities during the second half of the 20th century. Last month I argued that the urban form of the skywalk was a partial and misguided response to the challenge of reviving the emptied-out downtowns of American cities. In most instances these structures, which sought to connect buildings without touching the street, were a prolonged, painful failure, because they further segregated street life and did not succeed in drawing people back into that urban core, at least in a way that could be considered dynamic and responsive to the larger needs of the urban fabric. In a sense, much was expected of skywalks, but in fact they were little more than a Band-Aid, and served only to exacerbate the problem through the fundamentally anti-social tendencies that underlie their design and use.
And yet, like any other urban form, skywalks are agnostic – what determines their success is not just their design and implementation, but also the problem that they seek to address. It is perhaps more accurate to say that skywalks, along with many other forms of intervention in the urban built environment, reveal the question that designers have posed themselves, believing that that question, whatever it might be, is in fact the correct and most pressing one. So, in the case of American cities, skywalks were employed to revive downtowns, and, generally speaking, failed. Other cities around the world have enlisted skywalks not because there is too little density, but because there is too much. Does this new context increase the possibility of success? In order to understand what a difference a difference makes, we first need to consider the forces that shaped cities in the West, and what the difference might be between this phenomenon and that of the global urban South.
The narrative describing the development of American cities can be retold as a narrative of excessive space. When energy and labour are cheap, economic logic drives growth outward; it is always easier to build on virgin ground than to re-organize an existing built environment. This is especially true when urban areas are not bounded by geographic obstacles such as water or mountains – a condition true of most Midwestern cities and not a few coastal ones.
For much of history, the urban periphery was the province of the poor who could not afford to live within the cities, but the industrial revolution saw the rise of urban manufacturing and the densification of cities with a relatively new class: the urban working poor. It was with industrialization that the urban-rural divide was first thrown into sharp relief, and it did not take long for the cities to become overcrowded, unhealthy, and downright dangerous places. Consider Friedrich Engels’ description of Manchester, arguably the first truly industrialized city in the world:
Such is the Old Town of Manchester, and on re-reading my description, I am forced to admit that instead of being exaggerated, it is far from black enough to convey a true impression of the filth, ruin, and uninhabitableness, the defiance of all considerations of cleanliness, ventilation, and health which characterise the construction of this single district, containing at least twenty to thirty thousand inhabitants. And such a district exists in the heart of the second city of England, the first manufacturing city of the world. If any one wishes to see in how little space a human being can move, how little air - and such air! - he can breathe, how little of civilisation he may share and yet live, it is only necessary to travel hither. True, this is the Old Town, and the people of Manchester emphasise the fact whenever any one mentions to them the frightful condition of this Hell upon Earth; but what does that prove? Everything which here arouses horror and indignation is of recent origin, belongs to the industrial epoch. (The Condition of the Working-Class in England in 1844, p53).
But the consequences of industrialization also led to a tremendous outpouring of economic activity that included an unprecedented efflorescence of infrastructure, especially that of transportation. First railroads and then automobiles massively broadened the places where people could live and work in relation to the city. It is thus one of the ironies of industrialization that such activity simultaneously created both the reason and the means for the flight from urban centers.
This reversal of spatial logic was finalized once the automobile and its accompanying infrastructure assumed its hegemonic status. The suburbs were actively marketed to those consumers who could afford to leave the urban core as safe places adequately distant from the city and the reputation for stink and grind that had plagued it since Engels’s time. Those consumers, perhaps most iconically represented in America’s post-World War II generation, were easy fodder for developers and politicians, for whom easy profits and an ever-expanding tax base came to be seen as the sine qua non of national growth and prosperity. Given the geographic convenience of a seemingly limitless urban periphery, there was every reason to give up on cities and the shambles into which they had been transformed. Taken to its extremes, this logic produces irrational outcomes such as exurbia on the one hand, and Detroit on the other. Corresponding responses to this gutting of the urban center also led to an increasing awareness of the need to fix it, or at least acknowledge it, and one might classify skywalks as one such half-hearted attempt to patch things up.
Shift to the developing world, however, and a different narrative comes into view. The steady immigration into the cities has, for some commentators, recreated Engels’s nightmare of density. However, there are crucial distinctions. Labour is still cheap, and although energy and capital are somewhat more dear, a distinguishing difference is that the scale of growth is far beyond what was experienced in Europe’s and America’s period of industrialization. Unlike American cities in the late 19th century (but perhaps not unlike Manchester’s initial growth spurt), the urban global South is growing so rapidly that city administrations cannot build infrastructure quickly enough to provide urbanites with widely distributed water, sewage and electrical services; telecommunications are perhaps the most functional, but this is mostly because of investment made by the private sector. In terms of the development of transportation, these cities are still at the start of the S-curve that commonly describes the growth of automobile ownership, and as such continue to rely heavily on mass transit, which can take diverse forms, some generally beneficial (such as bus rapid transit) and some generally harmful (such as hordes of unregulated auto-rickshaws, etc).
As a result, there is still relatively little flight to the suburbs. Of course, this may change over the next decades as not only car ownership but the building of roads and highways creates more escape routes from the city (although one could argue rather persuasively that cheap gas is a thing of the past and this will stymie urban flight somewhat). Thus the problem encountered by planners in these cities is one of too much density, most dynamically represented by the conflicting modalities of pedestrians and drivers, and further exacerbated by the fact that sidewalks, where they exist at all, are likely poorly designed and clogged with street vendors, illegally parked vehicles and unauthorized spatial appropriations by bordering buildings. So it is perhaps (un)surprising that, in order to deal with this astonishing chaos, planners in some cities have come to a similar conclusion, and plumped for the creation of skywalks.
Mumbai is perhaps the most vocal proponent of Skywalk 2.0. Like Manchester, Mumbai got its head start on industrialization via the cotton trade – Manchester was in fact its principal trading partner. (It is an indication of the ongoing densification of the city that its 58 textile mills, once long abandoned, have seen their locations, now in the center of Mumbai, become extraordinarily valuable; indeed, the former Shrinivas Mill is now slated to be the site of the world’s tallest residential tower.) Mumbai’s metropolitan railway serves over 7 million daily riders; by way of comparison, New York’s subway serves about 5 million. With city streets increasingly straitened, planners saw the need to get commuters to and from the train stations and other “targeted” points of interest quickly and efficiently, especially since pedestrian deaths have been on the rise. The skywalks project was ambitiously slated for 50 skywalks and saw its first completed construction in 2008; since then another 35 have been built. By August 2010, the Mumbai Metropolitan Region Development Authority was proud to report that nearly 600,000 people were using the skywalks. Although this is still less than 10% of the ridership of the rail system, one ought to concede a certain amount of time for the system to mature.
Nevertheless, troubling signs have been emerging. As with American planners, there is an implicit mistrust of the street, which is unpredictable, incomplete and messy. It may be that planners want to decrease the number of pedestrians struck by cars; this is a noble undertaking. But as was the case in American cities, skywalks are in fact a design choice that deliberately dismembers the urban street: people who want to do Activity X are meant to use this infrastructure, whereas everyone else is left to go about their business as before. Principally, what this does not do is solve any of the other issues the street might already have. For example, it does not address the issue of street vendors; it merely cuts them off from possible customers who now walk above their heads. This does not mean that street vendors will pack up and go home. They may, however, become more aggressive in order to ensure their prior level of income. Furthermore, it is difficult to estimate the effect of removing a significant voice – that of the commuters – from the chorus of urban pedestrians and street-users who might otherwise campaign together for general improvements. Disassembling the street is an effective form of divide-and-conquer, and therefore of suppressing protest, complaint, suggestion and virtually every other form of civic participation.
It also creates entirely new issues, such as the fact that residents living several floors up suddenly find themselves eye-to-eye with newly elevated pedestrians. And even among pedestrians themselves, further inequalities are created by the skywalks. Consider, for example, the commentary of one Indian flâneur:
Getting off the skywalk at Bandra Station brings home a lot of realities about the way things are constructed in India. Though the skywalk offers a safer mode of travel for people, it cannot be accessed by the people who arguably need it the most—the disabled and the elderly. There are no ramps at all at any of the entry/exit points on the skywalk, and the staircase is relatively steep and uneven at places, making navigation a bit tricky even for a relatively fitter person like me. Why is our urban planning and development so discriminatory, not to mention so poorly planned?
Skywalks don’t come cheap, either – unlike the street, there is quite a bit riding on getting them right the first time, as well as maintenance, etc. In a wholly ironic clash of public transportation modalities, it recently emerged that three already completed skywalks would have to be destroyed in order to make room for Mumbai’s evolving metro line. Evidently, the metro line was originally meant to go underground, and design changes now place it aboveground. Whoops! Thus an infrastructure of limited application becomes an enormous albatross, and the design thinking needed to fix what has always been there, and what always will be there – the street – remains untapped.
So it seems fair to ask what role, if any, an extensive network of skywalks has to play in the urban environment. As I suggested above, like any urban intervention, skywalks are built as an answer to a particular problem. Often, it is the way in which we choose to frame the question that determines the net success of our answer. In the case of American cities, the big question was, How do we revitalize abandoned downtown districts? Skywalks failed at this because it was not, in fact, the question designers were answering. Rather, they were answering the question, How do we get more people to more shops? Skywalks are a terrific answer to this much narrower question. By creating the human version of a system of pneumatic tubes, you measure your success by the effectiveness with which you move people from one place to another. But at the end of the day you will have no good answer as to why the downtown district remains empty.
In the case of Mumbai, the question is, How do we keep pedestrians moving in a ludicrously overcrowded city? Clearly, by not encouraging them to interact with anything that keeps them from getting to and from their destination – hence the preference for designing another set of pneumatic tubes. One may take the figure of nearly 600,000 people using the Mumbai skywalks every day as a mark of success, if throughput is what one is looking to measure. But if one considers the cost of the social interaction foregone, a very different calculus takes precedence. In fact, the question that ought to persist first and foremost in the minds of designers is, What kind of a city do we want to live in?
One possible answer to this question actually does involve skywalks. Consider a project recently initiated by Carlos Leite. Leite and his collaborators in the Smart Informal Territories Lab have been working in the Heliopolis slum of São Paulo, where they have been developing approaches to enhancing informal settlements in ways that are minimally disruptive to the community. They have distilled the question down to an extremely elegant formulation:
How might we create public space that enhances [and] reinforces creative capacity [and] social interactions, while strengthening informal networks within Heliopolis, without removing anyone?
One of the chronic difficulties slum dwellers experience is that getting around is not easy. Given the jumble of houses, the lack of consistent streets, and the often poor access to the interior of large, irregular blocks, it is not hard to imagine why planners prefer the tabula rasa approach. Another feature of Heliopolis, however, is that it is built on fairly hilly territory – not surprising, since the area is prone to landslides, and it is often only such high-risk land that is available to the urban poor. Leite’s innovation was to turn these idiosyncrasies from liabilities into assets, and a series of skywalks was the perfect way to do this. These skywalks are open-air and have many points at which one can enter or exit. They are designed not to funnel people toward commerce, or away from transportation hubs, but to connect places. Designed in conversation with the community, their whole purpose is to enhance what is already present, which is the city itself. In this sense, there may indeed be a future for skywalks.
May 27, 2012
How combat changed Paul Fussell, and how Fussell changed American letters
Stephen Metcalf in Slate:
Fussell had written a guide to poetic form and an equally fine critical life of Samuel Johnson when, in 1975, he broke out as an intellectual celebrity with The Great War and Modern Memory, which won the National Book Award and National Book Critics Circle Award. The Great War tells the story of the destruction of the 19th century — of its class system and its faith in progress; really, of any way of living predicated on a stable system of value — by World War I. Out of the mass experience of pointless death, a new way of speaking and writing, devoid of euphemism, arose, a plain style we associate with Hemingway (“Abstract words such as glory, honor, courage, or hallow were obscene beside the concrete names of villages, the number of roads, the names of rivers, the number of regiments and dates”) but in England may just as easily evoke Siegfried Sassoon, Wilfred Owen, Robert Graves, and Edmund Blunden — writers who saw action in the Great Fuck-Up, as infantrymen soon called it, writers who, as a result of firsthand acquaintance with the trenches, sought a way of making literature without any recourse to elevated literary diction.
The Great War chronicles the loss of the old rhetoric, of high pieties, of sacrifice and roseate dawns, in favor of “blood, terror, agony, madness, shit, cruelty, murder, sell-out, pain and hoax,” as Fussell lists it at one point; the sound of “ominous gunfire heard across water.” Fussell himself fought in World War II, and himself wrote in a candid style. “I am saying,” he concludes one chapter in The Great War, as if replying to a margin note from a junior editor, “that there seems to be one dominating form of modern understanding; that it is essentially ironic; and that it originates largely in the application of mind and memory to the events of the Great War.”
Horseshoe Crabs and Velvet Worms
Constance Casey in the New York Times:
Richard Fortey has spent most of his life looking at fossils, the imprints of the skeletons of the very thoroughly dead. Here he sets out — like a more deeply thoughtful David Attenborough, without the cameras — to describe the distinguished groups of organisms that are still recognizable and thriving after millions and millions of years. The horseshoe crabs, velvet worms and other venerable creatures he encounters are Earth’s true conservatives. “We’ve devised a system that works very well for our niche,” they would tell us. “No big changes necessary. Maybe just a tweak at the molecular level.” As Fortey says, “to look at a living horseshoe crab is to see a portrait of a distant ancestor repainted by time, but with many of its features still unchanged.”
Fortey’s dozen or so subjects have survived the many cataclysms the planet has thrown at them over the past 450 million years. As if repeated earthquakes, volcanic eruptions and ice sheets weren’t enough, there were two mass-extinction events. The best known was the disaster 65 million years ago that led to the downfall of the dinosaurs. We’re less familiar with the more devastating earlier extinction — about 251 million years ago — that erased 90 percent of life from the sea and almost as large a percentage of the little things struggling on land. The horseshoe crab made it through; its fossil remains date from 450 million years ago.
Somewhere then, perhaps at the bottom of a poisoned sea, with tsunamis rolling above, some organisms stayed alive, including something we would recognize as the horseshoe crab if it clambered up onto the beach. It’s astonishing to consider that the lucky few — arthropods, snails, clams, jellyfish, worms and a few small four-legged creatures on land — that survived the worst extinction gave rise to everything that followed, including us.
Can science explain why we tell stories?
Adam Gopnik in The New Yorker:
Of all the indignities visited on the writer’s life these days, none is more undignified than the story or pitch meeting, a ritual to which every writer, from the gazillion-dollar screenwriter to the lowly essayist, will sooner or later submit. “So tell us the story,” the suits say after a few minutes of banter and schmooze, and the writer gulps and jumps in. “Well, uh, it’s sort of, like — it’s sort of a fish out of water story…” and then as one pale incident succeeds the next, the tycoons emit a slow burn of polite disbelief and boredom, ending with a forced smile and a we’ll-get-back-to-you. Sometime. Soon…
And yet something interesting, even encouraging, is revealed in this ritual, all its humiliations aside. Stories, more even than stars or spectacle, are still the currency of life, or commercial entertainment, and look likely to last longer than the euro. There’s no escaping stories, or the pressures to tell them. And so the pathetic story-pitcher turns to pop science — to Jonathan Gottschall’s new book, “The Storytelling Animal,” for instance — for some scientific, or at least speculative, ideas about what makes stories work and why we like them. Gottschall’s encouraging thesis is that human beings are natural storytellers — that they can’t help telling stories, and that they turn things that aren’t really stories into stories because they like narratives so much. Everything — faith, science, love — needs a story for people to find it plausible. No story, no sale.
Dog domestication may have helped humans thrive while Neandertals declined
Pat Shipman in American Scientist:
We all know the adage that dogs are man’s best friend. And we’ve all heard heartwarming stories about dogs who save their owners—waking them during a fire or summoning help after an accident. Anyone who has ever loved a dog knows the amazing, almost inexpressible warmth of a dog’s companionship and devotion. But it just might be that dogs have done much, much more than that for humankind. They may have saved not only individuals but also our whole species, by “domesticating” us while we domesticated them.
One of the classic conundrums in paleoanthropology is why Neandertals went extinct while modern humans survived in the same habitat at the same time. (The phrase “modern humans,” in this context, refers to humans who were anatomically—if not behaviorally—indistinguishable from ourselves.) The two species overlapped in Europe and the Middle East between 45,000 and 35,000 years ago; at the end of that period, Neandertals were in steep decline and modern humans were thriving. What happened?
A stunning study that illuminates this decisive period was recently published in Science by Paul Mellars and Jennifer French of Cambridge University. They argue, based on a meta-analysis of 164 archaeological sites that date to the period when modern humans and Neandertals overlapped in the Dordogne region of southwest France, that the modern-human population grew so rapidly that it overwhelmed Neandertals with its sheer numbers.
How FBI Entrapment Is Inventing 'Terrorists' - and Letting Bad Guys Off the Hook
Rick Perlstein in Rolling Stone:
This past October, at an Occupy encampment in Cleveland, Ohio, "suspicious males with walkie-talkies around their necks" and "scarves or towels around their heads" were heard grumbling at the protesters' unwillingness to act violently. At meetings a few months later, one of them, a 26-year-old with a black Mohawk known as "Cyco," explained to his anarchist colleagues how "you can make plastic explosives with bleach," and the group of five men fantasized about what they might blow up. Cyco suggested a small bridge. One of the others thought they’d have a better chance of not hurting people if they blew up a cargo ship. A third, however, argued for a big bridge – "Gotta slow the traffic that's going to make them money" – and won. He then led them to a connection who sold them C-4 explosives for $450. Then, the night before the May Day Occupy protests, they allegedly put the plan into motion – and just as the would-be terrorists fiddled with the detonator they hoped would blow to smithereens a scenic bridge in Ohio’s Cuyahoga Valley National Park traversed by 13,610 vehicles every day, the FBI swooped in to arrest them.
Right in the nick of time, just like in the movies. The authorities couldn’t have more effectively made the Occupy movement look like a danger to the republic if they had scripted it. Maybe that's because, more or less, they did.
The guy who convinced the plotters to blow up a big bridge, led them to the arms merchant, and drove the team to the bomb site was an FBI informant. The merchant was an FBI agent. The bomb, of course, was a dud. And the arrest was part of a pattern of entrapment by federal law enforcement since September 11, 2001, not of terrorist suspects, but of young men federal agents have had to talk into embracing violence in the first place.
Bee Gees - I Started A Joke (LIVE)
Just for the sake of comparison with the much older version that Morgan has posted below, here is a more recent, live version of the song. RIP Robin Gibb! :-(