Friday, January 30, 2015
Talking animal stories have their roots in a prehistory when, according to the literary scholar Egon Schwarz, professor emeritus at Washington University, consciousness had yet to distinguish between man and animal, ‘when people still believed in the possibility of slipping from one to the other, entirely according to desire or need’. And since then, talking animals have developed in a variety of rather amorphous ways to satisfy human desire: a kind of cipher for our own existential dilemmas.
Talking animals can provide us with joy and laughter. They can serve as a displacement for weaknesses and anger. And they can teach us very human lessons while simultaneously serving as a continual source of wonder. Think of Lewis Carroll’s menagerie of talking animals – from the White Rabbit to a Mouse offended by Alice’s bad manners – all of whom are animated by the wonder of childhood imagination. Or of Aesop’s animals designed to didactically instruct young minds toward the path of proper morality. Or of the canine narrator of Franz Kafka’s short story ‘Investigations of a Dog’ (1922), an ideal stand-in for the author’s alienated existence.
They can also serve to remind us of the idyllic pleasure of nature itself.
How did we reach a point where “nothing at all escapes technique today”? Ellul offers a long genealogy of technique, from primitive man to the Greeks and Romans, to Christianity, the early modern era, and lastly the Industrial Revolution, when technique finally came into ascendancy. Ellul’s attention to social changes — technological, economic, legal, administrative, institutional — makes his account of modern technical development earthier than those frequently given, which focus entirely on shifts in philosophical and religious outlooks. (This hints at the influence on Ellul of Marx, who famously rejected Hegel’s preoccupation with consciousness, rather than the material conditions of life, in understanding history.) While explanations from the history of ideas are not irrelevant for Ellul — although he probably dismisses them far too quickly — he considers them sorely lacking when it comes to explaining the rapid spread of technical development across Europe. A better explanation, he believes, can be found in the convergence of five phenomena in the nineteenth century: the availability of scientific knowledge amassed over centuries; population growth; an economy at once stable and adaptable; a clear intention on the part of the whole society to exploit technical possibilities in all areas; and, perhaps most importantly, social plasticity — that is, a society willing to surrender its religious and social taboos and to trade in the supremacy of traditional groups for that of the individual.
It may sound incomprehensible—senseless, Constance Garnett would have put it, as she did in her translation of The Brothers Karamazov—but while the rest of the world may dread the return of the prolonged hostile stare-down known in the last half of the last century as the Cold War, in some ways, I welcome the refreeze. It plunges me into nostalgia for my 1970s and 1980s childhood in Michigan, Indiana, and Oklahoma, when my professor parents threw incessant pirozhki-and-samovar parties for Russian Club students and for the peaceable, intellectual Soviet émigrés who were landing in American college towns in those years, bringing news from behind the Iron Curtain and beet-and-mayonnaise salads. I suspect that writers of James Bond-type thrillers feel much the same way I do, though for different reasons. Since the demise of the USSR—and the KGB—in 1991, it’s been a stretch for them to keep roping Soviet-era villains into their plots; now they can breathe easy. In the 1990s and well into the aughts, during the post-Soviet thaw, I sometimes wondered if my parents’ obsession with the culture and history of the Soviet Union had been a mistake, a generational fluke. But now that bare-chested, border-crashing Vladimir Putin has brought back the jangling tensions of the good-old bad-old days, I am feeling some vindication. So, I imagine, are the dozens of midwestern students who fell under the spell of my parents’ Slavophilia, getting doctorates in Russian just before Americans stopped caring about the “Evil Empire” and Russian-language enrollments plummeted.
Note: For Abbas. CRISPR is one of the four most beautiful scientific discoveries of the last 100 years in biology and has entirely changed the direction of our research.
Pallab Ghosh in BBC:
It will be the first concerted use of an emerging technique called Crispr to "snip out" specific disease genes in order to discover drugs. The technique is cheaper, faster and more accurate than current methods. The research will be carried out with four leading academic and industrial gene-research centres across the world.
...The human genome project determined that humans had about 24,000 genes. These are found along the DNA double helix in every cell in the body. The decoding of the human genome 15 years ago led to the hope that doctors would eventually identify faulty genes responsible for specific diseases and develop medicines to treat them. The principle is simple - drug companies would "snip out" the gene responsible for the disease from the patient's DNA, then use it to test drugs to see if they could fix the problem. At the time, US President Bill Clinton said: "Our children's children will only know cancer as a constellation of stars," and hailed the completion of the project after a 10-year race that cost billions. And Tony Blair, then UK Prime Minister, who joined Mr Clinton by satellite from Downing Street, added: "Every so often in the history of human endeavour, there comes a breakthrough that takes mankind across the frontier and into a new era." Fifteen years on, one could wonder: "What new era?" There are only a handful of new medicines based on the human genome project, and, although Mr Clinton may eventually be proved right, cancer is still known as "cancer". Progress has been hampered by two main factors. First, researchers soon began to realise that most common illnesses were caused by some combination of tens of genes. Second, the genetic techniques to snip out specific genes are expensive and take a long time. Researchers have to make what are in effect "genetic scissors" tailor-made to the gene they want to snip out. This process can take months for each and every gene. But in recent years, scientists have developed a set of genetic scissors that can be quickly and cheaply tailored to cut out a specific gene. And this technique, called Crispr, will be the focus of the research programme.
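The "genetic scissors" idea can be made concrete with a toy sketch. In the Cas9 system most commonly used for Crispr, the enzyme is steered by a roughly 20-letter guide sequence and cuts only where that sequence sits immediately next to a short "NGG" motif (the PAM). The function name and sequences below are illustrative, not from the article:

```python
import re

def find_cut_sites(dna, guide):
    """Return positions where the 20-letter guide sequence is followed
    immediately by an 'NGG' motif (the PAM), the condition under which
    the Cas9 'scissors' will cut. Illustrative only."""
    assert len(guide) == 20, "Cas9 guides are typically ~20 nucleotides"
    sites = []
    for match in re.finditer(guide, dna):
        pam = dna[match.end():match.end() + 3]
        if len(pam) == 3 and pam.endswith("GG"):  # N-G-G: any base, then GG
            sites.append(match.start())
    return sites

guide = "ACGT" * 5                        # a made-up 20-nt guide
dna = "TTT" + guide + "TGG" + "CCC"       # target followed by a valid PAM
print(find_cut_sites(dna, guide))         # [3]
dna_no_pam = "TTT" + guide + "TCA"        # same target, but no PAM: no cut
print(find_cut_sites(dna_no_pam, guide))  # []
```

The PAM requirement is what lets the same cheap machinery be retargeted: swap the guide string and the "scissors" cut somewhere else, which is why tailoring them now takes days rather than months.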
Thursday, January 29, 2015
Kenan Malik in Pandaemonium:
I published recently a transcript of a radio documentary I had made that explored the question of ‘Who owns culture?’. Perhaps the most fractious of recent debates around this question has been over ‘Kennewick Man’, an ancient skeleton found on the banks of the Columbia River in America’s Washington State. The 9000-year-old skeleton became the focus for two major controversies: What is race? And who owns history? I tell the story of Kennewick Man in my book Strange Fruit: Why Both Sides are Wrong in the Race Debate. I am publishing here an extract that lays out part of that story, looking at the question of the ownership of culture and history and of the clash between scientific rationality and cultural identity. I will publish a second extract next week that delves into the debate about race posed by Kennewick Man.
Darold Treffert in Scientific American:
I met my first savant 52 years ago and have been intrigued with that remarkable condition ever since. One of the most striking and consistent things in the many savants I have seen is that they clearly know things they never learned.
Leslie Lemke is a musical virtuoso even though he has never had a music lesson in his life. Like that of “Blind Tom” Wiggins a century before him, his musical genius erupted so early and spontaneously in infancy that it could not possibly have been learned. It came ‘factory installed’. In both cases professional musicians witnessed and confirmed that Lemke and Wiggins somehow, even in the absence of formal training, had innate access to what can be called “the rules” or vast syntax of music.
Alonzo Clemons has never had an art lesson in his life. As an infant, after a head injury, he began to sculpt with whatever was handy – Crisco included – and now is a celebrated sculptor who can mold a perfect specimen of any animal with clay in an hour or less after only a single glance at the animal itself – every muscle and tendon perfectly positioned. He has had no formal training.
To explain the savant, who has innate access to the vast syntax and rules of art, mathematics, music and even language, in the absence of any formal training and in the presence of major disability, “genetic memory,” it seems to me, must exist along with the more commonly recognized cognitive/semantic and procedural/habit memory circuits.
Genetic memory, simply put, is the inheritance of complex abilities and actual sophisticated knowledge along with other, more typical and commonly accepted physical and behavioral characteristics.
The second is that I am saturated in digital life and I want to return to the actual world again. I’m a human being before I am a writer; and a writer before I am a blogger, and although it’s been a joy and a privilege to have helped pioneer a genuinely new form of writing, I yearn for other, older forms. I want to read again, slowly, carefully. I want to absorb a difficult book and walk around in my own thoughts with it for a while. I want to have an idea and let it slowly take shape, rather than be instantly blogged. I want to write long essays that can answer more deeply and subtly the many questions that the Dish years have presented to me. I want to write a book.
I want to spend some real time with my parents, while I still have them, with my husband, who is too often a ‘blog-widow’, my sister and brother, my niece and nephews, and rekindle the friendships that I have simply had to let wither because I’m always tied to the blog. And I want to stay healthy. I’ve had increasing health challenges these past few years. They’re not HIV-related; my doctor tells me they’re simply a result of fifteen years of daily, hourly, always-on-deadline stress. These past few weeks were particularly rough – and finally forced me to get real.
More here. And I should say that we here at 3QD send him off into the real world with a special, heartfelt, blogger's salute.
This extraordinary book, a huge dictionary of philosophical terms from many languages, is a translation of Vocabulaire européen des philosophies: Dictionnaire des intraduisibles, originally published in 2004, the brainchild of the French philosopher Barbara Cassin. If the original project was paradoxical, then the present version is doubly so: not just a dictionary of untranslatable words, but a translation of that dictionary. Rather than despair at the self-undermining self-referentiality of the whole idea, the editors rejoice in it. Indeed, moving the word “untranslatable” to the beginning of the English title proudly asserts the paradox even more forcefully than the original French title does, and forms what the English-language editor Emily Apter calls “an organising principle of the entire project”.
In her preface, Apter comments (apparently without irony) that “the extent of our translation task became clear only when we realised that a straightforward conversion of the French edition into English simply would not work”. She is right, of course: translation is almost never a straightforward conversion. This is why it is such a fertile subject for philosophy. Like so much in philosophy, theorizing about translation (and, of course, about the related concept of meaning) lurches between two unappealing extremes.
IT IS OBVIOUS BY NOW that Paul Thomas Anderson isn’t making individual movies so much as building an oeuvre block by block—the sturdiest, most resilient body of work by a big-time American director since Stanley Kubrick died and Martin Scorsese ran out of steam.
Big, ambitious, and American are the operative words. Boogie Nights (1997) and Magnolia (1999) were sprawling ensemble pieces that challenged Scorsese and Robert Altman on their own turf; in their concern with self-invented American Übermenschen and up-front eccentricity, There Will Be Blood (2007) and The Master (2012) engaged Orson Welles. Anderson’s smaller films, Hard Eight (1996) and Punch-Drunk Love (2002), pondered more marginal if equally echt-American types, and his latest movie, Inherent Vice, which stars Joaquin Phoenix as Thomas Pynchon’s hippie private eye Doc Sportello, falls into this category. A panoramic actor fest, it is also an extremely credible adaptation of the closest thing to an easy read by the writer whom some consider America’s greatest living novelist.
Structurally, Inherent Vice is pure School of Chandler, with Doc suckered into the plot by an old girlfriend, Shasta Fay Hepworth (Katherine Waterston), whose problems with her sugar daddy, scumbag developer Mickey Wolfmann (Eric Roberts), illuminate a classically Los Angeles real-estate scam . . . for starters. Behind it all is an “Indo-Chinese” drug cartel, a stand-in for the Vietnam War and ultimately a front for whatever cosmic antiplan you like—Doc Sportello being a sort of acidhead Don Quixote complete with intermittent sidekick, maritime lawyer Sauncho Smilax (Benicio Del Toro).
Just occasionally in Blake’s engravings there are pictures within pictures, and we get a glimpse of the life he thought images might lead in a better world. The most moving of these visions is Plate 20 of Blake’s Illustrations of the Book of Job. Job has survived his doubts and torments, and is telling the story to his daughters – in an earlier watercolour, they hold the instruments of Poetry, Painting and Music. No doubt the young women are taking their father’s narrative to heart, and in due course will rephrase it in terms appropriate to their arts: the lute and lyre are in the margins of the plate, ready to be strummed. But the first form of the story is visual: Job sits in a circular room – or maybe it is ten- or twelve-sided – and points towards two frescoed roundels on the walls left and right. Neither is unequivocally an episode from Job’s life – they could be analogous scenes from the story of the Fall – but the square panel over his head must be a version of ‘Then the Lord answered Job out of the Whirlwind.’ (It combines and condenses elements of Blake’s previous engraving of the subject.) As so often in Blake, the balance between positive and negative in the scene as a whole is precarious: Job is central and patriarchal (‘their Father gave them Inheritance among their Brethren’), and there is more than a touch of the baleful exhausted God-the-Father to him, heavy lids, pointing fingers and all. But there cannot be any doubt that the basic form and function of the room, with its echoes of the early 19th-century diorama (it is important that the plate was engraved in 1825), were meant to strike the viewer as wonderful – all-enveloping. Here were images at work.
Sandip Roy in The Telegraph:
“Just come back any time with madam to approve the kitchen design,” the beaming modular kitchen consultant told me. I explained patiently, again, that there was no madam around. I would be approving my own modular kitchen, cabinet colours and all. He smiled indulgently and said, “But we can wait few days if needed for madam.” When it finally dawned on him that there was no madam at all, he was aghast. I don’t know what shocked him more – that a man might approve a kitchen design, or that I lived alone, or that a man who lived alone wanted a kitchen.
When I first moved to the United States as a graduate student I could not wait to live by myself. The idea of a town where no one knew your name was just exhilarating. When I was moving back to India after 20 years in the US, many friends were aghast. How will you manage, they wondered uneasily. Twenty years of San Francisco can change you. How would I adjust to life back in a city without non-GMO Swiss chard, late-night carnitas quesadillas and gay bars? “Do they have gay bars in India?” well-meaning American friends asked me. Kolkata actually had the first Rainbow Pride parade in India back in 1999. But no, there were no gay bars here, though there were several men-only bars, no Leather Weekend street fairs with paddling stations, no same-sex marriages officiated by the city’s mayor. I knew and I understood that certain things I took for granted in a San Francisco lifestyle would just not work in India. Neighbours in San Francisco minded their own business. Neighbours in India minded your business. While the gay movement in the US was focused on marriage equality, in India it had its hands full trying to overturn a Victorian-era anti-sodomy law that had hung around after the British had packed up and left. India had changed dramatically in the last decade when it came to visibility of gay issues in the media but there was still a fog of Don’t Ask Don’t Tell around issues of sexuality.
Imagine a micromotor fueled by stomach acid that can take a bubble-powered ride inside a mouse — and that could one day be a safer, more efficient way to deliver drugs or diagnose tumors for humans. That’s the goal of a team of researchers at the University of California, San Diego. The experiment is the first to show that these micromotors can operate safely in a living animal, said Professors Joseph Wang and Liangfang Zhang of the NanoEngineering Department at the UC San Diego Jacobs School of Engineering. Wang, Zhang and others have experimented with different designs and fuel systems for micromotors that can travel in water, blood and other body fluids in the lab. “But this is the first example of loading and releasing a cargo in vivo,” said Wang. “We thought it was the logical extension of the work we have done, to see if these motors might be able to swim in stomach acid.”
In the experiment, the mice ingested tiny drops of solution containing hundreds of the micromotors, which are 20 micrometers long. The motors become active as soon as they hit the stomach acid and zoom toward the stomach lining at a speed of 60 micrometers per second. They can self-propel like this for up to 10 minutes. This propulsive burst improved how well the cone-shaped motors were able to penetrate and stick in the mucous layer covering the stomach wall, explained Zhang. “It’s the motor that can punch into this viscous layer and stay there, which is an advantage over more passive delivery systems,” he said. The researchers found that nearly four times as many zinc micromotors found their way into the stomach lining compared with platinum-based micromotors, which don’t react with and can’t be fueled by stomach acid. Wang said it may be possible to add navigation capabilities and other functions to the motors, to increase their targeting potential. Now that his team has demonstrated that the motors work in living animals, he noted, similar nanomachines soon may find a variety of applications including drug delivery, diagnostics, nanosurgery and biopsies of hard-to-reach tumors.
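A quick back-of-the-envelope check of the figures quoted above (motors 20 micrometers long, moving at 60 micrometers per second, for bursts of up to 10 minutes) shows just how far these machines could travel relative to their own size:

```python
# Sanity-checking the reported figures for the zinc micromotors.
MOTOR_LENGTH_UM = 20          # motor length, micrometers
SPEED_UM_PER_S = 60           # reported speed, micrometers per second
BURST_DURATION_S = 10 * 60    # up to ten minutes of self-propulsion, seconds

max_travel_um = SPEED_UM_PER_S * BURST_DURATION_S
print(max_travel_um)                    # 36000 micrometers, i.e. 3.6 cm
print(max_travel_um / MOTOR_LENGTH_UM)  # 1800.0 body lengths
```

Covering up to 1,800 body lengths in a single burst is what gives the motors the punch to lodge in the stomach's mucous layer rather than drift passively.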
You shot them.
Two beautiful purebred dogs
Siberian Huskies, each
In the head,
They were sniffing around your chickens,
Biting a few,
Killing a few.
And this I understand,
Those birds are easily excitable, and they start
Clucking, and in the way acquaintances soon become intolerable
When they start squawking and screaming,
The dogs would snap up a chicken,
Around the throat,
to tell them,
Someone is coming
He is not a friend.
The bullet, zipping through the air,
Silent now after bursting “Hallelujah!”
From its pre-natal chamber,
Rides the wind. It is done playing
And it pierces a skull, rollicking
in the explosive greeting.
His twin brother,
Born seconds after,
Grabs Esau’s heel and follows his steps.
My sons of snow and survival
They died an ignominious death, with yelps
And lived through a funeral of disgrace
When you hog-tied them and dumped them on my doorstep this morning.
by Elaine Wang
from Cahoodaloodaling, Issue 14
Wednesday, January 28, 2015
Adam Rutherford in The Guardian:
In this lush, epic and hugely enjoyable book, biologist Armand Marie Leroi explores the idea that it was another ancient Greek giant whose shoulders we may all stand upon. In his mid 30s, around 346 BCE, Aristotle exiled himself from Athens to islands in the Aegean, where he spent time thinking and writing about nature, possibly near a lagoon on Lesbos. We primarily know him as a philosopher, but here, Aristotle's biological output was titanic: he dissected dozens of species and compiled the first biology textbook, Historia Animalium. It is this body of work that Leroi argues continues to percolate through scientific thought today.
There's great temptation in analysing historical scholars to suggest that their insights were in some way vatic, or to use that horrid phrase, they were "anticipating" things to come. Leroi does a splendid job of avoiding hagiography of his hero, and never springs that inviting trap. Aristotle's contention that seals are mutated quadrupeds is true in a purely Darwinian sense: they are mammals, evolved from terrestrial four-legged mammals. But Leroi points out that this is not Aristotle's thinking. The idea that mutation from common ancestors was the cause of species is not present in any of Aristotle's work. He was not predicting evolution, nor did he once consider that Darwinian truth.
Katia Moskvitch in Scientific American:
What does graphene mean for the future of computing?
It is certain that silicon will be used for transistors—semiconductor devices that are the building blocks of modern computers—for at least the next five to 10 years. But people are already thinking about possible alternative materials and technologies to replace silicon when it can no longer deliver for ever-smaller transistors. A graphene transistor is one of the alternatives.
I’m also looking into other one-atom-thick 2-D materials that were obtained soon after graphene and at heterostructures based on those 2-D crystals. Potentially they can provide an alternative to silicon technologies, but here we’re talking about completely new architecture rather than just introducing a new material into the system. It’s hard to predict how it will develop because when you introduce one new material into a process, it’s already quite a complicated step, and if you want to change the whole architecture, it requires years of research. That’s why research should start now if we want to achieve something like that in 10 years’ time.
What do you think computers of the future could look like?
Computers are much more than just a display, interface and software: they are mainly about computing power and microprocessors—also known as the central processing unit [CPU], or the “brain” of a computer. In the future, we’ll probably expand parallel computation, using microprocessors with a larger number of cores, with several CPUs working together on the same chip, enabling the computer to perform many more tasks with much greater overall system performance. At the same time, more specialized computers will start to appear because the cost won’t be so prohibitive anymore.
Conor Friedersdorf in The Atlantic:
Last month, an improbable Internet exchange inspired many who noticed it to reconsider what's possible when debating politics online. It began when MIT professor Scott Aaronson published a blog post on a sexual harassment controversy. A predictably heated argument ensued in the comments section. Then, 171 comments into the thread, Aaronson achieved a breakthrough: He posted a reply so personal, vulnerable and powerful that it transformed the character of the conversation. And all sides emerged better able to see one another's humanity.
The comment that begat this small Internet miracle wasn't perfect. Neither were the responses to it – as ever online, some needless cruelty and lack of charity followed.
But Aaronson and his interlocutors did transform an obscure, not-particularly-edifying debate into a broad, widely read conversation that encompassed more earnest, productive, revelatory perspectives than I'd have thought possible. The conversation has already captivated a corner of the Internet, but deserves wider attention, both as a model of public discourse and a window into the human experience. It began with the most personal thing that the professor had ever publicly shared.
If posthumanism signals the end of a certain way of describing—or, more precisely, orienting—selfhood, then we might ask, as Ralph Waldo Emerson did at the start of his famous essay, “Experience” (that addressed, among other crucial issues, slavery), “Where do we find ourselves?” (266). 
To be sure, technology has already expanded ideas about seeing the human as created through evolution. Marvin Minsky argues that robots will be the next evolutionary phase; they will be our “children.” Ray Kurzweil anticipates the ethical issues of posthumanism will be worked out by machines gaining consciousness and then guiding themselves (and, presumably, us) through deeper realms of spiritual experience and insight. 
But, it must be asked, where does all this talk about spiritual transcendentalism leave the crucial subject of our bodies? N. Katherine Hayles cautions that privileging the disembodiment of information is a return to Cartesian dualism that supports the liberal humanist subject: what posthumanism seeks to challenge. Cary Wolfe, moreover, reminds us that we have to take into account how posthumanism is shaped by our relationships with other embodied forms of life constituted by non-human animals.
The art of literary conversation, by whatever name, is certainly not new. Hannah Rosefield opened her review of John Freeman’s How to Read a Novelist out into a larger discussion of our cultural obsession with the interview as a way to look behind the authorial mask. Rosefield is dismissive of Freeman’s collection of 55 profiles of novelists, calling them “weirdly artificial…as if the writer is sitting alone in a restaurant or, sometimes, in her glamorous apartment, addressing occasional comments to the atmosphere.” Literary hero worship.
Rosefield isn’t enthralled with interviews as a whole, but her discussion is insightful. Many contemporary writers are known for their distaste for the form — ranging from the prolific and visible Joyce Carol Oates to the prolific and invisible Thomas Pynchon — but she traces the displeasure back to Henry James, who gave his first interview in 1904, nearly 30 years after he published his first novel.
The magazine that has become synonymous with interviews is The Paris Review, which, as Rosefield notes, published a long interview with E.M. Forster in their first issue, Spring 1953. John Rodden, author of Performing the Literary Interview: How Writers Craft Their Public Selves, the first book-length examination of the literary interview genre, thinks George Plimpton “virtually invented” the literary interview as a genre for the “little magazine.”
In any analysis of a public figure, partisan interests will influence one’s opinion, and there isn’t anything particularly productive about pointing out that conservatives tend to forgive in conservative leaders what they don’t in liberals. A more helpful question is this: Why has Pope Francis addressed political issues, such as climate change, inequality, poverty, and overpopulation? Is it evidence of abject partisan interest, or a covert dedication to communism, Marxism, or some other insidious ideology?
Or is it just that we now presume that “politics” belongs outside the Church’s purview—despite the Church’s historical record of considering and intervening in political affairs? To me, this appears to be the distortion at hand.
This is partly because the notion that "politics" can be neatly separated from daily life is a new one. For earlier political theorists, like Aristotle and Augustine, politics was just a natural extension of community life. But over time, a fantasy of “politics” wholly divorced from everyday life and experience has emerged in certain corners of liberal thought, producing with it the expectation that politics is a matter for professional politicians and their colleagues, while those in religious offices should simply avoid addressing politics altogether.
Carley Moore in TNB:
Last summer I turned 42 years old. On the morning of my birthday, my then-boyfriend asked me what I was doing when I was 21, half that age. I said, “Baking quiches, dropping acid, and chasing boys.” I imagined this retort as a tweet—short and to the point. I’d managed to get my life at that time down to 39 characters, and it was mostly accurate.
At 21 years old, I was obsessed with Mollie Katzen’s Moosewood cookbook, The Enchanted Broccoli Forest. I was going to a state school in upstate New York, not far from the home of the Moosewood restaurant in Ithaca, which had always seemed to me a cultural mecca in a vast state of industrial depression and blight. Ithaca was the home of my favorite thrift shop, Zoo Zoos, and a lot of cute hippie musicians I dreamed of fucking. The cookbook was steeped in that same sexy, vintage, hippie musician lore. I imagined myself cooking for one of those musicians. I could be his “old lady” for a recipe or two. Many of my activities then were overlaid with a fantasy plot line, worthy of an episode of Laverne and Shirley or Three’s Company. I was rarely just doing something; I was doing that thing while imagining I was in the TV sitcom version of it. As a child, I’d made it through my sometimes chore of washing the dishes by pretending I was in a Dawn dish soap ad.
Alison Abbott in Nature:
If you have to make a complex decision, will you do a better job if you absorb yourself in, say, a crossword puzzle instead of ruminating about your options? The idea that unconscious thought is sometimes more powerful than conscious thought is attractive, and echoes ideas popularized by books such as writer Malcolm Gladwell’s best-selling Blink. But within the scientific community, ‘unconscious-thought advantage’ (UTA) has been controversial. Now Dutch psychologists have carried out the most rigorous study yet of UTA — and find no evidence for it.
...A typical study probing UTA asks subjects to make a complex decision, such as choosing a car or a computer, after either mulling over a list of the object’s attributes or viewing the list quickly and then engaging in a distracting activity such as a word puzzle. However, such studies have drawn different conclusions, with about half of those published so far reporting a UTA effect and the other half finding none. Proponents of the theory claim that the effect is exquisitely sensitive to experimental variations, and often attribute the negative results to the fact that many research groups varied elements of the set-up, such as the choice of puzzle used for the distraction. Critics say that the positive results came from having too few participants in the experiments. Psychologists Mark Nieuwenstein and Hedderik van Rijn at the University of Groningen in the Netherlands set out with their colleagues to determine which explanation was correct. They asked 399 participants — around ten times more than the typical (median) sample sizes in other studies — to choose between either 4 cars or 4 apartments on the basis of 12 desirable or undesirable features. They incorporated the full list of conditions that UTA proponents had reported as yielding the strongest effect, such as the exact type of puzzle used as a distraction. They found that the distracted group was no more likely than the deliberating group to choose the most desirable item.
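The critics' sample-size point can be illustrated with a small simulation (toy code, not the actual study; the group size of 40 is inferred from the article's remark that 399 is roughly ten times the typical sample). When there is truly no effect, mean differences estimated from small groups scatter much more widely, so occasional large "advantages" appear by chance alone:

```python
# Why small samples can produce spurious effects under a true null.
import random
import statistics

def simulated_group_difference(n, rng):
    """Difference in mean decision quality between a 'distracted' and a
    'deliberating' group of size n, when the true difference is zero."""
    distracted = [rng.gauss(0, 1) for _ in range(n)]
    deliberating = [rng.gauss(0, 1) for _ in range(n)]
    return statistics.mean(distracted) - statistics.mean(deliberating)

rng = random.Random(42)
small = [simulated_group_difference(40, rng) for _ in range(2000)]
large = [simulated_group_difference(399, rng) for _ in range(2000)]

# Spread of estimates shrinks roughly as 1/sqrt(n), so the small-sample
# studies scatter about sqrt(399/40) ~ 3.2 times more widely.
print(statistics.pstdev(small) / statistics.pstdev(large))  # roughly 3
```

This is why half the published studies could report an effect and half report none, even if no effect exists: the noisy small-sample estimates occasionally look like a sizeable advantage.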
Steven Pinker in the Boston Review:
More than two centuries after freedom of speech was enshrined in the First Amendment to the Constitution, that right is very much in the news. Campus speech codes, disinvited commencement speakers, jailed performance artists, exiled leakers, a blogger condemned to a thousand lashes by one of our closest allies, and the massacre of French cartoonists have forced the democratic world to examine the roots of its commitment to free speech.
Is free speech merely a symbolic talisman, like a national flag or motto? Is it just one of many values that we trade off against each other? Was Pope Francis right when he said that “you cannot make fun of the faith of others”? May universities muzzle some students to protect the sensibilities of others? Did the Charlie Hebdo cartoonists “cross a line that separates free speech from toxic talk,” as the dean of a school of journalism recently opined? Or is free speech fundamental — a right which, if not absolute, should be abrogated only in carefully circumscribed cases?
The answer is that free speech is indeed fundamental. It’s important to remind ourselves why, and to have the reasons at our fingertips when that right is called into question.