Sunday, August 31, 2014
Cambridge Study Reveals How Life Could Have Started From Nothing
Linda Geddes in New Scientist:
Metabolic processes that underpin life on Earth have arisen spontaneously outside of cells. The serendipitous finding that metabolism – the cascade of reactions in all cells that provides them with the raw materials they need to survive – can happen in such simple conditions provides fresh insights into how the first life formed. It also suggests that the complex processes needed for life may have surprisingly humble origins.
"People have said that these pathways look so complex they couldn't form by environmental chemistry alone," says Markus Ralser at the University of Cambridge who supervised the research.
But his findings suggest that many of these reactions could have occurred spontaneously in Earth's early oceans, catalysed by metal ions rather than the enzymes that drive them in cells today.
The origin of metabolism is a major gap in our understanding of the emergence of life. "If you look at many different organisms from around the world, this network of reactions always looks very similar, suggesting that it must have come into place very early on in evolution, but no one knew precisely when or how," says Ralser.
One theory is that RNA was the first building block of life because it helps to produce the enzymes that could catalyse complex sequences of reactions. Another possibility is that metabolism came first; perhaps even generating the molecules needed to make RNA, and that cells later incorporated these processes – but there was little evidence to support this.
"This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA," Ralser says.
Science as Salvation?
Michael Saler profiles Marcelo Gleiser, who "wants to heal the rift between humanists and scientists by deflating scientific dreams of establishing final truths," in The Nation:
The battle lines became firmly drawn in the years following World War II. In Science and Human Values (1956), Jacob Bronowski attempted to overcome the sullen suspicions between humanists and scientists, each now condemning the other for the horrifying misuse of technology during the conflict:
Those whose education and perhaps tastes have confined them to the humanities protest that the scientists alone are to blame, for plainly no mandarin ever made a bomb or an industry. The scientists say, with equal contempt, that the Greek scholars and the earnest explorers of cave paintings do well to wash their hands of blame; but what in fact are they doing to help direct the society whose ills grow more often from inaction than from error?
Bronowski was a published poet and biographer of William Blake as well as a mathematician; he knew that artists and scientists had different aims and methods. Yet he also attested that both engaged in imaginative explorations of the unities underlying the human and natural worlds.
If Bronowski’s stress on the imagination as the foundation of both the arts and sciences had prevailed, Gleiser would not need to remind his readers that Newton and Einstein shared a similar “belief in the creative process.” However, while Bronowski meant to heal the breach by exposing it, he inadvertently encouraged others to expand it into an unbridgeable gulf, a quagmire of stalemate and trench warfare. His friend C.P. Snow battened on the division in lectures that were subsequently published under the meme-friendly title The Two Cultures and the Scientific Revolution (1959). Snow acknowledged that scientists could be philistine about the humanities, but his ire was directed at the humanists: they composed the governing establishment, their willful ignorance about science impeding policies that could help millions worldwide. As the historian Guy Ortolano has shown in The Two Cultures Controversy (2009), Snow tactlessly insinuated that the literary intelligentsia’s delight in irrational modernism rather than rational science was partly responsible for the Holocaust: “Didn’t the influence of all they represent bring Auschwitz that much closer?” Such ad hominem attacks raised the hackles of the literary critic F.R. Leavis, himself a master of the art. His response, Two Cultures? The Significance of C.P. Snow (1962), proved only that humanists could be just as intemperate as Snow implied. (One critic, appalled by Leavis’s vituperation, dubbed him “the Himmler of Literature.”)
The Ethical Machiavelli
Richard Marshall interviews Erica Benner in 3:AM Magazine:
3:AM: You’ve written extensively about Machiavelli. Your take is revisionary, isn’t it, in that you say he’s not what we’ve been led to suppose he is – the quintessence of amoral realpolitik. He’s an individualist deontological ethicist and this is the foundation for a political ethics. So how come few people recognized the irony?
EB: Lots of early readers did. Up to the second half of the 18th century some of Machiavelli’s most intelligent readers – philosophers like Francis Bacon and Spinoza and Rousseau – read him as a thinker who wanted to uphold high moral standards. They thought he wrote ironically to expose the cynical methods politicians use to seize power, while only seeming to recommend them. Which doesn’t mean they thought he was writing pure satire, a send-up of political corruption. He had constructive aims too: to train people to see through plausible-sounding excuses and good appearances in politics, and think harder about the spiralling consequences of actions that seem good at the time.
Even his worst critics doubted that Machiavelli could be taken at face value. In one of the first reactions to the Prince on record, Cardinal Reginald Pole declares that its devil’s-spawn author can’t seriously be recommending deception and oath-breaking and the like, since any prince who does these things will make swarms of enemies and self-destruct. To Pole, what later generations would call Machiavellian realism looked utterly unrealistic. Then during the Napoleonic Wars, amoral realist readings started to drive out rival interpretations. German philosophers like Fichte and Hegel invoked Machiavelli as an early champion of national unification, if necessary by means of blood and iron. Italian nationalists of the left and right soon followed. Since then, almost everyone has read Machiavelli through some sort of national-ends-justify-amoral-means prism. Some scholars stress his otherwise moral republicanism. Others insist that he was indifferent to any moral good other than that of personal or collective survival. But it’s become very, very hard to question the ‘realpolitik in the last instance’ reading.
On the Difference Between Science and Pseudoscience
Maarten Boudry and Massimo Pigliucci discuss the difference over at Rationally Speaking:
In our first mini-interview episode Massimo sits down to chat with his colleague Maarten Boudry, a philosopher of science from the University of Ghent in Belgium. Maarten recently co-edited the volume The Philosophy of Pseudoscience (University of Chicago Press) with Massimo, and the two chat about the difference between science and pseudoscience and why it is an important topic not just in philosophy circles, but in the broader public arena as well.
Also see the bloggingheads discussion here.
Latitudes of Acceptance
Matthew Lieberman in Edge:
I'll tell you about my new favorite idea, which like all new favorite ideas, is really an old idea. This one, from the 1960s, was used only in a couple of studies. It's called "latitude of acceptance". If I want to persuade you, what I need to do is pitch my arguments so that they're in the range of a bubble around your current belief; it's not too far from your current belief, but it's within this bubble. If your belief is that you're really, really anti-guns, let's say, and I want to move you a bit, if I come along and say, "here's the pro-gun position," you're actually going to move further away. Okay? It's outside the bubble of things that I can consider as reasonable.
We all have these latitudes around our beliefs, our values, our attitudes, which teams are ok to root for, and so on, and these bubbles move. They flex. When you're drunk, or when you've had a good meal, or when you're with people you care about versus strangers, these bubbles flex and move in different ways. Getting two groups to work together is about trying to get them to a place where their bubbles overlap, not their ideas, not their beliefs, but the bubbles that surround their ideas. Once you do that, you don't try to get them to go to the other position, you try to get them to see there's some common ground that you don't share, but that you think would not be a crazy position to hold.
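Lieberman's "bubble" lends itself to a toy model. The sketch below is purely illustrative: the function name, the step size, and the exact recoil rule are my own assumptions, not anything specified in the Edge piece. The one idea it encodes is the article's: a message landing inside the latitude nudges belief toward it, while one landing outside pushes belief further away.

```python
def updated_belief(belief, message, latitude, step=0.25):
    """Toy model of a 'latitude of acceptance' on a belief scale
    (say, -1 = strongly anti to +1 = strongly pro).

    Messages inside the bubble pull belief toward them; messages
    outside it produce a boomerang, pushing belief further away.
    The step size and recoil magnitude are arbitrary assumptions.
    """
    distance = message - belief
    if abs(distance) <= latitude:
        # Within the bubble: partial persuasion toward the message.
        return belief + step * distance
    # Outside the bubble: recoil away from the message's direction.
    direction = 1 if distance > 0 else -1
    return belief - step * direction * latitude

# A strongly anti-gun listener (-0.9) with a narrow bubble (0.5):
# a full pro-gun pitch (+1.0) backfires, while a modest message
# (-0.6) inside the bubble moves them slightly.
print(updated_belief(-0.9, 1.0, 0.5))   # further from +1.0 than before
print(updated_belief(-0.9, -0.6, 0.5))  # slightly closer to -0.6
```

Under this caricature, effective persuasion is a sequence of small messages that each stay inside the current bubble, matching the article's point that you aim for overlap of bubbles rather than of positions.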
Can Science Offer New Answers to Mental Illness?
Lisa Appignanesi in New Republic:
Way back in 1977 the prescient French philosopher/historian Michel Foucault pointed out that in our societies, “the child is more individualised than the adult, the patient more than the healthy man, the madman and the delinquent more than the normal and the non-delinquent.” Whatever our concurrent desire for a painless sanity, normality or, as it is now known, neuro-typicality, having a “secret madness” can help constitute what makes us individual. This may be one of the clues to the alarming rise of mental illness in recent decades. Foucault might not have been surprised that the biggest success story in the pharmaceutical world since the advent of antibiotics has been the growth of antidepressants in the form of selective serotonin reuptake inhibitors (SSRIs)—those much-hailed little pills that helped to bring about the very illness for which they are the touted cure. After a rocky start and unsuccessful clinical trials, SSRIs took off in the 1990s. By 2002 about 25 million Americans were taking them. Now, although they have been exposed as no more effective than placebos, the figure is closer to 40 million. The situation is no different in the UK, where one in four of us will, it is said, succumb to depression and anxiety at least once in our lifetime—though the more usual pattern is for these to become chronic conditions.
In the west we live in a time when we look to medics (rather than, say, politicians, priests, artists or philosophers) for solutions to most of our life and death problems. It is clear that the NHS in Britain and the rise of scientific medicine in the west count among the greatest achievements of the postwar years. But can doctors really be the providers of all our goods? Do they have the wherewithal to direct the mind and the emotions, do they hold the keys to sex, reproduction and death, besides healing our diseases?
Louis Riel's Address to the Jury
Gentlemen of the Jury:
I cannot speak
English well, but am trying
because most here
When I came to the North West
I found the Indians suffering
I found the half-breeds
eating the rotten pork
of the Hudson Bay Company
and the whites
We have made petitions I
have made petitions
We have taken time; we have tried
And I have done my duty.
My words are
by Kim Morrissey
from Batoche (Regina: Coteau Books, 1989)
william greaves (1926 - 2014)
the man with a movie camera
C.S. Lewis on Suffering and What It Means to Have Free Will in a Universe of Fixed Laws
Maria Popova in Brain Pickings:
If the universe operates by fixed physical laws, what does it mean for us to have free will? That’s what C.S. Lewis considers with an elegant sidewise gleam in an essay titled “Divine Omnipotence” from his altogether fascinating 1940 book The Problem of Pain (public library) — a scintillating examination of the concept of free will in a material universe and why suffering is not only a natural but an essential part of the human experience. Though explored through the lens of the contradictions and impossibilities of belief, the questions Lewis raises touch on elements of philosophy, politics, psychology, cosmology, and ethics — areas that have profound, direct impact on how we live our lives, day to day.
He begins by framing “the problem of pain, in its simplest form” — the paradoxical idea that if we were to believe in a higher power, we would, on the one hand, have to believe that “God” wants all creatures to be happy and, being almighty, can make that wish manifest; on the other hand, we’d have to acknowledge that all creatures are not happy, which renders that god lacking in “either goodness, or power, or both.”
To be sure, Lewis’s own journey of spirituality was a convoluted one — he was raised in a religious family, became an atheist at fifteen, then slowly returned to Christianity under the influence of his friend and Oxford colleague J.R.R. Tolkien. But whatever his religious bent, Lewis possessed the rare gift of being able to examine his own beliefs critically and, in the process, to offer layered, timeless insight on eternal inquiries into spirituality and the material universe that resonate even with those of us who fall on the nonreligious end of the spectrum and side with Carl Sagan on matters of spirituality.
Saturday, August 30, 2014
How Scientists Captured the Brains of Amis and McEwan
Leo Robson in TNR (photo from Wikimedia Commons):
Flaubert’s prescription, set down in 1852, was never one likely to be followed by Martin Amis, the guy who said he didn’t want to “write a sentence that any guy could have written,” or his contemporary Ian McEwan, who from his earliest stories kept in such close contact with his benighted characters that you could virtually smell his breath on the page. Over the years, the desire to editorialise has proved increasingly hard to resist, with Amis engaging in lofty allocutions on human nature, many of them borrowed from his essays and memoirs (“It’s the death of others that kills you in the end”—Experience in 2000 and The Pregnant Widow in 2010), and McEwan adopting a stealthier approach, superficially more dramatic and yet no less tailored to communicating his personal opinions—on science, mores, ethics.
The turning point came in 1987, with Amis’s story collection Einstein’s Monsters and McEwan’s novel The Child in Time, the first books that each writer published after making the transition from enfant terrible to proud father. For all the books’ differences, a number of shared concerns emerged. Sex, once either casual or squalid, had become something else entirely—cataclysmic, even cosmic. Violence was no longer a pay-off or punchline but a thing to walk in fear of. Also indicative were these words from McEwan: “I am indebted to the following authors and books . . .” And these ones from Amis: “May I take the opportunity to discharge—or acknowledge—some debts? . . . I am grateful to Jonathan Schell, for ideas and for imagery.” Bedtime reading on subjects such as nuclear weapons, quantum mechanics and the Second World War had been delivering the kinds of shocks and thrills that the authors had been aiming for with stories about boys and girls mistreating one another in decaying city bedrooms. It was time to chase a grander frisson.
What distinguishes this move from, say, the more recent fashion for the essay novel—see the work of W. G. Sebald, Geoff Dyer, Teju Cole, Laurent Binet—is that Amis and McEwan have tried to accommodate facts and arguments into a prose that resists being candidly discursive. Ideas about sexual politics (Amis’s The Pregnant Widow, McEwan’s On Chesil Beach), science v. superstition (McEwan’s Enduring Love and Saturday), the new physics (The Child in Time, Amis’s Night Train) and political violence (Amis’s Time’s Arrow, Black Dogs and House of Meetings) are put into characters’ mouths (mostly by Amis) or wedged into a narrative structure (mostly by McEwan). The novels in this period that seem freest from these vices—among them, McEwan’s Atonement and Amis’s Yellow Dog—are beset, to varying degrees, by other problems; in McEwan’s case, maniacal control and, in Amis’s, frivolity and self-plagiarism.
How Ought We Die?
Derek Ayeh in The New Inquiry:
Imagine the dying patient today: sitting in the intensive care unit, hooked up to a ventilator that artificially pumps air into their lungs and a feeding tube because they can no longer eat on their own. The patient could be on several drugs or antibiotics, hooked up to devices that keep an eye on every bodily function, or even need hemodialysis because their kidneys have failed. All the while physicians scramble about doing everything in their power to keep this patient alive as long as they possibly can, even when they know that time is limited. Why? Because this person is a patient in a hospital and everyone knows you go to hospitals to get better, not to die.
Lydia Dugdale gives such a description in her Hastings Center Report article “The Art of Dying Well.” Dugdale claims that American society is ill equipped for the experience of dying. Instead, a physician’s focus is solely on perpetuating life as long as possible, and the family often desires the same thing. According to Dugdale, today’s focus on continued life doesn’t make dying any better than in mid-fourteenth-century Europe during the Bubonic plague epidemic. Then, the constant presence of death turned society’s attention to ensuring that the dying would receive a good death.
To aid laypeople in giving their loved ones good deaths, the Catholic Church created a text called Ars Moriendi, the Art of Dying, in 1415. It guided the layperson through the dying process by teaching the appropriate prayers and preparations, and by listing questions that the dying person should consider and answer about their life as a way of confirming that they had led a repentant and righteous life. But one could start considering what it meant to die well just by being in close proximity to the dying. By encountering the prescribed preparations, others involved were able to think critically about death and the inevitable end of their own lives. The Ars Moriendi in time expanded into its own genre, with numerous religious authorities reinterpreting what it meant to die well and promoting their own texts. Such guidebooks continued to be written for centuries afterward.
The original Ars Moriendi consisted of six separate sections, each serving to help either the dying individual or his or her close ones through prayer and guidance. Part two, for example, deals with five temptations that the dying person faces in death: lack of faith, despair, impatience, vainglory, and avarice. These temptations were devils that came to the dying man’s bedside and tried to tempt him towards hell. For despair, the devil says, “Wretched one, look at your sins which are so great that you would never be able to acquire grace.” But with each temptation comes a remedy, the words of a good angel meant to inspire and comfort. In this case, the angel reminds the dying of the sinners who confessed late and still received grace.
Parasites Practicing Mind Control
Carl Zimmer in the NYT (image by Jitender P. Dubey/U.S.D.A.):
An unassuming single-celled organism called Toxoplasma gondii is one of the most successful parasites on Earth, infecting an estimated 11 percent of Americans and perhaps half of all people worldwide. It’s just as prevalent in many other species of mammals and birds. In a recent study in Ohio, scientists found the parasite in three-quarters of the white-tailed deer they studied.
One reason for Toxoplasma’s success is its ability to manipulate its hosts. The parasite can influence their behavior, so much so that hosts can put themselves at risk of death. Scientists first discovered this strange mind control in the 1990s, but it’s been hard to figure out how they manage it. Now a new study suggests that Toxoplasma can turn its host’s genes on and off — and it’s possible other parasites use this strategy, too.
Toxoplasma manipulates its hosts to complete its life cycle. Although it can infect any mammal or bird, it can reproduce only inside of a cat. The parasites produce cysts that get passed out of the cat with its feces; once in the soil, the cysts infect new hosts.
Toxoplasma returns to cats via their prey. But a host like a rat has evolved to avoid cats as much as possible, taking evasive action from the very moment it smells feline odor.
Experiments on rats and mice have shown that Toxoplasma alters their response to cat smells. Many infected rodents lose their natural fear of the scent. Some even seem to be attracted to it.
Does It Help to Know History?
Adam Gopnik in The New Yorker:
About a year ago, I wrote about some attempts to explain why anyone would, or ought to, study English in college. The point, I thought, was not that studying English gives anyone some practical advantage on non-English majors, but that it enables us to enter, as equals, into a long existing, ongoing conversation. It isn’t productive in a tangible sense; it’s productive in a human sense. The action, whether rewarded or not, really is its own reward. The activity is the answer.
It might be worth asking similar questions about the value of studying, or at least, reading, history these days, since it is a subject that comes to mind many mornings on the op-ed page. Every writer, of every political flavor, has some neat historical analogy, or mini-lesson, with which to preface an argument for why we ought to bomb these guys or side with those guys against the guys we were bombing before. But the best argument for reading history is not that it will show us the right thing to do in one case or the other, but rather that it will show us why even doing the right thing rarely works out. The advantage of having a historical sense is not that it will lead you to some quarry of instructions, the way that Superman can regularly return to the Fortress of Solitude to get instructions from his dad, but that it will teach you that no such crystal cave exists. What history generally “teaches” is how hard it is for anyone to control it, including the people who think they’re making it.
Omar Khayyam: The Poet of Uncertainty
August 1939. You sail to Buenos Aires on the Chrobry as a cultural ambassador of Poland. Why say no to a little holiday on the government’s tab? Soon after arriving you sense that something isn’t right. You emerge from a welcome reception and your ears are “filled with newspaper cries: ‘Polonia, Polonia,’ most irksome indeed.” Before you’ve even had a chance to see the sights, world war breaks out. The natural, dutiful response is to pile back aboard with your countrymen and head home to Europe. You line up on the dock with your bags and wait. Then, something—a big something—makes you turn around. You leave your group and slip through the crowd and into the streets, never to see Poland again. So began the self-imposed exile of Witold Gombrowicz.
Trans-Atlantyk may be Gombrowicz’s most autobiographical novel but getting caught up in a comparison between the author and protagonist, though tempting, would be as silly as the plot, which starts reasonably enough then quickly descends into a giddy chaos of deranged office scenarios, pompous soirees, disgraced honor, and botched duels. The story is told in the style of a gawęda, or fireside chat. There is no direct equivalent in English.
Love Letters Between Christopher Isherwood and Don Bachardy
In 1956, Christopher Isherwood wrote from Cheshire, England, to his partner, the artist Don Bachardy. It is the first entry in a 14-year correspondence: a brief, eccentric document that mentions, in a short space, Alexander Korda’s memorial service, incest between two cats and Courbet’s “Diligence in the Snow.”
“I think about you all the time,” he writes in closing, “and about times I might have been kinder and more understanding, and I make many resolutions for the future — some of which I hope I’ll keep.”
Their relationship began on Valentine’s Day of 1953 and was in an important sense defined by its many periods of separation. The world of the letters lived inside the broader, coded world of midcentury homosexuality. Within it, Isherwood and Bachardy were able to express themselves through a more personal kind of code, in what would become the prolonged metaphor of the Animals.
a history of civilisation that peers into a post-human future
Already a bestseller in Hebrew, Sapiens mounts a fundamental challenge to the predominant contemporary view of humans and their place in the world. “Liberal humanism,” Harari points out, “is built on monotheist foundations.” Take away the soul and the privileged place in the world accorded to humans by a creator-god, and it becomes difficult to explain why humans are so special. The task becomes harder if we perform a thought-experiment based on the facts of human origins. We’ve grown used to thinking of ourselves as the only species of humans. But for most of its history Homo sapiens shared the planet with several humanoid species – the Neanderthals being only the best known. “The earth of a hundred millennia ago was walked by at least six different species of man”, writes Harari. Suppose some or all of these species had survived alongside ourselves up to the present. What would become of the cherished sense that we are set apart from the rest of the natural world by having some peculiar transcendent value? Human uniqueness, Harari concludes, is a myth spawned by an accident of evolution.
For most people today, history is a tale of human advance fuelled by increasing brainpower. For Harari, this is just another myth. There is no evidence that human beings have become more intelligent over time, and most of history’s largest changes have not involved an improvement in the quality of life. The agricultural revolution is touted as a great advance; but “for the average person, the disadvantages probably outweighed the advantages”. For most human beings, the shift to farming was not a choice but a trap.
Pico Iyer in The New York Times:
“I don’t summon anything up,” protests Holly Sykes, the down-to-earth protagonist of “The Bone Clocks,” David Mitchell’s latest head-spinning flight into other dimensions. “Voices just . . . nab me.” She’s trying to explain to a skeptical, curmudgeonly English writer how she occasionally falls out of time and sees what’s going to happen next. Embarrassed about her gift — she’s just a regular daughter of the owner of the Captain Marlow pub in Gravesend, Kent — and reluctant to credit such way-out ideas as precognition, she goes on, “Oh, Christ, I can’t avoid the terminology, however crappy it sounds: I was channeling some sentience that was lingering in the fabric of that place.”
There you have it: a perfectly matter-of-fact, unvarnished evocation of how regular folks speak, married to a take-no-prisoners fascination with all that we can’t explain. Coming from a writer himself famous for his gift for channeling voices (not least of pub-owners’ daughters) and for his preternatural talent for seeing things, in the world, above it and all around it, the admission gives off a flash of unexpected self-revelation. (One recalls how the last novel Hilary Mantel published before her uncannily mediumistic “Wolf Hall” was about a woman full of demons who contacts the other world for a living.) “The Bone Clocks” — a perfect title for a novelist who’s always close to the soil and orbiting the heavens in the same breath — is a typically maximalist many-storied construction: In one of its manifold secret corners, it sounds as if a sublimely original writer is wondering how much “writing’s a pathology” (as one of his characters puts it) and whether it’s possible to conjure up time-traveling characters and scenes from the distant past and future, yet not believe in magic.
Name-to-Know: Régime des Fleurs
Lauren Sherman in The Wall Street Journal:
THE SELF-TAUGHT NOSES behind new perfume line Régime des Fleurs refer to themselves as "lifetime fragrance geeks." Ex-fashion stylist Ezra Woods, 30, was born into a family of florists, and Alia Raza, 36, formerly a filmmaker and video artist, often found herself inspired by perfume while dreaming up concepts for her work. The longtime friends both had a habit of obsessively researching perfumes and raw materials. About a year ago, they finally decided it was time to make their own. This spring, they introduced a range of six unique, beautifully bottled fragrances with highly romantic descriptions. Turquoise, for instance, a fruity floral, is said to evoke "a teenage Marie Antoinette gone abroad to India."
...Still, Mr. Woods and Ms. Raza are in the midst of developing a more "conventional, scalable" secondary collection, which will bring them closer—but not too close—to the mass market. "We're really interested in other product categories, different types of personal care, apparel, home—even edible products," said Mr. Woods. "We love the idea of making everything floral," added Ms. Raza. "Why is there not jasmine chewing gum, and why don't I have gardenia toothpaste?"
More here. (Disclosure: Alia Raza is our niece)
Friday, August 29, 2014
The pathos of Stefan Zweig and his overdue revival
Adam Kirsch in The New Republic:
The careers of Stefan Zweig and Walter Benjamin offer a contrast so perfect as to become almost a parable. The two writers were contemporaries—Benjamin was born in 1892, Zweig in 1881—and both operated in the same German literary ecosystem, though Benjamin was from Berlin and Zweig from Vienna. Both reached their height of productivity and reputation during the Weimar Republic, and as Jews both were forbidden from publishing in Germany once Hitler took power. And both ended darkly as suicides: Benjamin took his life in 1940 while trying to flee from France to Spain, and Zweig died a year and a half later in Brazil, where he sought refuge after unhappy sojourns in England and America.
Yet the similarities end with their biographies. As writers, they could not have been more different, and their literary destinies were exact opposites. Zweig flourished during his lifetime, enjoying huge sales of his psychologically charged novels and his popular historical biographies. Born with a fortune—his father was a textile manufacturer in Bohemia—he earned another fortune through his books, carrying into literature the bourgeois discipline and regularity that he inherited from his businessman ancestors.
The return of radical empiricism
Massimo Pigliucci in Scientia Salon:
Recently, here at Scientia Salon I published three essays — two by Robert Nola and one by Coel Hellier — that epitomize radical empiricism, more so in Hellier’s than in Nola’s case, I might add. Interestingly, Nola is a philosopher and Hellier a scientist, and indeed it is known by now that “scientism” — which is the attitude that results from radical empiricism — is being championed by a number of scientists (e.g., Lawrence Krauss, Neil deGrasse Tyson) and philosophers (James Ladyman and Don Ross, Alex Rosenberg).
Clearly, I find myself puzzled and bewildered by this state of affairs. As someone who has practiced science for a quarter century and then has gone back to graduate school to switch to philosophy full time I have a rather unusual background that, I think, makes me appreciate where radical empiricists come from, and yet which also precludes me from buying into their simplistic worldview.
In the remainder of this essay, then, I will try to do the following:
- Sketch out what I see are the logical moves attempted by radical empiricists;
- Show why they don’t work;
- Explain why this is more than an academic debate, and certainly more than “just semantics.”
Friends of Israel
Connie Bruck in The New Yorker:
On July 23rd, officials of the American Israel Public Affairs Committee—the powerful lobbying group known as AIPAC—gathered in a conference room at the Capitol for a closed meeting with a dozen Democratic senators. The agenda of the meeting, which was attended by other Jewish leaders as well, was the war in the Gaza Strip. In the century-long conflict between the Israelis and the Palestinians, the previous two weeks had been particularly harrowing. In Israeli towns and cities, families heard sirens warning of incoming rockets and raced to shelters. In Gaza, there were scenes of utter devastation, with hundreds of Palestinian children dead from bombing and mortar fire. The Israeli government claimed that it had taken extraordinary measures to minimize civilian casualties, but the United Nations was launching an inquiry into possible war crimes. Even before the fighting escalated, the United States, Israel’s closest ally, had made little secret of its frustration with the government of Prime Minister Benjamin Netanyahu. “How will it have peace if it is unwilling to delineate a border, end the occupation, and allow for Palestinian sovereignty, security, and dignity?” Philip Gordon, the White House coördinator for the Middle East, said in early July. “It cannot maintain military control of another people indefinitely. Doing so is not only wrong but a recipe for resentment and recurring instability.” Although the Administration repeatedly reaffirmed its support for Israel, it was clearly uncomfortable with the scale of Israel’s aggression. AIPAC did not share this unease; it endorsed a Senate resolution in support of Israel’s “right to defend its citizens,” which had seventy-nine co-sponsors and passed without a word of dissent.
An Evening with Billy Collins
from church bells to dumbbells
In an article in the Spectator in July 1711, the eponymous character Mr. Spectator—as written by Joseph Addison, one of the magazine’s founders—described his exercise routine. When in town, and therefore not able to go out riding, “I exercise myself an Hour every Morning upon a dumb Bell that is placed in a Corner of my Room, and pleases me the more because it does every thing I require of it in the most profound Silence.” We know dumbbells now as handy at-home pieces of gym equipment—free weights that have been around, in some form, at least since ancient Greek athletes used halteres to increase the length of their long jumps. But the dumbbell that Mr. Spectator refers to, and from which the heavy gym weights borrow their name, is something different. An illustration of a similar piece of equipment, published in the Gentleman’s Magazine in 1746, shows a wooden contraption in which two crossed bars with weights on the ends are mounted on an axle, around which is wound a length of rope. This mechanism would be elevated within a room, or placed in a garret, with the rope hanging down for a person standing below to pull. It mimics the apparatus used for ringing church bells, with the bell itself replaced by two weighted bars—it’s these that resemble the dumbbells of today.