Monday, September 01, 2014
by Brooks Riley
Jean-Luc Godard once inscribed a picture to me with these words: "This is the surface, Brooks, and that's why it's deep." At the time, I was skimming the surface, darting from one life experience to another without stopping to sink down or dive deeper—or give his jeu de mots much thought. While I always relished his love of word play in both English and French, this time I was suspicious of what sounded to me like a facile paradox.
As a man of cinema, Godard must first have thought of that great cinematic paradox, the flat screen and the depth of field that miraculously occurs when a film is projected onto it. In the photograph, he stands in front of a blank wall, very like the blank screen he would soon use for a shadow play to the opening bars of Mozart's Requiem Mass in D Minor, defying the double-entendre of flatness and cinematic depth with a chiaroscuro ballet in front of the screen—a crane operator and his crane moving the camera and cameraman slowly up, over and then down again, a graceful pas de deux silhouetted against the flat white surface—a two-dimensional triumph.
How appropriate that this holy moment was filmed on the soundstage where the glitzy streets of Las Vegas had been built for Francis Ford Coppola's One from the Heart. The crew, borrowed for a Saturday from that bigger film, consisted of Vittorio Storaro and his Italian crew alongside the hardboiled Hollywood mainstream pros who had seen it all—or thought they had. As Godard piped Mozart over the loudspeakers and the camera rolled, a cathedral hush permeated the vast interior of the soundstage as the middle-aged, elegant crane operator began to move in front of the screen with the assurance of a dancer, or a man who knows his job. When the music faded out at the end, the hush prevailed. No one, not the crew, not the visitors, not the cast, had ever seen anything like it. It was surface magic, deep beyond words. Now I knew that his inscription made sense.
(As a 9-year-old with too little movie experience, I could easily have retorted, "This is the surface, Jean-Luc, and it's a grande illusion," as I waited in vain for Marlon Brando to emerge from the back door of my local movie theatre after a showing of Desiree.)
Too often surface is a euphemism for superficial. But living on the surface makes it easier to be ubiquitous. The assumption that one has to dig or dive for treasures is not necessarily reliable. Analogies can also be arrived at by moving far afield over a surface, like the Gerridae, those bugs that walk on water, always finding what they need on top, not deep down. Knowledge is like that body of water: You can dive down into it, but to see clearly, you have to rise to the surface.
everything unknown returns to life
upon awakening in my bed supine in light
sun bequeathed day ignites a fire beneath
my blankets burn mind’s the filament of a lamp
upon awakening stupidity tumbles down a sheer of chance
small thoughts plunge they start an avalanche
the ground gives way beneath my feet
upon awakening where am I?
light ricochets from every wall
blind see deaf hear motion stills
minutiae interlock upon awakening
east and west do not collide they mesh
upon awakening bias stands upon its head
draining deadliness, its river Cocytus circles a sewer
upon awakening states recede decline abjure
the babble of all the varied words of god unite
upon awakening they steep in a cauldron of love
the clock’s a joke upon awakening
doors swing wide though no one knocks
upon awakening each ajar as each unlocks
windows blast from jambs upon awakening
lions lie with lambs every noise becomes a note
upon awakening every weight begins to float
even cacophony sings upon awakening
nothing is ever learned again by rote
upon awakening everything becomes
the final sacrificial goat
by Jim Culleny
by Charlie Huenemann
Now that every click we make is watched, archived, and meta-data-fied, it is time to start thinking seriously about a personal ethics of internet consumption. This goes beyond mere paranoia and worry over what others might think of what you're taking interest in. Each click is in fact a tiny vote, proclaiming to content providers that you support this sort of thing, and hope to see more of it in the future. And - as always! - we should vote responsibly.
It's too bad, really. Gone are the days when, with the adjustment of a couple of browser settings ("Privacy - on!"), no one could ever know that we were clicking away at all sorts of embarrassments, from naked people to celebrity gossip to stuff that might accurately be labeled as very nasty. It was a seemingly harmless way to let that little id go crazy and graze its fill. Content providers happily supplied the forbidden fruits and we gobbled them up.
Now the jig is up. Privacy settings are as effective as the dark vs. light lever on a toaster. But more significant than any embarrassment we may feel is the fact that our clicking is factored into incredibly effective algorithms that steer more of the same our way. And as more of us click on crap, more and more similar crap is generated for consumption, and the internet gradually expands into wall-to-wall crap.
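A toy sketch of my own (nothing from Huenemann's essay; the click rates are invented) shows how even a modest per-click edge for crap compounds once providers simply mirror the click split:

    # Minimal feedback-loop sketch: the provider allocates next period's
    # content in proportion to this period's clicks.
    shares = {"crap": 0.5, "good stuff": 0.5}      # initial content mix
    click_rate = {"crap": 1.2, "good stuff": 1.0}  # assume crap draws 20% more clicks per unit offered

    for period in range(10):
        clicks = {k: shares[k] * click_rate[k] for k in shares}
        total = sum(clicks.values())
        shares = {k: v / total for k, v in clicks.items()}

    print(shares)  # crap ends up with roughly 86% of the mix

Ten rounds of a 20% edge are enough to approach wall-to-wall crap; no one has to prefer crap strongly, they only have to click on it slightly more often.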
Immanuel Kant's perspective on ethics might suggest to us a Categorical Internet Imperative: Click only on those links that you can at the same time will all your fellow citizens to click on. I don't know about you, but many times I feel that if everybody were just clicking on what I'm clicking on, our culture would be racing toward - well, to pretty much where we are these days, I guess: a few reliable sources of insight and information doing their best to compete with freak shows, bear-baitings, and adorable kittens attacking paper bags.
Not to say we have to be Prussian prudes, of course. Insight and information can come from surprising places, and we surely need clowns to tip us off balance and question ourselves. And there's nothing wrong with just plain old fun (as if anyone needs to be told that). But once we begin seeing our clicks as tiny votes, we begin to think about what sort of sustenance we are channeling into our own minds, and what sort of diet we are recommending to our neighbors. Let us dream a little: if all the clickers out there aimed more consistently toward "good stuff", content providers would be competing to produce more and more of that stuff. Gradually, one hopes, we would witness the ebbing of the crap, and the waxing of a gloriously informed and inspired culture.
(Okay, that's crazy talk. But even a small shift in that direction would be to the good.)
by Emrys Westacott
I dislike tipping. That is, I dislike the whole tipping system. As a card-carrying tightwad I can't honestly say I enjoy leaving tips, but that's not my point. My point is about the general practice, the social institution.
What set me thinking about this was a slightly unpleasant experience I had recently in a café in Quebec City. My wife and I had finished breakfast and after quite a long delay the waitress brought the bill. In Canada these days, as in Europe, it's normal for customers to pay using a portable credit card reader that is brought to the table. These reportedly reduce credit card fraud by eliminating the opportunity for dishonest wait staff to "skim" the credit card information while out of sight of the card owner. The bill is displayed on a screen along with various tipping options. These vary according to the machine, but a typical range of options is: 10%, 15%, 20%, custom tip, no tip. Usually I tip 15%, but on this occasion, partly because of the long delay in getting the bill, and partly because I felt the waitress had from first to last been unpleasantly condescending, I tapped the 10% button. She was looking over my shoulder (another thing I had against her), and immediately asked me if I was dissatisfied in any way with the service. Being taken by surprise, and also being a wimp, I answered "No." She then told me that in Quebec it was normal to tip at least 15%. I said, "Oh, I didn't know," and left the tip at 10%. If I'd been less of a wimp I would have explained my dissatisfaction and complained about her looking over my shoulder. Then again, if I'd been even wimpier I would have adjusted the tip according to her recommendation.
Tipping is a peculiar institution. Whether you leave a tip is optional, and there are many circumstances where you would suffer no adverse consequences (other than possible feelings of guilt) should you not tip: for instance, when you check out of a hotel, alight from a taxi, or eat at a restaurant you are unlikely to revisit. If we were nothing but little carbon-based bundles of rational self-interest, as some economists prone to abstraction have at times assumed, tipping would be much less common and might even never have become an established custom. In some places—Japan, Finland, South Korea, for instance—it isn't. And even in places like the US, where tipping is widespread, the conventions aren't especially consistent. Many people leave a tip for the person who cleans up their hotel room, but not for the person at the reception desk who checks them in and out. They add a tip for their hairdresser, but not for their dental hygienist.
Mohau Modisakeng. Inzilo (Film Still), 2013.
Single channel video installation, duration: 4 min 57 sec
by Michael Lopresto
Chris Mortensen is Emeritus Professor of Philosophy at the University of Adelaide. He thinks that the inconsistent hasn't been taken seriously enough in Western philosophy, that the masterpieces of Reutersvärd rub our noses in the inconsistent, and that Western philosophy and Buddhism are complementary. He's the author of Inconsistent Mathematics (1995) and Inconsistent Geometry (2010).
Firstly, what made you get into philosophy?
I think I was always interested in it, really—since high school, anyway. I was diverted for a while into maths and physics during my first couple of years at university, before coming back to philosophy. I realised that if what you want to do is what you like doing, then philosophy is the thing to do. I still kept up with the maths subjects, but philosophy was more fun, and I was better at it.
Had there always been a lot of overlap between your interest in philosophy and your interest in maths?
There was, but one thing I noticed was that my logic lecturers would always motivate what they were doing. They would tell you why this was interesting, why there was a debate here. My maths lecturers, on the other hand, tended to be very pure and syntactical, leaving aside motivation much of the time. Some logicians are very pure – some of my best friends are very pure. But perhaps it is possible to be a bit too pure and syntactical in philosophy; it depends on what you are trying to achieve, I suppose. Just pop down to the library and have a look at Russell and Whitehead's Principia Mathematica. It doesn't contain too much English (even though Russell excelled as a philosopher, as opposed to a logician).
by Carl Pierer
In his Groundwork of the Metaphysics of Morals, Kant states that an action has moral worth if and only if it is done from duty. Kant argues for his position by showing that morally right actions done from motives other than duty lack moral worth. He gives two examples:
The Shopkeeper always gives correct change. She does not care whether this is morally right or not. She is only faithful to her customers because it ensures she makes a profit. Her sole motivation is self-interest, not duty.
The Philanthropist is kind out of natural inclination. He simply feels like being a morally good person. He too is not concerned with morality; rather, he behaves correctly because that is what he wants to do. He lacks a sense of duty.
It might be said that both actions lack moral worth since the agents are not concerned with morality at all. They do not care whether their actions are moral. It is a happy coincidence that they are. Therefore, the agents do not deserve any moral credit. However, this argument does not prove Kant's claim conclusively. It only shows that actions entirely lacking motivation from duty are morally worthless. What if we understood Kant to mean that an action is only morally worthy if duty is the sole motivation?
Schiller's Joke expresses an intuitive resistance to this reasoning:
"The first speaker says: Gladly I serve my friends, but alas I do it with pleasure. Hence I am plagued with doubts that I am not a virtuous person.
And the reply is: Sure, your only resource is to try to despise them entirely. And then with aversion to do what your duty enjoins you."
Intuitively, this sounds wrong. We are under no obligation to despise our friends. We can confidently like them and our friendly acts would still be morally worthy. This seems to refute Kant's claim that only actions done from duty have moral worth.
However, this objection differs from Kant's examples. Both the shopkeeper and the philanthropist lack motivation from duty entirely; all that drives them to do the morally right thing is self-interest or natural character. Yet the person who serves his friends with pleasure can at the same time still believe in his duty to serve them. So the case is this: in addition to some non-duty-directed motivation, the agent also believes it is his duty to behave morally. Do such actions have moral worth?
by Mathangi Krishnamurthy
As a rule, I am wary of art installations. I am never sure if the form they take bears any relation to the political content they claim to espouse. Also, as a rule, I visit modern art exhibitions for their verbosity. The words speak to me of artistic intent that always races ahead, far in excess of its signifying objects. The intent itself I find to be of such beauty, nudging me with its faint hints of revolution and radical joy. Of course, it does worry me that I have to read the labels of things before I can calculate the impact they will have on my fervor and/or joy.
However, on the lowest rung of my pleasure-affording hierarchy lie modern art installations. I remember once visiting the Museum of Modern Art in New York City and staring hard at a diagonal tube light mounted up on a wall. I also metaphorically bonked myself on the head for "Artist" not making the top three on the list of possibilities suitable to my eight-year-old self's artistic ability, or lack thereof.
As I walked into Ai Weiwei's exhibition "Evidence" I thought to myself that I should maintain a healthy cynicism and a suitably controlled set of expectations about what a set of art installations ought to be able to evoke. In the late afternoon of a confusing Berlin summer, I got off the bus already flush with the pleasure of a scarily efficient public transport system, and walked down the lane to the spot on my Google map that said "Martin-Gropius-Bau". The Bau is a startlingly beautiful building, all neo-Renaissance in its pastiche of dome, entryway columns, curlicued windows and shadowy moldings. Something already felt right. The sun shone bright and the clouds filtered out its strongest rays. I was suitably warm and the light was suitably right. Ai Weiwei in his entire grandfatherly wallpapered aura stared straight ahead and betrayed no amusement at my sudden and unexpected enthusiasm.
Across eighteen rooms of the Bau were spread all the works being curated under the title "Evidence". Playing both with what "discovery" means in police and detective records and with the concept of empirical "evidence" as it relates to crimes contemporary and historical, the main items of this exhibit comprise found, made, and remade artifacts—touchy, feely, gritty physical objects. Most of them display familiar hints of the Ai Weiwei oeuvre. They offer confusing and paradoxical cues by playing with the material they are composed of, they are parts of a much larger story to which they bear evidence, and they are often directly related to aspects of the artist's life.
by Thomas Rodham Wells
Governments should tax the production and consumption of junk entertainment like Angry Birds and The Bachelor to correct the market failures that encourage their overconsumption. As with tobacco and alcohol, the point of such sin taxes is not to prevent people from consuming things that are bad for them if they really want to. They are not like bans. Rather, such taxes communicate to consumers the real but opaque long-term costs to themselves of consuming such products so that they can better manage their choices about how much of their lives to give up to them.
At the heart of this proposal is the fact that high art – i.e. real art – like Booker Prize-winning novels and Beethoven is objectively superior to junk entertainment like Candy Crush and most reality TV. (For now, let us abstract from 'middle-brow' entertainment like our new Golden Age of TV.) Some egalitarians of taste dispute the existence of any objective distinction in quality between pushpin and Pushkin and argue that the value of anything is merely the subjective value people put on it. I will humour them. The case for the objective superiority of art can be made entirely within a narrowly utilitarian – 'economistic' – account of subjective value: in the long run, consuming junk entertainment is less pleasurable than consuming art.
At best, junk entertainment passes the time and brings us closer to death in a relatively painless way. At worst, passing a lot of time in this way makes us stupid by atrophying our abilities to appreciate anything more difficult. Hence the pejorative term ‘junk', for there is a strong resemblance between this sort of mental activity and eating cheeseburgers: the more cheeseburgers we eat, the less we enjoy each new one, and the fatter and more unhealthy we become. In contrast, art has the capacity not only to fill up the limited time we have in our lives, but in the process also to educate us in the enjoyment of its intellectual depths so that it produces more delight in us the more of it we consume. In economics terminology, the consumption of junk entertainment exhibits diminishing marginal utility and reduces our human capital while the consumption of art exhibits the opposite. Art is special.
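To see the terminology in miniature, here is a toy sketch of my own (not from Wells's essay; the utility functions are assumptions chosen purely for their shapes): a concave curve yields junk's shrinking increments, a convex one yields art's growing returns.

    import math

    def junk_utility(hours):
        # assumed concave: each extra hour adds less (diminishing marginal utility)
        return 10 * math.log(1 + hours)

    def art_utility(hours):
        # assumed convex: appreciation compounds, so each extra hour adds more
        return 0.5 * hours ** 1.5

    for h in (1, 10, 100):
        mu_junk = junk_utility(h + 1) - junk_utility(h)  # value of one more hour
        mu_art = art_utility(h + 1) - art_utility(h)
        print(f"hour {h:>3}: junk adds {mu_junk:.2f}, art adds {mu_art:.2f}")

On these assumed curves the hundredth hour of junk is worth about a fortieth of the first, while the hundredth hour of art is worth several times its first; that asymmetry is all the 'art is special' claim needs in economic dress.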
Sunday, August 31, 2014
Linda Geddes in New Scientist:
Metabolic processes that underpin life on Earth have arisen spontaneously outside of cells. The serendipitous finding that metabolism – the cascade of reactions in all cells that provides them with the raw materials they need to survive – can happen in such simple conditions provides fresh insights into how the first life formed. It also suggests that the complex processes needed for life may have surprisingly humble origins.
"People have said that these pathways look so complex they couldn't form by environmental chemistry alone," says Markus Ralser at the University of Cambridge who supervised the research.
But his findings suggest that many of these reactions could have occurred spontaneously in Earth's early oceans, catalysed by metal ions rather than the enzymes that drive them in cells today.
The origin of metabolism is a major gap in our understanding of the emergence of life. "If you look at many different organisms from around the world, this network of reactions always looks very similar, suggesting that it must have come into place very early on in evolution, but no one knew precisely when or how," says Ralser.
One theory is that RNA was the first building block of life because it helps to produce the enzymes that could catalyse complex sequences of reactions. Another possibility is that metabolism came first, perhaps even generating the molecules needed to make RNA, with cells later incorporating these processes – but there was little evidence to support this.
"This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA," Ralser says.
Michael Saler profiles Marcelo Gleiser, who "wants to heal the rift between humanists and scientists by deflating scientific dreams of establishing final truths," in The Nation:
The battle lines became firmly drawn in the years following World War II. In Science and Human Values (1956), Jacob Bronowski attempted to overcome the sullen suspicions between humanists and scientists, each now condemning the other for the horrifying misuse of technology during the conflict:
Those whose education and perhaps tastes have confined them to the humanities protest that the scientists alone are to blame, for plainly no mandarin ever made a bomb or an industry. The scientists say, with equal contempt, that the Greek scholars and the earnest explorers of cave paintings do well to wash their hands of blame; but what in fact are they doing to help direct the society whose ills grow more often from inaction than from error?
Bronowski was a published poet and biographer of William Blake as well as a mathematician; he knew that artists and scientists had different aims and methods. Yet he also attested that both engaged in imaginative explorations of the unities underlying the human and natural worlds.
If Bronowski’s stress on the imagination as the foundation of both the arts and sciences had prevailed, Gleiser would not need to remind his readers that Newton and Einstein shared a similar “belief in the creative process.” However, while Bronowski meant to heal the breach by exposing it, he inadvertently encouraged others to expand it into an unbridgeable gulf, a quagmire of stalemate and trench warfare. His friend C.P. Snow battened on the division in lectures that were subsequently published under the meme-friendly title The Two Cultures and the Scientific Revolution (1959). Snow acknowledged that scientists could be philistine about the humanities, but his ire was directed at the humanists: they composed the governing establishment, their willful ignorance about science impeding policies that could help millions worldwide. As the historian Guy Ortolano has shown in The Two Cultures Controversy (2009), Snow tactlessly insinuated that the literary intelligentsia’s delight in irrational modernism rather than rational science was partly responsible for the Holocaust: “Didn’t the influence of all they represent bring Auschwitz that much closer?” Such ad hominem attacks raised the hackles of the literary critic F.R. Leavis, himself a master of the art. His response, Two Cultures? The Significance of C.P. Snow (1962), proved only that humanists could be just as intemperate as Snow implied. (One critic, appalled by Leavis’s vituperation, dubbed him “the Himmler of Literature.”)
Richard Marshall interviews Erica Benner in 3:AM Magazine:
3:AM: You've written extensively about Machiavelli. Your take is revisionary, isn't it, in that you say he's not what we've been led to suppose he is – the quintessence of amoral realpolitik. He's an individualist deontological ethicist, and this is the foundation for a political ethics. So how come few people recognized the irony?
EB: Lots of early readers did. Up to the second half of the 18th century some of Machiavelli's most intelligent readers – philosophers like Francis Bacon and Spinoza and Rousseau – read him as a thinker who wanted to uphold high moral standards. They thought he wrote ironically to expose the cynical methods politicians use to seize power, while only seeming to recommend them. Which doesn't mean they thought he was writing pure satire, a send-up of political corruption. He had constructive aims too: to train people to see through plausible-sounding excuses and good appearances in politics, and think harder about the spiralling consequences of actions that seem good at the time.
Even his worst critics doubted that Machiavelli could be taken at face value. In one of the first reactions to the Prince on record, Cardinal Reginald Pole declares that its devil’s-spawn author can’t seriously be recommending deception and oath-breaking and the like, since any prince who does these things will make swarms of enemies and self-destruct. To Pole, what later generations would call Machiavellian realism looked utterly unrealistic. Then during the Napoleonic Wars, amoral realist readings started to drive out rival interpretations. German philosophers like Fichte and Hegel invoked Machiavelli as an early champion of national unification, if necessary by means of blood and iron. Italian nationalists of the left and right soon followed. Since then, almost everyone has read Machiavelli through some sort of national-ends-justify-amoral-means prism. Some scholars stress his otherwise moral republicanism. Others insist that he was indifferent to any moral good other than that of personal or collective survival. But it’s become very, very hard to question the ‘realpolitik in the last instance’ reading.
Maarten Boudry and Massimo Pigliucci discuss the difference between science and pseudoscience over at Rationally Speaking:
In our first mini-interview episode Massimo sits down to chat with his colleague Maarten Boudry, a philosopher of science from the University of Ghent in Belgium. Maarten recently co-edited the volume The Philosophy of Pseudoscience (University of Chicago Press) with Massimo, and the two chat about the difference between science and pseudoscience and why it is an important topic not just in philosophy circles, but in the broader public arena as well.
Also see the bloggingheads discussion here.
Matthew Lieberman in Edge:
I'll tell you about my new favorite idea, which like all new favorite ideas, is really an old idea. This one, from the 1960s, was used only in a couple of studies. It's called "latitude of acceptance". If I want to persuade you, what I need to do is pitch my arguments so that they're in the range of a bubble around your current belief; it's not too far from your current belief, but it's within this bubble. If your belief is that you're really, really anti-guns, let's say, and I want to move you a bit, if I come along and say, "here's the pro-gun position," you're actually going to move further away. Okay? It's outside the bubble of things that I can consider as reasonable.
We all have these latitudes around our beliefs, our values, our attitudes, which teams are ok to root for, and so on, and these bubbles move. They flex. When you're drunk, or when you've had a good meal, or when you're with people you care about versus strangers, these bubbles flex and move in different ways. Getting two groups to work together is about trying to get them to a place where their bubbles overlap, not their ideas, not their beliefs, but the bubbles that surround their ideas. Once you do that, you don't try to get them to go to the other position, you try to get them to see there's some common ground that you don't share, but that you think would not be a crazy position to hold.
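Lieberman's mechanism lends itself to a toy update rule (my sketch, not his; the scale, threshold, and step sizes are invented): a message inside the bubble pulls belief toward it, while one outside pushes belief the other way.

    def receive(belief, message, latitude=0.3, step=0.5):
        # beliefs and messages live on a -1..1 scale; 'latitude' is the bubble radius
        gap = message - belief
        if abs(gap) <= latitude:
            return belief + step * gap  # assimilation: move toward the message
        return belief - step * latitude * (1 if gap > 0 else -1)  # boomerang: move away

    belief = -0.8                 # strongly anti-gun, say
    print(receive(belief, 0.9))   # far-off pro-gun pitch: belief hardens to -0.95
    print(receive(belief, -0.6))  # pitch inside the bubble: belief eases to -0.7

The point of the sketch is only the sign flip at the bubble's edge; getting two groups to work together, in these terms, means nudging each side with messages that stay inside the other's latitude.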
Lisa Appignanesi in New Republic:
Way back in 1977 the prescient French philosopher/historian Michel Foucault pointed out that in our societies, “the child is more individualised than the adult, the patient more than the healthy man, the madman and the delinquent more than the normal and the non-delinquent.” Whatever our concurrent desire for a painless sanity, normality or, as it is now known, neuro-typicality, having a “secret madness” can help constitute what makes us individual. This may be one of the clues to the alarming rise of mental illness in recent decades. Foucault might not have been surprised that the biggest success story in the pharmaceutical world since the advent of antibiotics has been the growth of antidepressants in the form of selective serotonin reuptake inhibitors (SSRIs)—those much-hailed little pills that helped to bring about the very illness for which they are the touted cure. After a rocky start and unsuccessful clinical trials, SSRIs took off in the 1990s. By 2002 about 25 million Americans were taking them. Now, although they have been exposed as no more effective than placebos, the figure is closer to 40 million. The situation is no different in the UK, where one in four of us will, it is said, succumb to depression and anxiety at least once in our lifetime—though the more usual pattern is for these to become chronic conditions.
In the west we live in a time when we look to medics (rather than, say, politicians, priests, artists or philosophers) for solutions to most of our life and death problems. It is clear that the NHS in Britain and the rise of scientific medicine in the west count among the greatest achievements of the postwar years. But can doctors really be the providers of all our goods? Do they have the wherewithal to direct the mind and the emotions, do they hold the keys to sex, reproduction and death, besides healing our diseases?
Louis Riel's Address to the Jury
Gentlemen of the Jury:
I cannot speak
English well, but am trying
because most here
When I came to the North West
I found the Indians suffering
I found the half-breeds
eating the rotten pork
of the Hudson Bay Company
and the whites
We have made petitions I
have made petitions
We have taken time; we have tried
And I have done my duty.
My words are
by Kim Morrissey
from Batoche (Regina: Coteau Books, 1989)
Maria Popova in Brain Pickings:
If the universe operates by fixed physical laws, what does it mean for us to have free will? That’s what C.S. Lewis considers with an elegant sidewise gleam in an essay titled “Divine Omnipotence” from his altogether fascinating 1940 book The Problem of Pain (public library) — a scintillating examination of the concept of free will in a material universe and why suffering is not only a natural but an essential part of the human experience. Though explored through the lens of the contradictions and impossibilities of belief, the questions Lewis raises touch on elements of philosophy, politics, psychology, cosmology, and ethics — areas that have profound, direct impact on how we live our lives, day to day.
He begins by framing “the problem of pain, in its simplest form” — the paradoxical idea that if we were to believe in a higher power, we would, on the one hand, have to believe that “God” wants all creatures to be happy and, being almighty, can make that wish manifest; on the other hand, we’d have to acknowledge that all creatures are not happy, which renders that god lacking in “either goodness, or power, or both.”
To be sure, Lewis’s own journey of spirituality was a convoluted one — he was raised in a religious family, became an atheist at fifteen, then slowly returned to Christianity under the influence of his friend and Oxford colleague J.R.R. Tolkien. But whatever his religious bent, Lewis possessed the rare gift of being able to examine his own beliefs critically and, in the process, to offer layered, timeless insight on eternal inquiries into spirituality and the material universe that resonate even with those of us who fall on the nonreligious end of the spectrum and side with Carl Sagan on matters of spirituality.
Saturday, August 30, 2014
Leo Robson in TNR (photo from Wikimedia Commons):
Flaubert’s prescription, set down in 1852, was never one likely to be followed by Martin Amis, the guy who said he didn’t want to “write a sentence that any guy could have written,” or his contemporary Ian McEwan, who from his earliest stories kept in such close contact with his benighted characters that you could virtually smell his breath on the page. Over the years, the desire to editorialise has proved increasingly hard to resist, with Amis engaging in lofty allocutions on human nature, many of them borrowed from his essays and memoirs (“It’s the death of others that kills you in the end”—Experience in 2000 and The Pregnant Widow in 2010), and McEwan adopting a stealthier approach, superficially more dramatic and yet no less tailored to communicating his personal opinions—on science, mores, ethics.
The turning point came in 1987, with Amis’s story collection Einstein’s Monsters and McEwan’s novel The Child in Time, the first books that each writer published after making the transition from enfant terrible to proud father. For all the books’ differences, a number of shared concerns emerged. Sex, once either casual or squalid, had become something else entirely—cataclysmic, even cosmic. Violence was no longer a pay-off or punchline but a thing to walk in fear of. Also indicative were these words from McEwan: “I am indebted to the following authors and books . . .” And these ones from Amis: “May I take the opportunity to discharge—or acknowledge—some debts? . . . I am grateful to Jonathan Schell, for ideas and for imagery.” Bedtime reading on subjects such as nuclear weapons, quantum mechanics and the Second World War had been delivering the kinds of shocks and thrills that the authors had been aiming for with stories about boys and girls mistreating one another in decaying city bedrooms. It was time to chase a grander frisson.
What distinguishes this move from, say, the more recent fashion for the essay novel—see the work of W. G. Sebald, Geoff Dyer, Teju Cole, Laurent Binet—is that Amis and McEwan have tried to accommodate facts and arguments into a prose that resists being candidly discursive. Ideas about sexual politics (Amis’s The Pregnant Widow, McEwan’s On Chesil Beach), science v. superstition (McEwan’s Enduring Love and Saturday), the new physics (The Child in Time, Amis’s Night Train) and political violence (Amis’s Time’s Arrow, Black Dogs and House of Meetings) are put into characters’ mouths (mostly by Amis) or wedged into a narrative structure (mostly by McEwan). The novels in this period that seem freest from these vices—among them, McEwan’s Atonement and Amis’s Yellow Dog—are beset, to varying degrees, by other problems; in McEwan’s case, maniacal control and, in Amis’s, frivolity and self-plagiarism.
Derek Ayeh in The New Inquiry:
Imagine the dying patient today: sitting in the intensive care unit, hooked up to a ventilator that breathes for them and a feeding tube because they can no longer eat on their own. The patient could be on several drugs or antibiotics, hooked up to devices that keep an eye on every bodily function, or even need hemodialysis because their kidneys have failed. All the while physicians scramble about doing everything in their power to keep this patient alive as long as they possibly can, even when they know that time is limited. Why? Because this person is a patient in a hospital, and everyone knows you go to hospitals to get better, not to die.
Lydia Dugdale gives such a description in her Hastings Center Report article "The Art of Dying Well." Dugdale claims that American society is ill equipped for the experience of dying. Instead a physician's focus is solely on perpetuating life as long as possible, and the family oftentimes desires the same thing. According to Dugdale, today's focus on continued life doesn't make dying any better than it was in mid-fourteenth-century Europe during the Bubonic plague epidemic. Then, the constant presence of death turned society's attention to ensuring that the dying would receive a good death.
To aid laypeople in giving their loved ones good deaths, the Catholic Church created a text called Ars Moriendi, the Art of Dying, in 1415. It guided the layperson through the dying process by teaching them the appropriate prayers and preparations, and listing questions that the dying person should consider and answer about their life as a way of confirming that they had led a repentant and righteous life. But one could start considering what it meant to die well just by being in close proximity to the dying. By encountering the prescribed preparations, others involved were able to think critically about death and the inevitable end of their own lives. The Ars Moriendi in time expanded into its own genre, with numerous religious authorities reinterpreting what it meant to die well and promoting their own texts. Such guidebooks continued to be written for centuries afterward.
The original Ars Moriendi consisted of six separate sections, each serving to help either the dying individual or his or her close ones through prayer and guidance. Part two, for example, deals with five temptations that the dying person faces in death: lack of faith, despair, impatience, vainglory, and avarice. These temptations were devils that came to the dying man’s bedside and tried to tempt him towards hell. For despair, the devil says, “Wretched one, look at your sins which are so great that you would never be able to acquire grace.” But with each temptation comes a remedy, the words of a good angel meant to inspire and comfort. In this case, the angel reminds the dying of the sinners who confessed late and still received grace.
Carl Zimmer in the NYT (image by Jitender P. Dubey/U.S.D.A.):
An unassuming single-celled organism called Toxoplasma gondii is one of the most successful parasites on Earth, infecting an estimated 11 percent of Americans and perhaps half of all people worldwide. It’s just as prevalent in many other species of mammals and birds. In a recent study in Ohio, scientists found the parasite in three-quarters of the white-tailed deer they studied.
One reason for Toxoplasma’s success is its ability to manipulate its hosts. The parasite can influence their behavior, so much so that hosts can put themselves at risk of death. Scientists first discovered this strange mind control in the 1990s, but it’s been hard to figure out how they manage it. Now a new study suggests that Toxoplasma can turn its host’s genes on and off — and it’s possible other parasites use this strategy, too.
Toxoplasma manipulates its hosts to complete its life cycle. Although it can infect any mammal or bird, it can reproduce only inside of a cat. The parasites produce cysts that get passed out of the cat with its feces; once in the soil, the cysts infect new hosts.
Toxoplasma returns to cats via their prey. But a host like a rat has evolved to avoid cats as much as possible, taking evasive action from the very moment it smells feline odor.
Experiments on rats and mice have shown that Toxoplasma alters their response to cat smells. Many infected rodents lose their natural fear of the scent. Some even seem to be attracted to it.
Adam Gopnik in The New Yorker:
About a year ago, I wrote about some attempts to explain why anyone would, or ought to, study English in college. The point, I thought, was not that studying English gives anyone some practical advantage over non-English majors, but that it enables us to enter, as equals, into a long-existing, ongoing conversation. It isn't productive in a tangible sense; it's productive in a human sense. The action, whether rewarded or not, really is its own reward. The activity is the answer.
It might be worth asking similar questions about the value of studying, or at least, reading, history these days, since it is a subject that comes to mind many mornings on the op-ed page. Every writer, of every political flavor, has some neat historical analogy, or mini-lesson, with which to preface an argument for why we ought to bomb these guys or side with those guys against the guys we were bombing before. But the best argument for reading history is not that it will show us the right thing to do in one case or the other, but rather that it will show us why even doing the right thing rarely works out. The advantage of having a historical sense is not that it will lead you to some quarry of instructions, the way that Superman can regularly return to the Fortress of Solitude to get instructions from his dad, but that it will teach you that no such crystal cave exists. What history generally “teaches” is how hard it is for anyone to control it, including the people who think they’re making it.