Thursday, October 01, 2015
Jonathan Kalb in The Brooklyn Rail:
Emojis are an infantilization of language in the name of amusement. A New York magazine cover story last year compared them admiringly to ancient hieroglyphs without mentioning that civilization bounded forward after advancing from pictographs to symbolic language. Emojis are also a flagrant and increasingly common means of pandering to the young. What else are the White House’s emoji-peppered online notices to millennials about the Affordable Care Act, or The Guardian’s emoji translation of Obama’s 2015 State of the Union address?
The nadir of emoji pointlessness, in my view, is Fred Benenson’s Emoji Dick, a 736-page, crowd-sourced emoji translation of Melville’s Moby-Dick that was accepted into the Library of Congress in 2013. It beggars belief that anyone but Benenson has ever read this book cover to cover.
Small reason was there to doubt, then, that ever since that almost fatal encounter, Ahab had cherished a wild vindictiveness against the whale, all the more fell for that in his frantic morbidness he at last came to identify with him, not only all his bodily woes, but all his intellectual and spiritual exasperations.
The White Whale swam before him as the monomaniac incarnation of all those malicious agencies which some deep men feel eating in them, till they are left living on with half a heart and half a lung.
I am obviously an emoji skeptic, yet I found myself drawn to investigate them for deeply personal reasons. I have lived for more than a decade with facial palsy that distorts my smile and causes a lot of social misunderstanding. I employ a raft of improvised strategies to clarify the emotional intentions behind my quirky facial expressions, and emojis, I realized, were doing something similar for normal people. These tiny cartoonish faces and glyphs were deployed as digital masks. Millions were grasping at them to elucidate their feelings because their addiction to faceless communication modes had put them at a comparable disadvantage to mine.
Wolfgang Streeck in Eurozine:
The slogan with which the German government eventually sold the euro to the German electorate became: "The euro – as stable as the mark". Germany's partners signed the treaty in the end, presumably hoping to amend it later under the pressure of economic realities, if not on paper then in practice. It helped that the 1990s were a period when, issuing from the United States, fiscal consolidation was a common political objective for the countries of OECD capitalism, in the context of financialization and the transition to a neoliberal, non-Keynesian money regime. It was in the spirit of the era to commit to a ceiling on public debt of sixty per cent of GDP and to budget deficits that would never exceed three per cent; financial markets would have looked with suspicion upon any country refusing to fall into line.
Today it is Germany, together with countries like the Netherlands, Austria or Finland, which is reaping the benefits of the EMU. But it is important not to forget that this has only been so since the financial collapse of 2008. During the first years of EMU, Germany was "the sick man of Europe", and monetary union had a lot to do with it (Scharpf 2011). The common interest rate set by the European Central Bank, which had to take into account the economies of all member countries, was too high for a low-inflation political economy like Germany. A possible solution might have been wage increases forced by aggressive trade unions. In a heavily industrialized, export-dependent country like Germany, however, this would have meant not just fewer exports but also, in a time of increasing capital mobility, a drain of jobs to foreign countries. This explains the wage moderation of German unions since the early 2000s, which many outside observers have found mysterious. By comparison, the more inflationary economies of the Mediterranean enjoyed negative real interest rates, coupled with a dramatic fall in the cost of public borrowing – the latter on the assumption by capital markets, encouraged by the European Commission, that with a common currency there would also be some sort of common responsibility for the solvency of member states. The result was a boom in the South and stagnation, along with high unemployment and growing public debt, in Germany.
That situation was reversed in 2008, and contrary to popular neoliberal mythology this had little to do with the "Hartz reforms". They made a dent in public spending, especially as regards unemployment insurance, and opened the door to an expansion of low-wage employment outside of the core sectors of German economic strength. What really mattered was that the German economy, traditionally driven by foreign demand and due to its perennial "over-industrialization", was in a position after 2008 to serve global markets in the high-quality manufacturing sector. As a result, it suffered much less from the fiscal crisis and the breakdown of credit than more domestic demand-led EMU countries.
Paul Richardson in More Intelligent Life:
It is one of the great American sententiae, as sonorous and moving as the Gettysburg Address. “I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived.”
Henry David Thoreau went to the woods in 1845, living for two years and two months in a cabin he had built on the north shore of Walden Pond. The book resulting from his experiment in simplicity was published in 1854, to lukewarm reviews. A century and a half later, however, “Walden” is a fundamental text of the ecological movement, and the pond, a crucial topos of American history, has become a place of pilgrimage.
I come to the woods in a taxi from Logan Airport, leaving Boston on Route 2. My taxi driver is a young Ethiopian woman with a printed headscarf wound around her head, nervous on her first day of work. We leave the highway at the turn-off for Lincoln, and up there on the exit sign I see the name in big letters: Walden Pond. It has become a destination in itself.
The pond lies a few miles out of Concord village in the state of Massachusetts. The pond isn’t really a pond, at least not in the English sense of a small body of standing water, often found at the bottom of a garden. It’s a roundish lake surrounded by forest, with a patch of boggy meadow at its western end. The water in this kettle lake or pothole lake (as geographers variously define it), tinged benignly blue-green at the edges and scarily black towards the middle where it plunges to a depth of 33 metres, is filtered as it pushes up through the sandy soil around it, and has a mesmerising clarity I’ve never seen in any English pond.
Ethan Siegel in Forbes:
When we think about the ingredients necessary for life on Earth, liquid water is always right at the top of the list. Without it, there’s simply no good way for ions and molecules to be transported. Sure, some other liquid might — in principle — be able to substitute for water, but nitrogen, carbon dioxide and methane don’t have that awesome polar structure that water has, which allows for such a wide variety of molecules to dissolve and be moved from one location to another.
For a long time, it was known that Mars had a wet, watery past, likely for over a billion years in the early Solar System. Things like dried up riverbeds, sedimentary rock formations, round spherules known as Martian blueberries and other patterns of erosion teach us that — at one point long ago — Mars likely had oceans covering its surface more than a mile deep.
But all that ended long ago. As Mars is much smaller than Earth, its core cooled more quickly, meaning it lost its protective magnetic field. Without it, there was insufficient protection against the solar wind, which over just a few million years stripped that thick atmosphere away, making stable, liquid water on the surface impossible. What was left either had to exist as water vapor (the gaseous phase) or ice (the solid phase), something we’ve found copious amounts of evidence for. When we look at the Martian surface today, we find polar icecaps, clouds, sub-surface ice (via digging) that sublimates as soon as it’s exposed, and even seasonal frozen lakes.
Walt Hickey in Five Thirty Eight:
Bob Ross was a consummate teacher. He guided fans along as he painted “happy trees,” “almighty mountains” and “fluffy clouds” over the course of his 11-year television career on his PBS show, “The Joy of Painting.” In total, Ross painted 381 works on the show, relying on a distinct set of elements, scenes and themes, and thereby providing thousands of data points. I decided to use that data to teach something myself: the important statistical concepts of conditional probability and clustering, as well as a lesson on the limitations of data.
So let’s perm out our hair and get ready to create some happy spreadsheets!
What I found — through data analysis and an interview with one of Ross’s closest collaborators — was a body of work that was defined by consistency and a fundamentally personal ideal. Ross was born in Daytona, Fla., and joined the Air Force at 17. He was stationed in Fairbanks and spent the next 20 years in Alaska. His time there seems to have had a significant impact on his preferred subjects of trees, mountains, clouds, lakes and snow.
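The conditional-probability idea Hickey teaches with this data can be sketched in a few lines of Python. The painting "tags" below are a small hypothetical stand-in, not Hickey's actual dataset of 381 episodes; only the technique (counting co-occurrences to estimate P(A) and P(A | B)) is what the article describes.

```python
# Hypothetical element tags for a handful of paintings; the real
# analysis used all 381 episodes of "The Joy of Painting".
paintings = [
    {"tree", "clouds", "mountain"},
    {"tree", "lake"},
    {"clouds", "mountain", "snow"},
    {"tree", "clouds", "cabin"},
    {"tree", "mountain"},
]

def p(tag):
    """P(tag): fraction of all paintings containing the tag."""
    return sum(tag in painting for painting in paintings) / len(paintings)

def p_given(tag, given):
    """P(tag | given): among paintings containing `given`,
    the fraction that also contain `tag`."""
    with_given = [painting for painting in paintings if given in painting]
    return sum(tag in painting for painting in with_given) / len(with_given)

print(p("tree"))                  # 4 of 5 paintings -> 0.8
print(p_given("clouds", "tree"))  # 2 of the 4 tree paintings -> 0.5
```

With real data, this is how one arrives at statements like "given that Ross painted a tree, what is the probability he painted a second one?" — the counts change, the arithmetic does not.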
More here. [Thanks to Jennifer Ouellette.]
In the spring of 1914, one of the most famous images of authorship in English literary history went on public display for the first time. Branwell Brontë’s portrait of his sisters, Charlotte, Emily and Anne, had been discovered in Ireland, on top of a wardrobe at Hill House, Banagher, formerly the home of Arthur Bell Nicholls, Charlotte Brontë’s widower, together with a portrait fragment of Emily Brontë from a lost work by Branwell, known as the “Gun Group” (Nicholls had cut the fragment from the painting and destroyed the rest).
Hurriedly purchased by the trustees of the National Portrait Gallery in London, at “a very moderate cost”, and relined but not restored, the heavily creased painting of the three sisters – folded at one time to an eighth of its original size – was hung next to a portrait of Robert Louis Stevenson. The portrait of Emily, purchased at the same time, was displayed directly beneath. As the public flocked to see the two paintings, articles in the press focused on “The Three Sisters” group, marvelling at its chance rediscovery, “negligible” status as a work of art, and compensating value as a historical relic. A few dissident voices attacked the late Mr Nicholls for his neglect of the painting and the consequent damage to it, as well as for his desecration of the “Gun Group”. “Oh, the barbarism of Charlotte’s husband”, lamented a reporter in the Daily Graphic.
I WENT TO THAT black barbershop for the reason millions like me have done so before—to feel at home. But for years, as Quincy Mills’s fascinating Cutting Across the Color Line reveals, black barbershops in America were unavailable to people of my lineage and color. Though they became a stereotypical image of a black social institution, crystallized best in Barbershop, they began as institutions of segregation and white supremacy. In the antebellum era, but also well into the period of Reconstruction, black barbershops—predominantly in the South but often in the North—only served white men. Prohibiting black men from cutting black hair for a profit allowed slave owners to control their slaves’ relationship to their own and to other black bodies. At the same time, slave owners profited from their enslaved barbers by hiring their slaves out to cut the hair of white townspeople. If the barber was lucky, his owner allowed him to take a percentage of the profits, which he sometimes used to purchase his freedom.
Their distance from harsh, manual labor made these positions relatively privileged ones, leading Mills to argue that barbers initially occupied an unstable class position. “As captive capitalists in a slave society,” Mills writes, free barbers represented “both the possibilities and limits of freedom for African Americans in the antebellum period.”
Besides the complexes that blokes in the 21st century may have about castration and the shivering joy many take in explaining all this to a psychoanalyst, there is another reason the castrato may continue to fascinate us. It is the old idea that while heard melodies are sweet, those unheard are haunting. Feldman writes that castrato voices had ‘strong resonance … understood as relative loudness and intensity, with timbral richness’. If we want to imagine what a castrato sounded like perhaps it would help to listen to a recording by a deep and powerful contralto – Hilde Rössel-Majdan, for example, or Maureen Forrester – and then follow this by listening to a countertenor, David Daniels, for example, or Andreas Scholl, or Iestyn Davies (or go on YouTube and listen to a recording of the last castrato, Alessandro Moreschi, who died in 1922, singing the Bach-Gounod ‘Ave Maria’, with what Feldman called a vibrato that is ‘often lush and plentiful’, and the ‘Crucifixus’ from Rossini’s Petite messe solennelle). But all of these offer merely clues.
Some of the clues are fascinating, however, perhaps because the language used to describe a castrato singing has its own luscious, plaintive sound. The French soprano Emma Calvé wrote in her autobiography about hearing the castrato Domenico Mustafà in 1891: ‘He had an exquisite high tenor voice, truly angelic, neither masculine nor yet feminine in type – deep, subtle, poignant in its vibrant intensity … He had certain curious notes which he called his fourth voice – strange, sexless tones, superhuman, uncanny!’ Another writer wrote of a castrato voice that it was ‘so soft, and ravishingly mellow, that nothing can better represent it than the Flute-stops of some Organs’, which themselves were ‘not unlike the gentle Fallings of Water’.
Variety of life: An effort to sequence thousands of people’s genomes reaches the end of the beginning
Editorial in Nature:
Modern science has a good grip on most of those very few laws that drive life forward, most tellingly on how genetic material copies itself from parent to offspring. The innumerable variations however? Not so much. They are, after all, innumerable. That does not mean that science is not trying, and on pages 68 and 75 of this issue, Nature publishes the latest progress reports from this colossal effort. The papers mark the completion of the 1000 Genomes Project, the largest work yet to sequence the genetic information of thousands of individuals in an attempt to tune into Mother Nature’s hum of human variation. It completes a set of genomic reference tools — resources of genetic data produced by international collaborations — that dates back 25 years to the start of the Human Genome Project. The bigger job, of tracking the relationships between genetic variation and human disease to help to develop effective treatments, is not finished, and may never be. But it is important from time to time to acknowledge and celebrate landmarks of achievement along the way. This week marks one such landmark.
...The final goal remains to make this flood of population-level genetic research relevant to personal health. Emerson would have approved. He was a proponent of individualism, a political philosophy that emphasizes the moral worth of the individual. He celebrated the non-conformist. And when it comes to the few laws that dictate the repetition of genetics, it is not just the 2,504 people whose variation is detailed this week who are the non-conformists. We all are.
Wednesday, September 30, 2015
Colin Dayan in the Boston Review:
The most striking thing happened as I began reading Lori Gruen’s book, Entangled Empathy: An Alternative Ethic for Our Relationships with Animals. I was sitting on the porch when a baby white-throated sparrow flew inside. Attempting to escape, the sparrow repeatedly dashed itself against the screens, head down in exhaustion. I tried to lead it to the open door. No luck. But then a male cardinal appeared outside. It hovered, went first to one side of the screen, then the other; held tight one moment, moved softly the next. Flying against the screen, it guided the captive bird, gradually, from side to side, up and down—all the while outside the porch—and led it to the open air. For twenty minutes I watched a bird save another not of its brood, and I thought: now that is empathy.
Yet empathy is a word I have always distrusted. Deep and enigmatic, at best it means being present to or with another being; at worst it calls forth a moral surround as exclusive as it is well intentioned. Along with sympathy, and often confused with it, empathy summons an intensely humanized world, where our emotional life—how much we feel for or with—matters more than the conditions that cause suffering and sustain predation. Examples are all around us. To consider but one, we all know the sad excesses of sentiment that followed the 2010 Haiti earthquake. Money flowed to the coffers of international aid organizations and NGOs, but it never reached the hundreds of thousands of Haitians who continued to live as displaced persons in camps. Inhumanity can easily be moderated, legitimized, and even reproduced by the humanitarian concern that is analogous to it.
As an Americanist, I learned from Edgar Allan Poe how the language of sentiment animates subordination. A slave, a piece of property, a black cat—once loved in the proper domestic setting, they arouse a surfeit of devotion, bonds of dependence that slavery apologists claimed could never be felt by equals.
From Science Alert:
Despite research telling us it’s a really bad idea, many of us end up working 50-hour weeks or more because we think we’ll get more done and reap the benefits later. And according to a study published last month involving 600,000 people, those of us who clock up a 55-hour week will have a 33 percent greater risk of having a stroke than those who maintain a 35- to 40-hour week.
With this in mind, Sweden is moving towards a standard 6-hour work day, with businesses across the country having already implemented the change, and a retirement home embarking on a year-long experiment to compare the costs and benefits of a shorter working day.
"I think the 8-hour work day is not as effective as one would think. To stay focused on a specific work task for 8 hours is a huge challenge. In order to cope, we mix in things and pauses to make the work day more endurable. At the same time, we are having it hard to manage our private life outside of work," Linus Feldt, CEO of Stockholm-based app developer Filimundus, told Adele Peters at Fast Company.
Filimundus switched to a 6-hour day last year, and Feldt says their staff haven't looked back. "We want to spend more time with our families, we want to learn new things or exercise more. I wanted to see if there could be a way to mix these things," he said.
To cope with the significant cut in working hours, Feldt says staff are asked to stay off social media and other distractions while at work and meetings are kept to a minimum.
For doubters, the enduring renown of The Great Gatsby is mystifying. It seems a wonder to them that Gatsby should cling to its lofty place on lists of Great American Novels, despite being so slender and so dated, and notwithstanding its ham-handed symbolism (the Valley of the Ashes, the Eyes of Doctor Eckleburg), simplistic structure (a series of set-pieces), clunky plot machinery (fancy cars roaring back and forth to Manhattan, merely to move pieces around the board), and flat characters (Tom Buchanan tilts toward caricature and Meyer Wolfsheim tips all the way over).
There is a solution to the mystery of Gatsby’s lasting fame, as believers know, and to my mind that solution is voice. The elixir that transforms the novel’s inert matter into music—that turns its static iconography into poetry—is its first-person narration: the subtle, compounded, compromised voice of Nick Carraway. A voice of hope infused with despair, of belief corroded by doubt. A voice suave and dapper on its surface but roiled and dark in its depths. It is the inviting but evasive voice of a new best friend who draws you into his confidence and promises alluring secrets, only to turn away from you, agitated, distracted, and weary.
Paradox: If Trotsky was correct at Kronstadt, then his own murder could also be construed as right. If his murder stinks (as I most certainly believe), then he was wrong at Kronstadt, in which case his murder again becomes justified so long as he supports Kronstadt-like actions. Like most paradoxes, this one ultimately fails to hold together—but only in the “real world.” Rostov is a reduction of a far more interesting and ambiguous man. But the protagonists of parables must be types, emblems, tropes. Rostov represents not who Trotsky was, but a certain principle that Trotsky stood for. If we feel willing to generalize and simplify, then this parable with its paradox does have something to tell us—for the events that haunted Bernard Wolfe reincarnate themselves endlessly.
“Then it amounts to this,” says a Mexican official to the dying Rostov’s wife. “Those who use all means will win, those who reject some means will lose. There is no remedy …” Can it be so? Trotsky believed it. Sometimes, so do I. (That is why I prefer to lose.) Exactly here we come face to face with Wolfe’s defective, unlikely greatness. His formulation must never be forgotten.
Among the many questions that surround the Cambridge spies, one has occupied historians ever since the scale of their treachery became fully known. Why did they choose to betray their country? Several reasons are given why Guy Burgess, Kim Philby, Donald Maclean, Anthony Blunt and John Cairncross – commonly known as the Cambridge Five, though there may have been others – decided to serve the Soviet state. In the 1930s they saw the USSR as the chief bulwark against the advance of Nazism and fascism; in the Second World War, they acted in response to Britain and the USSR being allies; during the cold war, they viewed the United States as the chief threat to world peace. Above all, the spies had an overriding ideological commitment to communism. Acting on this was more important for them than clinging to old loyalties of king and country.
No doubt all of these factors played a part, but they are less than thoroughly convincing. The spies were recruited in the 1930s, when the danger of Nazism was becoming clear; but they continued to serve the Soviet Union after it entered into a pact with Nazi Germany, when many other communist sympathisers fell away, and went on serving the Soviet state after it ceased to be Britain’s ally.
Yasmin Alibhai-Brown in The Independent:
Iran is seriously mistrusted by Israel and America. North Korea protects its nuclear secrets and is ruled by an erratic, vicious man. Vladimir Putin’s territorial ambitions alarm democratic nations. The newest peril, Isis, the wild child of Islamists, has shocked the whole world. But top of this list should be Saudi Arabia – degenerate, malignant, pitiless, powerful and as dangerous as any of those listed above.
The state systematically transmits its sick form of Islam across the globe, instigates and funds hatreds, while crushing human freedoms and aspiration. But the West genuflects to its rulers. Last week Saudi Arabia was appointed chair of the UN Human Rights Council, a choice welcomed by Washington. Mark Toner, a spokesperson for the State Department, said: “We talk about human rights concerns with them. As to this leadership role, we hope that it is an occasion for them to look into human rights around the world and also within their own borders.”
The jaw simply drops. Saudi Arabia executes one person every two days. Ali Mohammed al-Nimr is soon to be beheaded then crucified for taking part in pro-democracy protests during the Arab Spring. He was a teenager then. Raif Badawi, a blogger who dared to call for democracy, was sentenced to 10 years and 1,000 lashes. Last week, 769 faithful Muslim believers were killed in Mecca where they had gone on the Hajj. Initially, the rulers said it was “God’s will” and then they blamed the dead. Mecca was once a place of simplicity and spirituality. Today the avaricious Saudis have bulldozed historical sites and turned it into the Las Vegas of Islam – with hotels, skyscrapers and malls to spend, spend, spend. The poor can no longer afford to go there. Numbers should be controlled to ensure safety – but that would be ruinous for profits. Ziauddin Sardar’s poignant book Mecca: The Sacred City, describes the desecration of Islam’s holiest site.
From The New Yorker:
The first installment in our For Your Consideration series is “Pink Grapefruit,” a ten-minute short by the writer-director Michael Mohan. The film—which premièred at Sundance, in January, and went on to win a jury award at South by Southwest—takes place in a serene vacation home in the Palm Springs desert. A young woman (Wendy McColm) arrives there with her friends, a slightly older married couple (Nora Kirkpatrick and Matt Peters), and we quickly learn that they are subjecting her to a rather intense version of a blind date: a single man she’s never met (Nathan Stewart-Jarrett) will soon be joining them for the weekend. Like any jaded millennial, the woman greets the impending setup with a sense of dread: “These things never work out!” she says on the car ride out. But, when her suitor arrives, things don’t go quite as expected. (And without spoiling anything, we hope, we should note that this film contains sexual situations.)
...But the story in “Pink Grapefruit,” of a young couple’s first encounter, turns out to be, as Mohan has put it, a cinematic Trojan horse. Shot in lush colors, with lingering images of the arid California hills, the film also makes use of an eerie desert silence, and the voyeurism of the glass-walled vacation home suggests that something pernicious is afoot between the two couples. What Mohan was really interested in exploring, he said, is how young adults “measure our happiness and success by comparing it to those around us.” Mohan, who also directs music videos and commercials (like a pair of very fun short films for Kate Spade, starring Anna Kendrick and Lily Tomlin), is currently beginning work on a new film project called “The Ends.” Co-written with Chris Levitus, who also co-wrote “Pink Grapefruit,” the film portrays the life of a young woman by examining her past breakups. Mohan said, “We want to show how our past relationships shape the person we ultimately become.”
Ellie Lee in Spiked:
The first episode of the new BBC TV series Countdown to Life: the Extraordinary Making of You, broadcast on Monday, showed us how this process works. The programme as a whole placed great emphasis on how ‘what you are’ is determined in the womb. Part of this argument for womb determinism drew on the alleged ‘amazing significance of what a mother-to-be eats’. The programme’s amazement at the profound import of maternal diet began with a section exploring the (sound) findings of the Dutch Famine Birth Cohort Study. This study showed how babies born to Dutch women who were literally starved during the Second World War were more likely to suffer from a range of serious diseases later in life; the environment in which fetal development occurred had serious detrimental effects for the health not only of the women, but also their children. This, combined with a Medical Research Council study about diet and health in Gambia, led programme presenter Michael Mosley to conclude: ‘You really are what your mother eats. Or more precisely, you really are what your mother ate when you were just a tiny little embryo, just a few cells big.’ Thus ends the article he wrote for BBC News to promote the programme: ‘If you are thinking of having a baby, then eating lots of leafy green vegetables, which are rich in B vitamins and folates, is certainly a good thing to do.’
Despite its gripping footage of life before birth – who could not be blown away by a film of the transformation of a ball of cells into a living, waking human being? – Countdown to Life is entirely in line with today’s propensity for parental determinism and scientism. The programme’s scientific content is neither new nor that interesting. Epigenetics has been around for a long time and the effects of the Dutch famine are well known. What is most telling is the ease with which the programme segues from discussing the extraordinary (the Dutch famine) through to the everyday (all women, the world over). You end up with what is really quite a bizarre message: that if pregnant women don’t eat what is today considered to be ‘good food’, then their babies will be damaged. But we are not ‘what our mothers ate’, and the suggestion that women should eat a lot of spinach if they are even thinking about having a baby burdens women with yet more health hectoring.
The Elusive Jellyfish Nebula
At the aquarium, the jellyfish are lit
from below—blue and pink hues
flash in time with the ebb
and flow of visitors come to see
The true sea is not so bright, though,
nor so clear—
Infinity reaches down from space
to the center of our waters
where jellyfish live in truth,
countless billions upon billions
of dead stars and living organisms
recycled into dust upon dust.
Near bright star Eta Geminorum,
the Jellyfish Nebula emits faint strands
of light, the remnants of a supernova gone
rogue, leaving only a neutron star to see
how the universe changes over time.
It is too far away, too large
to imagine what it would feel
like to touch those strands,
though the ones in the water sting
We imagine we know why jellyfish
are so fragile, dying easily or not at all,
but they say even stars die. We have faith
that’s true. When the aquarium closes,
the lights go out.
A new book, available at Amazon
Tuesday, September 29, 2015
Tim Flannery reviews Carl Safina's Beyond Words: What Animals Think and Feel and Hal Whitehead and Luke Rendell's The Cultural Lives of Whales and Dolphins in the New York Review of Books:
The free-living dolphins of the Bahamas had come to know researcher Denise Herzing and her team very well. For decades, at the start of each four-month-long field season, the dolphins would give the returning humans a joyous reception: “a reunion of friends,” as Herzing described it. But one year the creatures behaved differently. They would not approach the research vessel, refusing even invitations to bow-ride. When the boat’s captain slipped into the water to size up the situation, the dolphins remained aloof. Meanwhile on board it was discovered that an expeditioner had died while napping in his bunk. As the vessel headed to port, Herzing said, “the dolphins came to the side of our boat, not riding the bow as usual but instead flanking us fifty feet away in an aquatic escort” that paralleled the boat in an organized manner.
The remarkable incident raises questions that lie at the heart of Carl Safina’s astonishing new book, Beyond Words: What Animals Think and Feel. Can dolphin sonar penetrate the steel hull of a boat—and pinpoint a stilled heart? Can dolphins empathize with human bereavement? Is dolphin society organized enough to permit the formation of a funeral cavalcade? If the answer to these questions is yes, then Beyond Words has profound implications for humans and our worldview.
Beyond Words is gloriously written. Consider this description of elephants:
Their great breaths, rushing in and out, resonant in the halls of their lungs. The skin as they moved, wrinkled with time and wear, batiked with the walk of ages, as if they lived within the creased maps of the lives they’d traveled.
Not since Barry Lopez or Peter Matthiessen were at the height of their powers has the world been treated to such sumptuous descriptions of nature.
Safina would be the first to agree that anecdotes such as Herzing’s lack the rigor of scientific experiments. He tells us that he is “most skeptical of those things I’d most like to believe, precisely because I’d like to believe them. Wanting to believe something can bias one’s view.” Beyond Words is a rigorously scientific work. Yet impeccably documented anecdotes such as Herzing’s have a place in it, because they are the only means we have of comprehending the reactions of intelligent creatures like dolphins to rare and unusual circumstances. The alternative—to capture dolphins or chimpanzees and subject them to an array of human-devised tests in artificial circumstances—often results in nonsense. Take, for example, the oft-cited research demonstrating that wolves cannot follow a human pointing at something, while dogs can. It turns out that the wolves tested were caged: when outside a cage, wolves readily follow human pointing, without any training.
Claude S. Fischer in Boston Review (image: "A U.S. Department of Agriculture photo showing a family grocery shopping using the SNAP (food stamp) program. Photo: USDA."):
Now that growing economic inequality is widely accepted as fact—it took a couple of decades for the stubborn to acknowledge this—some wonder why Americans are not more upset about it. Americans do not like inequality, but their dislike has not increased. This spring, 63 percent of Gallup Poll respondents agreed that “money and wealth in this country should be more evenly distributed,” but that percentage has hardly changed in thirty years. Neither widening inequality nor the Great Recession has turned Americans to the left, much less radicalized them.
This puzzle recalls the hoary question of why there is no socialism in America. Why is the United States distinctive among Western nations in the weakness of its labor movement, absence of universal health care and other public goods, and reluctance to redistribute income where the elderly are not concerned? Generations of answers have ranged from the American mindset (say, individualism) to exercises of brute political power (e.g., strike-breakers, campaign money) to the formal structure of government (such as single-member districts). Some recent research presents a cultural explanation—specifically, Americans’ tendency to see issues of inequality in terms of deservingness. Even economist Thomas Piketty, author of Capital in the Twenty-First Century, insists on the “key role” of “belief systems.”
Notions of who deserves what shape the American welfare state. The economic demographer Robert Moffitt has shown that, despite common misperceptions, total U.S. welfare support—social security, food stamps, disability insurance, and so on—has not declined since the days of the Great Society. Even bracketing health expenditures, per capita government spending on means-tested programs rose pretty steadily over the last forty-plus years. What has changed, Moffitt argues, is who gets help. Spending has shifted away from the jobless, single, childless, and very poor toward the elderly, disabled, working, married, parents, and those who are not poor.
Ta-Nehisi Coates in The Atlantic:
I want to respond to Greg Weiner's contention that I've offered a distorted picture of Daniel Patrick Moynihan. There's a lot wrong with Weiner's note. I specifically object to the idea that the Moynihan Report left its author's reputation "in tatters."
It is certainly true that Moynihan suffered through more than his share of unfair criticism after the release of The Case for National Action. It is also true that within two years of the Moynihan Report's release, the author was being hailed on the cover of TIME magazine as America's "urbanologist." That same year Life magazine lauded Moynihan as the "idea broker in the race crisis." After leaving the Johnson administration, Moynihan went on to a lucrative post at Harvard, became the urban affairs guru for one president and the UN ambassador for another, and then served for an unbroken four terms in the Senate. Furthermore, Moynihan's central idea—that the problems of families are key to ending the problems of poverty—dominates the national discourse today. I suspect the president would take no insult in being described as a disciple of Moynihan. If this is all part and parcel of having your reputation destroyed, it is an enviable specimen of the genre.
Weiner’s claim is, of course, much larger. He accuses me of merely hinting at Moynihan bearing some responsibility for mass incarceration, and cleverly leaving the nasty work to the editor’s note written by James Bennet:
Coates demonstrates that white Americans’ fear of black Americans, and their impulse to control blacks, are integral to the rise of the carceral state. A result is that one of every four black men born since the late 1970s has spent time in prison, at profound cost to his family. For this, Coates holds Moynihan, in part, responsible.
Since Weiner believes I was being coy, let me directly state that I wholly concur with this interpretation. My argument is that mass incarceration is built on a long history of viewing black people as unequal in general, and criminal in the specific. Both of these trends can be found in Moynihan’s arguments.
Sam Leith in The Guardian:
A couple of weeks ago I saw David Crystal give an after-dinner speech at the august annual conference of the Society of Indexers and the Society for Editors and Proofreaders. In it, he recalled having been an adviser on Lynne Truss’s radio programme about punctuation. She told him she was thinking of writing a book on the subject. He advised her not to: “Nobody buys books on punctuation.” “Three million books later,” he said, “I hate her.”
Making a Point is this prolific popular linguist’s entry into the same, or a similar, market. Truss’s book, Eats, Shoots & Leaves, was energised by her furious certainties about the incorrect use of all these little marks. Crystal’s is a soberer and, actually, more useful affair: he puts Truss’s apostrophe-rage in its sociolinguistic context, considers the evolution of modern usages, and gently encourages the reader to think in a nuanced way about how marks work rather than imagining that some Platonic style guide, if only it could be accessed, would sort all punctuation decisions into boxes marked “literate” and “illiterate”. (Or literate and illiterate, if you prefer.)
As Crystal writes, scribes started to punctuate in order to make manuscripts easier to read aloud: they were signalling pauses and intonational effects. Grammarians and, later, printers adopted the marks, and tried to systematise them, as aids to semantic understanding on the page. The marks continue to serve both purposes. “This,” Crystal writes, “is where we see the origins of virtually all the arguments over punctuation that have continued down the centuries and which are still with us today.”
His central argument, buttressed by countless well-chosen examples and enlivened by the odd whimsical digression, is that neither a phonetic, nor a semantic, nor a grammatical account of our punctuation system is singly sufficient.
Ken Roth in The Guardian:
The need to negotiate with leaders as unsavoury as Syria's Bashar al-Assad is an unfortunate reality of diplomacy. But western leaders should be careful not to confuse that necessity with the idea promoted by Russia that the Syrian crisis can be resolved only if Assad stays in power. Nor should they believe that Assad's ongoing rule is the only way to prevent the collapse of the Syrian state and protect Syria's diverse communities.
Vladimir Putin has long sought to portray Assad as a bulwark against the self-declared Islamic State. But far from a stabilising factor or a solution to the Isis threat to basic rights, Assad is a major reason for the rise of extremist groups in Syria. In the early days of Syria’s uprising, between July and October 2011, Assad released from prison a number of jihadists who had fought in Iraq, many of whom went on to play leading roles in militant Islamist groups. These releases were part of broader amnesties, but Assad kept in prison those who backed the peaceful uprising.
These releases helped to change the complexion of the Syrian rebellion from one with largely democratic aims, to one dominated by jihadists. That transformation has enabled Assad to refocus the narrative from his vicious rule to his claimed indispensability in the fight against Isis.