John Rawls, who died in 2002, was the most influential American philosopher of the twentieth century. His great work, A Theory of Justice, appeared in 1971 and defined the field of political philosophy for generations. It set out standards for a just society in the form of two principles. First, a just society would protect the strongest set of civil liberties and personal rights compatible with everyone else having the same rights. Second, it would tolerate economic inequalities only if they improved the situation of the poorest and most marginalized (for example, by paying doctors well to encourage people to enter a socially necessary profession).
Taken seriously, Rawls’s principles would require a radical transformation: no hedge funds unless allowing them to operate will benefit the homeless? No Silicon Valley IPOs unless they make life better for farmworkers in the Central Valley? A just society would be very different from anything the United States has ever been. Rawls argued that justice would be compatible with either democratic socialism or a “property-owning democracy” of roughly equal smallholders. One thing was clear: America could not remain as it was, on pain of injustice.
It did not remain as it was, but Rawls’s vision did not triumph either. A Theory of Justice was published in 1971, just before economic inequality began its long ascent from its lowest level in history to today’s Second Gilded Age.
In the lowlands of Bolivia, the most isolated of the Tsimané people live in communities without electricity; they don’t own televisions, computers or phones, and even battery-powered radios are rare. Their minimal exposure to Western culture happens mostly during occasional trips to nearby towns. To the researchers who make their way into Tsimané villages by truck and canoe each summer, that isolation makes the Tsimané an almost uniquely valuable source of insights into the human brain and its processing of music.
Most studies of music perception examine people accustomed to Western music, so only a few enclaves like these remote Tsimané villages allow scientists to make comparisons across cultures. There they can try to tease apart the effects of exposure to music from the brain’s innate comprehension of it — or at least start dissecting the relationship between the two. “We need to understand that interplay between our genes and our experience,” said Josh McDermott, an associate professor of brain and cognitive sciences at the Massachusetts Institute of Technology. He is the senior author of a recent paper involving the Tsimané, published in the journal Current Biology, which suggests that a feature of music most of us might consider intrinsic — the perceived organization of musical pitches into octaves — is a cultural artifact.
In July 2011, a quiet European capital was shaken by a terrorist car bomb, followed by confused reports suggesting many deaths. When the first news of the murders came through, one small group of online commentators reacted immediately, even though the media had cautiously declined to identify the attackers. They knew at once what had happened – and who was to blame.
“This was inevitable,” explained one of the anonymous commenters. And it was just the beginning: “Only a matter of time before other European nations get a taste of their multicultural tolerance that they’ve been cooking for decades.”
“Europe has been infested with venomous parasitic vermin,” explained another. “Anything and everything is fine as long as they rape the natives and destroy the country, which they do,” said a third.
As the news grew worse, the group became more joyful and confident. The car bomb had been followed by reports of a mass shooting at a nearby camp for teenagers. One commenter was “almost crying of happiness” to be proved right about the dangers of Islam. “The massacre at the children’s camp,” another noted, “is a sickening reminder of just how evil and satanic the cult of Islam is.”
A pair of recent essay collections—Jia Tolentino’s Trick Mirror and Leslie Jamison’s Make It Scream, Make It Burn—invite easy comparisons. They were released within a month of each other. The current professional status of their authors is superficially similar: two established women writers in their thirties, both living in Brooklyn. Each writes about weddings, travel, and the sensations of drugs and alcohol; each takes up overtly feminist topics, from defeated heroines and female rage to “difficult women” and women in pain. Even the two books’ bold text-only covers are similar: multicolored lettering in ochre, hot pink, and orange (Tolentino) and fuchsia, peach, and sky blue (Jamison). But to my mind what really connects them is how Tolentino and Jamison reason. What’s objective, they demonstrate, is often subjective, or hypocritical, or at the very least, complicated.
In a preface to her ghost stories, Wharton writes, “I do not believe in ghosts, but I am afraid of them.” Following an attack of typhoid as a child, Wharton writes in her autobiography, A Backward Glance, that she returned from the brink of death with a “chronic fear” that felt like a “choking agony of terror.” Well into young adulthood, she would not sleep without a light and a maid present in her room. “It was like some dark, indefinable menace, forever dogging my steps, lurking, and threatening,” she writes, and I could not help but think of Hilary Mantel’s childhood encounter with an indescribable evil in her family’s garden. Must all women be visited by terror so consistently and from such a young age? The rumors of paranormal activity at the Mount began after the house became an all-girls school in the forties, and intensified when the theater troupe Shakespeare and Company took up residence there in the seventies. The performers were kicked out more than a decade ago in a landlord-tenant dispute that seemed, publicly, not related to the supernatural. Even so, nothing attracts the devil more than a group of adolescent girls, except for maybe a group of actors.
Does the ferocity of the Brexit debate reveal different conceptions of the nature and value of democracy? Brexiteers proudly talk as if the 2016 vote were a rare paradigm of real democracy – “the largest democratic exercise in our history” – while Remainers respond that majority voting by the electorate is only a small part of our democratic system. In a representative democracy, our elected representatives can and should scrutinize the result of an “advisory” referendum as they scrutinize anything else. So why should a referendum result be “respected” if the democratically elected politicians decide that, all things considered, it is not in the country’s best interests? On the other hand, Brexiteers will respond that if parliament can overturn the result, what was the point of the referendum in the first place?
It might seem, then, that the Brexit debate is a debate about democracy itself: what it is, why it is valuable, and how it should work.
It is we sinful women who are not awed by the grandeur of those who wear gowns
who don’t sell our lives who don’t bow our heads who don’t fold our hands together.
It is we sinful women while those who sell the harvests of our bodies become exalted become distinguished become the just princes of the material world.
It is we sinful women who come out raising the banner of truth up against barricades of lies on the highways who find stories of persecution piled on each threshold who find that tongues which could speak have been severed.
It is we sinful women. Now, even if the night gives chase these eyes shall not be put out. For the wall which has been razed don’t insist now on raising it again.
It is we sinful women who are not awed by the grandeur of those who wear gowns
who don’t sell our bodies who don’t bow our heads who don’t fold our hands together.

by Rukhsana Ahmad, from: We Sinful Women: Contemporary Urdu Feminist Poetry (with original Urdu poems), The Women’s Press Ltd, London, 1991, ISBN 0-7043-4262-6
Consider a forest: One notices the trunks, of course, and the canopy. If a few roots project artfully above the soil and fallen leaves, one notices those too, but with little thought for a matrix that may spread as deep and wide as the branches above. Fungi don’t register at all except for a sprinkling of mushrooms; those are regarded in isolation, rather than as the fruiting tips of a vast underground lattice intertwined with those roots. The world beneath the earth is as rich as the one above. For the past two decades, Suzanne Simard, a professor in the Department of Forest and Conservation Sciences at the University of British Columbia, has studied that unappreciated underworld. Her specialty is mycorrhizae: the symbiotic unions of fungi and roots long known to help plants absorb nutrients from soil. Beginning with landmark experiments describing how carbon flowed between paper birch and Douglas fir trees, Simard found that mycorrhizae didn’t just connect trees to the earth, but to each other as well.
Simard went on to show how mycorrhizae-linked trees form networks, with individuals she dubbed Mother Trees at the center of communities that are in turn linked to one another, exchanging nutrients and water in a literally pulsing web that includes not only trees but all of a forest’s life. These insights had profound implications for our understanding of forest ecology—but that was just the start.
Justin E. H. Smith is a professor of history and philosophy of science at the University of Paris. In addition to his recent book, Irrationality: A History of the Dark Side of Reason, Smith is the author of the blog post “It’s All Over,” republished by The Point in January, which sparked weeks of debate across the internet and remains the magazine’s most popular article of 2019. He is currently in New York on a Cullman Fellowship, researching a book on Leibniz, the St. Petersburg Academy of Sciences, and the Second Kamchatka Expedition of 1731-41. In September, we met at Bryant Park for a casual conversation about postmodernism, comedy in the age of the algorithm, and what it means to be an aging cultural critic during a period of accelerated generational upheaval. What follows is an edited transcript of that discussion.
Andrew Sornborger and Andreas Albrecht in Scientific American:
If you could look closely enough at the objects that surround you, zooming in at magnifications far beyond those you could ever see with most microscopes, you would eventually get to a point where the familiar rules of your everyday experiences break down. At scales where blood cells and viruses seem enormous and molecules come into view, things are no longer subject to the simple laws of physics that we learn in high school.
Atoms—and the electrons, protons and neutrons they are made of—don’t exist in the same way a marble does. Instead they are smeared in clouds that are difficult to understand and impossible to describe without the complex mathematics of quantum mechanics.
And yet atoms make up molecules, which, in turn, are the building blocks of marbles and everything else we touch and see each day. Nature has clearly found some way of suppressing quantum behavior when quantum objects are assembled into the familiar ones all around us.
How can things that obey the classical laws of physics—such as a pitched baseball or a bumblebee in flight—be composed of parts that are subject to quantum rules at minute levels? That is one of the deepest questions in modern physics. In pursuit of an answer, recent research—with funding from the High Energy Physics program at the Department of Energy’s Office of Science—should help shed light on how the classical world emerges from the underlying quantum one.
The religious landscape of the United States continues to change at a rapid clip. In Pew Research Center telephone surveys conducted in 2018 and 2019, 65% of American adults describe themselves as Christians when asked about their religion, down 12 percentage points over the past decade. Meanwhile, the religiously unaffiliated share of the population, consisting of people who describe their religious identity as atheist, agnostic or “nothing in particular,” now stands at 26%, up from 17% in 2009.
Both Protestantism and Catholicism are experiencing losses of population share. Currently, 43% of U.S. adults identify with Protestantism, down from 51% in 2009. And one-in-five adults (20%) are Catholic, down from 23% in 2009. Meanwhile, all subsets of the religiously unaffiliated population – a group also known as religious “nones” – have seen their numbers swell.
The best thing about this excellent and pleasing anthology of 33 tributes to “Peanuts” is that it will probably evoke your own memories of newspaper comic-strip reading and reawaken your appreciation of Charles M. Schulz’s round-headed, adult-sounding children and the imaginative dog Snoopy. “An isolated four-panel comic strip of Charlie Brown and Linus debating a philosophical point can be appreciated just as it is, humorous, insightful, compact, and perfect; one strip a day documenting one man’s thoughts for half a century has the weight of a full life,” writes the cartoonist Ivan Brunetti. As a collective eulogy to a cultural phenomenon, “The Peanuts Papers: Writers and Cartoonists on Charlie Brown, Snoopy & the Gang, and the Meaning of Life” testifies to the inspiration and importance of Schulz’s work at critical times in the authors’ (usually) younger lives. Satirist and critic Joe Queenan reflects: “Unlike so many other venerated objects in U.S. pop culture, it was sweet without being stupid, reassuring without being infantile.”
…That daily dose of taking in Schulz’s “not arty” work allowed us to identify with hopefulness despite all the evidence in the newspaper and in our lives that hopelessness is more reasonable. Discussing the annual depiction of Lucy yanking the football away from Charlie Brown as he runs up to kick it, the psychiatrist Peter D. Kramer explains, “Charlie Brown is trusting to a fault – or a virtue. He prefers to trust, however often his faith is betrayed. Giving fellow humans the benefit of the doubt is a fine if painful way to live. ‘Don’t! Don’t!’ we cry to Charlie Brown, and then we’re glad he does.”
The writer who invented the genre was neither a feminist nor an American. Simone de Beauvoir’s The Second Sex was published in France in 1949, when she was forty-one years old. A few years later, its English translation would haunt and inspire Betty Friedan, Kate Millett, and Shulamith Firestone. When she wrote it, Beauvoir, a socialist, did not see the need for a political movement specifically for women’s rights. It was a time of feminist quiescence, when activism on behalf of women seemed to belong to the past rather than the future. Earlier in the century, there had been an expansion of educational opportunities for women in France: they had won the right to sit for the prestigious French agrégation exam that was the entry point to university teaching. Beauvoir had been the ninth woman to pass the exam in philosophy, and more women entered the field behind her. As of 1944, women also had the right to vote. They were on their way. But something nagged at the philosopher. “The situation of woman,” she writes in her introduction, “is that she—a free and autonomous being like all human creatures—nevertheless finds herself living in a world where men compel her to assume the status of the Other.”
I’m writing these words in clothes that reek of tear gas. Trying to process the pulse of the street while still part of it, while our feet are still there on the ground, fleeing water cannons, not knowing where to go, hiding in the crowd, among people just like us, groups of us marching, dodging smoke and soldiers. This is a celebration, a protest, a demand for change that began with students jumping turnstiles in the metro after fares were hiked. Without any organizer, without petitions, leaders, or negotiations, the whole thing escalated and then exploded into chaos in the streets. And there is yelling, and singing, and banging on pots, and fire, and beatings. In front of the palace of La Moneda, near the theater where I work, a man tells a soldier that he doesn’t understand why the soldier is protecting privileges that will never be his. A woman screams that we’re killing ourselves, we’re committing suicide, with all this inequality.
In the autumn of 1869, Charles Darwin was hard at work revising the fifth edition of On the Origin of Species and drafting his next book, The Descent of Man, to be published in 1871. As he finished chapters, Darwin sent them to his daughter, Henrietta, to edit — hoping she could help to head off the hostile responses his work had provoked, including objections to the implication that morality and ethics could have no basis in nature, because nature had no purpose.
That same year, Darwin’s cousin Francis Galton published Hereditary Genius, a book that recast natural selection as a question of social planning. Galton argued that human abilities were differentially inherited, and introduced a statistical methodology to aid “improvement of the race”. Later, he coined the term ‘eugenics’ to advocate selective reproduction through application of the breeder’s guiding hand.
Darwin’s transformative theory inspired modern biology; Galton’s attempt to equate selection and social reform spawned eugenics. The ethical dilemmas engendered by these two late-nineteenth-century visions of biological control proliferate still. And, as older quandaries die out, they are replaced by more vigorous descendants. That there has never been a border between ethics and biology remains as apparent today as it was 150 years ago. The difference is that many of the issues, such as the remodelling of future generations or the surveillance of personal data, have become as everyday as they are vast in their implications. To work out how to move forward, it is worth looking at how we got here.
In the late nineteenth century, like today, society was in upheaval and science was on a roll.
When Samuel Beckett visited the Tell Halaf Museum in Berlin’s Charlottenburg district on 21 December 1936, he had the place to himself. Though King Faisal of Iraq had visited the makeshift museum when it opened six years earlier and the Illustrated London News had run a cover story on the quirky institution, the museum was hardly a popular tourist destination. You had to be in the know. After Beckett rang for the key, he was left alone among colossal lions, scorpion-bird-men, griffins, and sphinxes. “Superbly daemonic, sinister + implacable,” the as yet unknown Irish writer wrote in his diary.
The museum and its contents belonged to Baron Max von Oppenheim (1860–1946). Heir to one of Germany’s wealthiest banking families, Oppenheim had acquired the artworks through self-funded excavations at Tell Halaf (ancient Guzana) in northeastern Syria. He sought to display his sculptures in the Pergamon Museum on Berlin’s prestigious Museum Island, but when negotiations fell through, Oppenheim settled for a disused machine factory on the other side of town.
Was it really as heavy as it felt? I got the scale out from under the bathroom sink. That’s where it lives, tilted on its side, resting in its zeroes. Would my head weigh more than the Collected Works of Anthony Trollope? More than my overfed tuxedo cat? Would my jittery thoughts balance out my mournful ones? Or would my head reveal itself to be largely empty, like the universe which it contains, as I’d often feared and sometimes wished? I realized I would need a mirror. I lay down on the bathroom tile, pillowed the scale under the back of my skull, held the hand-mirror at arm’s length and took a good look at myself, the absurdity of my situation, a grown man lying between toilet and tub wearing the slightly self-mocking anticipatory expression of a person who has decided to weigh his head. The number floated above me as in a thought-bubble and I had my answer: 8.8 lbs., two infinities turned rightside up, the Eightfold Path doubled, the number of years my father lived minus the decimal, and about half as heavy as I’d imagined this thing my spine had evolved to lift into the air and carry above the earth would be.
Late on election night, November 8, 2016, Paul Krugman wrote in the New York Times: “. . . people like me, and probably like most readers of The New York Times, truly didn’t understand the country we live in. We thought that our fellow citizens would not, in the end, vote for a candidate . . . so scary yet ludicrous.” About two and a half years before that night, many liberals in India felt something similar at Narendra Modi’s massive victory—though one should say that Modi is scary but not ludicrous.
The right-wing populist challenge to the liberal order is by no means limited to Donald Trump’s America or Modi’s India. The popular appeal of Britain’s Brexit, France’s Marine Le Pen, Russia’s Vladimir Putin, Hungary’s Viktor Orbán, Poland’s Jarosław Kaczyński, Turkey’s Recep Tayyip Erdoğan, and the Philippines’s Rodrigo Duterte has baffled social thinkers over the last few years. Meanwhile, after a decades-long triumphal march of authoritarian rule and rapid economic growth, China’s increasingly repressive regime seems to be winning all the marbles in the global power game.
Those trying to decipher a pattern in the looming illiberal challenge have often sought an explanation in the inexorable and unconscionable rise of economic inequality.