No one knows whether it was really in the state prison, whose ruins are visible today outside the ancient Agora of Athens, that Socrates was kept during the final days before his execution; the area has been destroyed and reconstructed so many times. Walking past it sends a chill down my spine. Ancient Greece is visceral and vivid because it entered my imagination early in life; some of the most cherished tales of my childhood came from the crossovers of Hellenistic history and legend, such as the one in which Sikander (Alexander the Great) is accompanied by the Quranic saint Khizr in pursuit of “aab e hayat,” the elixir of immortality, or the one about the elephantry in the battle between Sikander and the Indian king Porus, or the loss of Sikander’s beloved horse Bucephalus on a riverbank not far from Lahore, the city where I was born. I became familiar with ancient Greece through classical Urdu poetry and lore as well as through my study of English literature in Pakistan, but I would read the Greek philosophers in depth many years later, as a student at Reed College. I would subsequently discover the Greek influence on scholars of the golden age of Muslim civilization while working on a book on al-Andalus, and with it the overlooked but key contribution of Arabic, which served as a link between Greek and Latin and its later offshoots, languages that came to define the cultural and intellectual history of Europe.
Visiting the Agora in the sweltering heat of July, I am amazed by how comfortably these ruins from over two thousand years ago are nestled among modern landscaping, park benches, and pavements, how familiar the patchy, intensely green grass is, the deep, somnolent shade of oaks. The ancient is home once again, brought down to a child’s scale, at once snug and phantasmagoric, historic and pulsating with new life.
1. “…And I, who timidly hate life, fascinatedly fear death.” Fernando Pessoa, The Book of Disquiet.
2. I didn’t ask to be born, yet I’ve been condemned to death. Early in Plato’s Phaedo, Socrates declares that it is the job of the philosopher to prepare for death. Addressing Simmias and Cebes, he states, “I am afraid that other people do not realize that the one aim of those who practice philosophy in the proper manner is to practice for dying and death.” There is something vulgarly sunny in the way he suggests it, but it also rings true. What philosophers talk about when they talk about ‘the good life’ is a life not regretted when it comes time to exit it. Theirs is the pursuit of truth and understanding, and our own mortality could not be a more pressing topic to pursue.
3. Quite a few artists are engaged in a similar investigation. Although theirs is not as pure as the philosophers’, the good ones have a strong tendency to make work that both eviscerates vanity and frames mortality. By its very nature, the subject becomes the object. The musicians, artists, and poets who have examined the finitude of their own lives are legion: from Bob Dylan to Future, from John Donne to Amiri Baraka, from Cady Noland to Andy Warhol. It comes as no surprise that one’s own death is an attractive topic; after all, it’s something that every person either has done or will do. Death is more common than emotion: not everyone is capable of, say, love, but everyone must die. But what artists try to do with their work is antithetical to what Socrates was defining: artists are trying to cheat death. I’ll say more about this further down.
Lab-grown beef may very well be the path forward. In 2008, it was estimated that just half a pound of lab-grown beef would cost $1 million to produce. Then, on August 5, 2013, the first lab-grown hamburger was eaten; it cost $325,000 and took two years to make. Just two years later, the same amount of lab-grown beef cost about $11 to make.
Lab-grown beef checks almost all of the boxes: it doesn’t require animal cruelty, and a study in Environmental Science & Technology showed that it could cut the emissions of conventionally produced meat by up to 96 percent and reduce the land required for meat production by 99 percent. In the U.S., where cow pastures take up 35 percent of available land — about 654 million acres — this could be huge. Imagine having 647 million acres for development, housing, national parks, anything at all!
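The land-use claim is simple arithmetic, and a quick check using the article's own figures (654 million acres of pasture, a 99 percent reduction) reproduces the 647 million number:

```python
# Sanity check of the land-use figures quoted above.
# Inputs are the article's numbers: 654 million acres of U.S. pasture,
# and a 99 percent reduction in land needed for meat production.
pasture_acres = 654e6
reduction = 0.99

freed_acres = pasture_acres * reduction
print(f"Freed land: {freed_acres / 1e6:.0f} million acres")
# prints "Freed land: 647 million acres"
```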
But does lab-grown beef pass the most crucial test? Does it taste like an honest-to-goodness hamburger?
Attention conservation notice: long (nearly 5,000 words long) essay on the economic power of ideas. To its credit, the questions discussed are plausibly important. To its detriment, the arguments are less arguments than gestures, and the structure is decidedly baggy.
For the last couple of weeks, I’ve been wanting to write a response to Aaron Major’s (paywalled) article on ideas and economic power for Catalyst. Now there’s a second piece by Jeremy Adelman in Aeon on Thomas Piketty and Adam Tooze. I think they’re both wrong, but in different ways. Major’s piece suggests that economic ideas don’t really matter very much – it’s the economic base, not the superstructure, that’s doing the work. Adelman, in contrast, thinks that ideas are super important – he just thinks that Piketty and Tooze have ones that are leading us in the wrong direction.
These arguments come from radically different places, but they have one thing in common: they both substantially underestimate the role that ideas have played in getting us to where we are on the left, and what they’re likely to do for us in the near future.
The main political beneficiaries of the social and economic fractures wrought by globalization and technological change, it is fair to say, have so far been right-wing populists. Politicians like Donald Trump in the United States, Viktor Orbán in Hungary, and Jair Bolsonaro in Brazil have ridden to power by capitalizing on the growing animus against established political elites and exploiting latent nativist sentiment.
The left and progressive groups have been largely missing in action. The left’s relative weakness partly reflects the decline of unions and organized labor groups, which have historically formed the backbone of leftist and socialist movements. But ideological abdication has also played an important role. As parties of the left became more dependent on educated elites instead of the working class, their policy ideas aligned more closely with financial and corporate interests.
The remedies on offer from mainstream leftist parties remained correspondingly limited: more spending on education, improved social-welfare policies, a bit more progressivity in taxation, and little else. The left’s program was more about sugarcoating the prevailing system than addressing the fundamental sources of economic, social, and political inequities.
For around a decade, people who think critically about the media have worried about filter bubbles—algorithmic or social structures of information flow that help us see only the news that we want to see. Filter bubbles make it easy to ignore information that could change our views. But the Covington story is an example of a different problem. It’s a story that’s disproportionately talked about and hard to avoid. It’s relatively inconsequential, but also inescapable. There is no bubble strong enough to keep it out.
The Covington saga isn’t fake news, strictly speaking. The events on the Mall really happened; what’s more, the surrounding story raises many questions of broad, genuine interest. How much should we hold teen-agers accountable for their political views? Would a group of nonwhite demonstrators have been permitted to behave as the Covington boys did? What is the moral status of Catholicism, and of socially conservative religious institutions generally? (What if the boys had been students at a Jewish or Muslim school?) How reactive should journalists be? These subjects are interesting to debate, as are the reputations of Sandmann and Phillips. All of this lends the Covington video a kind of moral momentum. As more people weigh in, the momentum builds.
It would be wrong, however, to take the moral interest of the Covington video at face value.
When Lake Xochimilco near Mexico City was still part of the larger lake system that included Lake Texcoco, and the Aztecs founded their island capital of Tenochtitlan in 1325, a large aquatic salamander thrived in the surrounding waters. The axolotl has deep roots in Aztec religion: the god Xolotl, for whom the animal is named, was believed to have transformed into an axolotl, though that didn’t stop the Aztecs from enjoying a roasted axolotl from time to time. The custom of eating axolotl continues to this day, although the species has become critically endangered in the wild. Saving the salamander that Nature called “biology’s beloved amphibian” takes on a special significance given the animal’s remarkable traits. Axolotls are neotenic, meaning they generally do not fully mature like other species of salamander, instead retaining their gills and living out their lives underwater as a kind of juvenile. On rare occasions, or when stimulated in the lab, an axolotl will go through metamorphosis and develop lungs to replace its gills.
Accompanying these unique traits is a remarkably complex genome, with 32 billion base pairs compared to about 3 billion base pairs in human DNA. The axolotl has the largest genome ever fully sequenced, first completed last year by a team of European scientists. The University of Kentucky, which heads axolotl research in the United States, today announced that researchers have added the sequencing of whole chromosomes to the European effort—“about a thousand-fold increase in the length of assembled pieces,” according to Jeremiah Smith, an associate biology professor at the University of Kentucky. Scientists hope to use this new data to harness some of the axolotl’s unique abilities.
Every so often a book comes along and changes the way you see a classic of literature. The Diary of Virginia Woolf, published between 1977 and 1984, came out decades after Woolf’s death in 1941, and added a stunning lens through which to view her long and dynamic career. Her husband Leonard had carefully edited a volume initially in 1953, one that focused entirely on Woolf’s writing process and avoided personal details, but it was only when Woolf’s diaries were released in their totality that readers gained a precious glimpse inside a complicated mind at work.
They revealed a Woolf unexpectedly playful and at times mundane: “So now I have assembled my facts,” she wrote on August 22, 1922, “to which I now add my spending 10/6 on photographs, which we developed in my dress cupboard last night; & they are all failures. Compliments, clothes, building, photography—it is for these reasons that I cannot write Mrs Dalloway.” They also revealed a Woolf at times vicious and shitty: her cattiness, her casual racism. Ruth Gruber, who wrote the first PhD dissertation on Woolf, had a short, pleasant correspondence with her in the 1930s, only to discover, when the diaries were later published, that Woolf had referred to her dismissively as a “German Jewess” (Gruber was born in Brooklyn). As Gruber would write of the experience, “Diaries can rip the masks from their creators.”
Unlike many writers’ diaries, The Diary of Virginia Woolf has become more than just a gloss on her novels; it is a work of literature in and of itself, a powerful and startling look into the inner life of a woman writer during a dramatic time. “I will not be ‘famous,’ ‘great,’” she wrote in 1933. “I will go on adventuring, changing, opening my mind and my eyes, refusing to be stamped and stereotyped. The thing is to free one’s self: to let it find its dimensions, not be impeded.” Woolf began keeping a diary in 1897, when she was just 14 years old; she would continue, on and off, for the rest of her life, writing the final entry four days before her death in March 1941. In total, she wrote over 770,000 words in her diaries alone.
Vienna in the 1920s was an exciting place. Politically, it was the time of Red Vienna, when the municipal government experimented with radical democratic reforms in housing, healthcare, education and workers’ rights. There was optimism in the air, despite postwar hyperinflation and rising conservatism. It was also an exciting time intellectually, for one of the most influential movements in the history of philosophy was in full swing: the Vienna Circle.
They were a group of philosophers, mathematicians and physicists who gathered around the German philosopher Moritz Schlick, and included luminaries such as Rudolf Carnap, Otto Neurath and Herbert Feigl. The Circle put forward an ambitious programme that would construct all knowledge out of an objective foundation of observation and deductive logic. Their ‘verifiability principle’ asserted that a meaningful sentence had to be reducible, via truth-preserving logic, to a basic language of observation statements. Metaphysics, ethics, religion and aesthetics were either to be revised so as to be stated in this scientific language, or else declared meaningless – mere nonsense. These new scientific philosophers were socially progressive, at home in Red Vienna, and they saw themselves as intellectually progressive as well. Unfortunately, others all too readily concurred, such as the fascist student who gunned down Schlick on the steps of the University of Vienna in 1936.
Theirs was an idea whose time had come. A similar group was developing in Berlin, with Hans Reichenbach and Carl Hempel as its most prominent members. In Cambridge, Bertrand Russell had also been arguing that philosophy must proceed by a logical analysis that bottoms out in simple, metaphysically fundamental existents in the world. But it was Russell’s Viennese student Ludwig Wittgenstein who most intrigued the Circle with his first book, written mostly during the First World War on the perilous Eastern and Italian fronts, where he was ultimately taken as a prisoner of war.
At first glance, the hagfish—a sinuous, tubular animal with pink-grey skin and a paddle-shaped tail—looks very much like an eel. Naturalists can tell the two apart because hagfish, unlike other fish, lack backbones (and, also, jaws). For everyone else, there’s an even easier method. “Look at the hand holding the fish,” the marine biologist Andrew Thaler once noted. “Is it completely covered in slime? Then, it’s a hagfish.”
Hagfish produce slime the way humans produce opinions—readily, swiftly, defensively, and prodigiously. They slime when attacked or simply when stressed. On July 14, 2017, a truck full of hagfish overturned on an Oregon highway. The animals were destined for South Korea, where they are eaten as a delicacy, but instead they were strewn across a stretch of Highway 101, covering the road (and at least one unfortunate car) in slime.
Typically, a hagfish will release less than a teaspoon of gunk from the 100 or so slime glands that line its flanks. And in less than half a second, that little amount will expand by 10,000 times—enough to fill a sizable bucket.
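To put that expansion in concrete terms, here is a rough calculation, assuming a teaspoon is about 5 millilitres (the released volume is "less than a teaspoon," so this is an upper bound):

```python
# Rough upper bound on the expanded slime volume described above.
# Assumption: one teaspoon is about 5 millilitres.
teaspoon_ml = 5
expansion_factor = 10_000

final_litres = teaspoon_ml * expansion_factor / 1000  # millilitres -> litres
print(f"Up to {final_litres:.0f} litres of slime")
# prints "Up to 50 litres of slime"
```

Fifty litres is indeed bucket-filling territory, which matches the description above.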
But this failure is based on a strict interpretation of what it is to be rational – obeying the laws of logic and probability. That standard says nothing about the machine that must weigh up the evidence and reach a decision. In our case, that machine is the human brain – and like any physical system, it has its limits.
Although our decision making falls short of the standards required by logic and mathematics, there is still a role for rationality in understanding human cognition. The psychologist Gerd Gigerenzer has shown that while many of the heuristics we use may not be perfect, they are both useful and efficient.
But a recent approach called computational rationality goes a step further, borrowing an idea from artificial intelligence. It suggests that a system with limited abilities can still take an optimal course of action.
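As an illustrative sketch (not from the article, and with entirely hypothetical data), a satisficing rule in the spirit of the fast-and-frugal heuristics Gigerenzer studies shows how a resource-limited agent can trade a slightly worse choice for far less computation:

```python
def satisfice(options, threshold):
    """Return the first option whose value clears the threshold,
    plus how many options had to be evaluated to find it."""
    for checked, value in enumerate(options, start=1):
        if value >= threshold:
            return value, checked
    # If nothing clears the bar, fall back to an exhaustive search.
    return max(options), len(options)

values = [3, 5, 9, 4, 8, 10]
choice, cost = satisfice(values, threshold=8)
print(choice, cost)  # prints "9 3": good enough, after evaluating only 3 of 6 options
```

An exhaustive search would find the 10, but only by evaluating every option; once the cost of computation itself is counted, stopping early at a good-enough answer can be the optimal course of action for a limited system, which is the point computational rationality formalizes.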