Jan Mieszkowski at Public Books:
The prospect of a new Kafka biography is like an invitation to a party that is bound to be entertaining but may end badly. Situating Kafka’s writing within the cultural and political landscape of European modernism and the late Austro-Hungarian Empire is a worthy, if daunting, endeavor. Less certain is whether such efforts to contextualize his corpus actually garner insights into it. Kafka’s readers are intrigued by virtually any anecdote about him, but few would allow that the abiding mysteries of his texts will be resolved by learning that he lived in Prague, was the son of a fancy goods merchant, and enjoyed going to the beach. Nor does history provide a reliable key to unlock his works, which have dates but do not date. If they are decidedly not a product of our time, there appears to be little chance of them ever going out of style.
Although Kafka’s importance is incontestable, scholars and casual fans alike fiercely debate every feature of his corpus. Each plot twist or curious turn of phrase calls for clarification, yet customary interpretive practices are seldom up to the task. To read Kafka is to lurch back and forth between the uncannily familiar and the abjectly foreign. To reread a favorite story is to risk seeing any exegetical progress made the first, second, or third time through evaporate. Given these challenges, learning more about Kafka’s life may be a good opportunity to win new perspectives on his writing, but it may also be the furthest thing from it.
S.D. Sykes at Literary Hub:
When did this love for “crumbling Venice” begin, and why has it taken hold with such tenacity? By the time Victorian historian and art critic John Ruskin encountered the city in the 1840s, he thought Venice was so neglected that she might melt into the lagoon “like a lump of sugar in hot tea.” It’s true that Ruskin feared any further deterioration, but what appalled him to an even greater extent was any attempt to modernize the city. He wanted a Venice that was set in aspic, a time-capsule for posterity. And thus the movement to save La Serenissima was born—and what a successful movement this has been—for Venice is probably the finest preserved medieval and Renaissance city in Europe. Yet all this tender loving care has not been without consequence. It could be argued that she’s been stifled, moth-balled—even de-commissioned as a real city. For, while there is a great deal of industry and development about the rim of the Venetian lagoon, it is almost impossible to find a modern building in Venice herself. For the most part, she is the same city that Ruskin visited, kept safe in her watery refuge and forbidden from growing up.
But it’s not just crumbling Venice that we have come to love through art, film and literature. We’re equally, if not more, attached to “decadent Venice”—shameless, lustful, dissipated Venice. We might blame Casanova and Lord Byron and their epic sexual exploits for bestowing this reputation upon the place, but Venice’s status as a city of pleasure goes back much further in history. By the early 17th century, there were estimated to be as many as 20,000 prostitutes in a city that only numbered around 140,000 people. Even for those not seeking to pay for sex, there was plenty of the stuff on offer—adulterous affairs, secret assignations, riotous carnivals and masked balls. Even the city’s nuns sometimes took lovers. One needs only to look at the paintings of Titian, particularly the eroticism of his 1538 painting “Venus of Urbino,” to appreciate that this was a society with a relaxed and permissive attitude to sex.
Jonathan Gibbs at the TLS:
Maggie Nelson’s Bluets, originally published in the US in 2009 and only now appearing in the UK, thanks to Jonathan Cape, joins a small collection of books I seem to have acquired, without really trying, on the subject of the colour blue. Nelson’s book might best be described as an essay in the form of prose-poetic fragments; its tone is set from the first line, which runs: “Suppose I was to begin by saying that I had fallen in love with a color”. What follows are ruminations on Nelson’s relationship with the colour blue; more critical explorations of why this colour might have a power over us that red, for instance, or green don’t have; and brief back-slips into memoir that exhibit the same jagged candour as The Argonauts (2015).
The other books in my micro-collection are William Gass’s On Being Blue, Derek Jarman’s Blue, and Blue Mythologies by Carol Mavor. It may be chance that these particular books have come into my possession (I have no particular interest in the colour, myself) but it is surely not chance that all these books were written about blue, rather than any other colour. Nelson is well aware of the anomaly. “It does not really bother me that half the adults in the Western world also love blue”, she writes, “or that every dozen years or so someone feels compelled to write a book about it.”
Andrea Scrima at The Quarterly Conversation:
What do we expect from literature? Fiction offers writers the chance to formulate uncomfortable ideas, to place words in the mouths of characters whose views are distinct from the author’s own. Written six years after September 11, however, Falling Man still did not address much of the madness that occurred in the aftermath of this epochal event: the self-censorship that characterized the time; the mindless patriotism; the trauma that was fixated exclusively on victimhood, as opposed to the devastating effects of United States policy abroad; the conspiracy theories—the latter being a particularly noteworthy omission, given that in his research for Libra, DeLillo immersed himself in the sea of speculation surrounding the JFK assassination, the last era-defining catastrophe before 9/11. Was it a cop-out to give the strongest critical voice to a foreigner, the vaguely dubious Martin with the socialist past? Oddly, his nationality is not precisely specified, as though Europe were some indistinguishable entity patently hostile to American values and virtues, and therefore decadent, discredited. DeLillo seems to be asking how much we actually want to know about ourselves, and it seems significant in this respect that Falling Man was one of his least loved books. A similar fate befell Susan Sontag, who famously issued an apology for the short essay she read out loud at the American Academy in Berlin on September 13, 2001 and published two days later in the Frankfurter Allgemeine Zeitung.
Finally appearing in The New Yorker on September 24, nearly two weeks after the event, the piece made the comparatively mild and fairly accurate observation that America was attacked for its arrogance and its disastrous international interventions and heaped scorn on “the unanimity of the sanctimonious, reality-concealing rhetoric spouted by American officials and media commentators.” Sontag stood alone in her audacity to state what should have been obvious to everyone, and she was vilified for it. When does it become the writer’s responsibility to put skin in the game, to come out of hiding and state an unequivocal point of view? Are DeLillo’s deflected statements the only way he saw to voice deeply uncomfortable and unpopular ideas, and is this a legitimate literary strategy?
George Yancy and Noam Chomsky in the New York Times:
Over the past few months, as the disturbing prospect of a Trump administration became a disturbing reality, I decided to reach out to Noam Chomsky, the philosopher whose writing, speaking and activism have for more than 50 years provided unparalleled insight and challenges to the American and global political systems. Our conversation, as it appears here, took place as a series of email exchanges over the past two months. Although Professor Chomsky was extremely busy, he graciously provided time for this interview because of our past intellectual exchanges.
George Yancy: Given our “post-truth” political moment and the growing authoritarianism we are witnessing under President Trump, what public role do you think professional philosophy might play in critically addressing this situation?
Noam Chomsky: We have to be a little cautious about not trying to kill a gnat with an atom bomb. The performances are so utterly absurd regarding the “post-truth” moment that the proper response might best be ridicule. For example, Stephen Colbert’s recent comment is apropos: When the Republican legislature of North Carolina responded to a scientific study predicting a threatening rise in sea level by barring state and local agencies from developing regulations or planning documents to address the problem, Colbert responded: “This is a brilliant solution. If your science gives you a result that you don’t like, pass a law saying the result is illegal. Problem solved.”
Kieran Shiach in The Guardian:
A cartoon of a lynched Pakistani man hanging with mutilated genitals and a racial slur on his name tag might seem obviously incendiary, and to put it on the cover of a comic book the epitome of poor decision-making. But Image Comics did just that with the fourth issue of The Divided States of Hysteria, a new comic by industry legend Howard Chaykin – and then undid it a day later. An official apology was quickly released and the cover whisked away from the web. Which leaves the question: who the hell thought it was a good idea? And with so many recent examples of studios having to retract and apologise for their comics, how could such an image have made it all the way to print?

The “how” might be explained by Image’s response – or rather, the stark difference between their account and Chaykin’s. While Image was remorseful – “Image Comics recognises that we could have responded to readers’ concerns about the graphic nature of this cover more quickly and with more empathy and understanding” – Chaykin focused on explaining why his comic was a Good Thing. “For the record, the cover depicts the horrific wish dream of some 45% of their fellow Americans,” he told website FreakSugar. “Perhaps if they spent a bit more time paying attention to the fact that the world they were born into is on the brink of serious disaster, they might have less time to get worked up about an image of genuine horror that depicts an aspect of that impending disaster.”

Chaykin’s comic was – according to its creator – intended to shine a light on the worst parts of our society by turning the dial to the nth degree in a future setting. But although only one issue is out, it isn’t the first controversy The Divided States of Hysteria has stirred up: in June, it made headlines when the first issue, published during Pride month with a special Pride cover, featured a group of men attacking a transgender sex worker.
Image is far from the only publisher to let questionable images go to print. Marvel Comics has run several, including J Scott Campbell’s Invincible Iron Man cover, which depicted teenage Riri Williams in a textbook example of how black women are stereotyped as hypersexualised.
Katharine Walter in Nautilus:
Biologist Eric Verdin considers aging a disease. His research group famously discovered several enzymes, including sirtuins, that play an important role in how our mitochondria—the powerhouses of our cells—age. His studies in mice have shown that the stress caused by calorie restriction activates sirtuins, increasing mitochondrial activity and slowing aging. In other words, in the lab, calorie restriction in mice allows them to live longer. His work has inspired many mitochondrial hacks—diets, supplements, and episodic fasting plans—but there is not yet evidence that these findings translate to humans. Last year, Verdin was appointed President and Chief Executive Officer of the Buck Institute for Research on Aging, the largest independent research institute devoted to aging research. The Buck, founded in 1999 by Marin County philanthropists Leonard and Beryl Hamilton Buck, includes more than 250 researchers working across disciplines to slow aging. Verdin, originally trained as a physician in his native Belgium, is eager to translate findings from the lab work done over the past 20 years in worms and mice to humans. “Aging without illness is our overarching goal!” he wrote when he began at the Buck. In a recent Nautilus interview, Verdin was optimistic about the future. He thinks we’ll continue to live longer and age better. But to live better longer, he says, requires research but also rethinking doctors’ visits.
Can those incredible increases in lifespan continue? Is there an upper limit?
There currently is an upper limit, and the upper limit is probably around 115, 120. Of the roughly 100 billion people who have ever lived, only one has made it through to 122, Jeanne Calment. The second oldest was 119. It does seem there is an upper limit. Some people have shown that in the last hundred years, even though we have progressively increased the average lifespan, the number of people who live above 115 has not increased. That has to tell you that we might be reaching sort of a limit. That’s already a pretty good limit. If we could all live to 110, healthy except for disease in the last five years of life, I think most people would sign up for this.
We're now accumulating data at an incredible rate. I mentioned electron microscopy to study the ribosome—each experiment generates several terabytes of data, which is then massaged, analyzed, and reduced, and finally you get a structure. At least in this data analysis, we believe we know what's happening. We know what the programs are doing, we know what the algorithms are, we know how they come up with the result, and so we feel that intellectually we understand the result. What is now happening in a lot of fields is that you have machine learning, where computers are essentially taught to recognize patterns with deep neural networks. They're formulating rules based on patterns. There are statistical algorithms that allow them to give weights to various things, and eventually they come up with conclusions. When they come up with these conclusions, we have no idea how; we just know the general process. If there's a relationship, we don't understand that relationship in the same way that we would if we came up with it ourselves or came up with it based on an intellectual algorithm. So we're in a situation where we're asking, how do we understand results that come from this analysis? This is going to happen more and more as datasets get bigger, as we have genome-wide studies, population studies, and all sorts of things.
There are so many large-scale problems dependent on large datasets that we're getting more divorced from the data. There's this intermediary doing the analysis for us. To me, that is a change in our way of understanding it. When someone asks how we know, we say that the system analyzed it and came up with these relationships—maybe it means this or maybe it means that. That is philosophically slightly different from the way we've been doing it. The other reason to worry is a cultural reason. The Internet and the World Wide Web have been a tremendous boon to scientists. It's made communication far easier among scientists. It's in many ways leveled the playing field.
Anna Nowogrodzki in Nature:
Aviv Regev likes to work at the edge of what is possible. In 2011, the computational biologist was collaborating with molecular geneticist Joshua Levin to test a handful of methods for sequencing RNA. The scientists were aiming to push the technologies to the brink of failure and see which performed the best. They processed samples with degraded RNA or vanishingly small amounts of the molecule. Eventually, Levin pointed out that they were sequencing less RNA than appears in a single cell. To Regev, that sounded like an opportunity. The cell is the basic unit of life and she had long been looking for ways to explore how complex networks of genes operate in individual cells, how those networks can differ and, ultimately, how diverse cell populations work together. The answers to such questions would reveal, in essence, how complex organisms such as humans are built. “So, we're like, 'OK, time to give it a try',” she says. Regev and Levin, who both work at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, sequenced the RNA of 18 seemingly identical immune cells from mouse bone marrow, and found that some produced starkly different patterns of gene expression from the rest. They were acting like two different cell subtypes. That made Regev want to push even further: to use single-cell sequencing to understand how many different cell types there are in the human body, where they reside and what they do. Her lab has gone from looking at 18 cells at a time to sequencing RNA from hundreds of thousands — and combining single-cell analyses with genome editing to see what happens when key regulatory genes are shut down.
The results are already widening the spectrum of known cell types — identifying, for example, two new forms of retinal neuron — and Regev is eager to find more. In late 2016, she helped to launch the International Human Cell Atlas, an ambitious effort to classify and map all of the estimated 37 trillion cells in the human body (see 'To build an atlas'). It is part of a growing interest in characterizing individual cells in many different ways, says Mathias Uhlén, a microbiologist at the Royal Institute of Technology in Stockholm: “I actually think it's one of the most important life-science projects in history, probably more important than the human genome.”
Adam Tooze over at his website:
As climate changes and temperatures rise, who will hurt? At least since the 1980s and Ulrich Beck’s pathbreaking work on Risk Society, the question of the social stratification of risks has been posed.
At a global level it has long been obvious that some of the poorest nations will suffer most from climate change and that the US is amongst the least impacted countries. But does this finding hold across the US? Remarkably, a new study by the Climate Impact Lab (UC Berkeley, Rutgers, University of Chicago, and Rhodium Group, along with their research partners at Princeton University and RMS) is the first to attempt to assess the effects across the US at the county level.
The results were published in Science and were reported in both the FT and the NYT.
The results are pretty eye-opening. Assuming a business-as-usual emissions scenario and no major breakthroughs in mitigation, every 1°C increase in global temperatures costs the US economy about 1.2 per cent of gross domestic product. But these costs are very unevenly distributed. The impact on the Southern parts of the US by 2100 is predicted to be very severe indeed.
With the impact concentrated in the South, this also means that the costs will fall disproportionately on the poorest counties of the US.
Jarret Middleton at the Quarterly Conversation:
While many at the time made calls to politicize academia, the Frankfurt School set out to academize politics. This was an abstract move at a moment when real political struggle was occurring all around them, when rank-and-file unions and revolutionary parties were fighting in the streets of many countries around the world, struggles in which revolutionaries paid a heavy price, from surviving the repression of fascist regimes to facing torture, prison, and death, all for the ultimate cause of human freedom. The School’s relentless critique prompted criticism not only from adversaries but from perceived allies, ranging from German communists to Bertolt Brecht to the Hungarian Marxist philosopher Gyorgy Lukacs, who coined the term “Grand Hotel Abyss” to describe the School’s precarious position, perched “on the edge of an abyss, of nothingness, of absurdity.” Lukacs conceived of the Frankfurt School’s project as a theory so devoid of practice that its members were in danger of permanently isolating themselves and the fruits of their intellectual labor, so much so that their position could be perceived as anti-revolutionary, one of orthodox Marxism’s greatest sins.
Taking a closer look at their original mission, the scholars of the Frankfurt School concluded that the communist revolution failed among Germany’s working class because the country had retained a healthy amount of the conservative social mores that had been established with the rise of the petit-bourgeoisie, even as their economic prowess began to decline after World War I and the era of hyperinflation. The resulting devastation of Germany’s economy, and the humiliation of so much of the formerly middle- and working classes, created a brand of reactionary populism and nationalist fervor that fueled the rise of Nazism.
Nikil Saval at n+1:
The taxi system was and is an exploitative one, in which drivers were often classified as independent contractors. But ride-sharing is incalculably more exploitative. In regulated markets, taxi companies are at least required to maintain, acquire, and insure all the cars in a taxi fleet. Ride-sharing companies are not. This means, for example, as Quartz reported recently, that Uber can force its drivers into “deep subprime” loans to acquire their vehicles, leaving them drowning in debt. In addition to undermining every possible regulation to screw their drivers more, Uber claimed as late as 2015 that drivers could earn $90,000 working for them. In a landmark piece for the Philadelphia City Paper, reporter Emily Guendelsberger worked as an UberX driver and discovered the truth. “If I worked 10 hours a day, six days a week with one week off, I’d net almost $30,000 a year before taxes,” she wrote. “But if I wanted to net that $90,000 a year figure that so many passengers asked about, I would only have to work, let’s see . . . 27 hours a day, 365 days a year.” The jobs created by ride-sharing are emblematically crappy, part-time, and contingent. In fact, under the loophole in labor law that ride-sharing companies exploit, they’re not even “jobs” so much as gigs; the drivers are independent contractors who just happen to use the ride-sharing app.
But lying and rule-breaking to gain a monopoly are old news in liberal capitalism. What ride-sharing companies had to do, in the old spirit of Standard Oil, was secure a foothold in politics, and subject politics to the will of “the consumer.” In a telling example of our times, Uber hired former Obama campaign head David Plouffe to work the political angles. And Plouffe has succeeded wildly, since—as Washingtonians and New Yorkers are experiencing with their subways—municipal and state liberals are only nominally committed to the standards that regulate transport. Never mind that traffic is something that cities need to control, and that transportation should be a public good. Ride-sharing companies—which explode traffic and undermine public transportation—can trim the balance sheets of cities by privatizing both.
Tim Parks at the London Review of Books:
Histories of the Risorgimento find it difficult to present Garibaldi without a patina of condescension. The modern intellectual’s suspicion of the folk hero – pursued by drooling ladies of the British aristocracy, believed by Sicilian peasants to have been sent by God – is everywhere evident. In his otherwise excellent biography of 1958, Denis Mack Smith frequently referred to Garibaldi as ‘simplistic’ and ‘ingenuous’, made fun of his habit of wearing a poncho, and saw his decision to set up home on the barren island of Caprera as merely idiosyncratic. Pick takes a similar position. His Garibaldi has huge personal charisma and is a brilliant military adventurer (though almost no space is given to reminding the reader quite how brilliant), but he is also ingenuous, gullible when it comes to dealing with money and endearingly ignorant of the ways of the world. In short, he is the genius simpleton.
Pick continues a tradition that began with Garibaldi’s contemporaries and is still alive in Italy today, whereby he is to be exalted as a national hero and simultaneously never mentioned in serious public debate (Italian schoolchildren are kept well away from his incendiary, anti-clerical memoirs). So at one point, having noted Garibaldi’s lack of appetite for official honours and his tendency to live in a single, bare room even when a palace was at his disposal, Pick continues: ‘Yet he was an appealingly inconsistent ascetic, with his own touching foibles and predilections for the good things in life, and for display: thus he would occasionally don a rather gaudy embroidered cap.’
John Gray at the New Statesman:
“The Second World War was not just another event – it changed everything.” Even more than the Great War of 1914-18, Keith Lowe argues, the Second World War altered human experience fundamentally. In one way or another it affected more human beings than any other violent conflict in history. Over a hundred million men and women were mobilised, and yet the number of civilians killed was greater than the number of soldiers by tens of millions. Four times as many people were killed as during the First World War. But the effects ranged far beyond the numbers of dead. For everyone who died, dozens of others found their lives changed irrevocably. Whether as refugees and exiles in the great displacement of people that followed the war, or else as factory workers, slave labourers or targets for the protagonists in the conflict, uncountable human beings were caught up in the devastation wreaked by this unprecedented upheaval.
Terrible as it was, the impact of the war was not entirely negative. In much of the world the postwar era was energised by an idea of freedom and a feeling of hope. The generation of leaders that emerged was old enough to remember the Great Depression, and determined that nothing like it would happen again. Ideas of social reconstruction through government planning were applied on a large scale, producing welfare states and managed economies in which living standards were improved for much of the population. The global scale of the conflict produced new international institutions, such as the United Nations, in which the nations of the world could co-operate on free and equal terms. In Africa and Asia, the end of the war gave anti-colonial movements increased ambition and vitality. Scientists were gripped by dreams of using the technologies that the war had spawned to enhance human life everywhere.