Where in the pantheon of American commercial titans does Jeffrey Bezos belong? Andrew Carnegie’s hearths forged the steel that became the skeleton of the railroad and the city. John D. Rockefeller refined 90 percent of American oil, which supplied the pre-electric nation with light. Bill Gates created a program that was considered a prerequisite for turning on a computer. At 55, Bezos has never dominated a major market as thoroughly as any of these forebears, and while he is presently the richest man on the planet, he has less wealth than Gates did at his zenith. Yet Rockefeller largely contented himself with oil wells, pump stations, and railcars; Gates’s fortune depended on an operating system. The scope of the empire the founder and CEO of Amazon has built is wider. Indeed, it is without precedent in the long history of American capitalism.
Today, Bezos controls nearly 40 percent of all e-commerce in the United States. More product searches are conducted on Amazon than on Google, which has allowed Bezos to build an advertising business as valuable as the entirety of IBM. One estimate has Amazon Web Services controlling almost half of the cloud-computing industry—institutions as varied as General Electric, Unilever, and even the CIA rely on its servers. Forty-two percent of paper book sales and a third of the market for streaming video are controlled by the company; Twitch, its video platform popular among gamers, attracts 15 million users a day. Add The Washington Post to this portfolio and Bezos is, at a minimum, a rival to the likes of Disney’s Bob Iger or the suits at AT&T, and arguably the most powerful man in American culture.
I sat at my kindergarten desk,
Surrounded by others,
Either cheerful
Or bored, who were
Cutting
The requisite circles
With ease,
Or slicing down
Straight, penciled lines
As the teacher directed.

I did my dutiful best,
But the scissors
Hurt my fingers
In a minor,
Distracting way,
And I was too young
To realize the handle
Was biased
For a right-hand child,
So all I could do
Was cut in clumsy zigzags
And feel like a fool.

Staring hard at the blades,
I tried to will them
To obey,
Who couldn’t conceive
I was being freed
That day
By those little silver wings
Of a bird
Intent on the erratic,
Authentic pattern
Of its own flight
Through a sky of colored paper.

by Gregory Orr
from Narrative Magazine
Terry Eagleton’s Humour and Peter Timms’ Silliness: A Serious History are two recent additions to the patchy field of humour studies. Both authors are hemmed into the Anglocentric comedy canon that sees absurdist comedy peaking with The Goon Show, Pete and Dud, and Monty Python, and going downhill ever since. They’re also both in their seventies. This puts you in the weird position of feeling unreasonable for expecting them to be up-to-date on their subject. But neither would you want their lukewarm take on, say, ‘meme commentator’ @gayvapeshark or the HBO series Los Espookys.
To be fair, cutting-edge relevancy isn’t Eagleton or Timms’ priority. Humour is a critical survey not of humour but of its theory – like a literature review for a PhD, albeit more reader-friendly. Its most interesting aspect is its Marxist subtext (Eagleton is famously a comrade), which occasionally breaks ground but never erupts into a full-blown theory of how humour can serve Leftist aims.
First, a warning: this is a life-changing book and will alter your relationship to food for ever. I can’t imagine anyone reading Safran Foer’s lucid, heartfelt, deeply compassionate prose and then reaching blithely for a cheeseburger. There’s some dispute as to precisely what proportion of global heating is directly related to the rearing of animals for food, but even the lowest estimates put it on a par with the entire global transportation industry. A well-evidenced 2009 report by the Worldwatch Institute claimed that livestock-related emissions accounted for 51% of all greenhouse gases, “more than all cars, planes, buildings, industry and power plants combined”. Whatever the case, Safran Foer’s thesis is clear and compelling: by making “a collective act to eat differently” (he suggests “no animal products before dinner”) we can turn the tide of the climate crisis.
JR: What accounts for this failure to engage with Savarkar?
VC: In many ways, both the Left and the Right treat him as a non-human subject. The Left wants to simply denounce him and see him as the political enemy, but not actually engage with his ideas. To talk about him or read him is somehow an indication that he is a human, that he is worthy of some kind of engagement. And when it comes to his supporters, it seems they are only interested in hagiographies, in elevating him to the status of an almost deity-like figure, but without actually reading what he is saying.
The more I started looking at his writings, what became apparent to me is that Savarkar had a much better understanding of cultural hegemony than the Left did. It is no longer possible to simply ignore him, or to say that he was not an interesting figure or was only a “derivative thinker.” Ignoring Savarkar has not helped us to combat the kind of ideas that now perpetuate Hindutva in India, or globally, today.
Each story begins in the mid-20th century, when the New Deal created a new need for economists. The New Deal inflated the size of the federal government, and politicians turned to economists to make sense of their new, complicated initiatives and help rationalize their policies to constituents. Even Milton Friedman, the dark apostle of market fundamentalism, admitted that “ironically, the New Deal was a lifesaver.” Without it, he said, he might never have been employed as an economist. From the mid-1950s to the late 1970s, the number of economists in the federal government swelled from about 2,000 to 6,000.
The New Deal also gave rise to cost-benefit analysis. Large projects, like dam building or rural electrification, needed to be budgeted and constrained. In 1939, Cambridge economist Nicholas Kaldor asserted that the political problem with cost-benefit analysis—that someone always loses out—wasn’t a problem. This was because the government could theoretically redirect a little money from the winners to the losers, to even things out: for example, if a policy caused corn consumption to drop, the government could redirect the savings to aggrieved farmers. However, Kaldor didn’t provide any reason why the government would actually rebalance the scale, only that it could. What is now called the Kaldor-Hicks principle “is a theory,” Appelbaum says, “to gladden the hearts of winners: it is less clear that losers will be comforted by the possession of theoretical benefits.” The principle remains the theoretical core of cost-benefit analysis, Appelbaum says. It’s an approach that sweeps the political problems of any policy—what to do about the losers—under the rug.
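The logic of the Kaldor-Hicks test is simple enough to sketch in a few lines of code. The figures below are invented for illustration (they do not come from Appelbaum's book, and the function name is my own); the point is only that the test compares aggregate gains against aggregate losses and says nothing about whether compensation ever happens.

```python
def kaldor_hicks_passes(gains, losses):
    """A policy passes the Kaldor-Hicks test if the winners' total
    gains exceed the losers' total losses, so the winners could, in
    principle, fully compensate the losers and still come out ahead.
    Whether they actually do is outside the test entirely."""
    return sum(gains) > sum(losses)

# A hypothetical policy that lowers corn prices:
# consumers (winners) gain, farmers (losers) lose.
consumer_gains = [40, 25, 35]   # gains to three groups of consumers
farmer_losses = [30, 20]        # losses to two groups of farmers

print(kaldor_hicks_passes(consumer_gains, farmer_losses))  # True: 100 > 50
```

Note what the function does not compute: any transfer from winners to losers. That omission is precisely Appelbaum's complaint, as quoted above.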
First formulated by Barbara Ehrenreich and John Ehrenreich in a pair of essays in the journal Radical America in the late 1970s, the idea of the “professional-managerial class” was originally part of an attempted materialist explanation of the political stability of American capitalism in the 20th century, and in particular the failure of the New Left to overthrow it. While industrial capitalism had liquidated the 19th-century middle class, much as Marx had predicted, society had not subsequently polarized into two hostile camps. Instead, the “monopoly capitalism” that evolved in the 20th century—the bureaucratic, administered, managerial system that replaced the entrepreneurial chaos of Victorian laissez-faire—had thrown up a new middle class, whose purpose was to supervise the accumulation process and keep the unruly proletariat in line: researchers and engineers to transform the production process; teachers, doctors, nurses, and managers to sculpt, maintain, and control the workforce; cultural workers to produce commercialized mass entertainment and ideology, displacing the pathologized pleasures of the ghetto; social workers and lawyers to deal with the ensuing social problems when people deviated from this disciplinary grid.
In the early years of the 20th century, the professions emerged in their modern forms, establishing uniform standards of practice and conduct in all these fields. The new professionals were in general politically progressive, seeing their purpose as the renovation of American democracy and the modernization of conditions of work and life, in keeping with the momentous social and technological changes that had remade the world.
Environmental history lacks an overarching, consensus narrative for the last two centuries, and the environmental movement still does not have a plan for what to do when things get rough. Two recent books—Simon Pirani’s Burning Up and Holly Jean Buck’s After Geoengineering—hint, though, that the movement is at last starting to offer strategic thinking commensurate with the crisis at hand. They reveal how the environmental movement must thoroughly understand neoliberalism to avoid underestimating it as an adversary—or, worse, falling for its charms.
As a researcher at the Oxford Institute for Energy Studies, Pirani might sound like yet another energy analyst, but what sets him apart is his approach, for there aren’t many dyed-in-the-wool Marxists in this line of work. A former member of the Trotskyist Workers Revolutionary Party, Pirani has traded on his close acquaintance with Russia to have a second career studying its methane industry. He has also worked as a journalist and penned books on the Russian revolution and contemporary politics during the Putin era. Burning Up represents the convergence of his parallel professions: it is a history of fossil fuels couched in a Marxist armature. To explain his aim for the book, he quotes the economic historian Adam Tooze, who in 2016 called for “a history that shows how consumption and production became tied together in an expanding feedback loop of ever greater economic and material scope.” Pirani hopes “this book is a step on that path,” but he is too modest. He has written an ambitious history of fossil fuels.
Consciousness of time passing seldom accords with what clocks and calendars tell us. The discord is especially acute in these days of Trump-induced, ever changing “breaking news.” Thus it seems to me, and I suspect to every other sentient being paying attention, that it was centuries ago that Donald Trump was still making an effort not to flaunt his ignorance and mindlessness. As far as the physics goes, it hasn’t been quite three years. It seems like centuries ago too when there were still “adults in the room,” trying, without much success, to keep Trump from acting out too egregiously or doing anything too transparently stupid. According to the calendar, it has not been much more than a year since most of that stable cleared out. Among those adults, there was a retired Marine Corps general called “Mad Dog,” an Exxon-Mobil honcho named “Rex,” and H. R. McMaster, a retired Army lieutenant general. They were Trump’s Defense Secretary, Secretary of State, and National Security Advisor, respectively.
Trump has a thing for fossil fuel industry executives, the richer the better, and for generals. But, even for them, any and all lapses from abject servility, and any sign of disrespect, are a sure way to get fired by tweet. I would guess that Trump picked up his fondness for generals at the military school where his parents sent him to get his act together. Back in the day, that is what rich parents with troubled kids would do. Or maybe it came along with his bone spurs.
Whatever the explanation, those were the Trump administration’s salad days.
Nowadays, his administration is an unadorned kakistocracy, a government of the worst, least qualified and most unscrupulous persons around. From the moment that, thanks to the Electoral College, a slight plurality of voters in a few states (and a minority overall) set Trump loose upon the world, I was of the view that only cholesterol would save us. My hope was that a beneficent cheeseburger would be the Donald’s undoing.
Borko Amulic and Gabriel Sollberger in The Scientist:
In the early 2000s, Arturo Zychlinsky at the Max Planck Institute for Infection Biology in Berlin found that mammalian immune cells called neutrophils use an enzyme called neutrophil elastase (NE) to cleave bacterial virulence factors. When Zychlinsky and his colleagues delved deeper into this defense mechanism, they realized that when activated by bacteria, human neutrophils release NE in what, under the microscope, looked like a fibrous structure. This structure turned out to be a meshwork of NE, other proteins, and copious amounts of DNA. In cultured human neutrophils, the webs were able to trap the bacteria that had triggered their formation, thereby limiting infection, so Zychlinsky and colleagues dubbed them neutrophil extracellular traps, or NETs.
The fact that neutrophils used their nuclear material to catch pathogens was intriguing to immunologists and cell biologists alike. The work of the Zychlinsky lab suggested that the release of NETs was an active process, and that the material wasn’t simply released by passive lysis. This has motivated a new line of research devoted to characterizing these unique structures, delineating the mechanisms that prompt their formation, and identifying their relevance in mammalian biology. As more and more researchers join the burgeoning field, the spectrum of pathogens known to induce NET release from neutrophils has expanded from a variety of bacteria to fungi and, most recently, to viruses. However, it has also become clear that NETs can have negative consequences for the organisms that produce them—by activating autoimmune pathways or encouraging tumor cells to metastasize, for example.
In 2017, scientists at Carnegie Mellon University shocked the gaming world when they programmed a computer to beat experts in a poker game called no-limit hold ’em. People assumed a poker player’s intuition and creative thinking would give him or her the competitive edge. Yet by playing 24 trillion hands of poker every second for two months, the computer “taught” itself an unbeatable strategy.
Many people fear such events. It’s not just the potential job losses. If artificial intelligence (AI) can do everything better than a human being can, then human endeavor is pointless and human beings are valueless.
Computers long ago surpassed humans in certain skills—for example, in the ability to calculate and catalog. Yet they have traditionally been unable to reproduce people’s creative, imaginative, emotional, and intuitive skills. It is why personalized service workers such as coaches and physicians enjoy some of the sweetest sinecures in the economy. Their humanity, meaning their ability to individualize services and connect with others, which computers lack, adds value. Yet not only does AI win at cards now, it also creates art, writes poetry, and performs psychotherapy. Even lovemaking is at risk, as artificially intelligent robots stand poised to enter the market and provide sexual services and romantic intimacy.
Lord Byron, according to his dumped mistress Lady Caroline Lamb, was “mad, bad and dangerous to know”. Antony Peattie’s exploration of his personal caprices and intellectual quirks definitively strikes down all three charges. Byron the self-aware ironist was never demented; he may have relished his reputation for vice, but his pagan promiscuity was overshadowed by the legacy of his punitive Calvinist upbringing; and it would surely have been a delight, not a danger, to know this convivial fellow, whose eyes, as Coleridge said, were “the open portals of the sun” and his teeth “so many stationary smiles”.
Peattie’s biography starts with an anecdote about Byron’s teenage years that encapsulates his slippery psychological complexity. On an evening of amateur theatricals, he performed first in a sulphurous melodrama, then in a comedy of manners. In one play, he was a misanthrope branded with the mark of Cain, in the other a frivolous dandy. “Everything by turns and nothing long”, as he said, he found both the outcast apostate and the man of mode inside himself. Or were they simply masks Byron wore and then discarded?
The four impossible “problems of antiquity”—trisecting an angle, doubling the cube, constructing every regular polygon, and squaring the circle—are catnip for mathematical cranks. Every mathematician who has email has received letters from crackpots claiming to have solved these problems. They are so elementary to state that nonmathematicians are unable to resist. Unfortunately, some think they have succeeded—and refuse to listen to arguments that they are wrong.
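The impossibility the review alludes to is not a matter of opinion but a theorem; the standard argument (due essentially to Wantzel, 1837, and not quoted in the review itself) can be sketched for the cube-doubling problem in a few lines. Every length constructible by compass and straightedge is reached through a chain of quadratic steps, so its degree over the rationals must be a power of two:

```latex
% Doubling the unit cube means constructing its new edge x, where
\[
  x^3 = 2 \quad\Longrightarrow\quad x = \sqrt[3]{2}.
\]
% The minimal polynomial of \sqrt[3]{2} over \mathbb{Q} is x^3 - 2
% (irreducible by Eisenstein's criterion at p = 2), so
\[
  [\,\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}\,] = 3 .
\]
% But every compass-and-straightedge length lies in a tower of
% quadratic extensions, giving degree 2^k over \mathbb{Q}.
% Since 3 is not a power of 2, \sqrt[3]{2} is not constructible,
% and the cube cannot be doubled.
```

The other three problems fall to variations of the same degree argument, with squaring the circle requiring in addition the transcendence of π (Lindemann, 1882).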
Mathematics is not unique in drawing out charlatans and kooks, of course. Physicists have their perpetual-motion inventors, historians their Holocaust deniers, physicians their homeopathic medicine proponents, public health officials their anti-vaccinators, and so on. We have had hundreds of years of alchemists, flat earthers, seekers of the elixir of life, proponents of ESP, and conspiracy theorists who have doubted the moon landing and questioned the assassination of John F. Kennedy.
Circle squarers and angle trisectors have been around for as long as the problems themselves. The ancient Greeks used the word τετραγωνιζειν (tetragonidzein), which translates as “to occupy oneself with the quadrature,” to describe those trying to solve the circle-squaring problem.
Silence and endings are much on Howe’s mind these days. She is seventy-nine, slight but still spry, with a kind, angular face and sharp blue eyes. She has a puckish sense of humor: her friend, the philosopher Richard Kearney, described her to me as a “comic mystic, or a mystic comic.” The coffee shop I originally suggested was closed for the day. On our walk to the Fogg, she told me, in a voice that still recalls the 1950s Cambridge milieu in which she grew up, about her recent trip to Belfast and how much she’d loved Milkman, Anna Burns’s Booker Prize–winning novel about the Troubles.
Howe’s latest collection of poetry, Love and I, is by my accounting her seventh book in the past ten years. (Howe is so productive, and writes in so many different forms, that it’s hard to keep track of her oeuvre. Some publicity materials claim she’s published more than thirty books; others estimate forty-plus.)
Most subversive of all, however, is the writer who has the chance to become American and write in the American language but deliberately rejects it. That is the story of Minae Mizumura, a distinguished Japanese novelist who has made her ambivalent feelings about English a central theme of her work. Mizumura was born in Tokyo in 1951, and when she was twelve years old her family moved to the US after her father was transferred to his company’s New York office. She spent the rest of her childhood in a Long Island suburb and then attended Yale, where she went on to earn a graduate degree in French literature. By the time Mizumura finished her studies, she had spent more than half her life in America. Yet she decided to move back to Japan and begin a career as a Japanese novelist, returning only occasionally to the US to teach. She has published eight books of fiction and nonfiction, of which three have been translated into English over the last decade.