The sheer oddness of Delacroix’s subject choice and the venue for which his painting was produced suggest that there was more at stake here than the simple reenactment of a cherished theme. Barthélémy Jobert has stressed the Exposition’s significance for the artist, calling it “the final turning point in Delacroix’s career—and one of the most important stages, when he was finally acknowledged by every authority.”10 In Jobert’s estimation, Delacroix’s participation in the Exposition resulted in his election, after seven previous rejections, to the Institut de France. The Exposition was unquestionably a moment of high visibility for Delacroix, both as an artist and as an administrator (the latter thanks to his appointment to the organizing committee). While he cannot have known what impact the Exposition would have on his career, Delacroix understood its importance as a showcase for contemporary art, and he persuaded the committee that the Exposition should focus on the work of living artists rather than serving as a retrospective of nineteenth-century art.11 Given this context, it seems likely that the decision to depict a lion hunt was made carefully. Delacroix received his official appointment to the organizing committee on 24 December 1853, and the commission for the painting followed on 24 March 1854.
I spill seeds watch them sprout my trinket children pull lobes breasts choose to be pendulous after years pinched metaphors in closets and on sheets of orgasmic upheaval when afterwards you scribble under an obsessive shade of a yellow lamp fingering hairs I tuck pillows’ case a part-time muse substantiate a make-shift smile a line or a word fixes me while a lonely dog cracks the silence of night providing you the major inspiration deviating from my arms odd embraces vicarious extensions stains on pages you call poems for God’s sake I am not over.
A young bank teller is shot dead during a robbery. The robber flees in a stolen van and is chased down the motorway by a convoy of police cars. Careening through traffic, the robber runs several cars off the road and clips several more. Eventually, the robber pulls off the motorway and attempts to escape into the hills on foot, the police in hot pursuit. After several tense minutes, the robber pulls a gun on the cops and is promptly killed in a hail of gunfire. It is later revealed the robber is a career criminal with a history of violent crime stretching all the way back to high school.
Now tell me: Are you picturing a male or a female robber? If you look back at the last paragraph, you’ll notice that I didn’t actually specify the robber’s sex. Nonetheless, I’d be willing to bet that you were picturing a man. Don’t worry—you weren’t being sexist; you were simply playing the odds. Most men are not especially violent, but most people who are especially violent are men. And rare though they might be, men such as our fictitious robber are the extreme of a more general trend, namely that men are more violent than women, more in-your-face aggressive, and more prone to taking risks.
Why? Where do these all-too-familiar sex differences come from? A recent New York Times opinion piece weighed in on this difficult question, and came to a fairly common conclusion. The headline captured the gist: “It’s Dangerous to Be a Boy: They smoke more, fight more and are far more likely to die young than girls. But their tendency to violence isn’t innate.” (Emphasis added.) In other words, sex differences in aggression come entirely from the environment: from culture rather than biology, nurture rather than nature. Let’s call this the Nurture Only position.
At the start of the 20th century Sigmund Freud observed the psychological phenomenon of “repetition compulsion”, the pathological desire to repeat a pattern of behaviour over and over again. He no doubt would have diagnosed the painter Edvard Munch with such an affliction. As the British Museum’s new exhibition of his work demonstrates, Munch returned obsessively to certain visual motifs: uncanny sunsets, zombie-like faces, threateningly sexualised female bodies.
Freud might have looked to Munch’s biography for the roots of his mental anguish. There is much to unpick. His mother died from tuberculosis in 1868 when he was five. Nine years later, his sister died of the same disease. At a time of rapid industrialisation and grinding urban poverty, tuberculosis was tragically common. Munch had to watch as his father, a doctor, desperately tried to save the lives of consumptive patients, often resorting to prayer when all else had failed. This, and the Lutheran strictures of Munch’s adolescence in conservative Kristiania (now Oslo), did little to encourage the healthy processing of Munch’s trauma. By the time he reached adulthood he longed to escape, and managed to do so by falling in with a bohemian set of radical artists and writers, including Henrik Ibsen and August Strindberg. He soon developed a visual style that cast aside the Scandinavian traditions of formal portraiture and stately landscapes, resulting in vivid and unnerving prints.
Printmaking is by its very nature a process of repetition. There is also something almost violent about its technical vocabulary of acid bites, drypoint scratches and woodcut gouges. Possibly this appealed to Munch, as he printed, scraped, clawed and re-printed, the resulting image darkening with each new impression.
Ammon Shea loves dictionaries – especially the OED. He loves the OED so much, he read it – the whole thing, in its second edition: 21,730 pages with around 59 million words. It took him a year, full-time, and he wrote a book about it, titled Reading the OED (2008).
This is not a review, but it is a recommendation. Reading the OED will charm anyone who’s into dictionaries and words, especially unusual ones, or anyone curious about unusual hobbies and passions-slash-afflictions. (I did review Shea’s 2014 book Bad English, an entertaining historical snapshot of the English usage wars.)
When I said Shea loves dictionaries, I meant he really, really loves them. (This repetition of really is an example of epizeuxis, which is defined below.) Before the book came out, he moved house and brought 45 boxes: dictionaries filled 41 of them. As well as the 20-volume second edition of the OED, he owns the 13-volume 1933 edition, the four-volume supplement, the two- and ten-volume Shorter OEDs, the condensed-type edition, and ‘a random single-volume edition’. ‘Each has its own usefulness,’ he assures us. Certainly these things are relative, but I don’t doubt him for an instant.
So what was it like to read the biggest, most celebrated dictionary ever compiled – ‘the most coveted and desirable book in the world’, as Oliver Sacks wrote? ‘It is resolutely, obstinately, and unapologetically exhaustive,’ writes Shea. ‘These qualities make it both a tremendous joy to read at some times and unbearably boring at others.’
These days you can dismiss anything you don’t like by calling it “a religion.” Science, for instance, has been deemed essentially religious, despite the huge difference between a method of finding truth based on empirical verification and one based on unevidenced faith, revelation, authority, and scripture. Atheism, the direct opposite of religion, has also been characterized in this way, though believers who criticize secular worldviews as religious seem unaware of the irony of implying, “See—you’re just as bad as we are!” Even environmentalism has been described as a religion.
The latest false analogy between religious and nonreligious belief systems is John Staddon’s essay “Is Secular Humanism a Religion?” for Quillette. Staddon’s answer is “Yes,” but his reasoning is bizarre. One would think that it should be “Clearly not” for, after all, “secular” means “not religious,” and secular humanism is an areligious philosophy whose goal is to advance human welfare and morality without invoking gods or the supernatural.
Nevertheless, Staddon makes an oddly tendentious argument for the religious character of secular humanism.
How should we make sense of the Easter Sunday church and hotel bombings in Sri Lanka that killed more than 350 people and wounded 500? Now that Islamic State appears to have claimed responsibility for the attacks, the question arises: is this merely the latest symptom of an epidemic of Islamist violence, motivated by a belief in offensive jihad (“holy war”)?
The answer is complex and not necessarily in line with public perceptions. Islamist terrorism has been decreasing globally, and particularly in the west, since its peak in 2014-15 when Isis established its caliphate. In recent years, however, far-right supremacist terrorism has risen sharply, to more than one-third of terror attacks globally, even accounting for every extremist killing in the US in 2018. Yet it was more likely to be overlooked or tolerated by western polities, because of cultural history, familiarity and legal protections extended to domestic groups (such as US constitutional safeguards for freedom of speech and the right to bear arms). Thus, attacks by Muslims between 2006 and 2015 received 4.6 times more coverage in US media than other terrorist attacks (controlling for target type, fatalities, arrests).
These two violent ideologies are not separate, but work in tandem, hammering away at the political order, which is increasingly vulnerable for a number of reasons.
…. What thoughts I have of you tonight, Walt Whitman, for I walked down the sidestreets under the trees with a headache self-conscious looking at the full moon. …. In my hungry fatigue, and shopping for images, I went into the neon fruit supermarket, dreaming of your enumerations! …. What peaches and what penumbras! Whole families shopping at night! Aisles full of husbands! Wives in the avocados, babies in the tomatoes!—and you, García Lorca, what were you doing down by the watermelons?
…. I saw you, Walt Whitman, childless, lonely old grubber, poking among the meats in the refrigerator and eyeing the grocery boys. …. I heard you asking questions of each: Who killed the pork chops? What price bananas? Are you my Angel? …. I wandered in and out of the brilliant stacks of cans following you, and followed in my imagination by the store detective. …. We strode down the open corridors together in our solitary fancy tasting artichokes, possessing every frozen delicacy, and never passing the cashier.
…. Where are we going, Walt Whitman? The doors close in an hour. Which way does your beard point tonight? …. (I touch your book and dream of our odyssey in the supermarket and feel absurd.) …. Will we walk all night through solitary streets? The trees add shade to shade, lights out in the houses, we’ll both be lonely. …. Will we stroll dreaming of the lost America of love past blue automobiles in driveways, home to our silent cottage? …. Ah, dear father, graybeard, lonely old courage-teacher, what America did you have when Charon quit poling his ferry and you got out on a smoking bank and stood watching the boat disappear on the black waters of Lethe?
No other scientific theory can match the depth, range, and accuracy of quantum mechanics. It sheds light on deep theoretical questions — such as why matter doesn’t collapse — and abounds with practical applications — transistors, lasers, MRI scans. It has been validated by empirical tests with astonishing precision, comparable to predicting the distance between Los Angeles and New York to within the width of a human hair.
And no other theory is so weird: Light, electrons, and other fundamental constituents of the world sometimes behave as waves, spread out over space, and other times as particles, each localized to a certain place. These models are incompatible, and which one the world seems to reveal will be determined by what question is asked of it. The uncertainty principle says that trying to measure one property of an object more precisely will make measurements of other properties less precise. And the dominant interpretation of quantum mechanics says that those properties don’t even exist until they’re observed — the observation is what brings them about.
Bruce McPherson, among many others, at The Brooklyn Rail:
After the shock and tears, the feelings of personal loss and collective grief, as well as after the dutiful if hagiographic journalism, one is left wondering how Carolee Schneemann’s life’s work is likely to be seen over time. There are many things I could say, but two things above all else occur to me to suggest here as ways forward.
First, that while Carolee insisted on many occasions that she was a painter, what this meant was that the forms of her art should not and could not be separated into discrete categories—painting, performance, film, dance, theater, writing, photography, sculpture/combine, installation, etc.—that it is one thing altogether, a Gestalt. This, I believe, was thoroughly demonstrated by her Salzburg/Frankfurt/New York retrospective, where it became possible for the first time to trace and embrace the coherence of her formal expression and aesthetic integrity as a continuum. Her aesthetic is too complex to describe here, but it is fundamentally gestural, a physical propulsion outward of an inward state of being. Her awareness of that inward state occurred on many levels—physically, psychologically, interpersonally, communally, and through eidetic dreaming. Painting was her early passion and rigorous training, but it is of special importance as the gateway into and foundation underlying all other media and forms of expression that she mastered, altered, and creatively employed.
We’re often told that today’s North American critics are missing something vital. But what? Ever self-reliant, American critics often identify the missing element as a certain intensity, as though the questing knight has grown flabby and a little domestic. In American Audacity: In defense of literary daring, an impressive new collection of essays, the Boston-based critic and novelist William Giraldi sounds the alarm. “The danger is real now”, he writes, “godlike and unprecedented, all-powerful and everywhere. The Internet has zapped us all into obliging zombies; it makes yesterday’s threat from television look whimsical and rather cute.” Against these stupefying forces, Giraldi calls for the critic to return to fundamentals. “The critic’s chief loyalty is to the duet of beauty and wisdom”, he writes, “to the well-made and usefully wise, and to the ligatures between style and meaning.” Giraldi is the sort of critic – often the most helpful when one is choosing what to read – who insists on the paramount importance of a work’s aesthetic features. He is hostile to those who would perceive literature through a political or theoretical lens. “Ideology is the enemy of art because ideology is the end of imagination”, he avers.
We know a little of what the social novel was. At the very least, we know of Charles Dickens and what the literary historian Louis Cazamian calls that author’s “philosophy of Christmas.” I hope you will laugh a little here, as I think Cazamian is attempting to be at once ironic and precise. Dickens was arguably the first author to bring the urban lower middle class into the European novel as more than scenic decor; reflecting in various ways on his father’s time in debtor’s prison as well as his own stint as a factory laborer during that parent’s absence, Dickens described the precariousness produced by industrialization in generally moving detail—even if Americans are more apt to remember the amusing eccentricities of Tiny Tim and Miss Havisham than the sociological achievement of a work like 1854’s Hard Times, which stands as a sort of anatomy of the imaginary mill town of Coketown. Changes in the British political system and economy during the earlier part of the nineteenth century (expanded suffrage after 1832 and increasing readership of the press) meant that there was an eager audience for fiction that touched upon the organization of society. In the early 1830s, Harriet Martineau, a young, unmarried woman, became the author of a series of bestselling serial parables that explained basic economic concepts such as free trade, via “The Loom and the Lugger,” and unions, via “A Manchester Strike.” Martineau’s Illustrations of Political Economy (1832–34) emerged out of her conviction that the economic and the personal were not separate spheres, and her more than slightly didactic bent was surely influential for the style of serialized novel Dickens would first produce in 1836 with the Pickwick Papers. Dickens’s work built on Romanticism’s convictions regarding the importance of national history to contemporary identity, with the difference that the influence of modern (i.e., mechanized) systems on the individual was explored.
In addition, unlike Sir Walter Scott, he of the sweeping national-historical romance, Dickens dealt unabashedly in coincidence, cuteness, and sentimentality—apparently hoping to motivate readers to philanthropic attitudes and works through minor styles of depiction designed to inspire pity.
It is worth underlining the strategies Dickens used to depict the social world, because even as his novels are among the most familiar to us out of the nineteenth-century Anglophone pantheon (try Googling “Scrooge McDuck merchandise”), their style seems, the contemporary American liberal maintains, anathema to what is valuable and appropriate in politicized art. The cool methods of the French realist novel have somehow won out, and we are inclined to side with Gustave Flaubert when he criticizes the pious deaths of children in that problematic American novel of persuasion, Uncle Tom’s Cabin, itself surely a Dickensian attempt and also notable as the best-selling novel of its century.
Stroke, amyotrophic lateral sclerosis and other medical conditions can rob people of their ability to speak. Their communication is limited to the speed at which they can move a cursor with their eyes (just eight to 10 words per minute), in contrast with the natural spoken pace of 120 to 150 words per minute. Now, although still a long way from restoring natural speech, researchers at the University of California, San Francisco, have generated intelligible sentences from the thoughts of people without speech difficulties.
The work provides a proof of principle that it should one day be possible to turn imagined words into understandable, real-time speech, circumventing the vocal machinery, Edward Chang, a neurosurgeon at U.C.S.F. and co-author of the study published Wednesday in Nature, said Tuesday in a news conference. “Very few of us have any real idea of what’s going on in our mouth when we speak,” he said. “The brain translates those thoughts of what you want to say into movements of the vocal tract, and that’s what we want to decode.”
But Chang cautions that the technology, which has only been tested on people with typical speech, might be much harder to make work in those who cannot speak—and particularly in people who have never been able to speak because of a movement disorder such as cerebral palsy.
I long to be in Japan in the autumn. For much of the year, my job, reporting on foreign conflicts and globalism on a human scale, forces me out onto the road; and with my mother in her eighties, living alone in the hills of California, I need to be there much of the time, too. But I try each year to be back in Japan for the season of fire and farewells. Cherry blossoms, pretty and frothy as schoolgirls’ giggles, are the face the country likes to present to the world, all pink and white eroticism; but it’s the reddening of the maple leaves under a blaze of ceramic-blue skies that is the place’s secret heart.
We cherish things, Japan has always known, precisely because they cannot last; it’s their frailty that adds sweetness to their beauty. In the central literary text of the land, The Tale of Genji, the word for “impermanence” is used more than a thousand times, and bright, amorous Prince Genji is said to be “a handsomer man in sorrow than in happiness.” Beauty, the foremost Jungian in Japan has observed, “is completed only if we accept the fact of death.” Autumn poses the question we all have to live with: How to hold on to the things we love even though we know that we and they are dying. How to see the world as it is, yet find light within that truth.
I was enthralled by Dennett and Chalmers’ recent discussion of the threats and prospects regarding artificial superintelligences. Dennett thinks we should protect ourselves by doing all we can to keep powerful AIs operating at the level of suggestion-making tools, while Chalmers is impressed by the market forces that will probably push us into devolving more and more responsibility to these opaque and alien minds. But I felt as if their picture of the space of possible AI minds could be usefully refined, and with that in mind I’d like to push on two further dimensions.
The first is action. Agents that can act on their (real or simulated) worlds can choose “epistemic” actions that both test and improve their model of that world. A simple example might be a robot equipped with a camera and an arm that can push and prod objects in its field of vision. Such a robot can actively create sensorimotor flows that help reveal objects as integrated wholes distinct from their backgrounds and from other objects. These systems, simple versions of which have been explored by Giorgio Metta and others, possess a crucial but under-appreciated capacity, which is to use their own worldly actions to refine or disambiguate information both for learning and during practical action.
I vividly remember the rush I felt after my first encounter with the story of the Haitian Revolution. It was a sudden and miraculous sense that everything was not as it seemed, that it had never been, and that I had much to learn. A massive uprising of enslaved people became a 12-year fight for independence that ultimately created the first sovereign Black republic in 1804. Haiti was the second nation to cast off colonial rule in the Western hemisphere, and its revolution led to the abolition of slavery across the French empire and laid out a roadmap for independence that would inspire other colonies in Latin America.
Two recently published books examine intellectual histories of the revolutionary Caribbean, illuminating the what and how of Haiti’s rise. In The Common Wind: Afro-American Currents in the Age of the Haitian Revolution, Julius S. Scott explains how revolutionaries worked from below decks and beneath the gaze of overseers to circulate ideas across vast space, imperial borders, and linguistic barriers. Baron de Vastey and the Origins of Black Atlantic Humanism by Marlene L. Daut heralds a prominent and prolific early Haitian writer, Jean-Louis Baron de Vastey, whose avant-garde, anti-colonial writings eviscerate the philosophical imperialism of the Anglo-North. Daut’s study tracks Vastey’s significant influence on postcolonialism, critical race theory, and the Negritude movement, not to mention abolition and the revolutionary project of Haitian sovereignty itself. Both of these books follow the spread of radical politics, but Scott emphasizes vernacular transmission in the form of rumor, song, and marketplace exchanges, whereas Daut attunes to literary and print cultural transmission.
There are, I think, three particularly striking things about Hark. First, it is not in the fanatical first-person. It features a multitude of centers of narrative consciousness, and this makes for a story that feels more spacious—less claustrophobically compulsive—than many of Lipsyte’s others. Second, and in direct relation to this, there is a spaciousness in the novel’s regard for what we might call its characters’ practices of belief. Hark himself, for instance, remains something of a well-drawn cipher in the book, a vivid blur, and in Lipsyte’s novel-wide willingness to demur from mercilessness, to withhold satirical fire and thus preserve some unvoided space of mystery about him—this unfunny man who professedly neither gets nor traffics in irony—we can feel a deliberate and, to my mind, telling recalibration of the novelist’s own marrow-deep impulses toward mockery. Page after page, and often through the lens of the hapless Fraz, the most familiar of Lipsyte’s quasi-despairing middle-aged men, the novel turns over a new and startling question: What if a killing and all-devouring irony isn’t the way to survive the world?
“To give the mundane its beautiful due” is how Updike described his own literary program, and in Carver the mundane is honed to ominous implication. You don’t often see Carver’s name hitched to Whitman’s, but consider the Whitmanian exuberance of the everyday: almost nothing is too insignificant to escape Whitman’s communion. Carver’s socially insignificant people, and the insignificant artifacts of their lives, are not insignificant to him. Wholly unlike Whitman, though, Carver’s literary program takes no stock of the sublime. His language achieves a demotic splendor, a conversational artfulness—always a grand talker, Carver wrote stories in an eminently spoken register; his art is as oral as Whitman’s—but his language cannot connect with that junction where this world rubs against the other. Though Carver’s characters often pine for exalted things, they cannot articulate their pining. The oppressive immediacy of their lives prevents such articulation. Transcendence is a privilege Carver’s people have perhaps heard rumor of but have not been granted access to.