Thursday, July 02, 2015
Walking in nature lowers risk of depression: Urbanization is associated with increased levels of mental illness
A new study has found quantifiable evidence that supports the common-sense idea that walking in nature could lower your risk of depression. The study, published in Proceedings of the National Academy of Sciences, found that people who walked for 90 minutes in a natural area, as opposed to participants who walked in a high-traffic urban setting (El Camino Real in Palo Alto, California, a noisy street with three to four lanes in both directions), showed decreased activity in the subgenual prefrontal cortex, a brain region active during rumination — repetitive thought focused on negative emotions.

“These results suggest that accessible natural areas may be vital for mental health in our rapidly urbanizing world,” said co-author Gretchen Daily, the Bing Professor in Environmental Science and a senior fellow at the Stanford Woods Institute for the Environment. “Our findings can help inform the growing movement worldwide to make cities more livable, and to make nature more accessible to all who live in them.”

“This finding is exciting because it demonstrates the impact of nature experience on an aspect of emotion regulation — something that may help explain how nature makes us feel better,” said lead author Gregory Bratman, a graduate student in Stanford’s Emmett Interdisciplinary Program in Environment and Resources, the Stanford Psychophysiology Lab and the Center for Conservation Biology.

“These findings are important because they are consistent with, but do not yet prove, a causal link between increasing urbanization and increased rates of mental illness,” said co-author James Gross, a professor of psychology at Stanford.
Essential for urban planners to incorporate nature
It is essential for urban planners and other policymakers to understand the relationship between exposure to nature and mental health, the study’s authors write. “We want to explore what elements of nature — how much of it and what types of experiences — offer the greatest benefits,” Daily said. As noted in the paper, “Never before has such a large percentage of humanity been so far removed from nature; more than 50% of people now live in urban areas, and by 2050, this proportion will be 70%. Although urbanization has many benefits, it is also associated with increased levels of mental illness, including anxiety disorders and depression [3-5].”
Wednesday, July 01, 2015
There is a poignant scene in Cameron Crowe’s film Almost Famous in which the rock critic Lester Bangs warns 15-year-old William Miller of the perils of seduction by the musicians Miller is covering for Rolling Stone magazine. Bangs (played by Philip Seymour Hoffman) says to Miller (Patrick Fugit portraying a young Cameron Crowe):
“They make you feel cool. And, hey, I’ve met you. You are not cool.”
This profound advice extends well beyond the worlds of rock ’n’ roll and music criticism. All writers should take heed of Bangs’s insight: trying to be cool when you simply aren’t risks muddling one’s clarity of observation and analysis, and jeopardizes one’s credibility with readers. Journalists, historians, novelists, academics, judges—perhaps especially judges—should take note.
I recalled this scene while reading the two most recent opinions of Justice Samuel Alito, the Supreme Court’s junior member. Justice Alito wrote the opinion of the Court in Pleasant Grove City v. Summum, which held that a local government does not violate the First Amendment by maintaining a monument to the Ten Commandments in a public park while refusing to install other permanent monuments that express differing religious views.
The portrait is meant to give us a direct line into the soul of its sitter, or at least we’re told. It’s meant to expose underlying truths about the subject, using physiognomy to express that which cannot be gleaned from the subject’s name alone.
And yet, why is it that whenever I view portraits that go anywhere beyond the shoulders, all I can focus on are the hands? An old art historian told me years ago that the true mark of an artist’s draughtsmanship is his ability to render hands, due to the difficulty in producing the form, especially the digits. Whether true or not, this bit of received wisdom has lodged itself firmly in my brain, nagging even the finest works. (I’m reminded of Gainsborough’s portraits, which for all their virtues can feature hands that are almost sickly.)
And so it is with the Morgan Library’s new portrait drawings show, “Life Lines: Portrait Drawings from Dürer to Picasso,” on view through September 8, 2015. The show, composed of fifty-one works, all but four of which are from the permanent collection, takes a wide view of the concept of portraiture, which is to say, there’s no shortage of hands.
My favorite photos in Hollywood Boulevard are the ones that don’t fit neatly into the time, the place, or the glib narrative of Hollywood, but instead are wildly individual: pictures of waiting, wandering, loitering with friends. Two girls with ombré hair can be pegged to the ’70s by the cut of their jeans and jackets, but their postures—one looking amused, the other staring at the camera with a directness I’ve only ever seen in teens who want to be looked at—remind me of something more timeless. They’re young, and waiting for something to happen.
The very young resist Feldman’s taxonomic gaze: they’re busy making new ways of being. A signifier or a style that will one day become a marketable trend, or a tenet that’s taken to represent a generation, often starts as teenage idiosyncrasy. A modern-looking, androgynous pair, so similar they could be twins, poses against an uncluttered black and white wall that echoes their dark hair and pale skin. The one on the left wears no shirt, an open vest, and creased slacks. The one on the right wears a button-up with lapels spread wide and jeans so frayed they no longer look like hippie clothes, but something else. He stands with one hip cocked, a fist pressed into the small of his waist. They gaze up from under flat black brows with a bold, receptive sexuality. They look like they could be models, like they could be in New York, like they could be on an album cover tomorrow.
Cesar A. Hidalgo in the Huffington Post:
Most people think that information and computation are new things when in fact they are as old as the big bang. In the beginning, there was the bit, as my MIT colleague Seth Lloyd likes to say. Only recently, however, have we learned to see the bits embodied in atoms, cells, society and the economy.
But what is information? Colloquially, people think of information as the messages we use to communicate the state of a system. But information, which is not the same as meaning, also includes the order embodied in these systems, not just the messages we use to describe them. Think of the order you destroy when you crash a car. A car crash does not destroy atoms -- it destroys the way in which these atoms are arranged. That change in order is technically a change in information.
Computation, on the other hand, is the use of energy to process information. It is the fundamental mechanism by which nature rearranges bits to produce order. Computation is everywhere, but in an economic context we can think of it as a more modern and more accurate interpretation of the ideas of labor advanced originally by Adam Smith and Karl Marx.
Smith and Marx did not know about information or computation, so they described economies using the language of energy that dominated the nineteenth century zeitgeist. The mechanical protagonists of the industrial revolution were machines that transformed heat into motion: engines for pumps, trains and cranes. These machines awed the nineteenth century masses with their power -- masses that failed to see that what these machines were doing was increasing their ability to process information.
Processing information is the essence of all economic activities. It is not the privilege of the coder or the writer but what we do when we bake a cake, make a sandwich or manufacture a car. We compute when we take out the trash, do laundry or pair socks. All of these acts involve using energy to produce order -- whether we are grouping undesirable objects in a trashcan or using a laundry machine to remove dirt from our shirts. All jobs are acts of computation, and the economy is a collective computer that involves all of us.
More here. [Thanks to Marko Ahtisaari.]
Ian P. Beacock in The Point:
One evening in October 1905, when most Berliners were bundled away at home, Kurt Hiller wandered alone through the Tiergarten. Well, not quite alone. Walking in the southeast corner of the park between Lennéstraße and the Brandenburg Gate, the nineteen-year-old law student found himself boxed in by silhouettes: men searching the shadows for the company of other men, the “warm brothers” (warme Brüder) for which Berlin was so well-known. It was Hiller’s first visit to the city’s most notorious cruising ground, but he quickly found what he was looking for. He sat down on a bench next to a wiry man perhaps ten years his senior, rakish and mysterious in the moonlight. The law student wasted little time with small talk; he asked about the most important things. The man raised his arm and flexed. “I checked for myself,” Hiller recalled. “His bicep was broad, curved, and strong as steel.” Returning to the apartment of his anonymous lover, Hiller noticed with some distaste that the man’s body was quilted with tattoos. This was a man of the outskirts: a sailor or a criminal, a soldier or a circus performer. Taken briefly aback, the law student was rapidly overcome by lust for the man’s taut, sculpted frame. He let the door to the hallway close behind him.
In the early 1920s, an American military intelligence officer stationed in Germany reported that Berlin was “known by connoisseurs as one of the most immoral cities in the world.” The German capital was infamous for its wildly sexual and transgressive atmosphere: the confident young women with cropped hair and revealing skirts, the swingers clubs openly catering to polyamorous couples and curious singles, the cocaine fueling the city’s roaring nightlife.
From Kirkus Reviews:
Emily Mitchell is the author of Viral. But who is Emily Mitchell, really?
“Emily Mitchell has worked as a waitress, a receptionist at a bakery/tanning salon, a short-order cook, a snowmobile driver, a crime-scene cleanup technician, an exotic animal trainer, a war correspondent, a phone dispatcher, a secretary, an environmental campaigner, a freelance journalist, a bean counter and a holistic pediatric oncologist,” Mitchell writes in “Biographies.”
“Biographies” isn’t her biography, but one of a dozen delightfully diverse stories in her debut story collection. Viral is no memoir. No bakery/tanning salon likely exists—but couldn’t another word for “tanning salon” be “bakery”? You might catch these kinds of thoughts from Viral.
“If you are willing to accept this first premise then we can go on this amazing ride together—I love work that does that,” says Mitchell, speaking of recently discovering Michael Martone’s Blue Guide to Indiana. It’s perfect praise for Viral.
Tía Olivia Serves Wallace Stevens a Cuban Egg
The ration books voided, there was little to eat,
so Tía Olivia ruffled four hens to serve Stevens
a fresh criollo egg. The singular image lay limp,
floating in a circle of miniature roses and vines
etched around the edges of the rough dish.
The saffron, inhuman soul staring at Stevens
who asks what yolk is this, so odd a yellow?
Tell me Señora, if you know, he petitions,
what exactly is the color of this temptation:
I can see a sun, but it is not the color of suns
nor of sunflowers, nor the yellows of Van Gogh,
it is neither corn nor school pencil, as it is,
so few things are yellow, this, even more precise.
He shakes some salt, eye to eye hypothesizing:
a carnival of hues under the gossamer membrane,
a liqueur of convoluted colors, quarter-part orange,
imbued shadows, watercolors running a song
down the spine of praying stems, but what, then,
of the color of the stems, what green for the leaves,
what color the flowers; what of order for our eyes
if I can not name this elusive yellow, Señora?
Intolerant, Tía Olivia bursts open Stevens's yolk,
plunging into it with a sharp piece of Cuban toast:
It is yellow, she says, amarillo y nada más, bien?
The unleashed pigments begin to fill the plate,
overflow onto the embroidered place mats,
stream down the table and through the living room
setting all the rocking chairs in motion then
over the mill tracks cutting through cane fields,
a viscous mass downing palm trees and shacks.
In its frothy wake whole choirs of church ladies
clutch their rosary beads and sing out in Latin,
exhausted macheteros wade in the stream,
holding glinting machetes overhead with one arm;
cafeteras, '57 Chevys, uniforms and empty bottles,
mangy dogs and fattened pigs saved from slaughter,
Soviet jeeps, Bohemia magazines, park benches,
all carried in the egg lava carving the molested valley
and emptying into the sea. Yellow, Stevens relents,
Yes. But then what the color of the sea, Señora?
by Richard Blanco
from City of a Hundred Fires
University of Pittsburgh Press, 1998
Robin McKie in The Guardian:
In June 1966, the British Nobel laureate Francis Crick helped to organise a meeting of the world’s leading geneticists at Cold Spring Harbour near New York. It was to be a triumphant event. For the previous decade and a half, biologists had been struggling to unravel the genetic code, the biological cipher that determines how genes are passed on to future generations and which controls the construction of proteins in our bodies. This effort had begun in 1953 when Crick and his colleague James Watson showed that DNA was the critical constituent of our genes and revealed that it had a double helical structure. Since then, scientists had been racing to find out how that double helix controlled the manufacture of amino acids from which our bodies’ proteins are constructed. At Cold Spring Harbour, they were ready to announce their success and revealed the detailed process by which units of DNA control the manufacture of particular amino acids via intermediary entities known as ribosomes. This is the genetic code.
It was a historic occasion, as Crick acknowledged. Biologists had achieved an understanding of life’s processes at a molecular level for the first time, a point reinforced by Matthew Cobb in this meticulous, carefully assembled and thoroughly enjoyable history of modern molecular biology. “Cracking the code was a leap forward in humanity’s understanding of the natural world… akin to the discoveries of Galileo and Einstein in physics, or the publication of Darwin’s On the Origin of Species,” he states. Yet it had not been an easy business, as Cobb also makes clear. The effort involved hundreds of scientists and was similar, in scale, to the Apollo moon landings or the Manhattan project – though with one key difference. There was no leadership, no overseeing council, and no directed funding from governments pursuing military or political goals.
Brendan Maher in Nature:
The mammalian Y chromosome has long been thought of as a sort of genomic wasteland, usually shrinking over the course of evolution and largely bereft of pertinent information. Page’s work has helped to change perceptions of the Y chromosome by revealing that it contains remarkable patterns of repeating sequences that appear dozens to hundreds of times. But the structure of these sequences and precise measures of how often they repeat have been difficult to determine. Standard sequencing technologies often cannot distinguish between long stretches of genetic code that differ by a single DNA ‘letter’. Page and his collaborators avoided this problem by using what he calls ‘super-resolution’ sequencing (a technique better known as single-haplotype iterative mapping and sequencing, or SHIMS), which can detect such minute variation between lengthy segments of DNA.
The team sequenced many large, continuous stretches of the Y chromosome and carefully scrutinized the areas that looked as if they overlapped. They found that repeating structures make up about 24% of the accessible DNA in the human Y chromosome, and 44% of that of the bull. And in the Y chromosome of the mouse, which is much larger than that of a human, repeating structures make up almost 90% of accessible DNA. The intricate patterns, which often contain palindromes — sequence that reads the same in forward and reverse order — carry three families of protein-coding genes. What the genes are doing — and how they got there — remains a mystery, however.
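As an aside on the term used in the excerpt: in genomics a “palindrome” is conventionally a sequence equal to its own reverse complement (so that the two strands of the double helix read the same in opposite directions), rather than a literal character palindrome. A minimal sketch of that check, not drawn from the study itself:

```python
# Toy illustration (not from the Nature article): a DNA palindrome is a
# sequence that equals the reverse complement of itself.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse-complement strand of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_palindrome(seq: str) -> bool:
    """True if the sequence reads the same as its reverse complement."""
    return seq == reverse_complement(seq)

print(is_palindrome("GAATTC"))   # True  (the EcoRI restriction site)
print(is_palindrome("GATTACA"))  # False
```

Detecting the Y chromosome’s long, nearly identical palindromic repeats is of course vastly harder than this toy check, which is precisely why the SHIMS approach described above was needed.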
Tuesday, June 30, 2015
Andrew Leonard in Nautilus:
Fifty years ago science-fiction author Frank Herbert seized the imagination of readers with his portrayal of a planet on which it never rained. In the novel Dune, the scarcest resource is water, so much so that the mere act of shedding a tear or spitting on the floor takes on weighty cultural significance.
To survive their permanent desert climate, the indigenous Fremen of Dune employ every possible technology. They build “windtraps” and “dew collectors” to grab the slightest precipitation out of the air. They construct vast underground cisterns and canals to store and transport their painstakingly gathered water. They harvest every drop of moisture from the corpses of the newly dead. During each waking moment they dress in “stillsuits”—head-to-toe wetsuit-like body coverings that recycle sweat, urine, and feces back into drinking water.
Described by Dune’s “planetary ecologist,” Liet-Kynes, as “a micro-sandwich—a high-efficiency filter and heat exchange system”—the stillsuit is a potent metaphor for reuse, reclamation, and conservation. Powered by the wearer’s own breathing and movement, the stillsuit is the technical apotheosis of the principle of making do with what one has.
Someday, sooner than we’d like, it’s not inconceivable that residents of California will be shopping on Amazon for the latest in stillsuit tech. Dune is set thousands of years in the future, but in California in 2015, the future is now. Four years of drought have pummeled reservoirs and forced mandatory 25 percent water rationing cuts. The calendar year of 2014 was the driest (and hottest) since records started being kept in the 1800s. At the end of May, the Sierra Nevada snowpack—a crucial source of California’s water—hit its lowest point on record: zero. Climate models suggest an era of mega-droughts could be nigh.
Which brings us to Daniel Fernandez, a professor of science and environmental policy at California State University, Monterey Bay, and Peter Yolles, the co-founder of a San Francisco water startup, WaterSmart, that assists water utilities in encouraging conservation by crunching data on individual water consumption. Fernandez spends his days building and monitoring fogcatchers, remarkably Dune-like devices that have the property of converting fog into potable water. “I think about Dune a lot,” Fernandez says. “The ideas have really sat with me. In the book, they revere water, and ask, what do we do?” Similarly, Yolles says, “I remember being fascinated by the stillsuits. That was a striking technology, really poignant.” And inspiring. The fictional prospect of a dystopian future, Yolles says, “helped me see problems that we have, and where things might go.”
Rick Perlstein reviews Bryan Burrough's Days of Rage: America’s Radical Underground, the FBI, and the Forgotten Age of Revolutionary Violence, in The Nation:
Burrough begins Days of Rage with the story of the New Left’s first convert to armed struggle, an oddball named Sam Melville, who started bombing random Manhattan banks shortly after enjoying the music at Woodstock and later died in the uprising at Attica. But the best history is always about the backstories—the flashback reconstructions explaining how a mentality that may strike us as alien today made perfect sense in the minds of those who shared it at the time.
Consider Mutulu Shakur. Born Jeral Williams in 1950, he became an early proponent of the Republic of New Afrika movement. His career as a militant began in a hospital. In 1970, members of the Young Lords, a Puerto Rican version of the Black Panthers that started as a street gang, occupied the auditorium of a tumbledown hospital in the South Bronx to protest its inadequacies. They demanded a heroin clinic. Harried hospital administrators were amenable; they needed a heroin clinic. So they let the Young Lords start one. Nourished with nearly $1 million in state and city funds, Lincoln Detox soon grew into the South Bronx’s largest drug-treatment facility.
Its program prescribed a theory popularized by Malcolm X: “that the plague of drugs was a scheme concocted by a white government to oppress blacks,” as Burrough puts it. Shakur started volunteering; his specialty was acupuncture. Another part of the treatment was studying a pamphlet subtitled “Heroin and Imperialism,” which advised that a commitment to armed struggle was a more effective analgesic than methadone. Lincoln Detox soon became what Burrough describes as “a kind of clubhouse for New York’s radical elite”; for instance, medical supplies purchased with government funds—“by the truckload”—were turned over to the Black Liberation Army to assist it in its campaign of murdering cops. Crazy stuff, to be sure. But in the South Bronx of the 1970s—where cops were heavily involved in the heroin trade, and building owners found it more profitable to torch their property for the insurance than to rent it out—it’s easy to understand why taking the fight to the police seemed a more realistic route to social change than voting for Hubert Humphrey had been in 1968.
Amartya Sen in The New Statesman:
Why did Keynes dislike a treaty that ended the state of war between Germany and the Allied Powers (surely a good thing)?
Keynes was not, of course, complaining about the end of the world war, nor about the need for a treaty to end it, but about the terms of the treaty – and in particular the suffering and the economic turmoil forced on the defeated enemy, the Germans, through imposed austerity. Austerity is a subject of much contemporary interest in Europe – I would like to add the word “unfortunately” somewhere in the sentence. Actually, the book that Keynes wrote attacking the treaty, The Economic Consequences of the Peace, was very substantially about the economic consequences of “imposed austerity”. Germany had lost the battle already, and the treaty was about what the defeated enemy would be required to do, including what it should have to pay to the victors. The terms of this Carthaginian peace, as Keynes saw it (recollecting the Roman treatment of the defeated Carthage following the Punic wars), included the imposition of an unrealistically huge burden of reparation on Germany – a task that Germany could not carry out without ruining its economy. As the terms also had the effect of fostering animosity between the victors and the vanquished and, in addition, would economically do no good to the rest of Europe, Keynes had nothing but contempt for the decision of the victorious four (Britain, France, Italy and the United States) to demand something from Germany that was hurtful for the vanquished and unhelpful for all.
The high-minded moral rhetoric in favour of the harsh imposition of austerity on Germany that Keynes complained about came particularly from Lord Cunliffe and Lord Sumner, representing Britain on the Reparation Commission, whom Keynes liked to call “the Heavenly Twins”. In his parting letter to Lloyd George, Keynes added, “I leave the Twins to gloat over the devastation of Europe.” Grand rhetoric on the necessity of imposing austerity, to remove economic and moral impropriety in Greece and elsewhere, may come more frequently these days from Berlin itself, with the changed role of Germany in today’s world. But the unfavourable consequences that Keynes feared would follow from severe – and in his judgement unreasoned – imposition of austerity remain relevant today (with an altered geography of the morally upright discipliner and the errant to be disciplined).
John Tierney in The New York Times:
There are hundreds of romance novels in a category that some have named “Plain Jane and Hot Stud,” a theme that was equally popular when Jane Austen wrote “Pride and Prejudice.” Tall and good-looking, endowed with a “noble mien,” Mr. Darcy initially denigrates Elizabeth Bennet’s appearance: “She is tolerable, but not handsome enough to tempt me.” He notes “more than one failure of perfect symmetry in her form.” Even worse for the rich Mr. Darcy, her family’s social status is “so decidedly beneath my own.” His initial reactions make perfect sense to evolutionary psychologists, because these preferences can improve the odds of passing on one’s genes. Beauty and physical symmetry are markers of a mate’s health and genetic fitness; status and wealth make it more likely that children will survive to adulthood.
...In the 2012 survey, people were asked a version of the famous question in Christopher Marlowe’s 16th-century poem: “Who ever loved, that loved not at first sight?” A great many, it turns out. In the survey, 33 percent of men and 43 percent of women answered yes when asked if they had ever fallen in love with someone they did not initially find attractive. Dr. Fisher terms this process “slow love,” and says it is becoming more common as people take longer to marry. “Everyone is terrified that online dating is reducing mate value to just a few superficial things like beauty — whether you swipe left or right on Tinder,” she said in an interview. “But that’s just the start of the process. Once you meet someone and get to know them, their mate value keeps changing.” When the survey respondents were asked what had changed their feelings, the chief reasons they gave were “great conversations,” “common interests,” and “came to appreciate his/her sense of humor.” All of those factors contribute to Mr. Darcy’s change of heart in “Pride and Prejudice.”
Anthony Lane in The New Yorker:
Who reads “Alice’s Adventures in Wonderland”? The answer used to be: Anyone who can read. From the tangled tale of mass literacy one can pluck a few specific objects—books that were to be found in every household where there was somebody who could read and people who wanted to listen. Aside from the Bible, a typical list would run like this: “The Pilgrim’s Progress,” “Robinson Crusoe,” and “Gulliver’s Travels,” to which were later added “The Pickwick Papers” and “Alice’s Adventures in Wonderland.” Notice that Alice is not the sole adventurer. Every one of those titles contains the leading character, whose fate is to go on a journey, and whose mettle is tested in the process. Each explores a different landscape, or body of water, but all five traverse what you might call the valley of the shadow of life, profuse with incident. Three of the writers were men of God, and the two others began as journalists. Had you asked any of them to take a creative-writing course, the door would have closed in your face.
But who reads the Alice books nowadays? Everybody knows Alice, but that is not the same thing. There are countless ways to know something, or someone, without firsthand evidence, and Alice, as familiar as a household god and as remote as a child star, is a prime case of cultural osmosis. Having seeped through the membrane of the original books, she has spent the past century and a half infusing herself into the language, and the broader social discourse; as a result, we can all too easily picture her, quote her, or follow her example in the nonsense of our own lives without having read—or even feeling that we need to read—a word of Lewis Carroll. Yet the need is more urgent than ever. Carroll wrote with a peppery briskness, impatient of folly, and always alive to the squalls of emotion that we struggle to curb:
“You know very well you’re not real.”
“I am real!” said Alice, and began to cry.
“You won’t make yourself a bit realler by crying,” Tweedledee remarked: “there’s nothing to cry about.”
“If I wasn’t real,” Alice said—half laughing through her tears, it all seemed so ridiculous—“I shouldn’t be able to cry.”
“I hope you don’t suppose those are real tears?” Tweedledum interrupted in a tone of great contempt.
The second half of this exchange was used by Evelyn Waugh as the epigraph to “Vile Bodies,” in 1930, and the tone is a perfect match for the chill, directionless frenzy of Waugh’s personae. But Tweedledum’s question is, if anything, more pertinent still to our epoch, when the capacity to weep, whether in triumph or disaster, is a heartfelt imposture that has proved de rigueur, not least in the realm of the reality show—a term, by the way, that would have caused Carroll to sharpen his pen like a carving knife.
Monday, June 29, 2015
Jonathan Kramnick has picked the three winners from the nine finalists:
- Top Quark, $500: Joanna Walsh, Ventimiglia
- Strange Quark, $200: David Kurnick, The Essential Gratuitousness of César Aira
- Charm Quark, $100: Sarah Blackwood, Editing as Carework
Here is what Jonathan had to say about them:
This is a great time for public art writing and literary criticism, with new venues of review and discussion popping up all over the internet. The essays I picked all in one way or another present the intimate and individual experience of an artwork in compelling, public language. That is not an easy thing to do.
Congratulations also from 3QD to the winners (remember, you must claim the money within one month from today—just send me an email). And feel free, in fact we encourage you, to leave your acceptance speech as a comment here! And thanks to everyone who participated. Many thanks also, of course, to Jonathan Kramnick for doing the final judging.
The three prize logos at the top of this post were designed by Sughra Raza, me (using a photo by Margit Oberrauch) and Carla Goller. I hope the winners will display them with pride on their own blogs!
Details about the prize here.
I recently found myself marooned with a large group of astronomers in a remote 11th-century abbey in the Tuscan countryside. Despite the picturesque beauty of the landscape, not to mention the abbey's splendid library, the days (I must admit) stretched on and on…
I guess it's true that Google is making me stupid, but I discovered that it is a lot harder for me than it used to be to read for hours on end. With no wireless and no real means of getting myself back to civilization, I decided to hatch a means of escape. It wasn't all that hard, actually; it was just a matter of reminding him (the astronomer with the driver's licence) that not all that far away from the abbey was what has been called "the best picture in the world."
Has anyone else read that wonderful essay by Aldous Huxley called "The Best Picture"?
It is a brilliant essay -- and the title says it all. But, wait, you ask, how can there be such a thing as "the best picture" in the world? Isn't it an absolutely ludicrous suggestion to make?
Of course it is, and this is not lost on Huxley--for as you can see in the essay, he addresses this absurdity immediately:
The greatest picture in the world…. You smile. The expression is ludicrous, of course. Nothing is more futile than the occupation of those connoisseurs who spend their time compiling first and second elevens of the world's best painters, eights and fours of musicians, fifteens of poets, all-star troupes of architects and so on. Nothing is so futile because there are a great many kinds of merit and an infinite variety of human beings. Is Fra Angelico a better artist than Rubens? Such questions, you insist, are meaningless. It is all a matter of personal taste. And up to a point this is true. But there does exist, none the less, an absolute standard of artistic merit. And it is a standard which is in the last resort a moral one. Whether a work of art is good or bad depends entirely on the quality of the character which expresses itself in the work. Not that all virtuous men are good artists, nor all artists conventionally virtuous. Longfellow was a bad poet, while Beethoven's dealings with his publishers were frankly dishonourable. But one can be dishonourable towards one's publishers and yet preserve the kind of virtue that is necessary to a good artist. That virtue is the virtue of integrity, of honesty towards oneself.
The Horses of Chauvet
the horses of Chauvet
lope through a cave
eloquently old and new
true and lush
as the love and lust
our progenitors knew
by Jim Culleny
by Akim Reinhardt
To be born in America in 1967 is, to some degree, to fall through the cracks.
The Baby Boom was most certainly over by then, its most senior elements old enough to vote and drink. But the Millennials, now the focus of every drooling advertising executive and marketing guru, were naught but twinkles in the eyes of their Boomer sires and dames.
Bookmarked between bigger generations, being born in the late 1960s and early 1970s meant you were conceived and suckled amid the tumult of the Civil Rights and Vietnam protests; in (cloth) diapers when the moon landing occurred; discovering kindergarten as President Richard Nixon’s Plumbers were bumbling the Watergate break-in; and learning to read when the final U.S. helicopters evacuated Saigon.
To be born in 1967 means that when the late 1960s and early 1970s were becoming iconic, you were there, but you weren't. You didn't get to partake in the Summer of Love. You're what it spit out.
Thus, when coming of age, many important things were very familiar to you, but their meanings were muddled. Cultural symbols like bell bottom jeans and rubber Richard Nixon masks were still common enough to be lodged in your consciousness, but deeper insights were lacking. By the time you were waking up in the late 1970s, they seemed to be little more than goofs, unmoored from the bloody anti-war protests that divided a nation, or the collapse of a presidency that shook Americans' faith in their government.
Sure, we understood our own moment well enough. Late Cold War and early computers. AIDS and acid rain. Crack cocaine and homelessness. But the gravitas that had conceived us was by then little more than parody and catharsis. Black Power surrendered to Blaxploitation. Protest songs gave way to disco and synth pop. Vietnam was reduced to Rambo.
And if the late 1970s began glossing over so much of what had immediately preceded it, then the 1980s buffed it into a smooth, porcelain sheen. In pop culture representations of the 1960s and early 1970s, substance had been overtaken by style. Symbols, absent their meaning, were rendered fashion accessories and punch lines. A case in point was the Confederate flag.
by Emrys Westacott
My Facebook profile describes my political views as "very liberal." In the US this is a shorthand way of indicating that I support gay rights, government-run health care, stricter gun laws, abortion rights for women, abolition of the death penalty, reduced military spending, environmental protection, campaign finance reform, the United Nations, Charles Darwin, the Toyota Prius, and higher taxes on people richer than me.
When I get together with other very liberals -- which is quite often, since I'm married to one -- a favorite topic of lamentation is the blindness of our political opponents. Why don't they get it? Why don't they see that we'd all be better off if we spent more on education and less on weapons systems; that if they really want to see fewer abortions they should support rather than oppose sex education in school and universal healthcare; that violent crime in the US is more likely to be reduced by having stricter gun control laws than by increasing the number of executions?
Our discussions of such matters follow a predictable course. After a round of annoyed tongue clicking, irritation gradually mounts until we reach a crescendo of infuriation and incredulity, from which we subside, with much headshaking, onto the soft but comfortless pillow of our usual answer. Why don't they get it? Because, to quote Samuel Beckett, "people are bloody ignorant apes!"
I believe something like the same kind of incredulity characterizes the view that many Europeans have of American politics. Whether the issue is denial of climate change, teaching creationism, resisting even minimal gun control, or opposing a more efficient health care system, the first impulse is to shake the head and ask, "How stupid can you get?"
As an explanation of why millions of people don't agree with me, the "ignorant ape" hypothesis has the virtue of simplicity. But I can't help feeling that it lacks depth. After all, in other areas of life conservatives aren't any more stupid than me or my fellow VLs. They make perfectly good parents, neighbors, and colleagues. So why do our wonderfully cogent arguments have so little purchase on their thinking?
I believe one key reason is that when it comes to political topics and stances, rational cogency often counts for less than symbolic meaning. In any debate, on any topic, the ideal is for the outcome to be determined entirely by the force of the best evidence and arguments. Indeed, submission to the argument is largely what we mean by scientific or scholarly objectivity. But submission to the argument seems to be less common in politics than in most other spheres. Instead, it is the symbolic significance of a political position that often decides whether a person endorses it or rejects it. This is true in every society; think, for instance, of the headscarf controversy in France. But it is perhaps more true in the US than in most other developed countries because for some reason symbols seem to play a bigger part in American political culture.
by Brooks Riley
Introduction by Bill Benzon
This month I've decided to turn things over to my good friend Charles Cameron, whom I've known for somewhat over a dozen years, though only online. He's a poet and a student of many things, most recently religious fundamentalism and its contemporary manifestations in terrorism. He characterizes himself as a vagabond monk and he blogs at Zenpundit and at Sembl. When he was eleven he applied to join an Anglican monastery and, while they didn't take him in, that act did bring him to the attention of the remarkable Fr. Trevor Huddleston, who became his mentor for the next decade. Thereafter Cameron explored Tibetan Buddhism, Hindu mysticism, and Native American shamanism. He's been around.
But it's his connection with Trevor Huddleston that got my attention, for Huddleston managed to broker a gift between two trumpet-player heroes of mine. At one point in his career he was in South Africa, where a young Hugh "Grazin' in the Grass" Masekela was one of his students. On a trip to America, Fr. Huddleston met Louis Armstrong and got him to give Masekela a trumpet.
To the bridge builders...
Pontifex as Bridge Builder: the Encyclical Laudato Si'
by Charles Cameron
I propose that in his recent encyclical Laudato Si', Pope Francis is exercising his function as Supreme Pontiff, or @pontifex as he calls himself on Twitter – a pontifex being literally a bridge builder. It is my contention that in his encyclical he bridges a number of divides, between Catholic and Orthodox, sacramental and social, liberal and conservative, religious and scientific, even Christian and Muslim, traditional and of the fast advancing moment, in a manner which will impact our world in ways yet unforeseen.
It is my contention, also, that his pontificate provides the third step in a momentous journey.
The first step, as I see it, was taken by Christ himself in the Beatitudes – blessed are the poor in spirit, they that mourn, the meek, the merciful, the pure in heart, the peacemakers – and in his doctrine of forgiveness, not once only but a myriad of times. The second was taken by Francis of Assisi, in his Canticle of Creatures – praised be you, my Lord, with all your creatures, especially Sir Brother Sun, through Sister Moon and the stars, praised be You, my Lord, through our Sister Mother Earth, who sustains and governs us… blessed those who endure in peace… – and in his crossing the front lines of war during the Crusades to greet in peace the Sultan Malik Al-Kamil in Damietta, Egypt. And in taking the name Francis, in washing and kissing on Maundy Thursday the feet of both male and female, Christian and Muslim juvenile offenders in prison, and in issuing this encyclical, I would suggest Pope Francis, born Jorge Mario Bergoglio, is taking the third step.
The line, the transmission, is of sheer humility. It begins with the Founder of the line, Christ himself, lapses, which all high inspirations must as routine replaces charisma, only to emerge brilliantly a millennium later in the saintly maverick, Francis, lapses again though still fermenting in the imagination of church and humankind, and now at last shows itself once more, in that most unexpected of places: in the heart of the bureaucracy, at the head of the hierarchy, atop the curia, simple, idealistic, practical – a pontifex building bridges.
Strained Analogies Between Recently Released Films and Current Events: Jurassic World and the World Wide Web
by Matt McKenna
Welcome, dinosaurs, to the pantheon of horror film monsters including zombies, sharks, and aliens that have been subjected to the sci-fi trope of genetic-engineering-gone-too-far. To be fair, it's hard to blame directors of horror sequels for invoking this narrative cliché -- how else are they expected to make their follow-up films interesting? Must they be forced to produce another movie in which the exact same monster plunks around and kills yet more people in precisely the same fashion as it did in the original? Of course not. Sequels have to be spiced up somehow, and the best way to do that is to make the scary monster scarier. And to make a scary monster scarier, a director has but two options: either add more scary monsters (e.g. there is one alien in Alien, but there are many aliens in its sequel) or dial up the intelligence of the scary monster (e.g. the shark is a simple killing machine in Jaws, but it becomes emotionally complex and vindictive by Jaws IV). Genetic engineering comes in as the convenient means by which one of these methods is enacted. It was therefore only a matter of time before Hollywood created a blockbuster film about scientists creating a gifted and talented dinosaur rampaging about eating people. Jurassic World is that film, it's not bad, and it's also a strikingly good metaphor for the current state of the World Wide Web.
In Jurassic World, the dinosaur-filled theme park of the first film in the franchise has reopened after having miraculously recovered from the disaster that occurred decades prior. To no one's surprise, history repeats itself when a dinosaur escapes and eats a slew of park guests. However, this escaped dinosaur isn't just any old dinosaur -- it's a genetically modified ultra-huge and hyper-smart murder monster. As expected, the second and third acts of the movie consist mainly of the human characters yelling "run" and "go" and really just a lot of yelling in general as CG dinosaurs wriggle around and eat things.