Wednesday, April 16, 2014
Rebecca Newberger Goldstein in The Chronicle of Higher Education (image: André da Loba for The Chronicle):
Questions of physics, cosmology, biology, psychology, cognitive and affective neuroscience, linguistics, mathematical logic: Philosophy once claimed them all. But as the methodologies of those other disciplines progressed—being empirical, in the case of all but logic—questions over which philosophy had futilely sputtered and speculated were converted into testable hypotheses, and philosophy was rendered forevermore irrelevant.
Is there any doubt, demand the naysayers, about the terminus of this continuing process? Given enough time, talent, and funding, there will be nothing left for philosophers to consider. To quote one naysayer, the physicist Lawrence Krauss, "Philosophy used to be a field that had content, but then ‘natural philosophy’ became physics, and physics has only continued to make inroads. Every time there’s a leap in physics, it encroaches on these areas that philosophers have carefully sequestered away to themselves." Krauss tends to merge philosophy not with literature, as Wieseltier does, but rather with theology, since both, by his lights, are futile attempts to describe the nature of reality. One could imagine such a naysayer conceding that philosophers should be credited with laying the intellectual eggs, so to speak, in the form of questions, and sitting on them to keep them warm. But no life, in the form of discoveries, ever hatches until science takes over.
There’s some truth in the naysayer’s story. As far as our knowledge of the nature of physical reality is concerned—four-dimensional space-time and genes and neurons and neurotransmitters and the Higgs boson and quantum fields and black holes and maybe even the multiverse—it’s science that has racked up the results. Science is the ingenious practice of prodding reality into answering us back when we’re getting it wrong (although that itself is a heady philosophical claim, substantiated by concerted philosophical work).
And, of course, we have a marked tendency to get reality wrong.
Sleep is invisible and inconsistent. Aping death, sleep in fact prevents it; at the very least, sleep deprivation leads to premature demise (and before that, failures in mood, metabolism, cognitive function). All animals sleep, and it makes sense for none of them, evolutionarily, since it leaves the sleeper defenseless to predation. Sleep is common, public, a vulnerability we all share—even as sleep also brackets the sleeper in the most impenetrable of privacies. Nothing, everyone knows, is harder to communicate than one’s dream.
And then there’s time. Sleep seems to remove us from the general tyranny of the advancing clock. When you wake, 20 minutes could have passed as easily as three hours. But sleep defines time, dividing day and night. Humans discover circadian rhythm through the urge to sleep. That urge is, of course, cyclic, endless: always more sleep to be had. But sleep measures forward progress by consolidating our sense of the past. (Steven W. Lockley and Russell G. Foster lay out the evidence for this and other facts in their briskly informative Sleep: A Very Short Introduction.) In sleep, our brains decide what to keep and discard. Without sleep, we would dissolve into overloaded confusion.
Who does the Crimea belong to?
First of all, to the sea that made it. Seven thousand years ago, the Black Sea was much lower than it is today. Then a waterfall tumbled over the Bosporus, and the waters began to rise. The flood cut the Crimea off from the mainland, all except for a narrow isthmus called the Perekop. Ever since, it has been a rocky island on the shores of a sea of grass.
The steppes belonged to the nomads. Grass meant horses, and freedom. The steppes stretched north, from the mouth of the Danube to the Siberian Altai. Across the centuries they were home to various nomadic confederations and tribes: Scythians, Sarmatians, Huns, Pechenegs, Cumans, Mongols, and Kipchak Turks. The legendary Cimmerians predate them all; the Cossacks are still there today.
At times, the nomadic tribes made their home in Crimea too.
The eighties, at least, were drenched in cocaine and neon, slick cars and yacht parties, a real debauched reaction. But nineties white culture was all earnest yearning: the sorrow of Kurt Cobain and handwringing over selling out, crooning boy-bands and innocent pop starlets, the Contract With America and the Starr Report. It was all so self-serious, so dadly.
Today, by some accounts, the nineties dad is cool again, at least if you think normcore is a thing beyond a couple NYC fashionistas and a series of think pieces. Still, that’s shiftless hipsters dressed like dads, not dads as unironic heroes and subjects of our culture. If the hipster cultural turn in the following decades has been to ironize things to the point of meaninglessness, so be it. At least they don’t pretend it’s a goddamn cultural revolution when they have a kid: they just let their babies play with their beards and push their strollers into the coffee shop. In the nineties, Dad was sometimes the coolest guy in the room. He was sometimes the butt of the joke. He was sometimes the absence that made all the difference. But he was always, insistently, at the center of the story.
Miles Becker in Conservation:
Can farmers feed an additional 4 billion people with current levels of crop production? A team from the University of Minnesota tackled the problem by shifting the definition of agricultural productivity from the standard measure (tons per hectare) to the number of people fed per hectare. They then audited the global caloric budget and found a way to squeeze out another 4 quadrillion calories per year from existing crop fields. Their starting point was meat production, the most inefficient use of calories to feed people. The energy available from a plant crop such as corn dwindles dramatically when it goes through an intermediate consumer such as a pig. Beef has the lowest caloric conversion efficiency: only 3 percent. Pork and chicken do three to four times better. Milk and eggs, animal products that provide us essential nutrients in smaller batches, are a much more efficient use of plant calories.
The researchers calculated that 41 percent of crop calories made it to the table from 1997 to 2003, with the rest lost mainly to gastric juices and droppings of livestock. Crop calorie efficiency is expected to fall as the meat market grows. Global meat production boomed from 250.4 million tons in 2003 to 303.9 million tons by 2012, as reported by the FAO. Rice production, mainly for human food, dwindled by 18 percent over the same time period. The authors of the 2013 paper, published in Environmental Research Letters, suggested a trend reversal would be desirable. They estimated that a shift from crops destined for animal feed and industrial uses toward human food could hypothetically increase available calories by 70 percent and feed another 4 billion people each year.
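The conversion arithmetic behind these figures is simple to sketch. In the snippet below, beef's roughly 3 percent efficiency comes from the passage above; the pork and chicken values are illustrative assumptions chosen within the quoted "three to four times better" range, not numbers from the paper.

```python
# Sketch of the calorie-accounting idea described above.
# Only beef's ~3 percent figure is from the text; pork and chicken
# values are illustrative assumptions within the quoted range.

efficiency = {
    "beef": 0.03,      # edible calories out per feed calorie in
    "pork": 0.10,      # assumption: ~3x beef
    "chicken": 0.12,   # assumption: ~4x beef
}

def calories_delivered(feed_kcal, product):
    """Edible animal calories reaching the table from feed_kcal of crop calories."""
    return feed_kcal * efficiency[product]

# One million kcal of feed crop: eaten directly, it feeds people at
# full value; routed through cattle, only ~30,000 kcal remain.
feed = 1_000_000
beef_kcal = calories_delivered(feed, "beef")   # ~30,000 kcal
loss_factor = feed / beef_kcal                 # ~33x fewer calories as beef
```

The 70 percent gain the authors estimate comes from reallocating feed and industrial crops along exactly this kind of ledger.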
This discovery opens up the possibility that environmental and/or genetic factors may hinder or suppress a specific brain activity that the researchers have identified as helping us prevent distraction. The Journal of Neuroscience has just published a paper about the discovery by John McDonald, an associate professor of psychology, and his doctoral student John Gaspar, who made the discovery during his master's thesis research. This is the first study to reveal that our brains rely on an active suppression mechanism to avoid being distracted by salient irrelevant information when we want to focus on a particular item or task.
McDonald, a Canada Research Chair in Cognitive Neuroscience, and other scientists first discovered the existence of the specific neural index of suppression in his lab in 2009. But, until now, little was known about how it helps us ignore visual distractions. "This is an important discovery for neuroscientists and psychologists because most contemporary ideas of attention highlight brain processes that are involved in picking out relevant objects from the visual field. It's like finding Waldo in a Where's Waldo illustration," says Gaspar, the study's lead author.
First Poem of the Morning
When you and I wave
I wonder if for you
the stranger across
three gray rooftops
over the blackbirds
pecking the softening
skylight rim of morning
through the shapes
of blackened branches
on the other side
of the ice-paned
I wonder if for you
our wave is the first
poem of the morning
by Ann Nadge
Tuesday, April 15, 2014
Robert Alter in The New Republic:
Evelyn Barish begins her impressively researched biography by flatly stating that “Paul de Man no longer seems to exist.” This may be an exaggerated expression of frustration by a biographer whose long-incubated work now appears after what might have been the optimal time for it. Yet there is considerable truth in what she says. De Man is now scarcely remembered by the general public, though he was the center of a widely publicized scandal in 1988, five years after his death at the age of 64. In the 1970s and 1980s, he was a central figure, an inevitable figure, in American literary studies, in which doctoral dissertations, the great barometer of academic fashion, could scarcely be found without dozens of citations from his writings. But the meteor has long since faded: over the past decade and more, I have only rarely encountered references to de Man in students’ work, committed as they generally are to marching with the zeitgeist.
Paul de Man arrived in the United States from his native Belgium in the spring of 1948. He would remain in this country illegally after the expiration of his temporary visa, on occasion finding ways to elude the Immigration and Naturalization Service. But that, as Barish’s account makes clear, was the least of his infractions of the law. Eventually he would be admitted, with a considerable amount of falsification on his part, to the doctoral program in comparative literature at Harvard, from which he would receive a degree, in somewhat compromised circumstances, in 1960. He then went on to teach at Cornell, briefly at Johns Hopkins, and most significantly at Yale, where he became a “seminal” scholar and an altogether revered figure.
More from Wired here.
From the New York Times:
“The Goldfinch” (Little, Brown)
Ms. Tartt’s best-selling novel is about a boy who comes into possession of a painting after an explosion at a museum.
In a phone conversation on Monday, Ms. Tartt, 50, said the novel “was always about a child who had stolen a painting,” but it was only two years into writing the book that she saw “The Goldfinch,” a 17th-century work by Carel Fabritius.
“It fit into the plot of the book I was writing in ways I couldn’t have imagined,” she said. “It had to be a small painting that a child could carry, and that a child could be obsessed by.”
Finalists: Philipp Meyer, “The Son”; Bob Shacochis, “The Woman Who Lost Her Soul.”
Leo Mirani and Gideon Lichfield in Quartz (via Jennifer Ouellette, D-Wave Systems photo):
For the past several years, a Canadian company called D-Wave Systems has been selling what it says is the largest quantum computer ever built. D-Wave’s clients include Lockheed Martin, NASA, the US National Security Agency, and Google, each of which paid somewhere between $10 million and $15 million for the thing. As a result, D-Wave has won itself millions in funding and vast amounts of press coverage—including, two months ago, the cover of Time (paywall).
These machines are of little use to consumers. They are delicate, easily disturbed, require cooling to just above absolute zero, and are ruinously expensive. But the implications are enormous for heavy number-crunching. In theory, banks could use quantum computers to calculate risk faster than their competitors, giving them an edge in the markets. Tech companies could use them to figure out if their code is bug-free. Spies could use them to crack cryptographic codes, which requires crunching through massive calculations. A fully-fledged version of such a machine could theoretically tear through calculations that the most powerful mainframes would take eons to complete.
The only problem is that scientists have been arguing for years about whether D-Wave’s device is really a quantum computer or not. (D-Wave canceled a scheduled interview and did not reschedule.) And while at some level this doesn’t matter—as far as we know, D-Wave’s clients haven’t asked for their money back—it’s an issue of importance to scientists, to hopeful manufacturers of similar machines, and to anyone curious about the ultimate limits of humankind’s ability to build artificial brains.
There is thus a lively debate in Russia itself on the country’s orientation. The question is, where does the leadership stand in this debate? The answer is difficult, because not only has Russia become more autocratic under Putin, but the circle of real decision-makers has become ever smaller. According to some accounts, it may consist of no more than five people. But, reviewing the period since 2000, when Putin assumed power, it is plausible that it began with a continuation of a commitment to democracy and a market economy, associated with a growing resentment at lack of consideration on the part of the West to certain deep Russian concerns – NATO enlargement, treatment as a poor supplicant, disregard for what are seen as legitimate interests in the neighbourhood etc. Angela Stent cites a senior German official complaining of an “empathy deficit disorder” in Washington in dealing with Russia. The pathology that this caused became progressively more virulent in the intervening years, culminating in 2003 in the invasion of Iraq without any Security Council mandate, indeed, in open defiance of the UN. After this, the New York Times magazine’s Ron Suskind reported on a visit to the Bush White House in 2004 in the course of which he recounts that “an aide” (commonly supposed to be Karl Rove) “said that guys like me were ‘in what we call the reality-based community’, which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality’… ‘That’s not the way the world really works any more’, he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors … and you, all of you, will be left to just study what we do.’”
For years I’ve been hearing it said that young artists think art began with Andy Warhol. It’s never been true. But now what I hear is art historians complaining that none of their students want to study anything but contemporary art. Among young art historians, it seems, to delve as far back as the 1960s is to be considered an antiquarian. “They only take my courses because they think they need some ‘background,’” one Renaissance specialist told me. “We have to accept almost anyone who applies saying that they want to study anything before the present, just to give our current faculty something to do.” What a time, when the art historians have less historical consciousness than the artists—and no wonder that the former, these days, show so little interest in what the latter actually do.
When I was a grad student (in a different field), the budding art historians I knew were studying medieval, they were studying mannerism, they were studying the Maya. No one thought of studying living artists. The most adventurous ones might be investigating Italian Futurism. Now the Futurists seem as distant as the Maya. But might this be their own fault?
We all distinguish between plants and animals. We understand that plants, in general, are immobile, rooted in the ground; they spread their green leaves to the heavens and feed on sunlight and soil. We understand that animals, in contrast, are mobile, moving from place to place, foraging or hunting for food; they have easily recognized behaviors of various sorts. Plants and animals have evolved along two profoundly different paths (fungi have yet another), and they are wholly different in their forms and modes of life.
And yet, Darwin insisted, they were closer than one might think. He wrote a series of botanical books, culminating in The Power of Movement in Plants (1880), just before his book on earthworms. He thought the powers of movement, and especially of detecting and catching prey, in the insectivorous plants so remarkable that, in a letter to the botanist Asa Gray, he referred to Drosera, the sundew, only half-jokingly as not only a wonderful plant but “a most sagacious animal.”
Darwin was reinforced in this notion by the demonstration that insect-eating plants made use of electrical currents to move, just as animals did—that there was “plant electricity” as well as “animal electricity.”
Tim Martin in The Guardian:
The years in which the young Samuel Beckett prepared and published his first collection of short stories were, as he later remarked, “bad in every way, financially, psychologically”. In late 1930 he had returned to Dublin from teaching at the École Normale Supérieure in Paris, reluctantly swapping the shabby dazzle of James Joyce’s circle and the fun of drunken nights on the town for a post lecturing at Trinity College that he soon came to hate. Painfully awkward and shy, Beckett was tortured by public speaking, and he dreaded what he called the “grotesque comedy of lecturing” that involved “teaching to others what he did not know himself”. To the horror of his parents, he resigned, bouncing disconsolately between Germany, Paris and London on a family stipend as he tried to get his first novel off the ground. Money became shorter and shorter. In the autumn of 1932, he was forced to “crawl home” to his parents in Dublin when the last £5 note his father sent him was stolen from his digs. He was 26.
At home, however, his problems were far from over. It soon became clear that Dream of Fair to Middling Women, the madcap, erudite, Joycean book he had written at speed in Paris earlier that year, was not going to be the success he imagined. During a miserable spell in London, feeling “depressed, the way a slug-ridden cabbage might expect to be”, he shopped the manuscript around to several publishers: Chatto & Windus, the Hogarth Press, Jonathan Cape and Grayson & Grayson. The letter he wrote later to a friend summarised the results of the trip. “Shatton and Windup thought it was wonderful but they simply could not. The Hogarth Private Lunatic Asylum rejected it the way Punch would. Cape was écoeuré [disgusted] in pipe and cardigan and his Aberdeen terrier agreed with him. Grayson has lost it or cleaned himself with it.” Back in Dublin, wearily recognising that Dream might be unpublishable (it appeared posthumously in 1992), Beckett devoted his remaining energy to compiling a volume of short stories. Like his novel, these covered episodes in the life of Belacqua Shuah, a Dublin student who shared the author’s obsession with Dante and Augustine as well as his hang-ups about sex.
Virginia Hughes in Nature:
Trauma is insidious. It not only increases a person’s risk for psychiatric disorders, but can also spill over into the next generation. People who were traumatized during the Khmer Rouge genocide in Cambodia tended to have children with depression and anxiety, for example, and children of Australian veterans of the Vietnam War have higher rates of suicide than the general population.
Trauma’s impact comes partly from social factors, such as its influence on how parents interact with their children. But stress also leaves ‘epigenetic marks’ — chemical changes that affect how DNA is expressed without altering its sequence. A study published this week in Nature Neuroscience finds that stress in early life alters the production of small RNAs, called microRNAs, in the sperm of mice (K. Gapp et al. Nature Neurosci. http://dx.doi.org/10.1038/nn.3695; 2014). The mice show depressive behaviours that persist in their progeny, which also show glitches in metabolism. The study is notable for showing that sperm responds to the environment, says Stephen Krawetz, a geneticist at Wayne State University School of Medicine in Detroit, Michigan, who studies microRNAs in human sperm. (He was not involved in the latest study.) “Dad is having a much larger role in the whole process, rather than just delivering his genome and being done with it,” he says. He adds that this is one of a growing number of studies to show that subtle changes in sperm microRNAs “set the stage for a huge plethora of other effects”.
I remember your square jaw
Strong and viselike
Of my hand father
That wouldn’t let go
I remember you at the bottom of the stairs
We had to go son
I remember the hat
The small brim
With its feather
You always wore
As if leaving without it
Was like being naked in the sun
I remember you standing
Behind the old glass counter
With its huge crack
weight upon your right foot
I remember that subtle smile
Showing only a portion
Of the false teeth
I remember you father asking me
With your worried look father
Why I liked that girl
With the dark skin
I never knew father
What you father
by Bill Schneberger
Monday, April 14, 2014
by Emrys Westacott
In 1930 the economist John Maynard Keynes predicted that increases in productivity due to technological progress would lead within a century to most people enjoying much more leisure. He believed that by 2030 the average working week would be around fifteen hours. Eighty-four years later, it doesn't look like this prediction will come true. Most full-time workers work two, three, or four times that, and many part-time workers would work more hours if they could, since they need the money.
So why haven't we come closer to realizing the expectations of Russell and Keynes? In their recent book, How Much Is Enough? Money and the Good Life (Other Press, 2012), Robert and Edward Skidelsky offer an interesting answer. According to them Keynes' mistake was his failure to realize that capitalism has unleashed forces that can't be brought under control. Specifically, it has greatly inflamed a natural human desire for recognition and status, turning it into an insatiable desire for ever more wealth—wealth being the number one determinant of status in our society. If we could just settle for a modest level of comfort, we could work far less. But the yearning for more wealth and more stuff now leads people to spend far more time working than they need to. The same insatiability characterizes our society as a whole. Every politician and most economists take for granted that we should be striving with all our might to achieve economic growth without limit. The wisdom of this relentless, endless pursuit of economic growth is rarely questioned.
The Skidelskys' explanation of why we still work much more than Keynes predicted isn't entirely wrong, but I don't think it's the whole story or even the most important part. It's no doubt true of some people that they are driven to work more than they need to by insatiable greed. But I suspect that far more people work the hours they do because of circumstances beyond their control. For instance, many people work long hours simply because their hourly wage is quite low, so they work overtime, or perhaps take a second job, just in order to have enough to live on. Some live in expensive metropolitan areas like Boston or San Francisco, so even though they make a good wage, they actually need a full-time job even to secure a fairly modest level of comfort, given the cost of housing. Many people keep working full time, even though they'd like to retire or go part time, because only a full-time job will provide indispensable benefits like health insurance and a pension. And lots of people would like to cut back the hours they work but can't for a simple reason: their boss won't let them.
But there's also another factor preventing us from achieving a more leisured and balanced lifestyle, and that is the intensely competitive social environment in which we live.
by Brooks Riley
As I hover over my life in cyberspace, I look down at the various trails emanating from me that find their way across the globe to multiple destinations, known and unknown, whether or not they were ever intended to travel that far. Interconnectivity has increased exponentially since 2009 when I bought the notebook whose recent demise forced me to confront a sea change. Up to now, I'd left a line of breadcrumbs, for Windows, for McAfee, for Google, for the NSA, for my e-mail contacts, for who knows who else. Now those breadcrumbs have become loaves, and, like those in the parable, they have multiplied.
I loved my old notebook: Except for the odd update or security scan, it was just it and I, two symbiotic pals going about our business. Now I find myself constantly confronted with geek issues such as OS updates, software compatibility, multiple preference settings and cloud management. Is Microsoft my new best friend because it greets me (Hello from Seattle) and promises to guide me? Is Apple my new best friend because it promises chic design? Is Google my new best friend because it finds things, shows me where I live and offers to hardwire my nest? Is Amazon my new best friend because it delivers? None of the above. They fall into the category of useful acquaintances to whom I turn when I need them. My new best friend turns out to be my old best friend, Wikipedia, without which the world would be a poorer place for one who wants to know everything.
What does it mean to leave behind such spoors (to borrow the language of the hunted), when most of the billions before us left only genetic traces in the form of offspring and descendants? An electronic version of each one of us will haunt the internet after we're gone, as immutable and indestructible as the risus rigidus of a Guy Fawkes mask on the trash heap after the party's over.
Facebook is beginning to deal with death, but only with issues of access, not with the fate of the pages themselves. Nearly 3 million Facebook users worldwide were predicted to die in 2012 alone, their pages achieving an immortality denied to their progenitors. Will famous last words be replaced by famous last entries? Will Stephen King write a ghoulish story about a Facebook user who updates his page from heaven? Will some start-up create a ‘dropped box’ in cyberspace for the dearly departed? And what about all those other clouds? Your stuff is safe and backed up. You are not.
by Charlie Huenemann
In 1746, Hume returned to London after touring Europe as tutor and caretaker of the mad Marquess of Annandale. He was not sure what was next in his life. He was already 35 and somewhat ashamed of not having yet made a career for himself. He resolved to return to Scotland, but at the last minute he received an unexpected invitation to serve in a military expedition to Canada. The invitation came from Lieut.-General James St Clair, a distant relative of Hume whom he had recently met. The opportunity hit Hume at just the right time, and he wondered if this was the beginning of a career in the military.
The plan for the expedition was to approach Quebec by way of the St. Lawrence River in August. Hume set his affairs in order and reported for duty. But what followed was not the exciting onset of an adventure at sea, sails rippling in the wind, but three months of fits and starts. When the wind was not favorable, they were stuck in one harbor or another; when the wind was favorable, the orders from the Navy changed and kept them from going anywhere.
By the end of August, the orders changed dramatically. Forget Canada; the new plan was to invade the French coast and cause a distraction from the campaign taking place then around Flanders. But winds were unfavorable once again, giving St Clair the opportunity to remind the Navy that for this new assignment he had no maps, no military intelligence, no horses, and no money.
The Navy sent along a major and some ship pilots to help plan for an invasion – though, as it turned out, none of them could provide any helpful information. Thus, as Hume put it, the company "lay under positive orders to sail with the first fair wind, to approach the unknown coast, march through the unknown country, and attack the unknown cities of the most potent nation of the universe".
On September 15th, they undertook to do just that, setting out for Lorient in Brittany with about 50 ships and 4500 men, with the guidance of a map bought in a shop in Plymouth. They arrived at the French coast in the evening of September 18th. But instead of invading right away, the commanding admiral waited until the following morning to land, and that morning they encountered winds that prevented their landing for two more days. This of course gave the French plenty of time to see them, sound alarms, and prepare a defense of some 3,000 militia, plus cavalry. The wind finally relented and the invading British troops landed, diverting at the last moment to an unoccupied section of the coast. They chased some French soldiers into the hills and issued a general declaration to villagers in the area that they would not be harmed if they did not oppose them. Hume was apparently so excited that he simply co-signed this declaration "David," forgetting to supply his last name.
What followed then was the sort of comedy of errors one could easily see coming. The British troops began to poke around the unfamiliar territory, engaged in some minor skirmishes, sacked a village, and entered into a firefight in which they ended up shooting at each other. Rain kept pouring, morale was low, and many soldiers just wandered off into the French countryside.
“Did you like your father?” my friend asked.
The Tongues of His Black Boots Say
as my father sleeps the world goes on
his black boots are by the door
he left them there unlaced
the right run down at the heel
the left toe scuffed
his blue shirt hangs on a hook
wrinkled below the belt line
where every morning
its tails were tucked
there’s no forgiveness in pasts
just now and here, defeat
is the hardest epiphany
the tongues of his
black boots say
by Jim Culleny
by Gautam Pemmaraju
Suave locus voci resonat conclusus
(How sweetly the enclosed space responds to the voice)
—Horace, Satires I, iv, 76 (in Doyle, P, Echo and Reverb:
Fabricating Space in Popular Music Recording, 1900 – 1960; 2005)
The whispering gallery that runs along the inner periphery of the dome of Gol Gumbaz, the mausoleum of the medieval Bijapur sultan Muhammad Adil Shah (1626 – 56 CE), is an acoustic marvel. A single clap can produce up to ten distinct echoes in the dome. And a reasonably soft whisper can be heard across a distance of a hundred and thirty feet. The tourists visiting the place are mostly prone to whoop, shout, and clap with great enthusiasm, overwhelming the dome with dense sonic information. At quiet times, though, one can savour its rich, amplified reverberance—the timbre, colour and tone of the spoken word assumes an elevated quality, as if it were imbued with the sheen of something beyond earthly artifice.
Such sonic modulations appear to us to be of a higher order, sanctified by primordial forces. And in our own mimetic appropriations, of sermons and speeches, chants and songs, drones and dirges, we seek to texturize our words with an otherworldly aura. The use of delay effects in sound recording allows us then to ritualistically edify our anxieties and inadequacies and transpose them into reverberant solemnity.
The prosaic use of delay effects in recorded sound—echo and reverberation—has its place in modern times, but the phenomenon has for long resided in the realm of mystical experience. The Greco-Roman mythical character Echo, a nymph condemned to repeat all that she hears, is a tragic figure by all accounts. Rebuffed by Narcissus, the heartbroken Oread hides herself in woods, caves and mountain cliffs. She withers away there in loneliness, her flesh wasting away and bones turning into stone till all that is left is her voice. In this reduced, etheric spectral state, all she can do is to reply to anyone who calls out to her.
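Echo's repeating, decaying replies are, in signal terms, what a feedback delay line produces. A minimal sketch, with illustrative parameter values (the delay and feedback settings are assumptions for the example, not drawn from any recording practice described here):

```python
# A minimal digital echo: each output sample is the dry input plus a
# decaying copy of the output from `delay` samples earlier, i.e.
#   out[n] = in[n] + feedback * out[n - delay]
# The delay and feedback values below are illustrative assumptions.

def echo(signal, delay, feedback):
    """Apply a feedback delay line to a list of samples."""
    out = list(signal)
    for n in range(delay, len(out)):
        out[n] += feedback * out[n - delay]
    return out

# A single impulse (a "clap") yields a train of echoes, each quieter
# than the last, like the repeats heard under the Gol Gumbaz dome.
clap = [1.0] + [0.0] * 40
wet = echo(clap, delay=10, feedback=0.5)
# Echoes appear at samples 10, 20, 30, 40, halving in amplitude each time.
```

Reverberation is the same idea taken to density: many such delayed copies, too closely spaced for the ear to resolve individually.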
by Mathangi Krishnamurthy
All my life, I've been called a Madrasi. This is false, funny, and ironic. For those who live north of the Vindhyas in India, all four of the southern states connote a ubiquitous "Madras", or in other words the land where people speak Madrasi (otherwise known as four distinct languages: Kannada, Telugu, Malayalam, and Tamil). But Madras, or to call it by its current, official, and always locally more kosher name, Chennai, was never home to me. I visited Madras, and I lived in Bombay. Madras was heat, provinciality, incoherence, and conservatism. For the longest time, it occupied the second position on a list of cities I vowed never to inhabit. Number One is still held by New Delhi, and I hope it doesn't indulge in similarly stymieing my life plans. Hush, I tell myself, lest the Gods have sharp ears. Evidence indicates otherwise, but you never know.
Madras, I am told by the many books I peruse in the hopes of gaining intellectual familiarity, is where modern India began. This old colonial outpost, through which the likes of Robert Clive, Elihu Yale, and Arthur Wellesley passed, dates back to the 1640 settlement of Madraspatnam. For those seeking a primer, I highly recommend Bishwanath Ghosh's Tamarind City and, of course, S. Muthiah's Madras Discovered.
Seeking this selfsame city of sepia fame, I wander off one bright Madras morning, dragging along a friend and reluctant early riser to Fort St. George, one of the arteries of the colonial enterprise. Disembarking from the train at Beach station at seven a.m. sharp, bright and caffeinated, we walk past a still-sleeping old town through NSC Bose Road and the various Chetty streets, named after differently famed members of the Chettiar community. Each street differentiates itself by the goods it sells: electrical appliances in one, upholstery in another, plumbing equipment in yet another.
The art-deco buildings are magnificent, and often magnificently ratty. The politics of heritage preservation are apparently a nationwide phenomenon. I receive atmospheric consolation from this history, which seems like so many other histories of so many other old towns. I do what any self-respecting debutante in urban studies might do: take many pictures. Fort St. George, the Armenian church with many buried Armenians and nary a community, Armenian Street, abandoned pushcarts, modernist architecture, all fodder for my newly obsessive need to know this city.
Ciprian Muresan. I'm Too Sad to Tell You. 2009
Perceptual experience is a distinctly privileged way of knowing about the world. Not only is perceptual experience ultimately the bridge between mind and world, but it also trumps other ways of knowing. When another way of knowing about the world – inference, introspection, memory, or testimony – disagrees with experience, we keep the experience and drop the other, unless we have strong reason to believe that what we're experiencing is an illusion or a hallucination. So, if you were to tell me that my sister is in Melbourne, and later that day I saw her walking across the street from where I am here in Adelaide, I would immediately drop the belief based on testimony that she is in Melbourne. However, consider an alternative situation: I know that my sister has a doppelgänger who lives here in Adelaide. In this situation, if I happen to believe that you're a reliable source of information, I'd probably believe instead that I'm actually seeing my sister's doppelgänger.
The moral is this: experience, privileged as it is, is still judged against what we already happen to know – as are the other sources of knowledge. When a proposition arrived at by inference or testimony disagrees with something I already know, I subject that proposition to much closer scrutiny than I otherwise would. Furthermore, experience, privileged as it is, always involves interpretation. The first case, where I see my sister, and the second case, where I see my sister's doppelgänger, provide me with identical data. Each of the perceptual experiences is indistinguishable from my first-person perspective. The fact that interpretation is involved in perceptual experience is what explains how it's possible to come to different conclusions from identical perceptual experiences.
So, how are we to understand "interpretation" in the context of perceptual experience? It's certainly nothing like conscious deliberation; otherwise its presence would be salient to us (and it's not), and experience would be much more plastic than it actually is, in that interpretation would affect it in a far more thoroughgoing way. So our background knowledge influences our perceptual experiences in a way that is automatic and unconscious. Should this consideration lead to scepticism about whether our perceptual faculties can give us objective knowledge of the external world? I think the answer is clearly no.