Monday, October 20, 2014
7500 Miles, Part II: Dakota
by Akim Reinhardt
Part I of this essay appeared last month.
Thus continues my grand voyage, in which a rusty ‘98 Honda Accord shuttles me from one end of North America to the other and back again . . .
After stumbling halfway across the continent, I settled into the northern Great Plains for a spell. Determined to visit a variety of archives, I criss-crossed South Dakota to the tune of a thousand miles. It's a big state.
First I spent some time in the East River college towns of Vermillion and Brookings. A hop, skip, and a jump from the Minnesota border, this here is Prairie Home Companion country. It's a land of hot dishes (casseroles) and Lutheran churches. Of sprawling horizons and "Oh, ya know."
There's lots of tall people. Lots of blond people. Lots of tall, blond people. I like it.
But after a week of researching and visiting old friends, I left behind the Scandinavian heritage and Minnesota-style niceties of eastern South Dakota. I made my way west across the Missouri River and then headed north. Actually, I crossed the line into North Dakota; Sitting Bull College on the Standing Rock Reservation is in the NoDak town of Fort Yates.
I'm happy to give the tribe some money, so I spent a night at the tribally owned Prairie Knights hotel and casino. I had a mind to play some poker, but when I went downstairs to investigate, I found the card room was already in the thick of a Texas Hold ‘Em tournament. So I bought a sandwich, returned to my room, and watched Derek Jeter's last game at Yankee Stadium.
After Standing Rock, the plan was to go straight down the gut of central South Dakota to Rosebud Reservation, which sits near the Nebraska border, and then westward to Pine Ridge Reservation in the state's southwestern corner.
If you were to plot my herky-jerky route across South Dakota, I suspect it would create an exciting new shape that mathematicians would get wide-eyed about. And then they'd come up with a cool name for this strange but essential new shape. Maybe something like an "akimus." The akimus will shed new light on our understanding of trapezoids. And of course it will have some mysterious relationship to Pi.
I can imagine this because I haven't passed a math class since the 10th grade.
On the way to Rosebud I stopped off at the town of Murdo and spent the night at the Anchor Inn. The name, of course, is brilliant. The place is 1,500 miles from either ocean.
The Anchor Inn is a cheap motel with some surprisingly stylish touches. My room included a maroon, pillowed headboard and a brass chain lamp strung along the ceiling above the bed. The bathroom featured 1960s pink ceramic tile. The towels were actual colors, tan and red, instead of the generic motel-white.
Too bad I'm traveling alone. This place would be a perfect spot for some kitschy sex.
The room key was old timey. An actual key attached to a plastic key chain that had the hotel's name, address, and my room's number printed on it. The kind of thing you really need to not lose.
Next to the motel office was a small bar and grill called the Lost Souls Tavern. It featured a Grim Reaper logo that looked like it was inspired by Sons of Anarchy, the television drama about a California biker gang. But then again, we're not very far from Sturgis, site of the world's largest annual motorcycle rally, so the inspiration could just as easily flow the other way.
After settling into my room, I walked down to the Lost Souls. On my way there, I noticed a sign on the back window of a pickup truck outside the motel office that read:
I walked into The Lost Souls, sat down, and ordered a Shiner Bock beer. Dollar bills with various messages scrawled across them were taped all over the ceiling and the walls. The room was adorned with a melange of rural tchotchkes. A couple of middle-aged men sat at the bar, drinking beer and chatting with the female bartender while a show about Alaska State Troopers hunting down various petty criminals played on the TV behind her.
After glancing over the menu, I decided to go with the fried sampler platter. Bad move. It wasn't actually fried. The breaded, frozen nuggets were simply microwaved. Without the hot grease to provide flavor, it was just a dry, tasteless pile of mush. The jalapeno ketchup and cold beer were the only things that made it edible.
When I was done, I returned to my room. There was an exceptionally aggressive fly waiting for me.
The next morning, as I walked to the office to return my room key, I crossed paths with a middle-aged woman. She looked like she might've been a biker's Old Lady at some point. Or maybe she was just going for biker chic, with the sparkly belt. We chatted briefly. She seemed very nice. Then she got into the big pickup with the 2nd Amendment sign.
It was a reminder of how fear-based stereotypes often frame so much of the gun debates in this country. Many Conservative gun rights supporters fixate on defending themselves from the specter of violence by imaginary criminals, probably dark-skinned ones. Meanwhile, many Liberal gun rights opponents fret about the specter of violence by imaginary gun nuts, probably rural white ones.
I'm not going to use this essay to delve into the complexities of the gun debate. But my interaction with this woman was a pleasant reminder that most folks on either side of the debate are actually perfectly normal and reasonable people. At least when they're not screaming about guns, that is.
Prairie Dogs. If you've never seen one, take my word for it: they're just the cutest goddamn thing on the face of the planet. They can easily hold their own against penguins, puppies, and kittens.
Prairie dogs used to be ubiquitous on the Great Plains. Incredibly social animals, they tunnel enormous subterranean villages for their clans to live in. Down there they're safe from most predators. They come back up to graze and take in the wider world, sprouting up through the rings of dark earth surrounding their access holes. These dirt doughnuts, spread across the prairie grass, are signposts of a prairie dog village.
Now domesticated cattle, that's not a particularly cute animal. And certainly not a smart one. Dumb motherfuckers. They're apt to occasionally step into a prairie dog hole and break a leg. Consequently, many ranchers are keen on prairie dog genocide, poisoning entire villages and filling in their access holes.
It seems there are always more and more reasons not to eat meat that have nothing to do with why I don't eat meat. This is yet another of those ancillary reasons that make me feel happy about not consuming beef.
At one of the campuses of Sinte Gleska University in the town of Mission on the Rosebud Reservation, there's a prairie dog village. Pispiza is the Lakota word for prairie dog.
I like it when the little fellas come out of their villages, stand up, and look around. It's a big world and they're making the effort.
I entered Badlands National Park in South Dakota with the idea of camping for the night. Murdo was nice, but I was ready for something other than a motel.
There are two campsites in the park. The first, near the eastern entrance, has amenities. The second, on the west side of the park, is a so-called "primitive" camp site (Seriously. Are we still using this word in the 21st century? Ugh. Don't get me started.). It has nothing but a couple of outhouses and about a dozen park benches to mark campsites.
When I camp, I prefer fewer amenities. If I wanted amenities, I'd stay home, or in a motel. Plus, the real allure of camping, so far as I'm concerned, is getting the fuck away from people. I love living in a dense, eastern city, but sometimes I've just had my fill of people and their endless capacity to disappoint and be uninteresting.
The more rural campsites, with their rancid, solar powered outhouses and their absence of vending machines or power outlets for firing up RV living rooms and kitchens, draw far fewer people. I quickly passed the more suburban campsite, already overrun with campers in mid-afternoon, and kept rolling west.
For more than 25 miles along the winding park road, I let loose a volley of gasps and contented sighs. The badlands are indescribably beautiful, so I won't attempt to describe them.
After about forty-five minutes, the pavement ended. The rusted chariot continued grinding along the gravel, and eventually I reached the turnoff to the rural campsite.
A pair of wild bison grazed a couple of hundred yards away, which was unsurprising since I've previously come across wild buffalo in the North Dakota badlands.
Bison are like redwood trees: if you haven't seen one with your own eyes, it's difficult to explain just how big they really are. Plus those horns. Merely noting that they weigh upwards of a ton and can run 40 miles per hour doesn't do them justice. Seeing one in person is enough to redeem the word "awesome."
The weather was gorgeous, the sun slowly drifting across the massive sky and warming the ground below. Perhaps that's why there were more people at the campsite than I would have hoped for on a Wednesday in late September. A baker's dozen vehicles, some like me with a car and a tent, some with trailers. My eyebrows arched when a 16' U-Haul truck pulled up to the campsite next to mine.
A woman got out of the truck along with a dog and a cat in a box. Other than the animals, she was alone. She quickly pitched a tent. Her competence and gumption were impressive, and not because she was a woman. But because, holy shit, that's a 16' truck at the rural campsite in the South Dakota badlands.
After setting up her camp, she took the dog for a walk around the campgrounds, about a quarter mile. She stared at her phone the whole time. Of course there's not any reception. But beyond that, did you really come all the way out here, in a moving truck no less, to stare at your fuckin' phone?
Looking around at my fellow campers, there seemed to be several sorts, ranging from suburban passersby to old hippies and bohemians. It's probably an expression of my own cynicism and discontent more than anything else, but I sensed/imagined a desire among many of them to be somewhere far away while still being fashionable; to do something "daring" but to also fit in by doing what others have done; to do something one is supposed to do.
It annoyed me. I annoyed myself.
I hadn't set out to camp in the Badlands. I don't have a Travel Channel-approved bucket list of shit to get done or places to visit before I get cancer or have a stroke or become frail with age. All the world's a wonder when human beings aren't mucking it up.
I just wanted a quiet place to camp. The first place I'd investigated earlier that afternoon was near a town in rural South Dakota. But it turned out to be not all that remote, so I'd moved on. I ended up at the Badlands because it was the next spot down the line, hopefully peaceful, and relatively close to where I was headed the next morning.
I turned in early. A couple of mosquitos followed me into my tent. I killed them against the luminous screen of my laptop as I wrote some of these words.
I awoke to a strange noise. It was a quiet but piercing, staccato shriek. It sounded like someone slowly rubbing their fingers across the surface of a taut balloon.
Why the fuck is someone playing with a balloon, I thought to myself in my partially unconscious daze.
I don't wake up well. Not against my will anyway. I've been known to curse into the phone when someone calls at an ungodly hour.
As I began approaching a fuller consciousness, my anger rose. Why was one of these late arrivals to the campsite, one of these goddamned people, making noise in the middle of the night?
Then I heard the other sound. Breathing.
It was the deepest, heaviest breath I've ever heard. It certainly wasn't human. It was just on the other side of my nylon tent.
Holy shit. The bison.
Adrenaline coursed through my body. I shot up into a sitting position. And then I froze. Don't startle them, I thought. If they trample the tent, I'd have no chance.
I sat there. I waited. I listened to the breathing, so big, so close. I didn't move a muscle. My heart raced. Inside the tent the nighttime air was cold and brisk.
Slowly the sound got a little further away. And a little further. I reached for my flashlight. I slowly, quietly unzipped the tent flap and cautiously stuck my head out. I couldn't see them, but I could still hear them now on the other side of my car. Bit by bit I emerged.
I could still hear the breathing. Fearing that something could startle them into charging, I crouched behind my car and peered into the distance. From where the noise was coming, I espied a pair of massive silhouettes. Wow. Then I looked around the campsite. No sign of any campers being awake. No sounds, no lights.
I climbed into my car. Through the windshield I watched the two black forms slowly drift.
Eventually they got far enough away that I found the courage to leave my car. When I finally turned my cheap flashlight on them, they were beyond the range of its tepid beam.
I began to wonder whether it had actually happened. Were my eyes playing tricks on me in the dark? Was I imagining it? It was all too surreal to believe.
It was half past midnight. As I returned to my tent, I looked up. The Milky Way spilled across the nighttime sky.
I awoke the next morning an hour after sunrise. I stepped out of my tent and saw a pair of bison calmly grazing between two campsites. Most people were out of their tents by now, but no one else seemed to notice. It was as if they were invisible. As if I were the only one who could see them.
Then a man and a woman, late arrivals from the night before, with a car sporting Virginia license plates, walked up to the bison to take pictures. These buffalo were obviously used to being around people; not domesticated by any means (bison have never been domesticated like dogs or cattle), but not so wild that they were unfamiliar with humans. I wondered if the two Virginians understood that. They were close enough for a good mauling.
I packed up and headed out. As my car rambled along the dirt path that led back to the gravel road, I saw pairs and trios of bison on either side. Some of them were no more than twenty yards from the path.
There are no words.
Akim Reinhardt's website is ThePublicProfessor.com
We hate Internet trolls. But should we be helping them?
by Grace Boey
Lately, I’ve been thinking a lot about Internet trolls. I’d always been vaguely aware of their presence, and had read some articles here and there about the threats they pose to constructive debate—but I never truly realized the full nature of their pestilence until I had to deal with them myself. Since I started publicly writing and commenting online, I’ve encountered abusive, non-constructive comments and emails on an increasingly frequent basis. I also co-manage an atheist social media page; I’m not the direct target of the trolls that lurk here, thankfully, but I do have to trawl through their vile comments, where they often abusively attack (or embarrass) causes I care about deeply.
Naturally, none of this has been good for my blood pressure. Last month, I became irritated enough to start work on a long exposition of online trolling—in the process, targeting specific trolls I’d personally encountered. Yes, hell hath no fury like a woman trolled, and I spent more time than I’d care to admit compiling comprehensive records of at least three of these individuals’ online activities. I even uncovered the physical, non-virtual identity of one of them.
You’d think I’d be happy for striking troll-hunter’s gold. Yet, the more I wrote and uncovered, the less I wanted to publish a piece bashing trolldom in general, let alone one that put specific individuals on the spot. Though I was pleased with the quality of the article, I refrained from running it. And very fortunately so—a couple of weeks after the piece would have been published, the Brenda Leyland troll-exposing controversy erupted.
Here’s what I've come to think: there’s very strong reason to believe that many compulsive Internet trolls need our active help. The impersonality of the internet makes it easy for them to dehumanize others, but for this same reason, it’s also easy for us to completely dehumanize them. But we must resist this temptation. Who are the people behind these monikers and computer screens, really? Why do they thrive on trolling, and why on earth don't they have anything better to do? How did they become this way? When we really stop to think about these questions, a disturbing social and psychological picture emerges. Virtual trolls may be a problem as much to their human selves as they are to their human victims.
Humanizing the troll
Of what trolls do I speak? As Scott Aikin and Robert Talisse note, online trolls come in many shapes and sizes. But the trolls they (and I) are concerned with are those who “dominate discussions with overblown objections and personal attacks, who seem immune to criticism, and who thereby derail Internet argument.” Such trolls “thrive on the negative reactions they elicit”, and “responding to them and defending your view causes them to become even more unhinged.” Trolls are a cross-cultural phenomenon, and a brief look at Wikipedia reveals some gems about trolldom around the world:
In Chinese, trolling is referred to as … bái làn ( … literally: "white rot"), which describes a post completely nonsensical and full of folly made to upset others, and derives from a Taiwanese slang term for the male genitalia, where genitalia that is pale white in colour represents that someone is young, and thus foolish. …
In Portuguese … pombos enxadristas (literally, "chessplayer pigeons") or simply pombos are the terms used to name the trolls. The terms are explained by an adage or popular saying: “Arguing with fulano (i.e., John Doe) is the same as playing chess with a pigeon: the pigeon defecates on the table, drops the pieces and simply flies away, claiming victory.”
Much of the discourse on trolls so far has focused on the following: how should we minimize their negative effects on us? How should we react (or not react) to those who seek to aggravate, letting them wreak as little destruction on our constructive discussion as possible? The general consensus on these questions seems to be ‘don’t feed the trolls’, and it is important that this message should continue to be spread. However, comparatively little has been done to address the issue that many of these people are probably in great need of help themselves—especially ones with significant histories of trolling behaviour.
We sometimes touch upon this when we shake our heads in despair at a troll, saying, “I feel so sorry for you.” This sentiment is often sincere, yet snarkily expressed: genuine pity for the troll is conveyed, but brushing them off as low-lifes usually has the primary function of helping us feel better about ourselves. Most of the time, we move on after this momentary shudder. But when one really stops to think about it—what it must feel like to be the kind of person who lurks anonymously behind a screen, compulsively making petty and abusive comments, and what must have happened to such a person to make them that way—it seems we may need to take our pity more seriously.
Starting on an anecdotal level, many of the persistent trolls I’ve observed have troubled personal lives. It may take some digging through past comments to find references to the relevant events, but they’re there. Such people inadvertently reveal deep insecurities, or unresolved emotions, by projecting them onto whoever or whatever they attack. They may, for example, be insecure about not having successfully completed some level of higher education, and reflect this through a pattern of attacking the intelligence of qualified writers for (real or perceived) minor grammatical errors, and perceived character flaws like pride. Insecurities like these may explain why trolls often hold their targets to high standards of argument and conduct, while (unwittingly) not meeting these standards themselves.
Yet another scenario I frequently witness is this: male trolls who are bitter over failed relationships with women, reflecting their feelings through a pattern of sexist, anti-feminist attacks on female writers and commenters. This happens even when the original topic has nothing to do with gender or feminism.
There’s also the question of why someone would choose to troll virtually and anonymously, rather than personally. Ostensibly, many trolls choose a virtual platform for their bad behaviour as they are unable, or unwilling, to express these unhealthy urges in the flesh. This may stem from a few possible reasons—such people may be embarrassed or repressed, or perhaps simply driven into seclusion after being rejected for similar behaviour in person. They may be socially isolated, have few meaningful in-person relationships, or they may be suppressing some pent-up part of themselves in front of friends. They may be building a fantasy persona online, to escape problems they're unable to cope with in real life.
Recent, more academic discussions have linked online trolling to psychiatric illness—in particular, personality disorders. Individuals with personality disorders don't experience human relationships and emotions the same way others do; in many ways, they're missing out on many things that healthy people value most in life. In a recent study, psychologists from the University of Manitoba found that online trolling behaviour correlates strongly with diagnostic markers of narcissism, psychopathy, Machiavellianism, and most strongly, sadism. Cyber-trolling, they concluded, seems to be an Internet manifestation of everyday sadism. It has also been argued that flame trolling activities share a number of similarities with the diagnostic criteria for antisocial personality disorder. Given the commonalities between people with personality disorders and Internet trolls, it's no big surprise that the advice to their victims is identical: don't engage.
Treating personality disorders is difficult, but there's good reason to think that troll psychiatry extends beyond this category of illness. The mental disorder linked with sadism—sadistic personality disorder—shares high comorbidity with other psychiatric conditions, including depression, obsessive-compulsive disorder, bipolar disorder, and borderline personality disorder. And for what it’s worth, two out of the three trolls I personally looked up had a history of psychiatric mood disorders. Quite imaginably, cyber-trolling may be related to a whole host of other psychiatric and psychological problems: Internet addiction seems like a strong possibility, as does isolation and depression from severe social anxiety. Trolling may stem from persistent boredom, or lack of meaningful activities in the troll’s life. Bad online behaviour may also be compulsive—like pathological liars, some trolls may simply not know how to stop.
That cyber-trolling should be related to mental illness is no big surprise. Changes in our social landscape create new ways for mental disorder to express itself—and the Internet, growing in its social pervasiveness, is a natural playground. (This is true for online behaviour other than trolling: psychiatrists are now saying that compulsively taking selfies and posting them on social media may be indicative of body dysmorphic disorder.) We should want to help these people we see online—for their own sakes and not just ours—to the same degree we want to help people in real life who display similar symptoms. These unpleasant online entities, after all, are just that—real-life people with issues—and it’s easy to forget this when interaction is mediated by computer screens.
Helping the cyber-troll
So, how should we go about helping the Internet troll? Here’s one method I'm skeptical of: victims of trolling attempting to reach out to their troll attackers. There have been instances of people doing this successfully: Cambridge professor Mary Beard, for example, has even befriended some of her previous abusers (including one who called her a ‘filthy old slut’). But realistically, few people have the emotional intelligence, patience or benevolence required not to botch the job. And more importantly, it’s also unfair to place the responsibility of solving trolling onto the trolled—it’s somewhat akin to asking a victim of stalking to reach out to his obsessive harasser. We certainly aren't under any personal obligation to help our haters.
Rather than looking for answers in the troll-trolled relationship, it makes sense to view the phenomenon similarly to how we view the social, psychological and psychiatric issues that seem to be related to it. This means that public, organized outreach efforts on a society-wide level would be helpful. It also means that individuals should keep an eye out for friends and family who seem to spend a lot of time online, compulsively engaging in troll-like behaviour, and extend support to them where needed—just as we’d do for loved ones we suspect are slipping into depression. Much more awareness needs to be raised of trolling as being indicative of deeper personal issues, and this may even encourage people to seek help for their own unwholesome online behaviour that's gotten out of control.
If these recommendations seem vague and tentative, it’s because so little research has been done into the relevant aspects of trolldom that might help us help them. This brings us to the next thing that needs to be done: research. Many bits of this piece have been general and speculative—inevitably so, given the lack of rigorous data we have on trolls. All that can be concluded from the information on hand, really, is that there’s strong reason to think that trolls need our help, and quite a lot of it too. More research studies like the Manitoba investigation need to be done. We need to take further steps to find out who exactly these people are, discover the reasons why they do what they do, compile comorbidity rates and relationships with other mental disorders, and develop effective treatment options for those who need help.
Not all trolls will need our help. Ostensibly, some trolls are just assholes, which is all there’ll ever be to it. And sometimes abusive behaviour stems from a mix of simple ignorance and thoughtlessness; all it takes is a little nudge for some people to properly recognize just how harmful their behaviour is. But from what I can tell, a great number of trolls seem to need help rebuilding some part of themselves and their lives.
Imagining these vicious people grumbling behind their computer screens sometimes reminds me of the cave trolls in Henrik Ibsen’s Peer Gynt—creatures who luxuriate blindly in squalor, representing the depths we may fall to if we neglect our task of realizing our full human potential. The Internet continues to change the ways in which we live and relate to others, and constantly opens up new ways for people to trap themselves in sub-humanity. Compulsive trolls only play at goals that the rest of humanity finds fulfilling, like solution-seeking, constructive debate, and meaningful social interaction. Perhaps it’s time to take our pity for trolls seriously, and take a good look into how we can extend a helping hand.
Illustration courtesy of Alexi Chabane.
Walter Johnston. Flaky Thorn Acacia. Timbavati, South Africa, 2014.
On safari in August we were told that a "gall making wasp" injects a kind of growth hormone into the thorn to make it expand (see below) and thus provide a well protected nourishing home for its eggs.
I have not been able to corroborate this. If someone else can, I'd love to learn.
Here's the best I have found:
"Myrmecophilous acacia are found in Eastern Africa and Mesoamerica ...
...They develop some to most of their stipular spines into inflated, globose, ovoid, fusiform or thick cylindrical armatures. Their spines look like galls or horns, leading to species names like White swollen thorn acacia (=A. bussei), Black-galled acacia (A. malacocephala), Hairy-galled acacia (=A. mbuluensis), Bull's Horn acacia, or Ant-galled acacia, also called Whistling thorn acacia
The swollen thorns are genetically fixed. They are not randomly generated by the sting of an insect, like the galls produced by a wasp that injects her chemicals into a leaf, which then forms galls. Therefore the so-called gall-thorns are not real galls.
The fresh thorn is drilled open by an ant queen. Then it is carved out and she lays her eggs inside, starting a new colony ...
The obligate mutualistic Acacia-ants (Pseudomyrex in Mesoamerica and Crematogaster in Africa) protect the plant in different ways: they fiercely attack browsing mammals, ravaging insects and epiphytic vines. They prevent any twig from neighbouring trees from touching their host – to prevent hostile ants from invading their tree. For the same reason they cut shoots of their tree that develop too far towards the canopy of neighboring trees."
Walter Johnston. Swollen thorn of the Flaky Thorn Acacia. Timbavati, South Africa, 2014.
More on acacias here.
Photographs posted with permission from Walter Johnston.
Evolving to the Future, the Web of Culture
by Bill Benzon
“The interests of humanity may change, the present curiosities in science may cease, and entirely different things may occupy the human mind in the future.” One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Stanislaw Ulam, “Tribute to John von Neumann”
In scientific prognostication we have a condition analogous to a fact of archery--the farther back you are able to draw your longbow, the farther ahead you can shoot.
R. Buckminster Fuller, Critical Path
Let’s get started:
- A week ago The Guardian published a long piece in which Pankaj Mishra argued that the Western world no longer provides a model the rest of the world should or even can follow, if it ever had.
- A couple of weeks ago polymath David Byrne asserted that he’d lost interest in contemporary art, feeling it had devolved into “inoffensive tchotchkes for billionaires and the museums they fund,” a sentiment that the late Robert Hughes had been promulgating for some years.
- Back in 1996 science journalist John Horgan published The End of Science, in which he argued that many fields of science had reached a point where they were no longer intellectually productive. The big problems had been solved, more or less, and further investigations seemed to be running in circles without any clear sense of progress.
Not only am I sympathetic with each of these ideas, I think they all reflect the same underlying cause: the wellsprings of old ideas – about social organization, artistic expression, and scientific explanation, certainly, but also about fiction, legal codes, economics, education, music, gender and family, and a host of others – have run dry and new ones have not yet been discovered.
I’m quite familiar with this phenomenon in the case of literary studies, where I received my graduate training. The French landed in Baltimore in the Fall of 1966 and catalyzed three decades of intellectual invention. The invention all but stopped about twenty years ago, leaving literary studies afflicted with a sense of malaise that goes deeper than budget cuts and umbrage taken at silly articles in which humorists of The New York Times take potshots at papers presented at the annual convention of the Modern Language Association.
How could new ideas just stop? Have people gotten stupid or is something else going on? If so, what?
What’s going on is something that the late David Hays and I call “rank shift”, which is a generalization over what happened several millennia ago in the transition to agricultural economies run by literate elites living in walled cities and again several hundred years ago with the emergence of modern science, mercantilism and capitalism, and the nation state. Of course, I get no points for saying “it’s happening again, folks” because everyone knows that. What we lay claim to is a fairly specific set of proposals about how such transitions are grounded in mechanisms of thought.
Certain mechanisms inhere in human biology, mechanisms which have evolved through millions upon millions of years. Those mechanisms eventuated in language. Whether or not spoken language should be considered biologically innate is a matter of much contemporary debate, but no one argues that writing, arithmetic and algebra, or computing are innate. They’re clearly within the province of culture, and they enable us to bootstrap ever more powerful mechanisms of thought into the same basic biological apparatus.
The Human Mind and Its Cultural Elaboration
When the work of developmental psychologist Jean Piaget finally made its way into the American academy in the middle of the last century, the developmental question became: Is the difference between children’s thought and adult thought simply a matter of accumulated facts, or is it about fundamental conceptual structures? Piaget, of course, argued for the latter. In his view the mind was constructed in “layers”, where the structures of higher layers were constructed over, and presupposed, those of lower layers. It’s not simply that 10-year-olds know more facts than 5-year-olds, but that they reason about the world in a more sophisticated way. No matter how many specific facts a 5-year-old acquires, he or she cannot think like a 10-year-old, because he or she lacks the appropriate logical forms. Similarly, the thought of 5-year-olds is more sophisticated than that of 2-year-olds, and that of 15-year-olds is more sophisticated than that of 10-year-olds.
This is, by now, quite well known and not controversial in broad outline, though Piaget’s specific proposals have been modified in many ways. What’s not so well known is that Piaget extended his ideas to the development of scientific and mathematical ideas in history in the study of genetic epistemology. In his view later ideas developed over earlier ones through a process of reflective abstraction in which the mechanisms of earlier ideas become objects manipulated by newer emerging ideas. In a series of studies published in the 1980s and 1990s the late David G. Hays and I developed similar ideas about the long-term cultural evolution of ideas.
We theorized about cognitive ranks, where later ranks developed over earlier ones through reflective abstraction (see the articles listed in the appendix). The basic idea of cognitive rank was suggested by Walter Wiora’s work on the history of music, The Four Ages of Music (1965). He argued that music history should be divided into four ages. The first age was that of music in preliterate societies, and the second was that of the ancient high civilizations. The third age is that which Western music entered during and after the Renaissance. The fourth age began with the twentieth century. (For a similar four-stage theory based on estimates of informatic capacity, see for example D. S. Robertson, The Information Revolution. Communication Research 17, 235-254.)
This scheme is simple enough. What was so striking to us was that so many facets of culture and society could be organized into these same historical strata. It is a commonplace that all aspects of Western culture and society underwent a profound change during the Renaissance. The modern nation state was born, the scientific revolution happened, art adopted new forms of realistic depiction, attitudes toward children underwent a major change, as did the nature of marriage and family, new forms of commerce were adopted, and so forth. If we look at the early part of our own century we see major changes in all realms of symbolic activity—mathematics, the sciences, the arts—while many of our social and political forms remain structured on older models.
The transition between preliterate and literate societies cannot easily be examined because we know preliterate societies only by the bones and artifacts they've left behind and the historical record of the ancient high civilizations is not a very good one. Instead we have to infer the nature of these ancient cultures by reference to modern anthropological investigations of preliterate cultures (just as biologists must often make inferences about the anatomy, physiology, and behavior of extinct species by calling on knowledge of related extant species). When we make the relevant comparisons we see extensive differences in all spheres.
Social order in preliterate societies may involve nothing more than family relationships, or at most the society extends kinship by positing ancient common ancestors. With little or no apparatus of government, civil order is maintained by feud or fear of feud. In literate societies, social order is kept by etiquette, contract, and courts of equity, and civil order is maintained by police and courts of justice. In preliterate societies each community, of 5 to 500 members (and generally fewer than 200), is autonomous until, about 6,000 years ago, chiefdoms appear in a few places: groups of villages forced into submission. In literate societies villages grow into towns and cities, which organize the villages of their hinterlands into kingdoms. Preliterate societies depend on the skills of hunting and gathering, of slash-and-burn farming, pottery, and a few more crafts, which are sound and effective where they exist. In literate societies certain persons trained to think choose to think about farming and write manuals for the agrarian proprietor – and eventually manuals of other crafts appear. Finally, Lawrence Kohlberg found evidence that people in preliterate societies have less sophisticated moral concepts than people in literate societies.
The appearance of writing was followed by the Mosaic law and the prophets of Israel, and by the Periclean Age in Athens. The architecture, democratic political system, and above all the philosophy – both natural and moral – of the Hebrews and Greeks were so different from those of all predecessors that we tend to think of our civilization as beginning with them. In fact, a period of cultural regression followed the fall of Rome, and before the Renaissance could begin, a “little renaissance” beginning about A.D. 1000 and reaching its peak with Aquinas in the 13th Century was necessary to raise Europe once more to a literate level. Our civilization combines elements of Greek, Roman, and Hebrew antiquity with Moslem, Indian, Chinese, and Germanic elements.
Hays and I believed that these ages, the systematic differences between cultures at these levels of cultural evolution, are based on differences in conceptual machinery. As cultures evolve they differentiate and become more complex and sophisticated, the more sophisticated cultures having cognitive mechanisms unavailable to the less sophisticated. Over the long term this process is discontinuous; it passes through singularities in the sense of Ulam and von Neumann. People on the old side of a socio-cultural singularity cannot imagine the world of people on the new side.
The post-singularity modes of thought and action permit a dramatic reworking of culture and society, a reworking that is ultimately engendered by a new capacity for the manipulation of abstractions. The thinkers, artists, and social actors on the new side of one of these singularities thus operate with a different ontology – to use a philosophical term that entered computer science and AI in the last few decades – from the one employed by their predecessors. The new ideas and practices cannot be reduced to strings of ideas stated within the old ontology, though crude approximations are often possible. Indeed, science and technology journalists use such crude approximations to convey the newer ideas to a general audience. But those approximations do not give you the ability to use the newer ideas in a powerful way.
These several kinds of thinking are cumulative; a simpler kind of thinking does not disappear from a culture upon the introduction of a more complex kind. A culture is assigned a rank according to the most sophisticated kind of thinking available to a fraction of its population that is substantial enough to regulate the overall affairs of society. That a culture is said to be of Rank 3 thus doesn’t imply that all adult members have a Rank 3 system of thought. It means only that an influential group, a managing elite if you will, operates with a Rank 3 cognitive system. The rest of the population will mostly have Rank 1 and Rank 2 conceptual systems.
Each cognitive process is associated with a new conceptual mechanism, which makes the process possible, and a new conceptual medium that allows the mechanism and process to become routine in the culture. This is an important point. The general effectiveness of a culture is not determined only by the achievements of a few of its most gifted members. What matters is what a significant, though perhaps small, portion of the population can achieve on a routine basis. The conceptual medium allows for the creation of an educational regime through which a significant portion of the population can learn effectively to exploit the cognitive process, can learn to learn in a new way.
Here is the scheme Hays and I proposed:
| Rank | Cognitive Process | Conceptual Mechanism | Conceptual Medium |
| --- | --- | --- | --- |
| Rank 2 | Rationalization | Metalingual Definition | Writing |
In an earlier paper (Principles and Development of Natural Intelligence; see the appendix) we argued that the human brain is organized into five layers of perceptual and cognitive processors. We called the top layer the gnomonic system and thought of it as organizing the interaction between the lower four layers. All abstractions form in the gnomonic system, and the cognitive processes that concern us here are all regulated by it. Hence for our present purposes it is convenient to collapse this into a two-level structure: the gnomonic layer on top as the abstraction system, and the other four layers below as, collectively, the concrete system.
With a Rank 2 structure, Aristotle was able to write his philosophy. He presented it as an analysis of nature, but we take it to be a reconstruction of the prior cognitive structure. In the Renaissance, some thinkers developed cognitive structures of Rank 3. Exploitation of such structures produced all of science up through the late nineteenth century.
Beginning, perhaps, with Darwin and going on to Freud, Einstein, and many others, a new kind of cognitive process appears. To account for it, we call on Rank 4 processes. We understand earlier science to be a search for the one true theory of nature, whereas we understand the advanced part of contemporary science to be capable of considering a dozen alternative theories of nature before breakfast (with apologies to Lewis Carroll). The new thinker can think about what the old thinker thought with. And indeed we use that sentence to summarize the advance of each cognitive rank over its predecessor.
The Singularity in Which We Swim
Let’s begin assessing our current situation by looking at the observed course of cultural evolution to date. We’re interested in the transition from one rank to the next and the interval between one rank-shift, or singularity, and the next. When looking at the following table, keep in mind that these transitions are not sharp; they take time. All we’re after, though, is a crude order of magnitude estimate:
| Informatics | Emergence, years ago |
The transition time from one rank to the next seems to have gone down by an order of magnitude for each transition.
One does not have to look at those numbers for very long before wondering just what started emerging five years ago. While there is nothing in the theory that forbids the emergence of a fifth, or a sixth rank, and so on, it doesn’t seem plausible that the time between ranks can continue to diminish by an order of magnitude. The emergence of a new system of thought, after all, does not appear by magic. People have to think it into existence: How much time and effort is required to transcend the system of thought in which a person was raised? THAT limits just how fast new systems of thought can arise.
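The order-of-magnitude pattern can be made concrete with a few lines of Python. The emergence dates below are hypothetical placeholders of my own, chosen only to illustrate the shrinking intervals; they are not the authors' actual figures (the original table did not survive extraction):

```python
# Hypothetical emergence dates (years before present), chosen only to
# illustrate the order-of-magnitude pattern described in the text.
emergence = {
    "Rank 1 (speech)": 50_000,
    "Rank 2 (writing)": 5_000,
    "Rank 3 (calculation)": 500,
    "Rank 4 (computation)": 50,
}

dates = list(emergence.values())

# Ratio of successive intervals: each rank emerges ~10x sooner than the last.
ratios = [earlier / later for earlier, later in zip(dates, dates[1:])]
print(ratios)  # [10.0, 10.0, 10.0]

# Naive extrapolation: on this crude scheme, a fifth rank would have
# begun emerging dates[-1] / 10 years ago.
print(dates[-1] / 10, "years ago")
```

The point of the sketch is simply that a constant 10x compression of intervals, extrapolated once more from a rank emerging ~50 years ago, lands on "about five years ago" — which is what makes the extrapolation suspect.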
And THAT, I believe, is where we are, all of us. In one way or another every society on earth is undergoing major changes, but the nature of those changes depends on the prior history of that society. By the end of the 19th Century every society had come under the influence of empires based in Europe, whether through direct or indirect rule, as in the case of the British Empire, or simply by being forced to deal with Europe and/or the United States, as was the case with Japan.
Japan is an interesting and important case because it was never conquered by a Western power. Indeed, by the middle of the previous century Japan had grown powerful enough to challenge the West, a challenge that led to defeat in World War II.
But how do we think about what the Japanese have accomplished in that time? Has Japan become westernized? Is that what’s happening in the non-European world, westernization? Christopher Goto-Jones poses that question in the introduction to Modern Japan: A Very Short Introduction (2009). He poses it in terms of modernity (p. 9):
In particular, the concept of the modern seems to share the Enlightenment project’s faith in progress and its aspirations towards the universalism of its maxims. However, it is important to remember that there is a difference between observing the historical origins of this cluster of ideas in Europe and claiming that the ideas themselves are somehow essentially European possessions.
They are not, of course. They “belong” to anyone and any society that masters them.
Goto-Jones goes on to observe that some Japanese have wanted to reject “all the trappings of modernity in the name of rejecting Westernization”, others wanted to blend aspects of modernity with Japanese traditions, while still others have wanted to reject all of Japanese tradition. He notes that:
In some ways, this kind of sociocultural anxiety about identity and the place of tradition in society is one of the marks of the modern era, not only in Japan but everywhere. The modern era is not only characterized by great advances in science, but also by social anomie and political protest.
And that leads us back to where we began, with Pankaj Mishra’s argument that, whatever it is that’s happening in the world today, it’s not westernization.
Culture is not a homogenous substance exuded by nation states. It is a body of ideas, expressive forms, and social techniques and, in the contemporary world, the ideas, expressive forms, and social techniques that operate in a given place may have originated anywhere in the world. The processes of reflective abstraction are inherent in the human mind and so can operate on any extant body of ideas, expressive forms, and social techniques.
The observations I’ve cited from Byrne – the emptiness of contemporary art – and Horgan – the exhaustion of science – suggest that “the West” is pushing the limits of its stock of ideas and techniques. Does the future belong to cultural forms originating in the rest of the world as, indeed, Europe drew on Chinese and Indian mathematics by way of the Iranian scholar Al-Khwārizmī at the beginnings of its modernization, its Renaissance?
Appendix: The Theory of Cultural Ranks
Most of what Hays and I have written about cultural rank is available at this website: Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society: http://asweknowit.ca/evcult/
These are the major essays, most of which can be downloaded in PDF form:
William Benzon and David Hays, Principles and Development of Natural Intelligence. Journal of Social and Biological Structures 11, 293 - 322, 1988. http://www.academia.edu/235116/Principles_and_Development_of_Natural_Intelligence
This one is not about culture; it’s about biology. It’s a prelude to the rest in that it lays out the biological underpinnings of cultural elaboration.
William Benzon and David Hays, The Evolution of Cognition. Journal of Social and Biological Structures 13, 297-320, 1990. https://www.academia.edu/243486/The_Evolution_of_Cognition
Much of this post is based on this article, which is our basic statement. The article has full citations for the ideas and observations in the sections of this post on the human mind and culture and on cultural singularities.
William Benzon, The Evolution of Narrative and the Self. Journal of Social and Evolutionary Systems, 16(2): 129-155, 1993. https://www.academia.edu/235114/The_Evolution_of_Narrative_and_the_Self
William Benzon, Stages in the Evolution of Music. Journal of Social and Evolutionary Systems, 16(3): 283-296, 1993. https://www.academia.edu/8583092/Stages_in_the_Evolution_of_Music
David Hays, The Evolution of Expressive Culture. Journal of Social and Evolutionary Systems, 15: 187-215, 1992. http://asweknowit.ca/evcult/Express.shtml
David Hays, The Evolution of Technology Through Four Cognitive Ranks. White Plains, NY: Connected Education, 1993. http://asweknowit.ca/evcult/Tech/FRONT.shtml
This is the most extensive development of the ranks idea and contains treatments of forms of government and personality in addition to discussions of technology.
William Benzon, Culture as an Evolutionary Arena. Journal of Social and Evolutionary Systems, 19(4), 321-362, 1996. https://www.academia.edu/235113/Culture_as_an_Evolutionary_Arena
This long essay can serve as a bridge between ranks theory and the material in my last 3QD post, about direction in the evolution of 19th century English language novels.
William Benzon, Pursued by Knowledge in a Fecund Universe, Journal of Social and Evolutionary Systems, 20(1): 93-100, 1997. https://www.academia.edu/8790205/Pursued_by_Knowledge_in_a_Fecund_Universe
This is an essay review of Horgan’s The End of Science and sets out why I expect science to continue with the emergence of new systems of scientific thought.
Bill Benzon blogs at New Savanna.
What i wanted to tell him (on the way to mars)
by Leanne Ogasawara
That's what I wanted to tell him about. But the evening when I finally had my chance to chat with a former astronaut and now NASA leader, I had lost my voice.
He was standing there holding court about the state of science education in the country. He was also discussing the lack of political vision, and I thought how this decline came with an astounding --and perhaps corresponding-- level of malaise. Looking back, other than World War II and perhaps the country's early days of Revolutionary politics, has anything truly excited and united people here more than scientific innovation and the space program? Apropos of this, not so long ago a friend, who had just turned 50, listed in a Facebook post several of what he considered to be the highlights of his half century on earth-- and of eight great achievements, three were space-related (and of the other five, only one, the eradication of smallpox, was even serious).
Yes, space is exciting. It also generates wonder in people--especially children.
So, how could we let it decline?
A manned mission to Mars is the next big dream, it seems. Not surprisingly, when the Dutch non-profit outfit Mars One held open applications for new astronauts, the largest group by far to apply were Americans--and this was for a one-way mission!
One could argue that discovery is something that is inherently part of the human condition and that space is just in our blood. So, also not surprisingly, the former astronaut mentioned above spoke excitedly about Mars. "A human astronaut can do what it took the robotic rover to do in a long day in under twenty minutes," he said. "And, let's face it, Mars is the only place humans could possibly live," he continued.
Walking away, I felt sad to think how much the space program had declined. For if the moon is now a faraway dream, how impossible would a manned mission to Mars be? I doubt we are even twenty years away from sending American astronauts to the moon again--unless maybe on a Russian (or Indian?) rocket... So how much further out of reach is Mars?
My own favorite potential NASA project is an international space station at L2. They call it a "deep-space station on the moon's far side," and there are so many selling points to the idea. Not only would it utilize all the existing long-term space habitation technology, but it would focus international cooperation on astronomy--something that has been lacking in the low-earth-orbit station, maybe? It would be the furthest humans have ever been and would certainly facilitate many more lunar trips as well. My friend Mark, though, laughed at my typical faintheartedness.
"What? People in the L5 Society would spit at your L2 idea."
This goes way back to our younger years--say 30-40 years ago. The young Mark (like the young Leanne) had been fascinated by the ideas of the 1970s visionary Gerard O'Neill. A physicist at Princeton, O'Neill was a genius. He is probably best known for his incredible ideas on creating space habitats at the stable Lagrangian points L4 and L5--hence Mark's derogatory comment about my L2 space station ambitions.
O'Neill's ideas seem both wonderful --and fanciful-- from today's perspective. But they also make a kind of straightforward sense. O'Neill mapped out all his ideas in his book, The High Frontier. And I notice that in the new introduction to the 2001 edition, Freeman Dyson observed that O'Neill died in 1992 with humanity no closer to fulfilling his bold vision. Not just that: in many ways we've actually backslid, since the International Space Station (and the current role of NASA) is "not a step forward on the road to the High Frontier. It's a big step backward, a setback that will take decades to overcome."
In O'Neill's seminal article on his plans for space habitation at L4 and L5, "The Colonization of Space," which appeared in Physics Today in September 1974, he included a timeline for the various stages of the project in Table 1. Under the table, he adds: "From about the year 2014, I assume a doubling time of six years for the colonies; that is, the workforce of a 'parent' colony could build a 'daughter' colony within that time."
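O'Neill's six-year doubling assumption is easy to make concrete. Here's a minimal sketch in Python; the function name and the starting count of one colony are my own illustrative choices, not O'Neill's:

```python
def colonies(year, start_year=2014, doubling_years=6, initial=1):
    """Number of colonies under O'Neill's assumed six-year doubling time.

    Pure exponential doubling from start_year; before that, no colonies.
    """
    if year < start_year:
        return 0
    doublings = (year - start_year) // doubling_years
    return initial * 2 ** doublings

print(colonies(2014))  # 1
print(colonies(2050))  # 64 -- six doublings in 36 years
```

Even from a single "parent" colony, the assumption compounds quickly: dozens of colonies within a generation, which is exactly what makes the gap between his timeline and our actual 2014 so stark.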
I bet if O'Neill knew that in 2014 we were not even able to make a manned moon trip, he would roll over in his grave! There have been advances in technology and medicine, to be sure... but somehow it seems that big dreams like manned space missions have gone missing nowadays.
Or maybe it's simply that Americans are no longer able to think in the long term. There was a wonderful posting at Brain Pickings on Thursday, called The History Manifesto: How to Eradicate the Epidemic of Short-Termism and Harness Our Past in Creating a Flourishing Future. The post quotes Brown University history professor Jo Guldi and Harvard historian David Armitage:
A specter is haunting our time: the specter of the short term.
We live in a moment of accelerating crisis that is characterized by the shortage of long-term thinking… Almost every aspect of human life is plotted and judged, packaged and paid for, on time-scales of a few months or years. There are few opportunities to shake those projects loose from their short-term moorings. It can hardly seem worthwhile to raise questions of the long term at all.
It's a wonderful post and I highly recommend it. In a sad mood last night, my astronomer and I were watching a Carl Sagan tribute video on space colonization, and toward the end the great Carl Sagan says this about collective dreams (and sacred projects):
In modern Western society," writes the scholar Charles Lindholm, "the erosion of tradition and the collapse of accepted religious belief leaves us without a telos [an end to which we strive], a sanctified notion of humanity’s potential. Bereft of a sacred project, we have only a demystified image of a frail and fallible humanity no longer capable of becoming god-like." I believe it is healthy—indeed, essential—to keep our frailty and fallibility firmly in mind. I worry about people who aspire to be "god-like." But as for a long-term goal and a sacred project, there is one before us. On it the very survival of our species depends. If we have been locked and bolted into a prison of the self, here is an escape hatch—something worthy, something vastly larger than ourselves, a crucial act on behalf of humanity. Peopling other worlds unifies nations and ethnic groups, binds the generations, and requires us both to be smart and wise. It liberates our nature and, in part, returns us to our beginnings. Even now, this new telos is within our grasp.
It is such a wonderful piece about the way space and the stars (and science) can generate a sense of wonder and inspire us to dream big. For my own part, like many kids, as a child I wanted to become an astronaut. For me, all wonder started with a tremendous love of and fascination with space. I spent hours and hours poring over astronomy books, but also many evenings in front of our house, sitting on the driveway just staring up at the stars. Do kids even do that anymore?
I don't really know what Sagan means by "sacred projects." But it must have something to do with long term collectively held projects that generate shared meaning. Philosophers continue to urge us to find this kind of wonder in science, and I think it is true that if we don't find something bigger to dream about and something that will enable us to "escape the prison of the self," then we are doomed. (Doomed to be left as man the eternal consumer, as Terry Eagleton writes).
He says Mars (M-A-R-S, Mars Bitches) --while I prefer the manned space station at L2... and human habitation in space...either way, I guess. For as Van Gogh said, no matter what, the sight of the stars makes me dream.
Monday, October 13, 2014
Fallibilism and its Discontents
by Scott F. Aikin and Robert B. Talisse
Fallibilism is a philosophical halo term, a preferred rhetorical mantle that one attaches to the views one favors. Accordingly, fallibilists identify their view with the things that cognitively modest people tend to say about themselves: I believe this, but I may be wrong; We know things but only on the basis of incomplete evidence; In the real world, inconclusive reasons are good enough; I'm open to opposing views and ready to change my mind. But there are different kinds of epistemic modesty, and so different kinds of fallibilism. Let's distinguish two main kinds of fallibilism, each with two degrees of strength:
Belief-fallibilism:
Weak: It is possible that at least one of my beliefs is false.
Strong: Any one of my beliefs may be false.
Knowledge-fallibilism:
Weak: It is possible that I know something on the basis of inconclusive evidence.
Strong: Everything I know, I know on the basis of inconclusive evidence.
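One way to see the difference between the weak and strong belief readings is as a matter of scope. Where $Bp$ reads "I believe that $p$" and $\Diamond$ reads "it is possible that," the two claims can be schematized (my own rendering, not the authors') as:

```latex
% Weak: possibly, at least one of my beliefs is false
% (the possibility operator takes wide scope over the quantifier)
\Diamond\, \exists p\, (Bp \land \lnot p)

% Strong: for each belief, it is possible that that belief is false
% (the possibility operator takes narrow scope)
\forall p\, (Bp \rightarrow \Diamond \lnot p)
```

On the weak reading one merely concedes fallibility somewhere in one's belief set; on the strong reading every individual belief is open to challenge, which is what gives belief-fallibilism its anti-dogmatic bite.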
Belief-fallibilism is a commitment to anti-dogmatism. It holds that one (or any!) of your beliefs may be false, so you should root it out and correct it. The upshot is that one should hold beliefs in the appropriately tentative fashion, and face disagreement and doubts with seriousness.
Knowledge-fallibilism is a form of anti-skepticism. It holds, against the skeptic, that one does not need to eliminate all possible defeaters for a belief in order to have knowledge; one needs only to address the relevant defeaters. The knowledge-fallibilist contends that the skeptic proposes only the silliest and least relevant of possible defeaters of knowledge. We rebuke the skeptic by rejecting the idea that all possible defeaters are equally in need of response. Again, the knowledge-fallibilist holds that knowing that p is consistent with being unable to defuse distant skeptical defeaters; knowing that p rather requires only that the relevant defeaters have been ruled out.
Although these two varieties of fallibilism are propositionally consistent, they prescribe conflicting intellectual policies. Belief-fallibilism yields the attitude that, as any of one's beliefs could be false, one must follow challenges wherever they lead. But knowledge-fallibilism holds that one needn't bother considering certain kinds of objections; it thereby condones the attitude that a certain range of challenges to one's beliefs may be simply dismissed.
When we think about knowledge, we often toggle between wildly different viewpoints. The anti-skeptical viewpoint holds that we know many things, some quite easily; it therefore maintains that any theory of knowledge that has skeptical implications is untenable. Thus if a theory of knowledge says you don't know that you have hands, you should simply toss the theory. However, we also are attracted to an anti-dogmatic viewpoint, one that prizes relentless questioning and values doubt. Hence we tend to recoil at the dogmatic thought that one should evade criticism of one's ideas. Call this vacillation between anti-skepticism and anti-dogmatism the shifting problem.
Fallibilism is typically proposed as a solution to the shifting problem. Roughly, it enjoins us to question everything that's worth questioning. However, recalling our descriptions above, we can see that fallibilism is not a solution to the shifting problem, but rather a restatement of the problem! In its two forms, fallibilism simply restates the conflicting inclinations; belief-fallibilism manifests anti-dogmatism, while knowledge-fallibilism embraces anti-skepticism. We still lack advice on how the resulting conflicts between these inclinations should be managed.
Perhaps we've moved too quickly. Let's return to knowledge-fallibilism. The crucial thought, again, is that one can know that p without being able to defeat all of the possible skeptical scenarios that would undermine one's knowledge. In other words, one could know that p even if one cannot entirely rule out the possibility that we're all living in the Matrix or under the influence of an omnipotent Cartesian demon. To capture the knowledge-fallibilist's thought, imagine the possibility that every zebra in every zoo has been secretly replaced with a painted mule. The knowledge-fallibilist thought runs as follows:
Sam sees what seems to be a zebra at the zoo. It is a zebra, but on the basis of what he sees, he can't eliminate the possibility that he's seeing only a painted mule. Nonetheless, Sam knows it's a zebra.
Knowledge-fallibilism thus makes wide room for concessive knowledge-attribution. And this is where knowledge-fallibilism has a connection with a wider pragmatist tradition, as our attributions of knowledge also function as endorsements of actions. So Sam's communicative action of telling his kids, for example, that the zebra just ate some grass is acceptable. Knowledge-fallibilism saves a good deal of the knockabout usage of the term knowledge and thus vindicates our actions in light of that usage.
That's the good news about knowledge-fallibilism, but there's bad news too. It comes in two stages. The first is what we call the infelicity problem. It sounds weird to say, "I know that p, but I may be wrong." Suppose Sam says, "I know that there's a zebra over there, but I may be wrong about that, and it may be a painted mule or something…" Surely his daughter, Geraldine, is well within her rights to say, "So you don't know, then, right?" David Lewis called statements of the form "I know that p, but I might be wrong," mad, and he termed this trouble the madness of fallibilism. That may be a little over the top, but it's on the right track. (Lewis, by the way, ended up endorsing fallibilism because he took it to be less mad than skepticism!)
The second problem with fallibilism is what we call the epistemic ascent problem. It is simply this: even in cases of appropriate fallible knowledge-attribution, one must nonetheless stipulate that the proposition known is true and the defeaters are false. The problem is that fallible knowers are not warranted in making those additional stipulations.
To see this, consider that, by hypothesis, Sam has knowledge, because it was stipulated that he's really looking at a zebra and not at a painted mule. Notice, though, that his claiming to see a zebra would be appropriate in either circumstance. Yet Sam can't claim to know which circumstance he's in. This is because the claim to know in these cases requires that we've stipulated that what the purported knower holds is true and the defeaters are false. But Sam can't do that. So knowledge-fallibilism saves knowledge for those who are in the right circumstances, but doesn't require ascertaining that you're in fact in the right circumstances. The result looks strange:
Sam: That's a zebra over there, Geraldine!
Geraldine: Neat! But do you know it's a zebra? It could be a painted mule.
Sam: Huh. That's a funny thought. I hadn't even considered that. Well… if it's not a painted mule but a zebra, then I know it's a zebra.
Geraldine: Ok, if it's a zebra and it's not a painted mule, then you know.
Sam: Yep, and it's a zebra, alright! Now, who wants ice cream?
Fallibilism works for knowledge attribution when we know the truth values of the known content (true) and of the potential defeaters (false). The trouble is that this strategy for third-personal knowledge attribution doesn't work in the first-personal form in which skeptical scenarios are posed. When Sam says, "Yep, and it's a zebra, alright," he has covertly switched from what's appropriate for him to say within the situation to the position of assessing what the situation is. It would be appropriate for him to say that it's a zebra even were it not a zebra but a painted mule. But his first-order perspective on what he sees at the zoo doesn't warrant the claim that he's in a zoo containing zebras rather than painted mules.
Fallibilism is a positive impediment to successful anti-skepticism and anti-dogmatism, because in order to attribute knowledge in the first person, we need both first-order and second-order information about the circumstances. Otherwise we're just claiming we know without any basis. That's a little better than yelling it or typing it in ALL CAPS, we think, but not much.
There is, without a doubt, more to say. But, equally without a doubt, the shine is off fallibilism's halo.
The Brooklyn Gentrifier's Playbook
"A New Yorker is someone who longs for New York."
These days, when the inevitable question of "What do you do?" pops up at a cocktail party or some such, I now simply answer, "I live in New York." A credulous follow-up might seek to clarify whether that is, in fact, how I make my living, at which point I try to steer the conversation to kinder, gentler topics. But after living in New York for 15 years, I feel my response is both perfunctory and justified. Anyone so deeply immersed in the city knows that living here really is its own full-time occupation, since the city demands constant observation and reflection. And New York is especially amenable to this, given the breadth, density and accessibility of the city's neighborhoods, as well as New Yorkers' guileless embrace of real estate as a primary subject of conversation. It is perhaps the only city I know of where a stranger can walk into your apartment and ask, within the first 15 minutes, how much rent you pay for the privilege, and expect an answer.
In this vein, there has always been much talk about gentrification: where it is happening right now and where it will happen next, whether the desirability of the outcomes outweighs the costs, and, especially, who is being ousted. This last is not so much about the residents themselves, but rather the ongoing disappearance of beloved restaurants, bars and retail establishments, for example as documented by Jeremiah Moss's Vanishing New York. So what can be said about gentrification that has not already been said? Honestly, not a whole lot. There are still no good answers or responses, especially as New York reassesses its post-Bloomberg future.
However, gentrification has increasingly been treated as a monolithic concept, when in fact it is an umbrella term describing a continuum of variegated and uneven urban processes. The ‘improvement' of any neighborhood is the result of a bevy of actors, operating within a legal and social context that is unique to that neighborhood, and that itself sits within the larger context of the city and the state. Finally, even global financial circumstances play a role, for example artificially low interest rates and the ease with which capital may travel. When gentrification is seen as a monolithic process, it is difficult to think of it as anything other than inevitable. But if we consider the different processes that are collapsed into this single rubric, or more accurately, the different scales and velocities at which gentrification occurs, then we will be better equipped to engage the phenomenon itself, and not merely the label.
The late geographer Neil Smith clearly identified this in the late 1970s. First in his dissertation and then in his subsequent work, he characterized gentrification, especially in its accelerated forms, as fundamentally a process of capital, not of people.
Since the 1970s, gentrification has shifted from a marginal, fragmented process in the housing market to a large-scale, systematic and deliberate urban development policy. Gentrification has deepened as a comprehensive city-building strategy encompassing not just the residential market, but recreation, retail, employment, and the cultural economy.
Michael Bloomberg's three terms as mayor of New York City carried the precise hallmarks of such a "large-scale, systematic and deliberate urban development policy," or what could also be termed a love-fest between developers and city officials. While marquee projects such as the (successful) Atlantic Yards and (unsuccessful) Midtown East projects occupied most of the media spotlight, what remains less appreciated is the sheer scope of rezoning undertaken by the administration: upwards of 120 rezonings, almost all of which were approved, will continue to reshape the contours of New York for decades to come.
But how? At first, it may be surprising to hear that "the city planning department doesn't track…how much potential space was gained or lost, or how much value it's created by enabling development" for any given rezoning. However, zoning itself is not a monolithic concept: a block may be ‘upzoned,' ‘downzoned' or left unchanged (also known as ‘contextual'). Zoning delimits the ultimate population density for a given lot, and in fact, from 2003 to 2007, the result was only a 1.7% net increase in capacity. This immediately leads to the next question: Who gets what kind of zoning? The contours of rezoning become clearer when one understands that
Upzoned lots tended to be in areas that were less white and less wealthy, with fewer homeowners. Downzoned lots tended to be areas that were more white and had both higher incomes and higher rates of homeownership than upzoned areas. Areas with contextual rezoning were even whiter and richer (with median incomes "much higher than that of the city"), and had "very high rates of homeownership." In other words, more privileged people were more likely to have the city change the zoning of their neighborhoods to preserve them exactly as they were.
Understood this way, the possible pathways for New York become clearer: rezoning defines and guarantees its own success. But rezoning is really only the beginning of real estate development. There is still the procurement of permits and the appeasement of local community boards. But developers are used to playing the long game, and one of the legacies of the Bloomberg (and Giuliani) administrations is a massive, tangled infrastructure of committees, advisory boards and public-private partnerships where real estate developers mix with city officials in order to clear hurdles, this being most easily achieved outside of the public eye and behind closed doors. (For an exceptionally clear-eyed exposition of this bureaucratic juggernaut, see the excellent documentary My Brooklyn by Kelly Anderson).
The bodies are buried in plain sight. I have already written about the fate of the Fulton Fish Market, which remains little changed today. For its part, ‘My Brooklyn' documents the redevelopment of Brooklyn's Fulton Mall and its impact on the African-American and Caribbean communities that depended on that commercial district. And the systematic dismantling of community resistance to the Atlantic Yards project was a big-city real estate bruise-fest whose definitive history remains as yet unwritten, but will doubtless launch a thousand urban social justice dissertations. Like the Bloomberg administration's zealous rezoning campaign, this web of governance is set to endure for a long time, and in the meantime, Brooklyn is, in fact, becoming poorer.
These, then, are the macro policies that drive large-scale gentrification of substantial swathes of New York. However, there is a smaller scale at which gentrification operates, one that is largely invisible to the media. Nevertheless, its effects on neighborhoods are no less decisive. As an example, consider the story of another part of Brooklyn: Franklin Avenue in Crown Heights. "The Ins and The Outs" is a vital and broad-ranging article, written by Vinnie Rotondaro and Maura Ewing, on the changing nature of one of Crown Heights' principal commercial thoroughfares. While readers outside of New York may most clearly remember it as the neighborhood gripped by a race riot back in 1991, a generation later Crown Heights has been Columbused as the newest Brooklyn hotspot, with Franklin Avenue as its pulsing heart.
I have visited Franklin Avenue over the years, but have been going more frequently of late, thanks to a friend who recently moved to the neighborhood. The rapidity of the transformation is nothing short of astonishing – in fact, one of the defining features of gentrification in New York is that each episode seems to take less time than the previous one. Franklin Avenue seems to follow the standard pattern of development, where delis become swish bars and pawn shops are replaced by up-market retail. And yet everything happens for a reason. One of these reasons has been MySpace Realty.
As documented by Rotondaro and Ewing, MySpace (and possibly a few shell corporations under its control) has engaged the neighborhood's landlords, aggressively making offers to buy buildings for cash. For MySpace, a landlord who says ‘No' only means ‘No' today. Once a building is sold to MySpace, it is time to get the residents out, so that it can be renovated and put back on the market at rental rates several times the existing rent. Most tenants, lacking savvy, are bought out at a discount, or even made to think that they have little choice in the matter. The holdouts – some of whom have been living in the building for decades and cannot afford to live anywhere else in the area – are then subjected to the usual shenanigans of deferred repairs, ignored infestations, etc. Lather, rinse, repeat.
MySpace is using an old playbook, of course. Just as Anderson documented the strong-arm tactics of big-league developers in ‘My Brooklyn', Rotondaro and Ewing narrate a history of similar behavior writ on a much more local scale. The results are much the same, however: a process of divide-and-conquer by capital leads to a decrease in the availability of affordable housing in a given neighborhood. It is also important to recognize that MySpace Realty's actions do not exist in isolation. As Franklin Avenue has become more ‘hip,' the neighborhood has been primed for larger developers to buy up lots that are beyond the reach of a local firm: the Goldman Sachs Urban Investment Group was part of a consortium that purchased a nearby property that will likely become a luxury mixed-use development, with about $20m to be invested in the near future. And this is only one of several such transactions happening in the area. As one of the locals put it, "I don't know how to beat this. I don't know how anyone can beat this machine."
This same resident also asked the real question at the heart of any gentrification process: "I still think there's a better and more ethical way to get from a broken down, crime-ridden, drug-ridden neighborhood to a place that is safe and enjoyable for everyone while still maintaining a sense of community ownership." Capital can only provide a partial and ultimately unsatisfactory answer to this question – left to its own devices, it can only produce cookie-cutter development at market rates, with the end result being nothing but the relentless homogenization of any given neighborhood. The same people, shops and restaurants. Ironically, perhaps only the housing stock will remain to bear mute witness to the unique flavor that a neighborhood once had.
It is somewhat like the old philosophical paradox of sorites – if you have a heap of sand, and you remove grain after grain, at what point do you no longer have a heap of sand? What sorites points out is that we have ultimately failed to define what a ‘heap' is in the first place. Without this definition, you cannot know when a heap ceases to be a heap. Gentrification functions similarly – at what point does improvement become gentrification, or, to continue with the analogy of the heap, at what point is gentrification no longer that, but rather improvement?
I was reminded of this when my friend Alex Castle posted a wonderful essay on his own experience, somewhat misleadingly titled "Gentrification Is My Fault". Fittingly, it's in the form of a blog post. I say fittingly, because it is both interesting and important to note the commensurate nature of the media describing each of these levels of gentrification: the largest process is worthy of an acclaimed documentary; the local level merits long-form journalism; and the smallest is only given voice by its protagonist's memoir. Fitting, of course, is not the same as just, so it is important that these latter voices be given their due.
Castle's essay details the haphazard way in which he and his wife came to own a limestone townhouse in Prospect-Lefferts Gardens, which was then a fairly rough-and-tumble section of Brooklyn, one that is in fact on the southern border of Crown Heights. Through a mix of good timing, thrift and hard work – all vital ingredients of the American Dream – the Castles have created exactly that for themselves. What I appreciate even more deeply is the way that Alex invested himself in the ownership and improvement of his home and, by extension, the neighborhood:
I didn't displace anyone; the place was abandoned, the basement was flooded with shit and the doors had been battered in. I spent the first five years we lived here working on the house all day and bartending all night. When I started I had no skills, I couldn't drill a hole in a board without splitting it. Now I know how to do wiring, framing, sheetrock, I can frame and hang a door (interior or exterior), put in a dishwasher, tile the floor. It took a long time, but it only cost materials.
But what is striking about this personal history – and this is the kind of story that can only be told as a personal history – is the ambivalence that even this engenders. On the one hand, through their tenacity and foresight, the Castles expect that, by the time they retire, the mortgage will be paid off and they will be able to live off the income from renting their extra apartment (in New York, this is what's known as ‘winning'). But as Alex muses, "if Bruce Ratner calls me tomorrow and offers me $5 million for this house, is it my responsibility to ask what's going to happen to the property after I'm gone before I sell? Or am I just reaping the benefits of good planning?"
The Castles' experience echoes Neil Smith's point of departure in his own analysis of gentrification: "a marginal, fragmented process in the housing market." Thus, while tempting, it would be wrong to think that the fragmented and marginal become obsolete simply by virtue of the rise of capital. It's clear from this last example that all of these processes co-exist and eventually negotiate with one another – it is simply a consequence of the way in which a city embodies its limited, valued space. Even the much larger forces of capital-driven gentrification must still contend with property rights and the intentions and desires of smallholders who have invested decades of savings and work into their particular corner.
More importantly, the best bulwark against the kind of gentrification we all seem to wring our hands over is precisely people who are perfectly aware of their rights and have no illusions about the true value of their stock. I am not making some petit-bourgeois argument here: this is as true (and vital) for tenants as it is for landlords. The only thing that is missing is all the other stories like Alex's. Where are they? Who is recording them, and bringing those people together into what is likely a common cause that is nevertheless representative of each person's own interests? I am perhaps being optimistic, but as Jefferson wrote, albeit in a different context, "Whenever the people are well informed, they can be trusted with their own government."
"The coastal plain of the Arctic National Wildlife Refuge is the core calving area of the Porcupine River caribou herd. It is also the most debated public land in United States history: the question of whether to open this land to oil and gas development or to preserve it has been raging in the halls of the United States Congress for over thirty years. This caribou herd has come to symbolize the Arctic Refuge, both for its ecological and its cultural significance. Individual caribou from this herd may travel more than three thousand miles during their yearly movements, making theirs one of the longest terrestrial migrations of any land animal on the planet. Numerous indigenous communities living within the range of the herd have depended on the caribou for subsistence food. The Gwich'in people of Alaska and of the northern Yukon and Northwest Territories in Canada live on or near the migratory route of this herd, and have relied upon the caribou for many millennia to meet their subsistence as well as cultural and spiritual needs. The Gwich'in are caribou people. They call the calving ground of the caribou "Iizhik Gwats'an Gwandaii Goodlit" (The Sacred Place Where Life Begins). To open up the caribou calving ground to oil and gas development is a human-rights issue for the Gwich'in Nation. In addition to the perceived threat of oil development in their calving ground, this caribou herd has been severely impacted by climate change in recent years. The international scientific community has stated that climate change has impacted this herd more than most of the other large caribou herds across the circumpolar Arctic. Its numbers declined steadily at roughly 3.5% per year, from 178,000 animals in 1989 to a low of 123,000 in 2001.
Warmer, wetter autumns resulting in more frequent icing conditions; warmer, wetter winters resulting in deeper and denser snow; and warmer springs resulting in more freeze-thaw days and faster spring melt are among the key negative impacts of climate change on the caribou and their habitat. In the photograph, pregnant females are migrating across the Coleen River on the south side of the Brooks Range on their way to the coastal plain for calving." (From Banerjee's website.)
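The herd counts in the caption can be sanity-checked with a line of arithmetic: a fall from 178,000 animals in 1989 to 123,000 in 2001 implies a compound annual decline of about 3%, in the neighborhood of (though slightly below) the cited 3.5% figure. A minimal check in Python, using only the numbers quoted above:

```python
# Herd counts quoted in the caption above.
start, end, years = 178_000, 123_000, 2001 - 1989

# Solve end = start * (1 - r) ** years for the annual decline rate r.
rate = 1 - (end / start) ** (1 / years)

print(f"implied annual decline: {rate:.1%}")  # about 3% per year
```

The small gap between the two figures likely reflects how the quoted rate was estimated; the point of the check is only that the caption's numbers are mutually plausible.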
Moral Time: Does Our Internal Clock Influence Moral Judgments?
by Jalees Rehman
Does morality depend on the time of the day? The study "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior" published in October of 2013 by Maryam Kouchaki and Isaac Smith suggested that people are more honest in the mornings, and that their ability to resist the temptation of lying and cheating wears off as the day progresses. In a series of experiments, Kouchaki and Smith found that moral awareness and self-control in their study subjects decreased in the late afternoon or early evening. The researchers also assessed the degree of "moral disengagement", i.e. the willingness to lie or cheat without feeling much personal remorse or responsibility, by asking the study subjects to respond to questions such as "Considering the ways people grossly misrepresent themselves, it's hardly a sin to inflate your own credentials a bit" or "People shouldn't be held accountable for doing questionable things when they were just doing what an authority figure told them to do" on a scale from 1 (strongly disagree) to 7 (strongly agree). Interestingly, the subjects who strongly disagreed with such statements were the most susceptible to the morning morality effect. They were quite honest in the mornings but significantly more likely to cheat in the afternoons. On the other hand, moral disengagers, i.e. subjects who did not think that inflating credentials or following questionable orders was a big deal, were just as likely to cheat in the morning as they were in the afternoons.
Understandably, the study caused quite a ruckus and became one of the most widely discussed psychology research studies of 2013, covered by blogs and newspapers such as the Guardian ("Keep the mornings honest, the afternoons for lying and cheating") and the German Süddeutsche Zeitung ("Lügen erst nach 17 Uhr" – lying starts at 5 pm). The findings of the study also raised important questions: Should organizations and businesses take the time of day into account when assigning tasks to employees which require high levels of moral awareness? How can one prevent the "moral exhaustion" of the late afternoon and the concomitant rise in the willingness to cheat? Should the time of day be factored into punishments for unethical behavior?
One question not addressed by Kouchaki and Smith was whether the propensity to become dishonest in the afternoon or evening could be generalized to all subjects, or whether the subjects' internal time was also a factor. All humans have an internal body clock – the circadian clock – which runs with a period of approximately 24 hours. The circadian clock controls a wide variety of physical and mental functions, such as our body temperature, the release of hormones and our levels of alertness. The internal clock can vary between individuals, but external cues such as sunlight, or the social constraints of our society, force our internal clocks to synchronize to a pre-defined external time which may be quite distinct from what our internal clock would choose if it were to "run free". Free-running internal clocks can differ in their period (for example, 23.5 hours versus 24.4 hours) as well as in the phases at which individuals would preferably engage in certain behaviors. Some people like to go to bed early, wake up at 5 am or 6 am on their own even without an alarm clock, and experience peak levels of alertness and energy before noon. In contrast to such "larks", there are "owls" among us who prefer to go to bed late at night, wake up at 11 am, experience their peak energy levels and alertness in the evening hours and like to stay up way past midnight.
It is not always easy to determine our "chronotype" – whether we are "larks", "owls" or some intermediate thereof – because our work day often imposes its demands on our internal clocks. Schools and employers have set up the typical workday in a manner which favors "larks", with work days usually starting around 7am – 9am. In 1976, the researchers Horne and Östberg developed a Morningness-Eveningness Questionnaire to investigate what time of the day individuals would prefer to wake up, work or take a test if it was entirely up to them. They found that roughly 40% of the people they surveyed had an evening chronotype!
If Kouchaki and Smith's finding that cheating and dishonesty increase in the late afternoon applies to people of both the morning and the evening chronotype, then the evening chronotypes ("owls") are in a bit of a pickle: their peak performance and alertness times would overlap with their propensity to be dishonest. The researchers Brian Gunia, Christopher Barnes and Sunita Sah therefore decided to replicate the Kouchaki and Smith study with one major modification: they not only assessed the propensity to cheat at different times of the day, they also measured the chronotypes of the study participants. Their recent paper "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day" confirms Kouchaki and Smith's finding that the time of day influences honesty, but the observed effects differ among chronotypes.
After assessing the chronotypes of 142 participants (72 women, 70 men; mean age 30 years), the researchers randomly assigned them to either a morning session (7:00 to 8:30 am) or an evening session (12:00 am to 1:30 am). The participants were asked to report the outcome of a die roll; the higher the reported number, the more raffle tickets they would receive for a large prize, which served as an incentive to inflate the outcome of the roll. Since a die roll is purely random, one would expect the reported averages to be similar across all groups if all participants were honest. Their findings: morning people ("larks") tended to report higher die-roll numbers in the evening than in the morning – thus supporting the Kouchaki and Smith results – but evening people tended to report higher numbers in the morning than in the evening. This means that the morning morality effect and the idea of "moral exhaustion" towards the end of the day cannot be generalized to all. In fact, evening people ("owls") are more honest in the evenings.
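The logic behind the die-roll measure is worth making explicit: a fair die has an expected mean of 3.5, so a group whose mean report sits significantly above that baseline must be over-reporting in aggregate, even though no individual cheater can be identified. A minimal simulation of the idea in Python (the 30% inflate-to-six strategy below is an illustrative assumption, not the study's data):

```python
import random

random.seed(42)

def mean_reported_roll(n_participants, inflate_prob=0.0):
    """Each participant rolls a fair die and reports the result;
    with probability inflate_prob they instead report a 6
    (the maximally self-serving lie)."""
    total = 0
    for _ in range(n_participants):
        roll = random.randint(1, 6)
        if random.random() < inflate_prob:
            roll = 6
        total += roll
    return total / n_participants

honest = mean_reported_roll(100_000)                      # close to 3.5
cheaters = mean_reported_roll(100_000, inflate_prob=0.3)  # close to 4.25
```

Because only group means are compared, dishonesty is inferred statistically across conditions; no single high report proves anything about the person who made it.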
Not so fast, say Kouchaki and Smith in a commentary published together with the new paper by Gunia and colleagues. They applaud the new study for taking the analysis of daytime effects on cheating one step further by considering the chronotypes of the participants, but they also point out some important limitations of the newer study. Gunia and colleagues only included morning and evening people in their analysis and excluded the participants who reported an intermediate chronotype, i.e. not quite early-morning "larks" and not true "owls". This is a valid criticism because newer research on chronotypes by Till Roenneberg and his colleagues at the University of Munich has shown that there is a Gaussian distribution of chronotypes: few of us are extreme larks or extreme owls; most of us lie on a continuum. Roenneberg's approach to measuring chronotypes looks at the actual hours of sleep we get and distinguishes between our behaviors on working days and weekends, because the latter may provide a better insight into our endogenous clock, unencumbered by the demands of our work schedule. The second important limitation identified by Kouchaki and Smith is that Gunia and colleagues used 12 am to 1:30 am as the "evening condition". This may be the correct time to study the peak performance of extreme owls and selected night-shift workers, but ascertaining cheating behavior at this hour is not necessarily relevant for the general workforce.
Neither the study by Kouchaki and Smith nor the new study by Gunia and colleagues provides a definitive answer as to how the external time of day (the time according to the sun and our social environment) and the internal time (the time according to our circadian clock) affect moral decision-making. We need additional studies with larger sample sizes which include a broad range of participants with varying chronotypes, as well as studies which assess moral decision-making not just at two time points but across a range of them (early morning, afternoon, late afternoon, evening, night, etc.). But the two studies have opened up a whole new area of research, and their findings are quite relevant for the field of experimental philosophy, which uses psychological methods to study philosophical questions. If empirical studies are conducted with human subjects, then researchers need to take into account the time of day, the internal time and chronotype of the participants, and other physiological differences between individuals.
The exchange between Kouchaki & Smith and Gunia & colleagues also demonstrates the strength of rigorous psychological studies. Researcher group 1 makes a highly provocative assertion based on their data, researcher group 2 partially replicates it and qualifies it by introducing one new variable (chronotypes) and researcher group 1 then analyzes strengths and weaknesses of the newer study. This type of constructive criticism and dialogue is essential for high-quality research. Hopefully, future studies will be conducted to provide more insights into this question. By using the Roenneberg approach to assess chronotypes, one could potentially assess a whole continuum of chronotypes – both on working days and weekends – and also relate moral reasoning to the amount of sleep we get. Measurements of body temperature, hormone levels, brain imaging and other biological variables may provide further insight into how the time of day affects our moral reasoning.
Why is this type of research important? I think that realizing how dynamic moral judgment can be is a humbling experience. It is easy to condemn the behavior of others as "immoral", "unethical" or "dishonest" as if these are absolute pronouncements. Realizing that our own judgment of what is considered ethical or acceptable can vary because of our internal clock or the external time of the day reminds us to be less judgmental and more appreciative of the complex neurobiology and physiology which influence moral decision-making. If future studies confirm that the internal time (and possibly sleep deprivation) influences moral decision-making, then we need to carefully rethink whether the status quo of forcing people with diverse chronotypes into a compulsory 9-to-5 workday is acceptable. Few, if any, employers and schools have adapted their work schedules to accommodate chronotype diversity in human society. Understanding that individualized work schedules for people with diverse chronotypes may not only increase their overall performance but also increase their honesty might serve as another incentive for employers and schools to recognize the importance of chronotype diversity among individuals.
Brian C. Gunia, Christopher M. Barnes and Sunita Sah (2014) "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day", Psychological Science (published online ahead of print on Oct 6, 2014).
Maryam Kouchaki and Isaac H. Smith (2014) "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior", Psychological Science 25(1) 95–102.
Till Roenneberg, Anna Wirz-Justice and Martha Merrow (2003) "Life between clocks: daily temporal patterns of human chronotypes", Journal of Biological Rhythms 18(1): 80–90.
Monday, October 06, 2014
Perceptions: Avian aesthetics
Bowerbirds "make up the bird family Ptilonorhynchidae. They are renowned for their unique courtship behaviour, where males build a structure and decorate it with sticks and brightly coloured objects in an attempt to attract a mate." From Wikipedia.
"To woo females, the males of 17 of the 20 known species of bowerbirds build structures—often resembling an arbor, or bower, with an artfully decorated platform. ...
... evolutionary biologist Jared Diamond has called them "the most intriguingly human of birds." These are birds that can build a hut that looks like a doll's house; they can arrange flowers, leaves, and mushrooms in such an artistic manner you'd be forgiven for thinking that Matisse was about to set up his easel; some can sing simultaneously both the male and female parts of another species' duet, and others easily imitate the raucous laugh of a kookaburra or the roar of a chain saw. Plus, they all dance." From National Geographic, July 2010.
Do check out the links ... bowerbirds are completely awesome!
Thanks to Joyce Ramsey, the owner of "Bowerbird Mongo", a store in Ypsilanti, MI.
by Carol A. Westbrook
I gave a signed copy of my new book about beer, "To Your Health!", to a couple of favorite bartenders and a bar owner, all of whom had been featured in a story or two in the book. A few weeks later I asked each one how he enjoyed it. And each admitted he hadn't yet opened the book, but assured me he had put it in the bathroom. After my initial shock, I recognized that I was being paid the highest compliment. For a non-reader, the bathroom is the place of honor for reading material. A stack of books or magazines in the bathroom means, "this is valuable to me, and I am going to read it some day."
What a different world than the one in which I live! In my world, books hold a place of honor and, more importantly, books are read. I love books. When I was a kid, the Tooth Fairy left us books. My first Tooth Fairy book was "Harold and the Purple Crayon," by Crockett Johnson, which today remains my favorite children's book. I loved getting books from the Tooth Fairy, and treasured every one.
Because we were a Catholic family of four children, all of whom attended parochial school, we didn't have much money to spare, but books were always there. My father got many of these books for free, since they were demos at his place of work--he did PR for the Chicago Public Schools. We were fortunate to have a steady supply of children's books long after we had our permanent teeth.
Reading was a joyful activity in our family. We children taught each other to read long before we started first grade (there was no kindergarten at St. Hyacinth's School). I remember showing my younger brother how to sound out the letters in words; I was seven and he was three. Family vacations were always preceded by a trip to the library, to stock up on a dozen or so books to take along as we lounged at the lake or drove on our interminable car trips.
I was the bookworm of the family. In fourth grade I breezed through the classics on our classroom bookshelves: "Black Beauty," "Oliver Twist," and "Tom Sawyer." I doubt these books, featuring abuse of both animals and children, would be considered suitable for a 10-year-old today (even if they could read them).
My true passion, though, was science fiction. My favorites were "Elevator to the Moon" by Stanley Widney, "Have Space Suit, Will Travel" by Robert Heinlein, and "Space Cat" by Ruthven Todd. By age 12, I had read all the young adult science fiction in our local library, so I was allowed to take the "el" train downtown by myself to the main Chicago library. There I discovered a world of books.
In high school I discovered science, and at the same time I discovered the John Crerar Library, a technical library that was on the campus of the Illinois Institute of Technology, an hour on the "el." I was impressed by the modern campus, designed by Mies van der Rohe, and dedicated to the study of science. At the Crerar Library I would spend hours in the stacks, finding books and articles for my current science fair project. Merely having those books around me made me feel like a true scientist.
In high school I also took a course in journalism. I learned the joys of writing a concise sentence, and the precision and accuracy of the English language. I decided I would have to learn touch-typing. I had to take the class in the summer, in public school, since the nuns at our high school would not allow the "college track" girls to register for "secretary track" classes. Remember, this was 1966. For a nice Catholic girl, attending public school was an education in itself. And learning to type gave me a voice. I begged my parents to buy me an electric typewriter and they obliged. It got me through college and then med school. I have it to this day, though it has been supplanted by my laptop.
The years passed, and I had three children of my own. I started reading to them when they were too young to understand all the big words; we read together at bedtime, going through C. S. Lewis' "Narnia" books, Madeleine L'Engle's "A Wrinkle in Time" series, T. H. White's "The Once and Future King," numerous Robert Heinlein stories...too many others to remember. We read at bedtime, we read after dinner, we had books on tape for long car trips. All of my children were bookworms, too.
My children taught each other to read, just as I did with my siblings. My oldest son was an early reader. He attended the University of Chicago Lab School, which was later attended by the Obama girls. In nursery school he read books to his classmates, and he taught his younger sister to read; both eventually went on to study science and medicine. My second son was a late reader, but he had us all conned because he would memorize every book that was read to him, and "read" it back to us. As he grew older and became an actor, he retained this remarkable ability to memorize lines for plays. Ironically, in spite of being a late reader, he majored in English. The kids and I continue to read and recommend books to each other, and we are especially on the lookout for science fiction.
When I moved out of our old house in Chicago to Cambridge, Massachusetts, I found the box containing the children's books, our old friends that we read aloud to each other. When I opened it I was shocked--all that was left were small scraps of paper, and insect larvae. The books had been devoured by bookworms. Yes, there really ARE bookworms, and they do eat paper. I cried.
Cambridge was wonderful. There were so many bookstores I felt I was in paradise! Sadly, many of the bookstores closed, one by one, and I have since moved out of Cambridge. I'm writing books and blogs of my own now. But I still love to read for pleasure. Yes, I have my Nook and my Kindle and my iPhone Kindle Reader app. But I still prefer books. I like the feeling of the book in my hand, the weight of the paper. I like to read the flyleaf and the front pages, and the comments and bios on the back cover. When I read, I feel that I am inside the book, physically, with the story, and back on vacation as a child.
If you are reading this blog, you are probably a reader, too. No doubt you have stories like mine--I'd love to hear them! I am writing this to remind you to keep books in your life, and give them to your children and grandchildren. Make them bookworms. Buy books and keep bookstores open. And don't just keep the books in the bathroom. Read them!
Some day I will have grandchildren of my own, and I will read to them. For now, I only have grand dogs and grand kittens, and they don't enjoy books. My grandkids will get books from their grandmother, and I will read to them, perhaps on Skype. The first book will be "Harold and the Purple Crayon."
Welcome to Weimar
by Lisa Lieberman
Hadn't there been something youthfully heartless in my enjoyment of the spectacle of Berlin in the early thirties, with its poverty, its political hatred and its despair?
The Weimar Republic is everybody's favorite example of liberalism gone wrong. Just a few days ago, The New Republic posted a reprint of Lewis Mumford's essay, "The Corruption of Liberalism," a call to arms first published in April 1940. "The isolationism that is preached by our liberals today means fascism tomorrow," he warned.
Today liberals, by their unwillingness to admit the consequences of a victory by Hitler and Stalin, are emotionally on the side of "peace" — when peace, so-called, at this moment means capitulation to the forces that will not merely wipe out liberalism but will overthrow certain precious principles with which one element of liberalism has been indelibly associated: freedom of thought, belief in an objective reason, belief in human dignity.
Mumford attacked the complacency of American intellectuals who were blind to the "destruction, malice, violence" of the Nazi regime. He himself had been slow to recognize Hitler's barbarism, and chose to suspend judgment regarding the Soviet experiment for twenty years, but he now condemned liberal habits of mind for degrading America, sapping it of energy and the moral courage required to combat political extremism. By the end of the New Republic essay, he was advocating action, passion, and force as an alternative to the cold rationalism, tolerance, and open-mindedness he blamed for "liberalism's deep-seated impotence." In fact, this same accusation had already been leveled at the Weimar Republic by the Nazis, and in remarkably similar terms.
Christopher Isherwood came to Germany in 1929 for one thing only: "Berlin meant Boys," he confessed in his memoir. His friend Wystan (the poet W. H. Auden) had promised him that he would find the city liberating and so he did. Before the month was out, he'd gotten involved with a blond German boy, the very type he'd fantasized about meeting. In the stories he published in the mid to late 1930s, which would become the basis for the musical and film Cabaret, Isherwood was circumspect about his motivations, narrating events passively, as an outsider who observes but does not participate in the promiscuity he describes. Mind you, he did not judge his characters, at least, not for their sexual behavior. Some he found wanting for other reasons, for callousness or a lack of generosity toward others, for bad taste in clothes or furnishings.
By way of contrast, the Austrian writer Stefan Zweig was horrified by Weimar Germany.
"Berlin transformed itself into the Babel of the world," he wrote in his autobiography, The World of Yesterday (1942). "The Germans brought to perversion all their vehemence and love of system. . . Even the Rome of Suetonius had not known orgies like the Berlin transvestite balls, where hundreds of men in women's clothes and women in men's clothes danced under the benevolent eyes of the police." Films from the era capture the polysexuality of Berlin's night clubs. Pandora's Box (1928) by G. W. Pabst gives us Louise Brooks as Lulu, a captivating and utterly amoral young woman who swings both ways, driving her lovers mad with frustrated desire. Marlene Dietrich's Lola does the same in Josef von Sternberg's The Blue Angel (1930), although she's good enough to warn her prospective lovers in advance. "Be careful when you meet a sweet blonde stranger. You may not know it, but you are greeting danger." Alas, forewarned is not forearmed in this case.
Toward the end of The Berlin Stories, Isherwood brought in troubling acts of violence he witnessed against Jews, homosexuals, Social Democrats and Communists. Here he stepped briefly out of his passive role, his narration taking on a more sardonic tone.
This morning, as I was walking down the Bülowstrasse, the Nazis were raiding the house of a small liberal pacifist publisher. They had brought a lorry and were piling it with the publisher's books. The driver of the lorry mockingly read out the titles of the books to the crowd:
"Nie Wieder Krieg!" he shouted, holding up one of them by the corner of the cover, disgustedly, as though it were a nasty kind of reptile. Everybody roared with laughter.
"'No More War!'" echoed a fat, well-dressed woman, with a scornful, savage laugh. "What an idea!"
Cabaret made more of this unpleasantness, intercutting the outré musical numbers at the Kit Kat Klub with occasional flashes of violence, easy to ignore at first, but by the end the darkness is inescapable. Roger Ebert noted how the film's final image, a distorted mirror reflecting the nightclub's dissolute patrons, "makes the entire musical into an unforgettable cry of despair." The camera pans the house, showing well-dressed men and women interspersed with Nazis in uniform, a sea of evening gowns and dinner jackets disrupted by red armbands bearing swastikas. The foreshadowing is much less oblique in the current revival at Studio 54 in New York, which "lets us know that we're in hell almost as soon as we arrive in a theater," critic Ben Brantley complained in his New York Times review of the production. For what it's worth, Isherwood didn't think much of the stage or film version of Cabaret, but then, he was hard on his younger self for having created "a sanitized picture" of the Weimar era. At the end of his life, he was brave enough to look back and see what he'd missed as a young man in Berlin, and honest enough to acknowledge his blindness and self-absorption. "Berlin was a place of great hardship and suffering but you don't see much of that [in The Berlin Stories]," he said in Christopher and His Kind.
The prostitutes who walked the streets, the blond working-class boys who were the objects of Wystan's and Christopher's lust, were driven less by pleasure than poverty, I suspect. Focusing on the decadence of Berlin's café culture, whether to celebrate or condemn the sexual hedonism that drew foreigners to the city, obscures the harsh reality of the time, the extreme deprivation felt by millions of Germany's citizens. Käthe Kollwitz's stark woodcuts of war widows and orphans and Max Beckmann's Expressionist paintings of the poor did not judge their subjects, but they did judge the society that allowed such suffering to exist. Traumatized by what he encountered as a medic in World War I, Beckmann pledged "to be part of all the misery that is coming." Kollwitz, who lost a son in the war, lived with her physician husband in the slums of Berlin and wanted her art to "have purposes outside itself. I would like to exert influence in these times," she said, "when human beings are so perplexed and in need of help."
Weimar itself has become a distorted mirror of our anxieties regarding the ability of democracy to resist violent extremism, but jeremiads like Mumford's miss the point. Complacency is not exclusively a liberal failing. While Mumford had a good deal to say about suffering humanity, he ignored the suffering of actual human beings. Hitler seemed to have emerged out of nowhere in his scheme, coming into focus only when he posed a threat to America. But what allowed Hitler to take control in Germany was his ability to capitalize on the fear of disorder — the threat of revolution — that unemployment and starvation produced. Fear is democracy's undoing, and the unraveling begins at home.
Monday, September 29, 2014
The shortest path, the traveling salesman, and an unsolved question
by Hari Balasubramanian
The Shortest Path
How does Google Maps figure out the best route between two addresses? The exact algorithm is known only to Google, but it almost certainly involves some variation of what is called the shortest path problem. Here is the simplified version. Suppose we have a network of nodes (cities, towns, landmarks, etc.) connected by links (roads), and we know the time it takes to travel each link. Then what is the shortest path from a starting node A to a destination node D?
In the instance above, there are 4 nodes. The rectangles provide the link travel times. The B-C link takes 2 time units to travel; the A-D link takes 5; the C-D link takes 1; and so on. The five possible routes from A to D are: A-D; A-B-D; A-C-D; A-B-C-D; and A-C-B-D. The easily spotted shortest path is A-C-D, with a total length of 3. But what if a network has hundreds of nodes and links? It would be impossible to visually identify the shortest path. We would need an efficient algorithm. By that I mean an algorithm whose execution time on a computer stays reasonable even when the problem size – the number of nodes or links in the network – gets bigger.
In 1959, Edsger Dijkstra published just such an algorithm. Dijkstra's Algorithm doesn't simply enumerate all possible routes between the start and destination nodes and then choose the shortest. That kind of brute-force approach wouldn't work, given how dramatically the number of possible routes increases even with a slight increase in network size. Instead, Dijkstra's Algorithm progressively explores the network in a simple yet intelligent way. It begins with the start node A, looks at all its immediate neighbors, then moves on to the closest one, and from there updates travel times to all as-yet-unvisited nodes if new and shorter routes are discovered. I am fudging important details here, but this basic procedure of moving from a node to its nearest neighbor and updating travel times is repeated deeper and deeper into the network until the shortest path to the destination is confirmed. Wikipedia has a good animation illustrating this.
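The procedure can be sketched in a few lines of Python. A caveat: the figure only states some of the link times (B-C = 2, A-D = 5, C-D = 1), so the remaining weights below (A-B = 2, A-C = 2, B-D = 2) are assumptions I've chosen to be consistent with the path and tour lengths quoted in the text.

```python
import heapq

def dijkstra(graph, start, goal):
    """Repeatedly settle the nearest unvisited node and relax
    (update) the tentative distances of its neighbors."""
    dist = {start: 0}
    prev = {}                    # predecessor of each node on the best path
    pq = [(0, start)]            # priority queue of (tentative distance, node)
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbor, weight in graph[node]:
            new_d = d + weight
            if neighbor not in dist or new_d < dist[neighbor]:
                dist[neighbor] = new_d
                prev[neighbor] = node
                heapq.heappush(pq, (new_d, neighbor))
    # Reconstruct the path by walking predecessors back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

# The 4-node instance: stated weights plus assumed A-B, A-C, B-D = 2
graph = {
    'A': [('B', 2), ('C', 2), ('D', 5)],
    'B': [('A', 2), ('C', 2), ('D', 2)],
    'C': [('A', 2), ('B', 2), ('D', 1)],
    'D': [('A', 5), ('B', 2), ('C', 1)],
}
print(dijkstra(graph, 'A', 'D'))   # (3, ['A', 'C', 'D'])
```

Running it recovers the shortest path A-C-D of length 3 that we spotted by eye; the same code works unchanged on a network with thousands of nodes.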
How fast does the algorithm run? Let's say there are V nodes. Then, in the worst case, Dijkstra's Algorithm will take on the order of V x V steps to compute the optimal path. An algorithm whose running time grows polynomially with the problem size is what we will call efficient (of course, lower-order polynomials, such as the square function, are preferable; V raised to the power 50 wouldn't be helpful at all). So a 10-node problem might take around 100 steps; a 1,000-node problem will take around 1,000,000 steps. This increase is something a modern-day computer can easily handle. The algorithm often does much better in practice, but the worst case is commonly used as a conservative measure of efficiency. There are faster variations of Dijkstra's Algorithm, but for simplicity we'll stick to the original.
The Traveling Salesman (TSP)
Now consider a slightly different problem. We are still interested in the shortest route, but we want the route to be such that it starts at some node A, covers all other nodes in the network without visiting any of them twice, and finally returns to A. In other words, we are interested in the shortest all-city tour that starts and finishes at A. This is the traveling salesman problem (TSP). The person delivering the mail; the therapist traveling to different patient homes in the city; the truck dropping off supplies at different stores: all face some version of the TSP (though no one may think of it as that, and there may be other practical constraints). The TSP isn't simply restricted to people or vehicles touring destinations; it also arises in genome mapping, the sequence in which celestial objects should be imaged, and how a laser should tour thousands of interconnections on a computer chip.
What is the shortest tour in the simple 4-node instance we saw in the figure earlier? Suppose we have to start and end at A. Then there are six possible tours that reflect the order in which we visit the other three cities: A-B-C-D-A; A-B-D-C-A; A-C-B-D-A; A-C-D-B-A; A-D-B-C-A; and A-D-C-B-A. The shortest tours are A-C-D-B-A and A-B-D-C-A (each is the other in reverse), both with a total length of 7.
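The enumeration above is exactly what a brute-force solver does, and it takes only a few lines to write. The distance table reuses the same assumed weights (A-B = 2, A-C = 2, B-D = 2) that are consistent with the tour lengths quoted in the text; only B-C, A-D, and C-D are stated outright.

```python
from itertools import permutations

def brute_force_tsp(dist, start):
    """Enumerate every tour that starts and ends at `start` and
    return the shortest. Only feasible for tiny instances: the
    number of tours grows as (n-1)! in the number of cities n."""
    cities = [c for c in dist if c != start]
    best_tour, best_len = None, float('inf')
    for order in permutations(cities):
        tour = (start,) + order + (start,)
        # Sum the link lengths along consecutive pairs of the tour
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Same 4-node instance as before (symmetric distances)
dist = {
    'A': {'B': 2, 'C': 2, 'D': 5},
    'B': {'A': 2, 'C': 2, 'D': 2},
    'C': {'A': 2, 'B': 2, 'D': 1},
    'D': {'A': 5, 'B': 2, 'C': 1},
}
print(brute_force_tsp(dist, 'A'))   # an optimal tour of length 7
```

With 4 cities this loop checks 3! = 6 tours instantly; the next section explains why this innocent-looking loop becomes hopeless as the city count grows.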
Things get very complicated when there are hundreds of nodes. It turns out that there is, as yet, no efficient algorithm that can guarantee the shortest possible tour for large instances. The only straightforward algorithm guaranteed to work is listing all possible tours and then picking the best. When there are 4 cities this is not a problem, as only 6 tours have to be evaluated. When there are 11 cities, there are suddenly 3.6 million possible tours. When there are 23 cities, the number of possible tours is 51,090,942,171,709,440,000 (I got this number from William Cook's book). Compare this with the 23 x 23 = 529 steps that Dijkstra's Algorithm needs, in the worst case, to find the shortest path between any two nodes in a 23-city network. Our brute-force algorithm for the traveling salesman is terribly inefficient. We might still get a modern-day supercomputer, working full time, to answer the 23-city traveling salesman. But listing all possible tours for a 100-city instance is beyond the best computing power currently available on the planet.
Now, there are much smarter algorithms that have successfully tackled large TSP instances. See the above image and other examples at this website. In fact, an 85,900-city instance has been solved optimally – an astonishing achievement (though it took 84.8 years of computing time). But no algorithm has been able to guarantee finding the best tour for every large instance of the TSP. What works for one 86,000-city instance may not work for another. What Dijkstra's Algorithm efficiently guarantees for every large instance of the shortest path problem, no one has been able to achieve for the traveling salesman.
The Unsolved P versus NP Question
So what is it that makes the shortest path "easy" to solve in a large network and the traveling salesman "hard"? Certainly both problems are easy to understand; a layperson can figure out answers for small instances. The intuition underlying both is also clear to everyone: make use of short links as much as possible – with the occasional, unavoidable longer link thrown in – so that the final routes or tours, which sum up these component links, remain short. The two problems seem closely related. It would seem that even large instances of the TSP, like the shortest path, should be solvable in reasonable time. Yet, despite more than 50 years of intensive research, no one has found an efficient algorithm!
In moving from the shortest path between two cities to the shortest all-city tour in a network, we have crossed an unproven but widely accepted "boundary" that currently separates "easy" and "hard" optimization problems. To understand what this means more formally, we'll have to redefine our problems as decision questions with "yes" or "no" answers.
Consider the decision version of the shortest path. Is there a path from the start node to the destination node whose length is less than some value L? To answer this "yes" or "no" question for a large network, all we have to do is run Dijkstra's algorithm, find the shortest path, and check whether its length is less than L. So with Dijkstra's algorithm we can efficiently (1) find a solution if one exists, and (2) verify its correctness. All decision problems for which these conditions are met are said to belong to a class called P. The term P stands for Polynomial.
Contrast this with the decision version of the traveling salesman. Is there an all-city tour in the network whose length is less than some value L? If somebody hands us a candidate tour, we can easily verify whether the tour covers all cities and whether its length is indeed shorter than L. So checking the validity of a candidate tour is easy: we can design an efficient algorithm even when there are a large number of cities. But actually finding a tour whose length is less than L in a large network? Well, there is no efficient algorithm yet. Decision problems whose candidate solutions can be checked efficiently in this way are said to belong to a class called NP, which stands for the strange-sounding Non-deterministic Polynomial. The hardest problems in NP, those to which every other NP problem can be efficiently translated, are called NP-complete, and the decision version of the traveling salesman is one of them.
The decision version of the TSP is just one of many, many problems, relevant in practice, that fall in the NP-complete category – from Boolean logic problems, to the processing of strings, to games and puzzles. Just one other example is the subset sum problem: given a set of integers that can be either positive or negative, is there a nonempty subset of them whose sum is exactly 0? Notice how deceptively easy this problem sounds.
In the early 1970s, Stephen Cook and Richard Karp -- and Leonid Levin, working independently in the Soviet Union -- proved a very deep result that inextricably tied together the fates of all these NP-complete decision problems. They showed that all NP-complete problems, although seemingly different in their manifestations -- what could Boolean logic and subset sum and the traveling salesman possibly have in common? -- are fundamentally identical in their underlying structure: each can be efficiently transformed into any of the others. This remarkable unity implies that if you find an efficient algorithm for just one NP-complete problem, you will have found an efficient algorithm for all of them. Solve the subset sum problem or one of the Boolean problems efficiently and you will have solved the traveling salesman!
The unsolved question -- one of the biggest in mathematics and computer science -- is whether there exists such an all-conquering efficient algorithm (which would imply P = NP), or whether the NP-complete problems are somehow intrinsically harder than the problems in P (P ≠ NP). The Clay Mathematics Institute (CMI) has a million-dollar reward for anyone who proves the result, either way:
"If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question. Typical of the NP problems is that of the Hamiltonian Path Problem: given N cities to visit, how can one do this without visiting a city twice? If you give me a solution, I can easily check that it is correct. But I cannot so easily find a solution." [link]
In his wide-ranging essay on the P vs NP question, Scott Aaronson, a computational complexity researcher at MIT, poses a literary analogy: "What is it about our world that makes it easier to recognize a great novel than to write one, easier to be a critic than an artist?" One reason, he argues, is the explosive, exponential growth in the number of possibilities – the sheer number of paths that one can choose from. We may get lucky once in a while, but the vast majority of paths, although promising for some time, will not lead us to optimal end outcomes.
If our current model of computing using binary bits is incapable of dealing with such complexity, then is there another type of computing device that can? Aaronson points out that even the qubits of quantum computing – implemented through particles that can be spin up (1), spin down (0), or both up and down simultaneously, enabling (as an example) 1000 such particles to store an astonishing 2^1000 possibilities at once – will still have trouble breaking the NP-complete barrier. Indeed, Aaronson explains that while hypothetical computing devices that solve NP-complete problems efficiently can be constructed, they would require implausible kinds of physics – such as allowing time travel, or sending signals faster than the speed of light. This leads him to predict that "the hardness of NP complete problems…will be seen as a fundamental [physical] principle that describes part of the essential nature of the universe."
I've overstretched myself in this last part: I know little to nothing about quantum computing and related physics, so I won't say much more – I am sure there are better informed 3QD readers who can comment.
1. "Google Maps -- It's All Just One Big Graph", link.
2. Much of my information on the TSP comes from William Cook's excellent, if somewhat technical, book, In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation. Companion website with lots of information about the traveling salesman is here.
Jay Kelly. Before Your Very Eyes. 2013.
Collage & resin on panel, 60"x96" made of vintage magazines, hand dyed paper, novels & art books.
Current show in Boston.
Longing for Letters
by Mathangi Krishnamurthy
On July 15, 2013, after a hundred and sixty-three years of witnessing birth, death, revolution and marriage, the Indian telegraphic service sent out its last telegram. I felt a small sense of loss, but truth be told, the telegram was already a thing of the past in my communicative repertoire. In all my life, I had neither sent nor received a telegram. Also, with all my Hindi film infused understanding of the world, I assumed that all they ever brought was bad news. I would however be more than heartbroken if some day the postal service stopped sending letters.
The first letter I ever received was from my father. Truth be told, it was a postcard. He was away in faraway lands and had sent me a one-line missive with a picture of some Disneyland minion in Mickey Mouse costume, looking both avuncular and eerie. I remember feeling a distinct happiness at the sight of his handwriting, all beautiful, cursive, and grand. People wrote me letters for a large part of my life. My father, my grandfather, two cousins, friends that moved away, and friends in foreign lands. I have letters bearing dates right up until the nineties. I wrote back letters and in the process, accumulated beautiful pens, inkpots, and thick, fancy letter-writing paper. Also, for those who remember, I owned blotting paper; in spite of that, my hands were permanently ink-streaked. I always owned what used to be called a China pen even though it bore the brand name "Hero". The need for good handwriting was drummed early into my head. Pages and pages of cursive writing have rendered permanent the callus on my middle finger.
Two things show up regularly on my reading list these days: one, the daily habits of artists, scientists, thinkers, and writers, and two, their prolific and thoughtful correspondence. As others have argued so forcefully, letter writing was, for writers, not merely a distraction but a way to find some breathing space from their craft while also allowing them the possibility of re-infusing it with vigor and vitality. Through letters they made manifest their orientation towards life and the world, but also communicated and refined new ways of thinking about their craft. Writing about writing to empathetic interlocutors seems also to have been about finding community, and laying the foundations for a new world.
Maria Popova at Brain Pickings has curated a beautiful set of these lively and charged letters. Read, for example, Freud's elaborate and thoughtful response to Einstein's call to thinking about the menace of war, where he begins thus, "All my life I have had to tell people truths that were difficult to swallow. Now that I am old, I certainly do not want to fool them." Vincent Van Gogh writes heartbreakingly to his brother, "Everyone who works with love and with intelligence finds in the very sincerity of his love for nature and art a kind of armor against the opinions of other people." In another creative take on letter-writing, Anna Deavere Smith composes thoughts to an imaginary audience, "I am trying to make a call, with this book, to you young brave hearts who would like to find new collaborations with scholars, with businesspeople, with human rights workers, with scientists, and more, to make art that seeks to study and inform the human condition: art that is meaningful." And in one of the more joyous defences of letter writing Italo Calvino declares to his best friend Eugenio Scalfari, "A fine thing it is to have a distant friend who writes long letters full of drivel and to be able to reply to him with equally lengthy letters full of drivel; fine not because I like to plunge into captious polemics nor because I enjoy getting certain ideas into the head of some idiot from the Urbe, but because writing long letters to friends means having a moral excuse for not studying."
Teaching university and college students, I am always struck by their tender age and its susceptibility to alienation and anomie. When I read letters from famous parents to their children, I marvel at how they seem to be able to offer wisdom and love in the same breath, while also tethering the child to some form of parental surety, albeit a phantasmic one. In an age when community comes far more from friends and colleagues than family, and empathy is sought only from the like-minded, these parental missives offer hope for empathy within the family and a world not so necessarily overdetermined by the generation gap. My favorite from this list is Nobel laureate John Steinbeck's 1958 letter to his son who is in love. He begins thus, "First — if you are in love — that's a good thing — that's about the best thing that can happen to anyone. Don't let anyone make it small or light to you", and ends thus, "And don't worry about losing. If it is right, it happens -- The main thing is not to hurry. Nothing good gets away."
This brings me to my other favorite genre of letters, the love letter, chief architect of the epistolary romance. How can love ever be love without love letters, I have often wondered. How does love persevere in the absence of material instantiations of its declaration? If one were to follow Badiou and understand love as he does in "In Praise of Love", then how does one turn a chance encounter into a pursuit of truth if not through the love letter? If love must be constructed and understood as destiny, then the raw material of this construction must necessarily construct letter by letter the edifice upon which it stands. I read love letters to convince myself of the possibility that love can indeed stand both in and outside the world. When I think about Frida Kahlo writing to Diego Rivera these lines, "Your presence floats for a moment or two as if wrapping my whole being in an anxious wait for the morning. I notice that I'm with you. At that instant still full of sensations, my hands are sunk in oranges, and my body feels surrounded by your arms", I wonder at the paucity of feelings that do not feel like this.
Frida Kahlo letter to Diego Rivera, 1940; (c) 2014 Archives of American Art, Smithsonian
Articles like The Death of Letter Writing or The Lost Art of Letter-Writing rehash arguments in favor of the beauty of the letter, but then lament the force of technology and a new way of life taken over by speed and that animal we call communication. We say more and more, and communicate less. We favor speed and instantaneity as substitutes for spontaneity, and prefer regularity and availability over presence. Emails stand in for everything, and texts, Facebook status updates, and Twitter shoutouts seem to be our small and temporary ways of stating self in the world. We neither know nor necessarily care about interlocutors unless they are consumers of this self. These days, I might have to pay someone either to write me a letter or to respond to mine. Letters are now artifacts. They lie dusty in my attic, petrified objects offering nostalgia for a seemingly slower, less communicative, but more attentive time.
My longing for letters, however, is not merely nostalgia. The last letter I received in recent times takes pride of place in my living room. Abuelo Sam wrote me letters. Abuelo Sam is my friend Susy's grandfather. When I theorize love and its machinations, I think of this man who lived in the desert, awaited UFOs, and talked to the stars. He built things and he looked at the world anew every day. He sparkled and his eyes sang as he held my hand and pinched my nose. He wrote me letters, in envelopes with wings drawn to seal the flap. And I wrote him back because I wanted more of his letters. Abuelo Sam was to me the continued possibility of a different time, space, and pace. His letters bore testimony to such possibility. In these strange times, we need such testimonies.
Even though I no longer have the capacity for writing long pages, or unedited content, I try every now and then to pull out a page and sit at a desk, ink pen in hand, willing a few thousand words into material being. Envelope in hand, I carefully fold my letter and tuck it in, just so. Gluing the flap into place, I hunt for stamps, spit on one, and write an address in bold, black ink. (My obsession with stamps and my thoughts on philately I will save for another article.) At the end of this process, I feel slightly accomplished. But I also feel that I have given away some of myself, and this unexpected capacity for generosity both surprises and warms me. The ability to save some piece of myself from the compulsions of everyday communication allows me the fantasy of a different self.
It is therefore not merely in nostalgia that I believe in the need for a return to letter writing. I think we ought to be made to inhabit a different time and space in lieu of lament. I think it's time we were forced to think ourselves complete thoughts, write complete sentences, and regrow our calluses.
Offspring of wanton wants, they arrive, together, these gods of war and weather, to the beating drums, and sound of thunder, crying out crisis, each September. This century's, Septembers, all arrive back to school, as it were, refreshed from resorts and beaches, in need of replenishing, their depleted coffers, of personal savings, and future job offers. This century's, September, as if afraid of endings, arrives as though, its own immortal endless season, of unceasing sameness, an eerie stillness of repeated scripts and finite possibility: War as weather and weather as war. Each September, reminds us, of an, unchanging, unreformed industry, of needs, that guarantees, more spectacular bombs and thunderous storms. Bombs and storms. Lovingly named for eradicated tribes, victims of genocide, and of course women. Apache Helicopters, and Tomahawk missiles, Rita, Katrina, and Ophelia. Do you even remember, come September, as we lurch from one year to the next, all the threats and crises, these Septembers past have presented as pretexts? And we, the video generation, watching and watched, posting selfies, need only a video to suspend belief, acquiesce and agree. War is peace! This is a crisis, indeed. These past, two half dozens, and change, Septembers, this same cry of crisis? And we, resolutely unquestioning, of how rules were changed, to protect us, from ourselves. The Patriots Act? Remember? In September 2005 came Katrina, after Rita, and Ophelia: and army boots and troops came out, to act and protect the land, while patriots drowned? Boots on the ground? This ground. Remember Septembers past, and to come. Then, came FISA ‘Protect America Act', in September 2007. Do you remember? (here.) Do you remember the rules, that changed, on how you were to be protected, by being watched and listened to, and put in your place, if needed, by guns and batons, and military courts, and tear gas and bullets and fantastical costumes of robot cops and juggernauts.
For your own protection, for your own, good, of course. Who else's? Do you remember, the Financial Crisis, come September in 2008? The rules that changed? And Wall Street won and you lost your gains—and your roof, in its name, and, of course, your good name? And then, came the Gulf spill and by September 2009 British Petroleum, how it threatened, do you remember, the war on life? Or the threat for burning of the Quran, again godsent, then, in September 2010 that almost ushered in the chance on changing the rules on freedom of speech? And in September 2011, the Occupy Movement, which revealed, to us, the extent of our powers, against power, which as it turned out, were: None. That revealed to us, that the police, primarily protects property. That even a movement for rights and freedom, uses the term, Occupy. Are we mystified? And then the storm of Sandy which by September 2012 had made it clear, as it battered and washed away, our waterfront properties and flooded, Wall Street, how powerless the batons, bullets, the tear gas, the shells, the bombs were against, the real threat. That year, they bombed Libya straight to hell. Then, a video of insanity, and that September, the attack at the embassy, in Benghazi? Yes, that was September 2012. Come September 2013, the drums of war turned to a deafening roar—that bloodlust's design, to go bomb Syria, all the way back to Afghanistan and more, Iraq and Libya and even Iran! A video, several, tried to help. Always a video, to make the case, to go bomb and invade, yet another place. That juggernaut denied its chance, by another hegemon, rose again, metamorphosed to fight another day. And come September now today, bombing of Syria, anyway—just the same, for now, lo and behold, there it is, ISIS, proof in hideous videos, for our eyes to suspend disbelief, that lets them there drop their bombs, which they had baked, and ached to drop for years, and almost did, last year. Iraq, has been bombed, for twenty-five years!
A gift from God, for endless war. Who created this new Goddess? This new crisis, of this ISIS? (here, here, here and here). So here we are, this September, with the headlines developed so far, which we won't remember, this time, next year, which gives us, our latest bedtime story, and the newest season of hideous videos galore. No way, not today, will we slash defense spending, cut down weapons, roll back armies as was proposed. Hurray, we are on our way, again to war and endless hay. Hurray, for this new goose, this ISIS, and to the golden eggs, that it lays! And yet another resolution, a rule that claims, the right to kill belongs to only one hegemon, one Military. This, we will have forgotten by next September? Remember? War and weather. Bombs and storms. Rita, Ophelia, Katrina, Gustav, Ike and Sandy. Afghanistan, Iraq, Sudan, Libya, now Syria. And what can we do except this time, too, accept and: Cry, it is a crisis, upgrade the flat screen, store canned food, buy rubber boots and torches and: Cry Isis.
More Writing by Maniza Naqvi here.
Monday, September 22, 2014
A Place Called Home
by Namit Arora
‘No man ever steps in the same river twice,’ wrote Heraclitus, the ancient Greek philosopher, ‘for it’s not the same river and he’s not the same man.’ Some also say this about ‘home’, making it less a place, more a state of mind. Or as Basho, the haiku master, put it, ‘Every day is a journey, and the journey itself is home.’ Still, in an age of physical migration like ours, one of the most bittersweet experiences in a migrant’s life is revisiting, after a long gap, the hometown where he came of age. More so perhaps if, while he was away, his neighborhood turned to ruin, crumbling and overrun with weeds, as happened in my case.
Last month, I revisited my boyhood home in Gwalior, a city in north central India, with my parents. I had grown up with my two sisters in Birlanagar, an industrial township in Gwalior, until I went away to college at age 17. After graduation, I left for the U.S. in 1989 for post-graduate studies and various jobs in the U.S. and Europe over the next two decades. I continued to think of Gwalior as my hometown until my parents also left in 1995 and I stopped going there during my India visits. By most measures I had a decent boyhood in Gwalior, yet I’m loath to idealize it or look upon it fondly. If it had its joys, it was also full of graceless anxieties, pressures, and confusions.
A ‘Temple of Modern India’
Many industrial townships similar to Birlanagar had arisen in mid-20th-century India, including at Bhilai, Durgapur, Rourkela, Bokaro, Jamshedpur, and Ranchi. Most were built around public sector enterprises, housing factories that employed thousands. Nehru, the modernizer, called these the ‘temples of modern India’. Birlanagar, where I grew up, was a private township, centered on two textile mills. The Birlas had started building it shortly before independence on land given to them for free by the Scindias, who ruled the then princely state of Gwalior. The older and larger of Birlanagar’s two mills was Jiyajeerao Cotton Mills (JC Mills). The other, founded around 1950, was Gwalior Rayon (later Grasim), where my father, a textile engineer, worked for 36 years from 1958-94. Under the once famous ‘Gwalior Suiting & Shirting’ brand (watch this ad with Tiger Pataudi and Sharmila Tagore), Gwalior Rayon produced a range of fabrics combining both natural and synthetic fibers—such as cotton, wool, rayon, polyester, acetate, viscose—including some that ‘never tore’ and needed no ironing. Retailers apparently loved these products because their quality required no discounting.
During their heyday in the 1970s and 80s, the Birlanagar mills had over 10K employees—about 6-8 percent were Staff, the rest Labor—sustaining the livelihoods of perhaps over 100K people locally, about one sixth the population of Gwalior. In this otherwise unexceptional cow-belt city, many saw Birlanagar as a relative oasis—a modern township that drew in a diversity of professionals in nuclear families from across the country: Bengalis, Goans, Kashmiris, Tamils, Marathis, Punjabis, and more.
But in the early 1990s, both the mills and the township went into a terminal decline. By 2002 the mills had laid off all employees, shut down all operations, and sold off their looms and other capital assets. Left without jobs, many employees accused the companies of not paying out certain promised entitlements, and used this as justification for refusing to vacate their company-owned homes. Birlanagar turned overnight into a sea of squatters. Some enterprising folks even ‘sold’ their company-owned homes to outsiders for whatever they could get.
While most of my father’s peers left Birlanagar in search of greener pastures, a few stayed, whether they felt left behind in a fast-changing world or, nearing retirement, had nowhere else to go. Living rent-free must have helped offset the pain of living in a degenerating neighborhood. Meanwhile, the question of who owns the homes in which they continue to live remains in court. Any resolution will likely take many more years.
Within the last few years, the new owner of Gwalior Rayon has revived a dyeing unit (no pun intended) by tapping the residual labor pool of employees who never left. But the township remains a pale shadow of its past, with derelict houses and roads, untended public spaces, meager municipal services, and piles of rubble and garbage. The Birla Industries Club, once the township’s center of sporting activities—and which I frequented for chess, table tennis, and swimming—has succumbed to the weeds: a jungle envelops its playgrounds and swimming pool, tennis, basketball, and badminton courts, billiards, chess, carrom, and table tennis rooms. The Club once hosted state level sports tournaments. Now corrosion and decay pervade everything man-made. The best one can say is that the unchecked surge of weeds, shrubs, and trees in Birlanagar has helped revive the population of worms, frogs, lizards, butterflies, and parrots. I even saw peacocks, which we never had before.
Is There Anybody Home?
Mrs. Gupta, who now lives in our old house, receives us and graciously invites us in. So much is still the same! The same yellow cabinets and doors, the clunky fan regulators, the open mosaic kitchen shelves, the passage to the roof. Every corner still familiar, every space charged with memories. Except everything seems smaller than I had remembered. I recall the smell of Flit, which we sprayed in the bedrooms to kill mosquitoes. So many little battles over food and clothes and homework took place in this living room. The courtyard across which our big radio played Vividh Bharti songs each morning. The tiny storeroom that once had a little shrine at which my mother asked me to say a prayer before each exam; after turning atheist at 13, I’d still go through the motions but, with folded hands, murmur swear words at the gods. The corner that once held my study desk, where, in tenth grade, I discovered the shocking scale of the universe and grappled with it for days. The backyard, now entirely barren, had lemon, banana, and papaya trees, and a thriving vegetable garden. Our front lawn where we soaked up the sun in winter and chewed peeled sugarcane. The fragrant jasmine tree, the ornamental ‘vidya’ tree, the custard apple tree, and the rose beds are all gone but the windowsills and the front pillars, which we decorated with oil lamps during Diwali, still remain. We also used the lawn as a badminton court with an imaginary net running across.
I hop over a wall across the road to reach the open field where I used to fly kites, play cricket, and see B&W films projected on a white cloth during Durga Puja. At the edge of the field was a hall, part of a state school, where both students and visiting troupes staged dance and drama performances, which we often watched. The hall looks abandoned but two statues, of Tagore and Vivekananda, still stand outside. Nearby, at the resoundingly named Sarv Dharm Manav Mandir, kids once learned tabla and harmonium, kathak and bharatnatyam. The open field was where father finally taught me to drive his Vespa, after I’d snuck away with it a few times and, trying to impress a girl with my speed, had crashed it into a wall. It’s where I once fought three boys. Though roundly beaten in the end, I still relish the sweet satisfaction of landing a perfect punch on the big bully’s jaw. Another time, as I ran away from a fight, my opponent threw a stone that landed on my back. That’s how the boys attacked stray dogs, and it made me feel like an escaping dog, tail between my legs. I feel grateful that nothing much worse than this haunts me from that era. How many of us can say that? Indeed, isn’t that a good measure of a decent childhood?
My parents get busy meeting old timers and eagerly catching up: their various journeys; what they do now; whereabouts of former colleagues. Talk inevitably turns to those who have passed away, and there are so many. Their somber tones betray their sense of their shrinking world. They talk about their health, ailments, home remedies. Some speak of heart attacks or other brushes with death. When they speak of their kids or grandkids, it sometimes makes me queasy—especially when they assess their offspring by the most vacuous of attributes: obedience, height, complexion, degrees, income. I’ve had to struggle to unlearn so many of these petty, small-town values that had oozed into my consciousness here.
We pass through various drawing rooms. Besides family portraits, wall decor abounds in calendar art kitsch of gods, puppies, infants, landscapes, or flowers. Over cups of tea or cola, stories keep tumbling out: So-and-so pocketed tons of money renting out company venues for private events; Mr K’s son is now estranged; one gent in purchasing went to jail; their ex-president has lost three of his four sons, all of whom were quite obese; one ex-neighbor used to abuse his wife who escaped to our home one night after a fight, and she later tried to commit suicide on the railway tracks; another man shot his wife who’d been paralyzed by a stroke for two years and then killed himself. Growing up I rarely heard any neighborhood tales of the darker kind; perhaps they hadn’t been told in the presence of kids. But almost everyone extols the glory days of Birlanagar and laments its current state.
Our ex-neighbors, Mr and Mrs Bhadoria, have invited us for dinner. Once a tough guy feared by all, Mr Bhadoria is now retired—even from installing Airtel towers, his last part-time gig—and proclaims himself a ‘full-time heart patient’. Back in the day, his power at the factory flowed from his political clout: first as Congress party activist, then as president of the local trade union, then as president of RJD, Madhya Pradesh. One of his photo albums has him posing with a shirtless Lalu Yadav at the latter’s home. But a couple of years ago, he quit RJD and joined RSS. I give up trying to figure out his politics. Mrs Bhadoria is very devout and worships for three hours daily. With large, glinting eyes, she reveals that she bathes and dresses up little god statues in her home shrine every day (if she herself needs a daily change of clothes, she reckons, why wouldn’t her beloved god?). Her practice has a long legacy in the Subcontinent but it still amuses me to think of it as a fusion of Bhakti and Barbie. She’s told Lord Krishna that if he wants her devoted service, he must keep her healthy, else she might leave him to wallow in filth. So far her threat has worked, she claims, for she has no ‘medical issues’. She is also into cow worship. Camped out by their front gate are two cows they don’t own, but who no doubt hang around for the desi ghee goodies they get every day from the lady of the house.
After we take leave, I’m delighted when my parents call her devotion ‘excessive’ and gently laugh at her. My mother’s faith is nowhere near as stringent, my father’s even less so. In the quiet night we amble past our former house towards a waiting car that’ll take us to our hotel. This failed township was once home, and all day I’ve nursed a jumble of emotions towards it: indebtedness, gratitude, sadness, resentment. I know I’ll never fully untangle or unfurl that experience. There will only be a different tangling, a different furling, for the rest of my life.
The City Beyond the Township
Outside Birlanagar, Gwalior’s trajectory resembles that of many cow-belt cities: A newly prosperous class is evident in its malls, big cars, and gated apartments. Abject poverty is less visible now than in my time. Parts of the city have cleaner, wider streets; many new areas seem better planned. CNG autos are common (those absurd, if cute, tempos are gone). But the city is more crowded too—the population has nearly doubled in 25 years—with a lot more vehicles, noise, pollution, and fewer empty spaces.
The city also feels dense with skinny boys hanging about in public spaces, indicative of the 45 percent of Indians below age 19. Ill-educated as they are, I see them less as a ‘demographic dividend’, more as ‘looming disaster’. What future awaits them in jobs, housing, and marriages? Given the current child sex ratio in Gwalior, 17 percent of these boys will not find a mate. What social stresses will this create? A few will migrate to metros like Delhi NCR, which bloats by 700K each year. Some will take up jobs servicing the nouveau riche—as mall workers, security guards, delivery boys, drivers, appliance sales and servicemen, high-rise construction workers, hotel staff, and so on. But the big questions remain: can the economy add millions of decent jobs for the new entrants year after year, for decades—and at what cost to the environment? Can housing, healthcare, nutrition, education, water, sanitation, and electricity keep up?
More soothing than these ruminations is the sight of the Gwalior Fort above the city, and the ease of lounging in the lawns that surround the tombs of Tansen and his spiritual mentor, the Sufi saint Muhammad Ghaus. These lawns remain the site of the annual Tansen Samaroh, where music lovers and artists gather for a four-day, open-air tribute to Tansen. Even in my childhood it was free to all and my father religiously attended every evening. This is where, wrapped up in shawls and mufflers on chilly December nights, I first heard Shivkumar Sharma, Bismillah Khan, the Dagar Brothers, Amjad Ali Khan, Hariprasad Chaurasia, and other musical greats.
Riding in a CNG auto, we soak in sights from our former haunts like Hazira, Padav, Gole ka Mandir, Shinde ki Chawni, Daulat Ganj, Bada, Sarafa Bazar, Kampoo. Even after twenty years, random people across town recognize my parents. Many touch their feet and speak with affection. My parents, now 78 and 73, say they won’t return to Gwalior again so I’m happy for their positive experiences. We reminisce about Ashok Talkies, now razed to the ground but once the closest movie theater to our home, where I watched so many angry-young-man movies. Nearby used to be children’s book and magazine rental shops, which I visited often on my bicycle. Over the years, they supplied me with lots of Champak, Chandamama, Chacha Chaudhary, Phantom, Mandrake, Flash Gordon, Tinkle, Amar Chitra Katha, The Hardy Boys, and Tintin titles.
We stop at the subzi mandi, the kirana market, and various shops that still bear the old names—venues where my parents once carried shopping lists and looked for bargains. When father began work in 1958, he earned Rs 150 per month (equivalent to Rs 7,600 / $120 today). That inaugurated the years of thrift, watching discretionary expenses, rarely eating out. Fortunately for me, my parents didn’t skimp on their kids’ education, nor on taking two vacations every year: one to a hill station in the Himalayas, the other to Jaipur where most of our relatives lived.
I meet two long lost school friends for dinner and the next day we visit our alma mater, Carmel Convent School. The principal, Sister Ann Jose, graciously gives us a tour of the premises. Now said to be the best girls-only K–12 school in Gwalior, it was co-ed in our time. The Sister says they got rid of the boys because of too many disciplinary problems. We tell her that in our day, the principal, Sister Reprata, whom she knew, used to cane our hands. Times have changed, Sister Jose says; there is no corporal punishment now. I find no general encyclopedia in their library so I offer to donate my Britannica, which she gladly accepts. The school has many new buildings but these are still the grounds where I once negotiated the fine line between being cool and studious. Part of being cool was to speak a language full of expletives, as in every third spoken word being a swear word. Adolescent thrill and peer pressure drove us to invent new curses that were so badass that merely uttering them felt like a transgression. The pendulum swung the other way later in life, for I ended up renouncing nearly all but the tamest of swear words.
The Warp and the Weft of Factory Life
My father and his colleagues seemed to agree that the golden age of Birlanagar had coincided with Hiralal Shrimal’s presidency of Gwalior Rayon. A physically large, autocratic, and energetic man, Shrimal had been more feared than loved. During his reign, Gwalior Rayon’s revenue exceeded $300M in the early 1980s. Things began going south, it was said, after a management shake-up in the late 1980s, in which SB Agrawal replaced Shrimal and began stuffing the ranks with his own cronies. Agrawal also harassed those considered close to Shrimal, including my father, and seemingly made bad investment decisions that contributed to the mill’s demise.
Others blamed Gwalior Rayon’s demise on its labor union, which they alleged had become too strong, until union workers no longer put in a hard day’s work and cost structures became unsustainable (the revived dyeing unit has no union and per capita production is apparently higher than before). Falling profits may have prompted the Birla Group to diversify its investment priorities to other emerging sectors, such as cement and chemicals. Many macro trends also likely played a role. The 1980s saw the closure of dozens of textile mills in Bombay, which was blamed variously on intransigent labor union leaders like Dutta Samant, rising real estate prices, reduction of import duties in textiles, low-cost Chinese fabrics, and the slow pace of innovation in the Indian textile sector.
At some point, my father came to lead the ‘weaving prep department’. His team was in charge of fabric designs and winding the right kinds of yarn on warp and weft bobbins that fed the German looms in the weaving department. Six days a week, father reached work shortly before 6 AM and worked until 6 PM, with a two-hour lunch/siesta break. In middle management, he was squeezed between the demands of upper management and the attitudes of unionized labor. His work environment, far from being safe and hospitable, required frequent supervision on humid and odorous shop floors, with excruciatingly loud machines that caused his significant hearing loss, and fine fiber pollutants, which likely contributed to his chronic bronchitis. The laborers who constantly worked in these areas had it even worse. Father also had to deal with Shrimal’s hot-cold psychological abuse, which left its residue on father’s moods when he returned home from work and took its toll on our family. But Shrimal also made it clear that he trusted and valued father’s work and looked out for him in other ways.
When Agrawal took over from Shrimal, he hired a crony with the goal of replacing father. To protect his job, father was advised to join the labor union, whose president was our neighbor Mr Bhadoria. Rallying behind my father, the union instigated a production slowdown and mass walkouts, inciting Aditya Birla himself to ask why ‘a man of the caliber of Mr. Arora’ would join the labor union—a remark that also betrayed his view of labor unions. Having prevailed in this confrontation and secured his job, father took it relatively easy in the last 4-5 years before retirement, working normal hours for the first time in his life and living without the fear of upper management. When he retired in 1994 and moved to Jaipur with my mother and younger sister, Gwalior Rayon was the only employer he had ever had.
A House for Mr. Arora
An industrial township is defined in part by its housing and employment regime—the logic by which its spaces and its toils are carved out among its residents. A house was one of the perks of a factory job in Birlanagar. An employee was assigned a house based on his rank in the company hierarchy. As the employee climbed the ladder, so did the size and location of his home. My parents had lived in four smaller houses before the one I described above, where we stayed the longest and during my formative years. Housing units were clustered: tiny dwellings in one locale, row houses in another, bungalows in a gated area. This created a de facto segregation based on professional rank, as all of one’s neighbors had approximately the same rank. A Laborer could never stay in Staff housing. Nor could junior Staff live around senior Staff. Since one’s house was a direct indicator of one’s status, it bred envy and created a race for upward mobility in housing. Perennial topics of gossip concerned who moved into which house, who got what company perks, who had secured renovations for his house, and so on.
This wouldn’t have been so bad had the mills functioned like the meritocracy they implicitly claimed to be—that is, if they had combined equal employment opportunity with such performance-based rewards as a bigger house. But the mills were nothing like a meritocracy. They didn’t practice equal opportunity hiring. Instead, caste nepotism ruled. The whole township was in fact an upper-caste fiefdom, mixing caste nepotism and housing segregation into a soul-corrupting brew. During this visit, as my parents and I walked by the Staff quarters, I recorded the names of ex-residents. I also noted names that came up in conversations with ex-neighbors and ex-colleagues. Below is the list of about a hundred names, a highly representative sample of the Staff members of the Birlanagar textile mills.
- Bania (most Marwari): Mandalia, Kabra, Bajoria, Ganderiwal, Jhaver, Poddar, Singhania, Tibrewal, Chandgothia, Nahar, Budhia, Chapparia, Kathuria, Dwarka, Mittal, Poddar, Samalia, Saraf, Neekhra, Ajmera, Dalmiya, Lakhotia, Rungta, Makharia, two Shrimals, two Chauradias, two Goyals, many Guptas, many Agrawals
- Brahmin: Chakravarty, Fotedar, Kaul, Deshpande, Karandikar, Gopal, Tyagi, Saraswat, Gaur, Joshi, Dindhaw, Kalia, Mishra, two Dikshits, many Shuklas, many Sharmas
- Khatri: Tandon, Khanna, Kapoor, Batra, Oberoi, Sehgal, Chand, Vohra, many Aroras
- Thakur/Jat/Others: Bhadoria, Rathore, Rathi, Taparia, Singh, Rastogi
- Kayastha/Vaidya: Saxena, Shrivastava, Ghosh, Sinha, Sengupta, Dasgupta
- Christian/Sikh/Jain/Others: D’Souza, Briganza, Alexander, George, Thomas, Singh, Mauj, Jain, Merchant
This list confirmed my long held suspicion that the supposed diversity of Birlanagar was deeply deceptive. I found not a single Muslim, Shudra, Dalit, or Adivasi among the Staff. No women either. In short, not even one person from the constituencies that make up almost 90 percent of Indians! ‘Our management had an unwritten policy of not hiring Muslims,’ father remarked casually. Labor employees did include lower-caste men but almost all Staff employees at this ‘temple of modern India’ were twice-born Hindu males, with a profusion of Marwari banias—especially in senior management, starting with Aditya Birla himself—the rest being a smattering of privileged Christian, Jain, and Sikh men. Father told me that other Staff members quietly resented the domination of Marwari banias. Staff hiring and promotions pivoted mostly on caste, not merit, and diversity wasn’t valued at all. It struck me much later that no one from the Labor class ever visited the Birla Industries Club. While theoretically open to all employees, the club had become an exclusive playground for the Staff, all upper-caste. JC Mills even had separate entrances, and both mills had separate canteens, for Staff and Labor. This sort of segregation never struck me as problematic back then; it even seemed like the natural order of things.
The neighborhood of my formative years was exclusively upper-caste. No wonder I grew up so blind to the unfair advantage of caste in my own life and the handicap it was for others. This blindness, still rife among my family and friends, was an attribute of my caste privilege. It allowed the boys in my neighborhood to use ‘chamaar’ and ‘bhangi’ as casual abuses for each other. It’s tempting to think that Birlanagar’s discriminatory business and social practices contributed to the mills’ demise in the era of globalization, but that would be wishful thinking. Caste, with its hydra-headed ways, has adapted to modern capitalism; both caste and communal discrimination continue to flourish in 21st century corporate India. Yet notably, Birlanagar was then widely admired by outsiders; even the Labor jobs were in much demand. This suggests that to most people across the social spectrum, Birlanagar was no worse—and better in some ways—than the society at large.
But there it is, warts and all, my former hometown as I see it from this vantage point on a journey I began there. That journey continues as I board the train back to Delhi with a newfound appreciation of author Thomas Wolfe’s words: ‘you can’t go home again’.
7500 Miles, Part I: Baltimore>NYC>A2>QC>Lincoln>Omaha>Vermillion>Brookings
by Akim Reinhardt
I'm currently circling the nation in a black and orange ‘98 Honda Accord, my rusted chariot. About 7,500 miles in a little over two months. That's the plan. As far north as North Dakota, as far south as New Mexico, and as far west as California before closing the circuit by returning to Maryland. About 26 states in all.
It's a massive research/conference trip. I'm on sabbatical. A full year at half-pay.
A single semester at full pay is the more common sabbatical leave. For a full year sabbatical, the typical approach is to get a research fellowship that makes up the lost salary and provides academic focus.
But I usually end up doing things my own way. I'm not bragging. It's as much a blend of chaos and neurosis as anything else. But in this case the result is, no research fellowship.
Instead, I've rented out my house during the semester, and this past summer I took on a freelance writing project. I co-authored a coffee table book, which will come out next summer.
Bill moved into my Baltimore rowhome in August. At the end of the month, I bid him a fond farewell and hit the road. And thus the journey begins.
The first stop was The Bronx. It seems only fitting to kick off an epic trek by visiting friends and family in my hometown.
Like the rest of the city, more chains are moving into The Bronx. Not at the same rate that sees Manhattan turning into a bland, congested, overpriced version of the rest of America, but it's happening nonetheless. Very depressing. Dunkin' Donuts. Target. Bla bla bla.
The day I see a real New Yorker, not some Midwestern transplant, order Domino's, is the day I turn my back on the city completely. When that day comes, New York's pointlessness will be profound beyond words.
For now, the pizza's still worth it. For now.
The next stop was Ann Arbor. I earned my B.A. from the University of Michigan in 1989. After a half-year back in New York, I returned to A2 in 1990 and lived there another two years.
I actually loathed Ann Arbor during my school days. Blame it on immaturity and culture shock. I was 17 when I left The Bronx for the Midwest. It was too much for me to handle. I was also quickly disillusioned by what I perceived the school to be: an overpriced diploma mill chock full of mediocre students so stuffed with unearned arrogance they were shitting it from their ears.
It was only during my later years in Michigan that I overcame the limits of my youth and provincialism and came to love the state. Michigan's a truly wonderful place. It's not on the way to anywhere. No one's just passing through. Everyone's there, it seems, because they've made a conscious decision to be there. It's gorgeous and cold and brimming with fresh pine.
During those latter two years I came to love Michigan. I still love Michigan. But Ann Arbor has changed a lot, and not for the better.
Like many once-quaint college towns around America, Ann Arbor has traded most of its charm for a sheen of tacky glitter. It's really part of the same homogenizing trend that's destroying much of New York City's cultural distinctions.
Ann Arbor has been flooded with money during the last quarter-century. As rents have increased, many small businesses have been replaced by chains. And many working class students have been replaced by preening rich kids who live in shiny new condos their parents buy for them.
As I sauntered the streets of the old downtown, people were all abuzz that the new crop of students included Madonna's daughter and celebrity chef Mario Batali's son. Sigh.
I spent one night in A2 and another down the road in Ypsilanti, home of the much more modest Eastern Michigan University.
Ypsi, as it's known, still has the grit. A former normal college, EMU has no real prestige, certainly nothing approaching U of M's nauseating self-importance. Most of the town's manufacturing jobs are long gone, and the local economy teeters. Ypsi's primary architectural feature is a brick water tower that looks like a penis. The Bomber Café still offers a cheap breakfast deal with so much food on the plate that if you're not careful, you might have to shit before you're finished.
It was good to end this leg of the trip in Ypsilanti. I waved goodbye to the penis and headed for I-94.
While driving across western Michigan, I noticed my Check Engine light was on.
You've gotta be kidding me.
I just gave my mechanic $1,200 to get this fashionably rusty bucket into tip top shape. And I love my mechanic. He's the man. I can't remember how many times I've showed up at his garage over the years and had conversations that went something like this:
"I think such and such is wrong with the car."
"Okay, leave it here and I'll check it out."
(Upon returning after lunch) "So what's the deal with it?"
"It's fine. Nothing wrong with it."
"Cool. What do I owe you?"
And now the fucking Check Engine light is lit up less than a thousand miles into a trip that I'm billing as 7,500 miles because "7,500" rolls off the tongue well, but to be perfectly honest, could be closer to 10k by the time all's said and done?
The mystery was solved the next time I pulled over for gas. I'd left the gas cap in Ohio. Fuck me.
It could be a lot worse. I was reminded of that while driving through Illinois. Saw one of those 22-wheel, double-trailer FedEx trucks lying on its side in the median between the east- and westbound portions of I-80, a long trail of black dirt turned up from where it had skidded across the prairie like a massive plow.
I'm not a rubbernecker. I don't rubberneck. I think it's tawdry and gauche. It's inconsiderate to your fellow drivers and unspeakably disrespectful to the person who has just suffered great tragedy, gawking at them like they're part of a goddamn freak show. I hate rubberneckers. You're holding up the traffic, you slack-jawed baboon. Move your fucking car.
I didn't rubberneck. I didn't need to. The truck was so big that you couldn't help but see it, lying there like a dead whale out on the grassy Illinois beach.
"He might be dead," I thought to myself. "I hope he's not dead. He's probably dead."
The Quad Cities. Where Illinois and Iowa share the Mississippi River. I could name them all from years of driving back and forth. On the Illinois side there's Moline and Rock Island, it of the line that's a mighty good road. Davenport and Bettendorf hold down the Iowa side.
I still can't say "Bettendorf" without chuckling quietly.
I was in the QC for only about 16 hours all told. But it was pleasant. Good beer culture. Not a lot of pretension.
I like the Midwest. I've liked it ever since I grew up and got over my immature distaste of Michigan. That was a quarter-century ago. In all, I spent 11 years living in the Midwest. Good times.
People who don't like the Midwest are provincial and unimaginative. They're like the people who don't like black and white films. Are you kidding me? Are you really that dumb and haughty at the same time?
I'm not talking about people who grew up in the Midwest and fled it. Everyone's entitled to their demons. I'm talking about all the cocksure morons on the coasts who've come to the completely unjustified conclusion that they themselves are really quite interesting.
No. I'm telling you right now. There's a big difference between "interesting" and "tiresome." Fucking figure it out. And yes, it does take longer for some of the cool shit to come to the Midwest. But once it gets there it's half-price. So contemplate that the next time you fork over half your monthly paycheck for rent.
The scene of the crime. Lincoln, Nebraska.
I got my Ph.D. at the University of Nebraska in 2000. Lincoln's a damn good town. Nebraska was always good to me. And this time around I spent 10 days here.
This is where the research part of the trip begins. In addition to the University library stacks, I did a few days at the State Historical Society. Among the many things I looked at were the papers of James H. Red Cloud, grandson of the famous Lakota Sioux Indian leader. How famous? There's a whole war named after him. Red Cloud's War.
Despite being a couple of generations down the line and living almost all of his life during the 20th century, grandson James H. was pretty much a monolingual Lakota speaker. Most of his letters were composed through an interpreter. There were also a couple of pads with his handwritten notes in Lakota language, which he taught himself to read and write. One gray banker's box at the historical society in Lincoln. It's a national treasure.
I also went on the radio while in Lincoln, during Paul Nance's Friday morning show, "Morning Breath." Lincoln's got one of the best goddamn community radio stations in America: KZUM-FM. After doing radio for several years in Michigan, I spent five years behind the mic at KZUM in the late 1990s. I did the Friday morning 8-10 shift. Paul did the two hours before me, and we'd have some chirpy patter during the crossover.
All these years later, Paul's still at it, and he was kind enough to have me on. He spun the records (actually, he played them off his computer; a lot has changed since I had my own show back in 2000) and we shared the mic between sets of music.
I also played some softball while in Lincoln. The team I used to play for is still going after all these years. Many of the players are now in their late 40s and early 50s.
I hadn't played any ball in the five years since spraining my ankle at a game in Maryland. I was very rusty. They stuck me in rightfield. I caught the first fly ball that came my way. A routine catch. Though for some reason I fell down. Just tipped over like a three-legged cow. Later, another fly ball soared majestically over my head as I badly misjudged it.
At the plate, I reached safely three of the four times I came up. However, two of those times were not because of my offensive prowess but because of defensive errors. I'll take it.
In the second inning I got caught in a rundown between 1st and 2nd base. Somehow I got safely back to the bag after sliding through the 1st baseman's legs; the ball got away from him again. My teammates mocked me for sliding.
"We're old now, Akim," they said. "We don't do that anymore."
A day in Omaha. I don't know Omaha very well because during most of my time in Nebraska I didn't own a car. Just biked everywhere. Omaha's about 50 miles from Lincoln, situated along the mighty Missouri River. Many "rivers" in the west are glorified creeks that dry up in summer. But the Missouri's the real deal.
While in Omaha I went to the Douglas County Historical Society. Afterwards I met friends for gourmet pizza, and then we went to a bar with one of the biggest scotch selections in the United States, if not the biggest. It was the night before Scotland's independence vote. No one cared.
Vermillion, South Dakota is the home of the University of South Dakota, which boasts the South Dakota Oral History Center. It's an amazing collection of interviews conducted with South Dakotans, some of them dating back half a century. There are recordings of people talking about things they remember from the late 1800s.
When you're a historian, it really doesn't get much better than this.
In between shifts at the archive, I stayed at the 24-Seven Inn.
"How did you find out about us?" the motelier asked as I signed in. One of her rote questions, no doubt.
"I drove by it," I said.
"You mean from the sign?" she asked incredulously, as if she'd been plotting to take it down for some time but couldn't quite justify the expense.
"Yeah, from the sign," I smiled.
I headed north for Brookings, South Dakota, the home of the South Dakota State University Jackrabbits. I'm spending the weekend with an old grad school friend and his family. He's a geographer, so he's great at answering all my nerdy questions about the region.
Turns out Sioux Falls, which is between Vermillion and Brookings, is easily the biggest thing in this state. SoDak is four hundred miles across, more than 200 miles from top to bottom, and has only one at-large Congressperson. It still has just one area code.
I'm rockin' it in the 605.
The Missouri River bifurcates the state almost perfectly. East River and West River are the names for the two halves. My friend's wife is decidedly East River, from a farm outside of Watertown, which is due west of Minneapolis and only a few miles from the Minnesota border. She's got that accent. You know the one. Think Fargo.
Saturday night we went to a Jackrabbit football game against the University of Wisconsin-Oshkosh. The weather began gloriously. When I say sunny, I mean goddamn bright. Not too windy. High in the upper 70s. This is the good life. I can say that because I won't be here in January.
By the second half the sun had gone down, the wind had picked up, and the feeling was quite autumnal. These people are hardier than me. I have no shame in admitting that.
Early this morning, as 3QD is posting this piece, I'll hit the road again. This week I'll circle South Dakota and go to several West River archives. Afterwards, I'll visit another grad school friend in Reno before taking a long trek down California and then heading back east. But more on that next month.
Akim Reinhardt's website is ThePublicProfessor.com
Conquistador of the Useless
by Leanne Ogasawara
The incredible Sisyphean story of a man who wants to build an opera house in the middle of the Amazon rainforest in the late 19th century is only to be outdone by the crazy outlandishness of the man who decides to re-create the event a hundred years later in film.
Like a set of nested Russian dolls--each more mind-bogglingly conceived--the story's central metaphor continuously revolves around the theme of "man against nature." This is a world where it is dreams that truly matter. And people move mountains in order to pursue their obsessions. So, to build his opera house, the hero, Fitzcarraldo, has to employ hundreds of Indians to help pull a 320-ton ship over a muddy hill. But perhaps the most incredible part of the story is that Werner Herzog, in the making of his film about the historic ship-pulling, insists on physically re-creating the original challenges, struggling to capture on film the impossible task of having the local Indians pull a real 320-ton ship over a mountain. His hell-bent will to veracity has made Herzog's film the stuff of legend.
And this is all very unexpected, since film has never been an art much concerned with literal truth, taken up as it is with images. Not to mention that if all that matters is the "burden of his dream," why doesn't Herzog employ the usual Hollywood devices of stage sets and miniatures to evoke his story more poetically? Why does he seek to do the impossible and film actual people pulling a real 320-ton ship over a steep and very slippery hill in the most remote part of the Amazon--given the useless burden of doing so?
Alongside Herzog's wonderful memoir concerning the making of the film, Conquest of the Useless, I am reading a fascinating book about 17th century science, by Ofer Gal and Raz Chen-Morris. Exploring the intellectual compromises in epistemology that were generated by the rise of the "new science," Baroque Science tells the story of Western philosophy's estrangement from the senses. In particular, it focuses on the inevitable denigrating of human vision and the disappearing observer in natural philosophy.
Of course, from ancient times philosophers had conceded that human vision cannot be trusted. It distorts and is prone to illusions. For just as the other senses mask with "tastes, odors and sounds..." so too is human vision fallible. The issue became particularly problematic, though, with the rise of optical instruments in the 17th century, which allowed one to peer at the very far and the very small. The microscope and the telescope would have profound implications not just for advances in natural philosophy and art--but for epistemology as well, leading to the creation of Descartes' "eye of the mind," whereby the eye of the mind was
modeled on but completely independent from the eye of the flesh
Optics came to be considered as being concerned not with human vision but with the nature of light itself. And, rather than augmenting human vision through the creation of lens and mirrors, these new instruments were thought to somehow bypass the human senses to allow the eye of the mind more direct and infallible observations, which were based on reason and math alone.
Galileo famously said that philosophy is written in the book of the universe in the language of mathematics. It was the telescope more than anything else that brought this epistemological conundrum into focus, so to speak--calling our sensory knowledge into question in the process. And in this way, the new instruments were considered to be a mathematical extension of reason itself. In seeking to bypass the human senses, the new science led to the disappearing observer in science and to epistemological doubt in philosophy. With regard to the latter, if we are dependent on sensory information that is by definition faulty, then how can we know anything at all, wondered Descartes. And this problem of doubt has come to dominate Western epistemology down to today.
This late Renaissance estrangement of the flesh from knowledge is something which occurred mainly north of the Alps, say the authors. It is also absent in much Asian philosophy, which mainly did not seek to bypass the senses vis-a-vis mind-body duality. "Vision pours in through the eyes," said Dante, going straight to the heart. Even through our very breath (or 気), we take inside (内) what is outside (外) and can thereby be affected by osmosis. In such non-dualist epistemology, what is then emphasized is not knowledge ex nihilo but rather a sensitivity or a receptive sensibility to the shared world around us. This is a partial rejection of a priori knowledge with a strong emphasis on inter-relational and inter-subjective knowing. Knowing as doing. Doing as being. 知行合一. The emphasis on embodied knowing is perhaps why Asian philosophy is sometimes linked to quantum mechanics (where observer is intimate with observation). It has also been actively taken up in modern and contemporary Continental phenomenology.
The princess challenges the philosopher with two questions:
The book Baroque Science ends somewhat unexpectedly (and the ending is the best part of the book, I think!) with a discussion of Descartes (origin of so many of the world's philosophical ills). Here, the authors discuss the philosopher's absolutely charming correspondence with Princess Elisabeth of Bohemia. One doesn't often think of Descartes as being playful or charming--and yet he is somehow reminiscent of Voltaire in the witty and erudite exchanges he had with the princess concerning mind-body duality. Cartesian dualism is simply not something the lady can abide. And not just that either--for not only isn't she buying it, she is no push-over in an argument, and the princess gives the philosopher a real run for his money! In the end, the authors of Baroque Science describe the very surprising move back to the senses that Descartes makes, persuaded no doubt by his princess.
Coming to see that knowledge gained from the senses is in fact more reliable (assuming that there is no evil deceiving demon at play), he then posits that a suitable ethical stance for scholars and savants such as themselves is to be fully ensconced in the world as an "involved, attentive and compassionate citizen of nature and society." That is, Descartes concedes that knowledge found by pure reason alone is less likely to capture real truth, and therefore in epistemology as well as in ethics one should be engaged in the world, with knowledge mediated by body and creative imagination. That is, one has to walk the walk and make it real.
With this in mind, I think Descartes and his princess would have greatly appreciated Herzog's brilliance in steadfastly refusing to follow the studio's demands to film his movie with a plastic boat streaming down a river. For to do so, insists Herzog, wouldn't have been true. In his memoir, Herzog explained that he went through all this unreal expense and trouble--not for realism's sake--but for truthfulness's sake. Dreams must be articulated, expressed and embodied, he says. And this is truth. Even if it is just "the helplessness of dreams over the heaviness of reality."
So, maybe it is less "I think therefore I am" versus "I sigh therefore I am" --as much as it is what my astronomer says --channeling Herzog-- that "I dream therefore I do and therefore I am."
Detail from a Van Eyck (the painter whose eye functioned as microscope and telescope at the same time).
Ray Smith. Red Army, 1991.
Digital photograph taken by Sughra Raza at Kentuck Knob sculpture park, PA, Sept 5, 2014.
(Thanks to Akila and Ute Viswanathan).
Macroanalysis and the Directional Evolution of Nineteenth Century English-Language Novels
by Bill Benzon
In The Only Game in Town: Digital Criticism Comes of Age I argued that digital criticism was the most important development in contemporary literary studies because it is the only line of investigation that presents us with new objects of thought. I’m continuing that argument in this post, where I consider some of the new conceptual objects in Matthew Jockers, Macroanalysis: Digital Methods & Literary History (2013).
Jockers undertakes a variety of inquiries into a corpus of 3,346 19th Century novels from America, Britain, Ireland, and Scotland, examining style, theme, and influence. Though he considers the possibility that literary culture evolves in a manner similar to that of life forms, he rejects the idea (pp. 171-172). I think Jockers is mistaken on that point; his analytic and descriptive work provides strong evidence both for conceptualizing literary history as an evolutionary process and for concluding that the process is directional (at least for the corpus Jockers examines). The purpose of this essay is to sketch out that case by reinterpreting some of Jockers' results.
Note however that I do not intend to provide the required evolutionary model, though I do have some thoughts on how to do so (see the suggested readings at the end). I’ve only explained why I believe such an account is necessary.
Caveat: This is an unusually long post, so you might want to have coffee or wine, whichever is your pleasure, readily at hand. Also, the argument is basically mathematical, though informally expressed, mostly through diagrams, which are central to digital criticism.
Does Culture Evolve?
Let me set the stage by quoting a passage from Tim Lewens’ excellent review of cultural evolution in the Stanford Encyclopedia of Philosophy (2014):
The prima-facie case for cultural evolutionary theories is irresistible. Members of our own species are able to survive and reproduce in part because of habits, know-how and technology that are not only maintained by learning from others, they are initially generated as part of a cumulative project that builds on discoveries made by others. And our own species also contains sub-groups with different habits, know-how and technologies, which are once again generated and maintained through social learning. The question is not so much whether cultural evolution is important, but how theories of cultural evolution should be fashioned, and how they should be related to more traditional understandings of organic evolution.
The alternative, Lewens suggests later on, is that “cultural change, and the influence of cultural change on other aspects of the human species, are best understood through a series of individual narratives.” Lewens rejects that notion, and so do I – and I’ll address that specific alternative, individual narratives, a bit later.
Before going on, however, I want to dispose of the most common objection to the idea of cultural evolution:
The explanatory point of evolutionary dynamics is that it gives us design without a designer, without intention. But isn’t culture consciously and deliberately designed and created?
Cultural artifacts (whether physical things, such as books or drawings, or events, such as rituals or musical performances) are deliberately designed and created by human agents and thus are not the result of a blind evolutionary process. That is true. But whether or not any of those artifacts are retained in a group’s repertoire is a matter beyond the will and design of individual creators. The process of cultural selection is independent from that of artifact creation.
Those many 19th Century novels that are now forgotten were created with as much deliberation and intentional design as those few that we still read and use as the basis for other cultural products, such as movies and zombified parodies (Pride and Prejudice and Zombies). Whatever it is that distinguishes the novels with lasting cultural salience from the more ephemeral ones, it isn't the mere fact of deliberation and design.
Now let us consider Matthew Jockers’ Macroanalysis. Working with a large corpus of over 3000 texts Jockers investigated two kinds of traits in those texts, stylistic and thematic.
The statistical investigation of style goes back to the mid-1960s, when Mosteller and Wallace worked on the Federalist Papers of uncertain authorship (15 out of 85). Subsequent research has shown that high-frequency words (mostly grammatical function words) and punctuation marks are the most useful features for identifying style. Just why those features are most useful is an interesting question, and Jockers has some discussion of the matter here and there as appropriate in particular cases. But I'm simply going to treat the matter as an empirical fact. Nor, for that matter, am I going to attempt to summarize the techniques Jockers uses (he does that on pp. 68 ff.).
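To make the feature-extraction idea concrete, here is a minimal sketch in Python. The word list is an illustrative, made-up subset, not Jockers' actual feature set, and the two sample sentences are invented:

```python
from collections import Counter
import re

# A few high-frequency function words of the kind stylometric
# studies rely on (an illustrative subset, not Jockers' feature set).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "she", "he", "it"]

def style_vector(text):
    """Return the relative frequency of each function word in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens)
    return [counts[w] / total for w in FUNCTION_WORDS]

# Two toy "texts"; a real study would use whole novels.
a = style_vector("She said that she would go to the fair in the morning.")
b = style_vector("He wrote of the war and of the men that fought in it.")
```

Each text thus becomes a point in a feature space, which is what makes statistical comparison of styles possible in the first place.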
Let us consider a specific piece of work, Jockers' reconsideration of a claim that Franco Moretti made in Graphs, Maps, Trees (2005). Here's Moretti's Figure 9 (p. 19):
Here’s what Moretti says about it (pp. 18-19):
Forty-four genres over 160 years; but instead of finding one new genre every four years or so, over two thirds of them cluster in just thirty years, divided in six major bursts of creativity: the late 1760s, early 1790s, late 1820s, 1850, early 1870s, and mid-late 1880s. And the genres also tend to disappear in clusters: with the exception of the turbulence of 1790–1810, a rather regular changing of the guard takes place, where half a dozen genres quickly leave the scene, as many move in, and then remain in place for twenty-five years or so. Instead of changing all the time and a little at a time, then, the system stands still for decades, and is then ‘punctuated’ by brief bursts of invention: forms change once, rapidly, across the board, and then repeat themselves for two– three decades: ‘normal literature’, we could call it, in analogy to Kuhn’s normal science.
Jockers looked into this phenomenon using a corpus of 106 novels that Moretti had classed into genres: historical, Newgate, Jacobin, Gothic, silver-fork, sensation, Bildungsroman, industrial, evangelical, national, and anti-Jacobin. Here is Jockers' Table 6.4, showing genres distributed by decade (p. 85):
Notice first of all that the numerical values are percentages, not absolute numbers of texts, and that it is the row values that add up to 100%. Thus, for example, 100% of the Jacobin novels were published in the 1790s, while 40% of the anti-Jacobin novels were published in the 1790s and 60% in the 1800s, and so forth. That table, Jockers notes, looks rather like Moretti's Figure 9, except that it has percentages added. What's interesting is that novels in each genre are not spread evenly across decades–which in any event, as Jockers notes, are somewhat arbitrary time periods.
Jockers then presents another table, Table 6.5 (p. 87) designed so that the percentages in each column add up to 100%:
What we see is that genres are not equally represented in each decade. Notice, for example, that the national (Natl) novel is distributed 38% and 63% over the 1800s and 1810s, respectively (Table 6.4). While it constitutes 33% of the texts in the 1800s, its relative contribution to the 1810s is smaller (Table 6.5), despite the fact that that's when the greater portion of its output appeared.
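The relationship between the two tables is just a matter of normalization: the same underlying counts, divided by row totals in one case and by column totals in the other. A sketch with invented numbers:

```python
# Hypothetical genre-by-decade counts (rows are genres, columns are
# decades), standing in for the counts behind Tables 6.4 and 6.5.
counts = [
    [3, 5, 0],
    [2, 2, 6],
    [0, 1, 4],
]

def row_percentages(m):
    """Rows sum to 100: how each genre's output spreads across decades (as in Table 6.4)."""
    return [[100 * v / sum(row) for v in row] for row in m]

def col_percentages(m):
    """Columns sum to 100: each decade's makeup by genre (as in Table 6.5)."""
    col_sums = [sum(row[j] for row in m) for j in range(len(m[0]))]
    return [[100 * row[j] / col_sums[j] for j in range(len(row))] for row in m]
```

Which normalization you choose determines which question the table answers: where a genre's output went, or what a decade's output was made of.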
Morretti’s 2005 is thus confirmed, though only for a limited set of texts and genres. Genres rise and fall over time and have a life that spans a generation, more or less.
Let’s set this aside look at how Jockers investigated themes. He uses a sophisticated statistical technique called topic modeling to identify clusters of words that occur together though many different texts. The computer simply delivers lists of words, along with a weighting of how important each word is in the topic. Those words are said to constitute a theme. It is up to the investigator to interpret that cluster, to characterize what the theme is about.
Rather than attempt to explain how topic modeling works, I’m simply going to present a few examples of his results. Jockers explains the technique in Macroanalysis (pp. 122 ff.).
Working with his corpus of 3346 American, British, Irish and Scottish novels, Jockers developed a model having 500 topics. Here’s the cluster for one topic (the size of the word is proportional to its frequency in the topic), which Jockers calls MARRIAGE 1 (there’s also a MARRIAGE and a MARRIAGE 2 topic):
This chart shows that it occurs more frequently in British and Irish texts than in American:
That difference, of course, requires an explanation, but that’s outside the scope of this essay. That’s one of many observations for which an evolutionary account must provide an explanation. In this particular case, as in many others, there is an existing historical literature to start with; Leslie Fiedler’s now classic Love and Death in the American Novel (1966) speaks directly, and at length, to this difference.
This graph shows the prevalence of that topic over time:
Roughly speaking, the topic was most prevalent early in the 19th Century (about a fifth of the way from the left-hand side of the chart), and then declines in frequency. That too requires an explanation, but that too is beyond the scope of this essay. More generally, each topic has a temporal distribution and those distributions need to be explained: Why are some themes culturally salient at a given time, and other themes not?
Jockers has this kind of data about each of 500 topics that occur in those 19th Century novels. He’s placed that information on a web page where anyone can look up a particular theme. I urge you to go there and explore this data for yourself.
* * * * *
In his ninth chapter, Influence, Jockers combines results from both his stylistic and thematic investigations in an effort to understand the pattern of influence of earlier upon later writers. His argument is that a text will be very similar to the texts that influence it and that we can measure similarity by using the stylistic and thematic features identified in those investigations. My argument is that, whatever that analytic work says about influence, which is a traditional topic in literary criticism, it can be fruitfully interpreted as evidence that literary culture changes through an evolutionary process and that, for this corpus, that process is directional.
Given that Jockers himself does not undertake such an analysis, we should specify a reason for doing so. What is it about Jockers' analysis that admits, and even calls for, such an interpretation?
Some Simple (Imaginary) Examples
To understand that we need to take a careful look at what Jockers did in his investigation of similarity between texts. To that end I am going to give a simplified account based on imaginary examples.
The general idea is to measure each text on a number of traits or features and then compare measurements. Jockers has identified a combined total of almost 600 stylistic and thematic traits (each of which can be characterized by a numerical value) in his corpus of over 3000 works. Let us imagine a very simple case where we’re measuring a handful of texts on only two traits. It doesn’t matter what those traits are. But it would be nice to put meaningful labels on the X- and Y-axes of our feature space. Let’s say we’ve got one stylistic trait, the frequency of the word “she,” and one thematic trait, the proportion of words in the text that are taken from the MARRIAGE 1 topic.
This diagram illustrates the basic idea:
The diagram shows a two-dimensional feature space in which a text’s value on the MARRIAGE 1 trait (the percentage of words taken from that topic) is indicated along the X axis (horizontally) and its value on the stylistic trait, the frequency of “she”, is indicated along the Y axis. I’ve indicated the positions of two imaginary texts, A and B, in this space.
We can calculate the (Euclidean) distance between those texts using the standard formula: d(A, B) = √[(x_A − x_B)² + (y_A − y_B)²], where the x values are the two texts’ positions on the MARRIAGE 1 axis and the y values their positions on the “she” axis.
That distance is a measure of how similar the texts are with respect to those features. The distance between highly similar texts will be short, while that between dissimilar texts will be long.
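For readers who prefer code to notation, the distance calculation can be sketched in a few lines of Python. The feature values below are invented for illustration (a frequency for “she” and a proportion of MARRIAGE 1 words); they are not taken from Jockers’ data:

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two texts represented as feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two imaginary texts, each measured on two traits:
# (frequency of "she", proportion of MARRIAGE 1 words)
text_a = (0.020, 0.15)
text_b = (0.012, 0.05)

print(euclidean_distance(text_a, text_b))  # ≈ 0.1003
```

The same function works unchanged for higher-dimensional feature vectors, such as the 578-feature representation discussed below; the sum simply runs over more coordinate pairs.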
In the following graph we have six texts charted:
As in the first case, the lengths of the edges (lines) between the texts (nodes) are proportional to the distances between the texts in this two-dimensional feature space.
The next graph is pretty much the same except that I’ve put arrowheads on all the edges. The text at the tail end of an arrow was written before the one at the head end. I’ve also rendered a selected few of the edges in black and the rest of them in light gray. Notice that all the black edges point in the same direction, making it a unidirectional chain.
One could pick out many similarity chains in this graph, but I’ve chosen the only one that includes each node.
The layout of this chain makes it obvious that these imaginary texts do not line up in temporal order from left to right. We see that texts five and six are between texts three and four in left to right order. The most recent text, six, is roughly the same distance from two and three as it is from its immediate predecessor, five, and closer to them than it is to four.
Now consider this diagram in which we have three unidirectional chains, each involving six texts (for sake of diagrammatic clarity, I’ve not indicated any non-chain edges):
Two of the chains, the blue and the green, are like the one in the previous graph in that they don’t line up in temporal order from left to right. The third chain, in black, does line up in temporal order from left to right.
But what if all of the chains lined up in temporal order from left to right?
That, in effect, is what happened when Jockers performed his analysis. I say “in effect” because Jockers did not examine individual unidirectional chains in his graph, nor have I done so myself. But the result that he got, which we’ll examine in a minute, implies that that’s what happened.
That requires an explanation. In fact, it requires two explanations. We need one account to explain why a given unidirectional chain charts as a diagonal in the feature space. That implies that the measured traits are not independent of one another; rather, they are highly correlated. We need another account to explain why most, though perhaps not all, of the unidirectional chains are of this type.
The first account is about the internal construction of novels in its relation to change. The second account is about how novels function and circulate in society.
With this in mind, let’s now look at what Jockers did.
The 19th Century Anglophone Novel
Jockers took most, though not all, of his 500 thematic features, and most of his stylistic features, and combined them into a single model with 578 features for each of the 3346 texts in his corpus. He then calculated the pair-wise similarity of all the texts and tossed out all values below a certain relatively high threshold (pp. 162 ff.). The result is a graph having 3346 nodes and 165,770 edges embedded in a space with 578 dimensions, one for each feature. As in the examples in the previous section, each text is represented as a node in the network and the similarity between two texts is represented by the edge (or link) connecting them. The length of an edge is proportional to the distance between the two texts, so highly similar texts sit close together.
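The construction just described can be sketched in Python with toy data. Everything here — the three tiny “texts,” their feature values, and the threshold — is invented for illustration; Jockers’ actual model has 578 features per text and 3346 texts:

```python
import math

def distance(a, b):
    """Euclidean distance in feature space; small distance = high similarity."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy corpus: each "text" is a short vector of feature values.
texts = {
    "t1": [0.10, 0.30, 0.05],
    "t2": [0.11, 0.28, 0.06],
    "t3": [0.40, 0.05, 0.30],
}

# Tossing out low-similarity pairs is equivalent to keeping
# only pairs whose distance falls under a threshold.
THRESHOLD = 0.1
edges = []
for u in texts:
    for v in texts:
        if u < v:  # consider each unordered pair once
            d = distance(texts[u], texts[v])
            if d < THRESHOLD:
                edges.append((u, v, d))

print(edges)  # only the (t1, t2) pair survives; t3 is too dissimilar
```

The surviving edges, taken together, form the similarity network; at full scale one would hand such a network to a package like Gephi for layout.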
In principle it is the same kind of mathematical object as those we examined in the previous section. But how do you visualize such a huge graph? You don’t. You get a computer to do it. One of the things the software does is project those 578 dimensions onto two dimensions so that we can create a visual representation. Here’s that representation (Figure 9.3 in the book, p. 165, color version from the web):
The visualization routine (Force Atlas 2 in the Gephi package) is designed to lay out the graph so that similar texts are close together. As in our simplified example, there was no temporal information in the data from which that graph was derived. As a side effect of the layout process, however, the graph is also arranged more or less (there are a few outliers that are out of order, p. 167) in temporal order, going from older to newer, left to right. It should be obvious that Jockers would not have gotten this result if the unidirectional chains in his graph were not themselves physically ordered from left to right in feature space.
Here is Jockers’ comment on this result (pp. 164-65):
The fact that they line up in a chronological manner is incidental, but rather extraordinary. The chronological alignment reveals that thematic and stylistic change does occur over time. The themes that writers employ and the high-frequency function words they use to build the frameworks for their themes are nearly, but not always, tethered in time. At this macro scale, style and theme are observed to evolve chronologically, and most books and authors in this network cluster into communities with their chronological peers.
On Jockers’ first sentence, it is neither incidental nor extraordinary IF an evolutionary process regulates cultural change. For evolution proceeds through “descent with modification,” as Darwin put it, and that goes for cultural as well as biological evolution. If a later individual is modified from its immediate predecessors, it will in fact resemble them a great deal; the modifications do not change the basic character of the descendants.
We must further realize that Jockers’ graph in effect represents a collective mentality. Jockers wasn’t examining the minds of millions of 19th century readers of English-language novels in Britain and America, but the history of those novels is a function of the tastes and interests of those readers. Those books wouldn’t have been written if publishers didn’t think they could sell them to the public. Those tastes changed gradually, with the themes and styles of novels appealing to those tastes changing gradually as well.
Why Did Jockers Get That Result?
Let’s return to two questions we posed earlier: 1) why do unidirectional chains chart as diagonals in feature space, and 2) why does change appear to be coordinated across all chains? I don’t have really good answers for these questions, but I do have some thoughts.
Let us start with the second question, which seems easier to me. These novels are not written each in its own encapsulated mini-society. They are shared among groups within the larger society, ultimately in this case, the English-speaking world of America, Britain, Ireland, and Scotland in the 19th century. Literary texts are a means, though not the only one, by which people share their values, desires, attitudes, and aspirations (see, for example, my post Seven Sacred Words: An Open Letter to Steven Pinker).
As such, the texts that circulate in a given group will draw on the same set of values and concerns. As the group changes over time its values and concerns may change as well, but in a fairly concerted fashion. In this process some popular genres may no longer seem compelling because they do not easily encompass newer concerns. By the same token, formerly obscure genres may more readily express newer concerns and so texts in those genres will proliferate at the expense of texts in older genres (recall Jockers’ Table 6.5, reproduced above).
On the first question, forgetting about temporal order for a moment, diagonal chains in feature space indicate that the features of that space are interdependent. Variations in feature values are correlated with one another. One conception of literary texts, that they are “organic wholes,” implies that their traits are highly correlated. And whether or not one is a student of organic wholism, it is obvious that many traits are correlated with one another.
For example, this theme, AFFECTIONS PASSIONS FEELINGS OF ATTACHMENT, shows a temporal course that is similar to that of MARRIAGE 1, a decline through the 19th Century:
The designator Jockers gave to it tells us why; novels about marriage are also likely to involve affections, passions, and feelings of attachment. We expect some kind of coherence and interdependence among the various elements that constitute a literary text, however those elements are identified.
No, what’s interesting is the fact that changes in patterns of textual coherence have a consistent temporal direction. Why doesn’t the pattern of similarity among successive ‘generations’ circle back on itself or just meander about in feature space? It would appear that, contrary to conventional wisdom, there is something like progress in literary culture. But “progress” is just a word. Using it in this context doesn’t explain anything, though it does point up the issue.
The traditional way of accounting for directionality, of course, is through teleology. That would imply that the 19th Century novel is evolving toward something. The 20th Century novel, perhaps? And what’s that evolving toward? No, biological evolution dispensed with teleological thinking and it should be banned from cultural evolution as well. Whatever the novel is doing as it evolves, it isn’t seeking something in the future. It’s seeking something in the present. What?
As much as I would like to meander around and about on that question, I’m going to leave it alone. I think it’s a matter for a generation or two of serious investigation.
What Remains to be Done?
Everything, mostly. For one thing, we need an explicit model in which the cultural correlates to genes, phenotypes, and species are identified, for those are the central (theoretical) actors in biological evolution. A great deal has been written on this issue in the past two decades, though, alas, most of the work written under the rubric of memetics is somewhere between questionable and worthless. I’ve worked on these issues a great deal, but will not attempt to summarize that work here, though I’ll list some of it at the end of this essay.
I note, however, that such a model has to work on several scales, as does biological evolution. At the macroscale we have the long-term evolution of literary culture; that’s what Jockers has investigated. At the mesoscale we have the detailed study of individual literary works; that is where we investigate the interdependence of the elements constituting works of literary art. At the microscale we have the inner mechanisms of language, feeling, and desire, the bricks and mortar of literary cathedrals.
Of course, the student of literary evolution doesn’t have to undertake any of that work from scratch. There is a great deal of excellent work from which to start. But recasting that work in new terms is likely to be difficult, not to say controversial. And there will be much new work to do. I would expect the emerging disciplines of digital criticism to take the lead in this effort, for, as I said at the beginning of this essay, they are the only ones providing us with new conceptual objects.
Will the profession allow them to thrive?
Appendix: In Search of Models for Cultural Evolution
I’ve done quite a bit of work on Jockers’ Macroanalysis which I gathered into a single working paper: Reading Macroanalysis: Notes on the Evolution of Nineteenth Century Anglo-American Literary Culture. I gave considerable attention to Fiedler’s Love and Death in the American Novel in those notes.
Earlier in my career I did quite a bit of work on the notion of cultural complexity; much of that work was done in conjunction with David Hays. That work is not, for the most part, directly relevant to the current discussion, but you can find it at a website: Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society.
The lines of investigation in my book on music, Beethoven’s Anvil: Music in Mind and Culture (Basic Books 2001), are more directly relevant to this essay. In the second and third chapters I undertook to conceptualize the music-making group as a neural entity. The upshot is that, when members of a group are interacting in a certain way, their nervous systems become coupled into a single dynamical system. You can download final drafts of those chapters HERE. I discuss gene-like and phenotype-like entities on pp. 191-194 and 219-221.
I’ve got two working papers in which I discuss gene-like entities, aka memes, in some considerable detail. The popular notion of memes as autonomous bots of some kind is intellectually empty. These two papers explain why and present a coherent alternative:
- The Evolution of Human Culture: Some Notes Prepared for the National Humanities Center, Version 2 (http://ssrn.com/abstract=1631428)
- Cultural Evolution, Memes, and the Trouble with Dan Dennett (http://ssrn.com/abstract=2307023)
In the terms I develop in that second paper, the function words Jockers uses in his stylistic work would be connectors and the content words in his thematic work would be designators.
Finally, we have Cultural Evolution: A Vehicle for Cooperative Interaction between the Sciences and the Humanities (http://ssrn.com/abstract=1644978):
The study of cultural evolution requires a comprehensive approach to les sciences de l’homme using methods and insights from researchers trained in both the humanities and the sciences. Only humanists have the wide-ranging knowledge of cultural phenomena necessary for effective analytic and descriptive control of the primary phenomena; without such control model building and theory testing are pointless. Scientists, on the other hand, are beginning to develop tools for thinking about population-wide maintenance, propagation, and incremental change of cultural codes. At the micro-scale we need to understand, not only perceptual and cognitive processes, but, most critically, the negotiation of meaning through interaction. At the macro-scale we need to see how changes in cultural codes support the emergence of new mentalities. Taken in sum these efforts will show us how the design of cultural codes emerges from the collective efforts of populations where each individual negotiates his or her life transaction by transaction.
* * * * *
Bill Benzon blogs at New Savanna.
Monday, September 15, 2014
by Scott F. Aikin and Robert B. Talisse
The case for God's existence is unsuccessful. Theistic arguments either beg the question, or involve deductive fallacies, or don't really prove what they promised to. Furthermore, the atheistic arguments all seem decisive – there's no morally acceptable solution to the problem of evil, and there is no need for God in a naturalistic universe. Current theistic replies are mostly rear-guard actions in reaction to the atheist – more apology than apologetics. The evidence overwhelmingly supports the thesis that God doesn't exist. And this is good news, too. God is just a cosmic bully. Such a being might provide the universe with meaning, but in so doing, makes it all pointless, especially human autonomy – which, by hypothesis, would have to resolve itself into God's will. God's existence would be a moral tragedy, so good riddance to bad rubbish.
Call the view expressed above Positive Evidential Atheism (PEA). It is the two-part thesis composed of an evidential claim and a positive assessment: (1) our overall available evidence supports belief that there is no God and (2) God's non-existence is a good thing. We endorse PEA. Paul Moser challenges PEA in his book The Severity of God (2013). Moser argues that if God exists, He would be silent; moreover, He would be particularly silent to those who accept PEA. This "divine hiddenness" explains why PEA's advocates think they have no evidence for God's existence. Given that God hides, the evidence is misleading. In response to Moser, we defend PEA along two lines: (1) PEA needn't be undercut in the fashion Moser takes it to be, and (2) divine hiding can be rendered as supporting PEA.
Let's begin by considering the divine hiddenness view in a little more detail. It runs like this: Even though the problem of evil may seem unanswerable, we humans are not in a position to know that God would not allow severe evils in the world. Moreover, we do not know if God intervenes in this universe with individuals engaged in proper relationships with Him. Access to evidence of God is not a matter of looking and seeing, but a matter of searching, yearning, and then being transformed. God, in fact, wants relationships with us, not just our assent to claims of His existence. And so He hides from us until we are ready for His presence.
Notice that the divine hiddenness view reconciles the fact of widespread disbelief with God's capacity and goodness. God is silent until we are ready to hear. Were He to reveal himself when we are not ready, the relationship He desires and we need would be perverted. God's ways, in short, are not our ways; to expect otherwise is nothing short of idolatry. Divine hiding, so the reasoning goes, is something we should positively expect of a God truly worthy of worship.
God's motivation to hide is especially pronounced in the case of the positive atheist. As Moser has it, "God typically would hide God's existence from people ill-disposed toward it"; as they are ill-disposed toward God, "their lacking evidence for God's existence is not by itself the basis of a case for atheism"(2013:200). Thus positive atheists should expect that their evidence regarding God's existence is misleading, since they are precisely those for whom God's presence will be elusive. Hence PEA's positive assessment undercuts its evidential claim. Moser calls this the "undermining case" against PEA. Yet, as we will argue, the undermining case is not decisive.
First note that PEA's positive assessment and evidential claim are not logically separable in the way that Moser assumes. PEA concedes that if God exists, He is the only object worthy of worship. But worship is itself morally suspect. Worshipping God is an all-in, complete commitment – one gives one's life completely over to Him. All one's meaning and value, then, comes from Him. To give oneself completely over to anyone, to have that entity determine all the values and meanings for you, is to completely give up one's autonomy. To demand of others that they completely give up their autonomy is immoral, and to require that they do so with the very last act of their own singular volition is positively sadistic. Consider, then, that God demands that we worship Him, and punishes those who fail to do so. The worship of anything is immoral; thus God's existence would be a morally bad thing. Yet, God by definition must be morally perfect, and His existence must be a good thing; therefore, God is a morally impossible entity. So the reasons for PEA's positive assessment of God's nonexistence are also part of the evidence for His nonexistence. Rather than undercutting the evidence against God, the positive assessment contributes to the evidence for atheism.
Here's a second defense against Moser's undermining case. The divine hiddenness view holds that PEAs can see no evidence for God's existence because God deliberately hides from them. The view seeks to explain why PEAs can't see any reason to accept God; however, the view seems unable to explain how one becomes a positive evidential atheist. After all, one does not arrive in this life, fresh from the womb, despising the very idea of God. Rather, one typically thinks one's way into an atheist position from a theistic one. How, then, can divine hiddenness account for the fact that very often individuals become atheists after considering their evidence while espousing a theist view? If God hides from those who are disposed to reject Him, why does He also seem to hide from those who yet believe but begin to doubt, or merely wonder? Now, on Moser's view, it may be in character for Him to withdraw even from the doubter – He is severe, after all – but this kind of severity is morally unacceptable. Thus, we are brought back to PEA's positive assessment: Moser appeals to God's severity in order to explain the fact that the atheist's evidence against God looks so compelling; but the existence of a severe God would be morally atrocious. And that's further evidence for positive evidential atheism.
A Rank River Ran Through It
It says something about a city, I suppose, when there is heated debate over who first labeled it a dirty place. The phrase “dear dirty Dublin”, used as a badge of defiant honor in Ireland’s capital to this day, is often erroneously attributed to James Joyce. Joyce used the term in Dubliners (1914), a series of linked short stories about that city and its denizens. But the phrase goes back at least to the early nineteenth century and the literary circle surrounding Irish novelist Sydney Owenson (Lady Morgan), who remains best known for her novel The Wild Irish Girl (1806), which extols the virtues of wild Irish landscapes, and the wild, though naturally dignified, princess who lived there. Compared to the fresh wilderness of the Irish West, Dublin would have seemed dirty indeed.
The city into which I was born more than a century later was still a rough and tumble place. It was also heavily polluted. This was Dublin of the 1970s.
My earliest memories of the city center come from trips I took to my father’s office in Marlborough St, just north of the River Liffey which bisects the city. My father would take an eccentric route into the city, the “back ways” as he would call them, which though not getting us to the destination as promptly as he advertised, had the benefit of bringing us on a short tour of the city and its more unkempt quarters.
My father’s cars themselves were masterpieces of dereliction. Purchased when they were already in an advanced stage of decay, he would nurse them aggressively till their often fairly prompt demise. One car that he was especially proud of, a Volkswagen Type III fastback, which had its engine to the rear, developed transmission problems and its clutch failed. His repair consisted of a cord dangling over his shoulder and crossing the back seat into the engine. A tug at a precisely timed moment would shift the gears. A shoe, attached to the end of the cord and resting on my father’s shoulder, aided the convenient operation of this system. That car, like most of the others in those less regulated times, was also a marvel of pollution generation, farting out clouds of blue-black exhaust which added to the billowy haze of leaded fumes issuing from the other disastrously maintained vehicles, all shuddering in and out of the city’s congested center at the beginning and end of each work day.
A route into the city that I especially liked took us west of the city center, and as we approached Christ Church Cathedral I would open the window to smell the roasting of the barley which emanated from the Guinness brewery in the Liberties region of the city, down by the Liffey. Very promptly I would wind up the window again as we crossed over the bridge, since the reek of that river was legendarily bad.
The Irish playwright Brendan Behan wrote in his memoir Confessions of an Irish Rebel (1965), “Somebody once said that ‘Joyce has made of this river the Ganges of the literary world,’ but sometimes the smell of the Ganges of the literary world is not all that literary.”
Historically, the River Liffey received raw sewage from the city and though a medical report from the 1880s concluded that the Liffey was not “directly injurious to the health of the inhabitants” — in the opinion of these doctors crowded living and alcohol consumption were the main culprits — the report concluded nonetheless that the Liffey’s condition “is prejudicial to the interest of the city and the port of Dublin.” It was time to clear up the mess.
The smell of the Liffey, like that of other polluted waterways, came not just from the ingredients that spilled into it, but also from the algae that bloomed on the excess nutrients accompanying the solid waste and seeping into the water from the larger landscape. The death and sulfurous decay of those plants contributed to those noisome aromas.
Despite the installation of a sewage system for the city in 1906, and its expansion in the 1940s and 1950s, the smell of the river remained ripe, as Brendan Behan attested. Even in the late 1970s the smell of the river persisted and was remarked upon in popular culture. The song “Summer in Dublin” by the band Bagatelle contains the lines, “I remember that summer in Dublin/And the Liffey it stank like hell.” It was a big hit in the summer of 1978.
So why did the smell persist? Part of the problem with the tenacity of the Liffey’s pollution, and its associated odors, is that the river is a tidal one. It ebbs and flows into polluted Dublin Bay into which raw sewage continued to be dumped long after the creation and expansion of municipal sewage treatment plants. The rancid smells of the River Liffey remained powerful as I was motored over it with my father in the 1970s.
On other occasions, this time with my mother, I would get to observe the streets of Dublin city at a leisurely pedestrian pace. She would take one of her six kids into the city on her Saturday morning shopping rounds and would walk the selected child into the ground. The footpaths of the city were strewn with litter — sweet wrappers, newspapers, paper bags, plastic bags, discarded fast-food, random scraps of paper, cigarette butts — dog feces dappled the curbs, vomit pooled in doorways, the narrow streets were car-congested, and at evening-time, snug on the smoke-belching bus trundling home, I’d watch the sun sinking, gloriously crimson, hazily defined, leaving behind the bituminously smoky atmosphere of Dublin for another day.
It seemed like there was no end in sight to Dublin’s pollution problem, but clearly the situation could not be left to go on forever. And even if a nineteenth-century medical commission was unconvinced that Dublin’s environmental pollution, from the river at least, posed a grievous problem, the ubiquitous squalor of the city was nonetheless not conducive to the good health of Dublin’s citizens. The stench of the river, the garbage in the streets, the smog of the city had to be remediated. As one Reuters report from the autumn of 1988 put it: “A thick pall of smoke from thousands of coal fires has become trapped over Dublin in freezing, wind-free weather, leaving a million coughing Dubliners to face streets at midday so gloomy it looks as if night had already fallen.” The links between high levels of smog and increased death rates concerned the medical community, and a spokesperson from a major Dublin hospital reported that "Even patients without respiratory complaints have been complaining about throat irritation and coughing." (Toronto Star).
So change eventually came, some of it, admittedly, compelled by European legislation, a reasonable price for Ireland’s economic union with Europe. Acting on the Air Pollution Act, 1987, the capital city was declared a smokeless zone in 1990. It became illegal to sell or distribute bituminous coal, the smokiest kind, in all parts of Dublin city and its suburbs. By the early 1990s the city had lost the aroma of soot and the Dublin sunset lost some of its luster, but, in compensation, its air quality dramatically improved. Smoke levels in Dublin city dropped from 192 microgrammes per cubic meter of air in December 1989 to a mere 48 microgrammes the following December.
The River Liffey is generally less aromatic these days, though it is still very much a polluted urban river. Massive improvements, including the building of a new treatment plant near the harbor about ten years ago, have reduced raw sewage both in the river and in Dublin Bay. That being said, the levels of faecal coliform (that is, E. coli) associated with human waste remain "disturbingly excessive" in some stretches of the River Liffey. There are heavy odors emanating from the new plant, an expensive problem that will need to be resolved.
I glanced down at the river this past summer while I was visiting home and saw that garbage still bobs up and down in the tidal waters, or clings to the algae at its bricked-up banks, before being inexorably tugged out to sea.
Follow me on Twitter @DublinSoil for 140 character updates on my columns. Links to previous 3QD columns here.
Jamie Wyeth. The Thief. 1996.
The View From Nowhere
"Well, I haven't been there yet, and shall not try now."
~ Conrad, Heart of Darkness
Marlow, the protagonist of Conrad's Heart of Darkness, remorsefully blames an old obsession with maps for his eventual captaincy of a ramshackle steamship, set on a doomed mission up the Congo River. But Marlow was irretrievably fascinated by the blanks on the map – those were the places that were worth going. These days, when we look at a map, we expect objectivity and specificity, or to put it bluntly, the truth. Our sense of entitlement has only grown with the thoroughness with which maps have enmeshed themselves in our daily lives, whether via the GPS devices that guide our cars or the maps on our smartphones that help us walk a few blocks of a city, familiar or not. We may forego the flâneur's pleasure of asking a stranger for directions, but where a certain calculus is concerned, it seems a small price to pay for getting us, without undue delay, to where we need to be.
There are no more places where cartographers must write terra incognita, or where myths and rumors are recruited as phenomenological filler. For just as nature abhors a vacuum, a map is a canvas that demands to be crammed with seemingly confident observations, and it would appear that every nook and cranny of the planet has already had some physical characteristics reassuringly assigned to it. Thus when maps fail us, we are left to decide whom to blame – the map, or ourselves.
I will give you a hint: we never blame ourselves. Rather, it is the map that is inadequate. But what this really implies is our refusal to abandon the conviction that some future map will capture the truth. As maps grow ever more pervasive, it becomes all too easy to pass over the obvious fact that cartography is a fundamentally social practice. Consider not only how immersed we are in maps, as with the example of GPS, but also how extensively, constantly, and surreptitiously we ourselves are mapped. Every time you allow an app on your smartphone to "Use Your Location," indeed with every swipe of a credit card, you are effectively performing an offering of yourself, or rather some quantifiable aspect of yourself, to some kind of mapmaking project, the vast majority of which you will never be aware of, let alone see. We are, in fact, subjects of a distinctly cartographic flavor of what Michel Foucault called the clinical gaze.
When we are thus swaddled in information that provides so much convenience and seems to ask so little in return – what it does ask amounts to a bribe, though an exceptionally effective one – the occasional failure of maps can be galling (or sometimes entertaining). Because we are convinced that a better map is always already right around the corner, this anxiety does not last. But what comfort is there when we are confronted with things that resist mapping?
The classic thought experiment here is Benoît Mandelbrot's seminal 1967 paper, published in Science, "How Long Is the Coast of Britain?" For the present purposes, I will only describe Mandelbrot's premise: the measured length of an irregular natural curve such as Britain's coastline depends on the unit of measurement. So if we were to use a yardstick with a unit length of 200km, we might conclude that the length of the coastline is 2400km, whereas if our yardstick were 50km, we would assert a length of 3400km. Indeed, as the unit of measurement approaches zero, the observed length of the coastline approaches infinity.
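For readers who want to see the arithmetic, the paradox can be sketched with the empirical relation that Mandelbrot's paper builds on, L(ε) = F·ε^(1−D), where ε is the ruler length and D is the fractal dimension of the coast (roughly 1.25 for the west coast of Britain). The calibration of the constant F against the 200km/2400km figure above is my own assumption, purely for illustration:

```python
# Richardson's empirical relation, discussed in Mandelbrot's paper:
#   L(eps) = F * eps**(1 - D)
# eps: ruler length in km; D: fractal dimension (~1.25 for Britain's
# west coast); F: a constant, here calibrated (an assumption for
# illustration only) so that a 200 km ruler yields 2400 km.

D = 1.25
F = 2400 * 200 ** (D - 1)  # calibrate so that coastline_length(200) == 2400

def coastline_length(eps_km: float) -> float:
    """Measured coastline length (km) for a given ruler length (km)."""
    return F * eps_km ** (1 - D)

# As the ruler shrinks, the measured length grows without bound.
for eps in (200, 100, 50, 10, 1):
    print(f"ruler {eps:>4} km -> coastline ~ {coastline_length(eps):,.0f} km")
```

With these assumed constants, a 50km ruler gives about 3,394km – close to the 3,400km figure cited above – and the length keeps diverging as ε approaches zero, which is precisely Mandelbrot's point.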
For Mandelbrot, this is a mathematical problem, and he uses the example to posit a method for approximating length. Eventually these and other investigations would lead him to elaborate the theories of self-similarity for which he is justly famous. But in the introduction to the paper, Mandelbrot writes:
The concept of "length" is usually meaningless for geographical curves. They can be considered superpositions of features of widely scattered characteristic sizes; as even finer features are taken into account, the total measured length increases, and there is usually no clear-cut gap or crossover, between the realm of geography and details with which geography need not be concerned.
One of the advantages of Mandelbrot's mathematical approach is that it allows him to elide that essential question: Where is the "clear-cut gap or crossover"? For mapmakers, identifying that gap or crossover is at the heart of cartography. It may well decide the ultimate utility of a map to someone navigating a route in the physical world. And this is a decision that must be made by people. It is not enough that the map is right; it must also be right in the right way.
I want to be clear that I am not talking about what is commonly called 'usability', or the loose set of principles that designers use to make legible their interventions in the world. 'Usability' is a red herring, in the sense that the process of dressing up cultural artefacts, whether physical or virtual, for 'usability' occurs only after the decisions about what should be 'usable' (i.e., legible) have already been made. To invent a brief and perhaps absurd example, consider a highway map. If we are driving, we use such a map to get from A to B, where points A and B are reachable by car. Thus, highways and side roads will be prominently featured; other geographic features such as elevation may or may not be relevant. But cartographers also locate significant landmarks to inspire detours (for an Information Age example, see Rand McNally's TripMaker), thereby implying that these are good things that belong on a map. On the other hand, these same maps will never include locations that we may want to avoid, such as Superfund sites. It is not difficult to imagine that a family with young children would want to know about – and avoid driving through – regions thick with pollution from, say, coal-fired power plants. We may initially react to this by saying "But these things do not belong on a map." Well, why wouldn't they? If instead our design brief were to create a map that would allow us to determine the healthiest route from A to B, our highway map might look very different indeed.
The decision to not include such items is intrinsically ideological and, as we will see below, also explicitly political. It is only through repeatedly being shown what a map is that we come to believe what a map should be. We are rarely told what a map is not. But at each turn we are assured of the objectivity that is at the heart of the enterprise.
Objectivity, understood as a sort of neutral omniscience, was tartly characterized by the philosopher Thomas Nagel as "the view from nowhere." But having nowhere as one's originary viewpoint is akin to being lost inside one of Mandelbrot's endless, scale-free fractals. It also fails us when we attempt, as we must, to relate our knowledge of the world to the world itself (although for Nagel, reconciling the two is precisely what is needed to create an individual's worldview). Thus objectivity, or at least the set of social relationships and productions of knowledge that we ascribe to the idea of objectivity, is in fact a moral stance. Why?
Like anything else, objectivity has its own history. In a fascinating paper called "The Image of Objectivity" (and later a much more extensive book) Lorraine Daston and Peter Galison unpack this "panhistorical honorific [bestowed on] this or that discipline as it comes of scientific age." For them, the workings of objectivity are most apparent when manifested visually, specifically in the way atlases of many varieties – anatomical, botanical, X-ray – have been created and consumed over the centuries. These are not works of neutral omniscience, but artefacts that tell us "what is worth looking at and how to look at it." And to say that this is a moral practice is not far-fetched. They find that
…objectivity is a morality of prohibitions rather than exhortations, but no less a morality for that. Among those prohibitions are bans against projection and anthropomorphism, against the insertion of hopes and fears into images of and facts about nature: these are all subspecies of interpretation, and therefore forbidden. (p122)
Cartography has evolved in a similar fashion. From early cartographers inscribing empty spaces on their maps with "Here Be Dragons" (actually, they didn't) to Google Earth, one might think that there is a flawed but inexorable march towards an ever-finer approximation of reality (if not objectivity). After all, as Daston and Galison write, the moral imperative of objectivity recognizes that "the phenomena never sleep and neither should the observer; neither fatigue nor carelessness excuse a lapse in attention that smears a measurement or omits a detail; the vastness and variety of nature require that observations be endlessly repeated." And yet, there are forces at work that are greater than cartography and the technologies that have transformed it in the last few centuries, and these too should be recognized.
I came across a most extraordinary example of these other forces last week, in a piece of long-form reportage by the Times-Picayune's Brett Anderson. "Louisiana Loses Its Boot" is Anderson's attempt to reconcile the rapidly changing (that is, receding) coastline of the state with the fact that the official state map has not been updated in fourteen years, and isn't likely to be any time soon. What he finds is a toxic mix of, on the one hand, galloping erosion and, on the other, benighted legislation that seems dead-set on ignoring the former. As a result, "the boot is at best an inaccurate approximation of Louisiana's true shape and, at worst, an irresponsible lie." (All citations below are from this article.)
To be sure, Louisiana was always a devilishly difficult entity to map. The Mississippi is a notoriously fickle river, given to not just flooding its banks but rewriting them wholesale, as Harold Fisk's maps from the 1940s illustrate. And yet it is precisely this process that replenished the coastline: new sediment allowed vegetation to take hold and create adequate breakwaters and barrier islands, which in turn kept hurricanes from being the Gulf of Mexico's shock troops. The coastline was shifting constantly, but it was not receding. In fact, it was expanding. But once the Army Corps of Engineers "stabilized" the Mississippi in order to ensure commerce, this process of replenishment was severely stunted. As a result, hurricanes such as Katrina have had much greater impacts than would otherwise have been the case. The need for structural modification is not limited to the river, either. Louisiana is the nation's second-largest oil producer and has "over 9,000 miles of navigation and pipeline canals…dredged in the state's coastal marsh." Adding projected sea-level rise to the mix does not promise to make things any more pleasant.
One would think that the physical uncertainties of the situation would therefore call for as 'objective' an approach to mapmaking as possible. After all, even without factoring in human impact, it is probably difficult enough to decide what is 'walkable land' and what is not. Instead, the conflicting priorities of the fishing and energy industries have kept Louisiana's famously corrupt politics from mandating a responsible accounting. Additionally, "the Department of Transportation and Development and the U.S.G.S. would have to agree on a shape and then implement a costly replacement plan for images currently in circulation." Oh, dear. And the U.S. Supreme Court did its part to command the tides, too, decreeing in 1981 that "the state boundary of Louisiana was no longer an ambulatory line that could move in response to changes in the coastline, and was henceforth immobilized as a set of fixed coordinates."
In this case, we resist any sort of accurate map only in order to avoid blaming ourselves. We would rather have the maps lie to us, for as long as possible. In the meantime and wholly apart from this tragicomic legislative context, an acre of coastal land is being lost every hour. So even if there were agreement, what kind of map could be created that would do coastal Louisiana justice? In Anderson's view, one that would cast the situation in a clear and unforgiving light: hence the loss of the boot. Such a map could only be a political tool:
A more honest representation of the boot would not erase the intractable disagreements — around global sea level rise, energy jobs versus coastal restoration jobs, oil and gas companies versus the fishing industry — that paralyze state politics, but it would give shape to the awesome stakes, both economic and existential, that hang in the balance.
Anderson's campaign to make the map explicitly political goes against the cartographic gaze that I described above, with its decentralization of power and accountability. It is no wonder that his campaign has been met with resistance. But is it enough? When one looks at the current, tranquil state map of Louisiana, none of this decay, let alone conflict, is apparent. Of course, a citizen traveler might be roused to indignation, if not action, once he attempts to reach a destination that no longer exists: a swamp where there was once a camp, the vast reaches of the Gulf where there was once a causeway or a barrier island. But how many people are there of that ilk?
And so we have come full circle in cartographic irony: from speculative maps that included places that never existed, to objective maps that show us places that no longer exist, but pretend that they do. After all, what Marlow found, far up the Congo River and in the darkness of the human heart, could never be marked on a map. But for what can be recorded, whether it is Louisiana's coastline, the Arctic ice cap, or various star-crossed Pacific islands, we can only hope that eventually, as Borges once wrote, "in time, those Unconscionable Maps no longer satisfied."