Monday, March 10, 2014
by Akim Reinhardt
In an early episode of Mad Men, a character named Ken Cosgrove publishes a short story in the Atlantic Monthly. It's entitled:
"Tapping a Maple on a Cold Vermont Morning."
That's just about pitch perfect for the American literary scene circa 1960. The coating of influential New England literati is so thick on the young author, you can practically see it glisten.
But the reason I recently remembered "Tapping a Maple on a Cold Vermont Morning" had nothing to do with Mad Men or literature. Rather, it's because, of late, I've been remembering winter.
For much of the United States, including here in Maryland, it has been a particularly fierce winter. Not the snowiest necessarily, though there has certainly been snow. But long and cold.
This is my 13th consecutive winter in Maryland, and it's the first one that harkens back to my experience of onerous winters in harsher climes.
From the mid-1980s to the late 1990s, I toughed it out, spending the better part of seven winters in southeastern Michigan and another five in eastern Nebraska. These are serious winter places. They're not Siberia or Winnipeg, but they will punch you in the face, and you need to come to terms with that if you live there.
Southern Michigan winters, first and foremost, are just plain long. Snow usually begins falling in November and never quite goes away. Just when you think it might all melt off, boom! Another half foot covers everything. None of this March goes out like a lamb stuff. Every bit of March is winter. So is a chunk of April.
When will it end? you find yourself pleading aloud to no one in particular. It just goes and goes and goes. It grinds you down and forces you to get back up again. Every year you know what you're in for. Body blow after body blow. And you wonder to yourself how the people from northern Michigan and the Upper Peninsula, the ones who mock you for your soft, southern winters, how do they do it?
You have to find a way to adapt or you'll be downright miserable. I still remember the moment it happened for me. Sometimes I'm a slow learner. It wasn't until my fifth winter in Michigan. I had driven over to my friend Rae's house one night. And then the car died. That tan 1979 Dodge Dart with the cream interior. Thing never ran.
I no longer know why, as the details are long forgotten, but there was some reason why I had to hoof it back and forth. I think we decided we needed something from my house, which was about a mile away. A bottle of booze, a record, something. So I geared up. Boots, coat, etc. I went out into the quiet night, flakes fluttering about, and muscled through the black of sky and white of snow. I got to my house, grabbed whatever it was we wanted, then turned around and headed back. I was jogging and clopping through the snow to make the trip go faster. And then my imagination took over.
I was a Viking. The Scandinavian wind and snow whipping through my beard. Furry boots laced up to my knees, carrying me to war, horns sounding, a broad sword waving in my hand. My slight frame and average height much bigger in my mind's eye, I was eager for battle, my iron ready to slice torsos and sever heads.
Then all of a sudden, there I was, back at Rae's. Wow, I thought to myself, that went a lot faster than I expected.
I don't think I ever pretended to be a Viking after that, but I had learned the mental trick: embrace the winter. It's not going anywhere, so have fun with it. Own it. That's how you get through so thick a tome.
If Michigan's predictable, Nebraska's downright irrational. In every place I've ever lived (except for Arizona) people love to say: If you don't like the weather, just wait a half-hour, it'll change. But Nebraska's the only place I've ever been where that's actually true on a consistent basis.
Weather systems sweep across the Great Plains unimpeded. Like many a driver passing through on I-80, the jet stream caroms over the rolling landscape, racing eastward towards Chicago. And eastern Nebraska is also right about where the jet stream starts to bend, meaning you can be on either side of it in one of Mother Nature's heartbeats. If it dips below you, cold Arctic air. If it dances above you, warm breezes from the Gulf of Mexico. As a result, the Nebraska weather changes constantly, and the only reliable feature is the ceaseless wind.
One day the wind stopped blowing. All the chickens fell down.
That wind. It just ain't got no quit in it. And during the winter, that's not a good thing.
The only thing between you and the North Pole is a barbed wire fence.
The net effect of a jittery, fast-moving jet stream is micro-weather systems that often last about three days apiece. So the early part of the week could be sub-freezing while the back part of the week could be downright balmy.
I don't know how many times I've seen the thermometer rise or fall more than 50 degrees Fahrenheit in a 24 hour period. It's a good idea to keep a change of clothes in the trunk of your car.
I remember one year there was a massive blizzard in October. Not only hadn't the leaves fallen yet, they hadn't even turned. Big, broad green leaves caught a couple feet of snow. All night long the gunshot pop of snapping branches ricocheted through the air. The following April there was another blizzard, about a foot. But the nearly half-year between those two storms? Nary a flake.
I'd say that was atypical, but saying that would be redundant. Aside from the wind, there just isn't much that's typical about Nebraska weather.
When the cold weather does hit Nebraska, it's damn cold. Not cold like Minot, North Dakota or International Falls, Minnesota. But it's cold. When it comes, it comes. Sub-freezing goes without saying. Teens are common. But that single digit frigid is what really gets you. Once the mercury drops below 10F (that's about -13C), you really notice it. The quicksilver ain't so quick anymore. Add to it the incessant wind, and outside is not a pleasant place to be. Best get your ass around the corner of a building to find a windbreak, maybe take a nip from a bottle. And let's not even think about the occasional sub-zero temps (0F = -17.7C).
In places with moderate winters, like New York City or Philadelphia, winter feels like a metaphor for Death. In places like Michigan and Nebraska, you feel like you need to stay on your toes or you might actually die.
I remember this one winter night in Nebraska. For whatever reason, I suddenly had an overwhelming sensation of not wanting to die like the stereotypical, mid-20th century urbanite, your neighbors starting to wonder because the milk bottles are lining up outside your apartment door.
So I got in my little Ford Escort wagon and drove around aimlessly. I eventually came to a big empty field and parked in the ruts of frozen mud. After sitting there a while, I got out and walked to the middle of the field. The starry sky was enormous and the wind and snow swirled about ferociously, creating a sense of overwhelming desolation that Hollywood tries to replicate sometimes but can never get right. I lay down on the brittle, frosty grass.
This is a good way to die, I thought to myself. A real way to die. No milk bottles. And if I just close my eyes and continue to lie here, I realized, I probably would die.
I lay there for about ten or fifteen minutes. Then I opened my eyes, got up, walked to my little red wagon with the roll-down windows, and drove back to my apartment.
New York City is the most seasonal place I've ever lived. Not only does it have all four, but each of them is right about three months in length. You get a true sense of the earth's quarterly cycle in New York.
Growing up in Gotham, I knew winter and I didn't like it. Three months of mediocre winter is right in the sweet spot for complaining. Just enough to feel entitled. But five months of a Michigan winter? Or Lord knows how long on the circus wheel of a windy Nebraska winter? You don't really complain anymore. Not if you wanna be a happy person. There'd just be too much goddamn complaining. I bucked up.
After New York, the Midwest, and even a year of pure contrast in Phoenix, I was quick to assess the situation when I moved to Baltimore in 2001. New York, only 200 miles north, is the most obvious comparison. The winters in NYC are indeed a bit worse than Charm City's, but not that much worse.
A Baltimore winter, I've often said, is a real winter, but it's a short one. There's snow every year. Sometimes quite a bit. We set a local record with more than 70 inches a few years ago, which is lurching towards southern Michigan totals. Then again, some years there's hardly enough snow to notice. But there's always at least some. And it does get cold. You're bound to have some sub-freezing temperatures, most often at night. Maybe just a few nights, but there can be a good spell of it, depending on the year. Maryland's the South, but just barely.
The actual duration of a Baltimore winter is pretty consistent. It usually doesn't begin until New Year. December is late autumn, drizzly instead of snowy, chilly instead of downright cold. For the most part, down here White Christmas is just an old Bing Crosby song. And by the second week of March, winter's done. Come the 8th or 9th, whatever icy grip the season had on you is broken, and there's no going back. Planting your hydrangeas might be a gamble at that point, but not a bad one.
[For the record, I don't know nothin' about gardening. Following my advice will probably get your plants dead fast, so don't do it.]
By my reckoning, a Maryland winter is about ten weeks in all. Not even a full season in the conventional sense, and patches of it here and there honestly feel more like fall.
I'm happy with that. When I moved here, having already been hardened by Michigan and Nebraska, I found the typical Maryland winter to be some weak-ass shit. And that was perfectly fine by me. I had built up a sturdy winter psychology during my years in the heartland, and was happy to ease into a shorter, softer version of long nights and low sun.
I do enjoy the change of seasons, and there are things I like about winter in particular. It's quiet. It's pretty. It adds an especial dimension to social interactions around a hearth or in a cozy bar. But a little goes a long way. I feel like I've done my time, gracefully, and ten weeks of half-assed winter is A-OK by me. So over the years I remained content and grateful, always remembering how longer, colder, and snowier it could be.
Or so I'd thought.
Memory's a funny thing. Sometimes you think you remember. But then you realize you hadn't, really. Not until something visceral actually reminds you.
This winter made me really remember.
I remembered what a real winter is like. It started early. December was not merely autumnal; it was winter cold. The usual 10 weeks grew to more than 3 months. And too often this year, cold was cold. Not 30s and 40s, but lots of 20s and even 10s. Way too many nights in the 10s for my taste. And more than enough snow.
It made me remember.
I remembered how to get through it. I remembered what I don't like about it, and also its pleasantries. I remembered pretending to be a Viking, or enjoying jokes about chickens and barbed wire. I remembered being goddamned ready for it to be over already. And I'm beginning to remember a sincere yearning for the deep, melting joy of spring. The satisfaction that comes from shedding boots and coats, from sauntering about freely, and from finally feeling muscles loosened and shoulders relaxed by the sun's warm embrace.
As I sat writing this essay on March 5th, it was after yet another sub-freezing day with lows in the teens. After that, the days got a little better, although the lows continued to reside in the land of popsicles.
And then it broke. On Saturday the 8th, the first day of the second week of March, like clockwork, Old Man Winter heaved his death throes in Maryland as the thermometer cracked the 60 degree mark (about 15C).
I remember winter. I remember making peace with it. I remember loving it in ways both peculiar and joyous, engaging and resigned. And I remember having had enough of it.
Welcome, Spring. It's your turn to shine.
Seo Young Deok. Nirvana 2, 2010.
Snow on Hawaii (a medieval cosmology)
by Leanne Ogasawara
It is my second favorite essay of all time: C.S. Lewis's Imagination and Thought in the Middle Ages. First delivered as a lecture in 1956, the piece was later published posthumously in a collection of his essays in 1966. Unlike William Golding in my #1 favorite essay, the magnificent Hot Gates, C.S. Lewis does not seek to form arguments or to persuade. What he does instead is to transport the reader back in time, illuminating the medieval world-view using nothing more than words.
He begins his essay by urging the reader to perform an experiment. He says,
Go out on any starry night and walk alone for half an hour, resolutely assuming that pre-Copernican astronomy is true.
Look up at the sky with that assumption in mind. The real difference between living in that universe and living in ours will, I predict, begin to dawn on you.
Intrigued, I decided to take him up on his suggestion. It so happened that my beloved and I had found ourselves up on the summit of Mauna Kea, on the Big Island. Home to the world's greatest collection of large telescopes, the skies up there are dark and famously clear.
As a girl, I had wanted to become a cosmologist. It was my first great passion. And, in addition to reading astronomy books voraciously, I spent many nights using my amateur telescope to look up at the stars from my parents' house in Los Angeles. Growing up, I drifted away from cosmology, turning naturally toward philosophy. Still, I always loved the stars--for as Van Gogh said, they make me dream. Returning home to Los Angeles about twenty-five years after leaving it, I have been dismayed by their disappearance. What happened to all those myriad stars of my childhood? Indeed, I cannot recall the last time I saw the Milky Way--I had never seen it in Japan, and was sad to find it is now simply invisible from LA. It is disheartening, really, since the splendid vision of the stars at night is something we used to take for granted.
Fast forward to last week in Hawaii. Surrounded by snow, the summit of Mauna Kea sits above the clouds. As from the summit of Mount Fuji, you can stand and watch the clouds roiling beneath you. As with all mountaintops, the summit of Mauna Kea is magnetic and the views exhilarating. We were there to visit the KECK Observatory and were so fortunate to be there as they opened the dome to the night sky at sunset. The galaxies they observe are so far away that the work is now as much about subtracting or reducing the turbulence and distortions that lie in the way as it is about capturing the distant images themselves. So, as exciting as it was to watch the dome open onto the night sky and see the telescope begin to rotate into position, even more interesting was watching them fire up the laser for the adaptive optics system, to get the clearest possible images of galaxies that are very, very far away. I thought that, while sometimes it seems not much has really changed theoretically in astronomy since I was a girl (maybe dark energy and exoplanets?), it is this area of instrumentation and optics that has revolutionized astronomy during my lifetime.
After the laser shot up into the dark skies to create an artificial guide star for imaging the scientific target object, we stood there in the freezing cold, allowing our eyes to get used to darkness. It took several minutes, but finally they began to appear--- stars upon stars upon stars.
First was Jupiter, more beautiful than I had ever seen her; followed by several very familiar constellations--old friends that I had not seen in decades. And then finally--at long last--the vision of the Milky Way appeared before our eyes in all its majesty. 幽玄 (yūgen: a profound, mysterious beauty).
Staring up, I tried to do as C.S. Lewis suggested and imagine that pre-Copernican astronomy is true. The first thing that dawns on one is that for the Medievals, no matter how impossibly large the universe was to them, it was ultimately something finite--and this is perhaps something that generates a feeling of being embraced, because it forms a kind of edge, or frontier, as Lewis says.
You will be looking at a world unimaginably large but quite definitely finite. At no speed possible to man, in no lifetime possible to man, could you ever reach its frontier, but the frontier is there; hard, clear, sudden as a national frontier.
And, secondly, because the earth is at the absolute center, it is not just distance that is felt, but height. So, some stars are not simply a long distance from us, but they are far, far "above" us too.
In this way, beyond the gates of the moon, everything was in a timeless and heavenly realm. The stars and galaxies were therefore changeless, necessary and not open to Fortune. The moon was the gateway between our world of decay and change (and Fortune)---and the heavens, which were perfectly finite and regular. Things in the heavenly realm were not evolving and indeed, there were no ultimate causes and effects, not in the way astronomers look for today. On this topic, Aristotle posited an Unmoved Mover, and as Lewis explains:
The infinite, according to Aristotle, is not actual. No infinite object exists; no infinite processes occur. Hence we cannot explain the movement of one body by another and so on forever. No such infinite series, he thought, could exist. All the movements of the universe must therefore, in the last resort, result from a compulsive force exercised by something immovable...Accordingly we find (not now by analogy but in strictest fact) that in every sphere there is a rational creature called an Intelligence which is compelled to move, and therefore to keep his sphere moving, by his incessant desire for God.
This comes deliciously close to C.S. Lewis's famous Argument from Desire, but it also illuminates the meaning of Dante's theology of love. For the medievals, says Lewis, an unmoved mover does not move other things in terms of ends, like balls on a billiard table; rather, it is the things themselves that move out of their own desire, as food moves a hungry man or a mistress moves her lover.
A modern might ask why a love for God should lead to a perpetual rotation. I think, because this love or appetite for God is a desire to participate as much as possible in his nature; i.e. to imitate it. And the nearest approach to His eternal immobility, the second best, is eternal regular movement in the most perfect figure, which, for any Greek, is the circle.
When the medievals looked out at the night sky, they did not see dark skies as we do now; rather, they saw a universe jam-packed with stars and planets and angels and music (Lewis writes beautifully in the essay about how the heavens were filled with heavenly music). And all this activity, they believed, was put in motion not by causes and effects but rather out of love. But he cautions us not to misunderstand Dante's famous line about the love that moves the heaven and stars; for this is less about modern conceptions of love, with their ethical connotations, than about appetite or desire. So, as Lewis describes it, the Medieval universe was rotating in its desire or appetite for God. It was a musical, ordered and festive universe; for Lewis says the angels and seraphim spend their time engaged in festivals of great pageantry:
The motions of the universe are to be conceived not as those of a machine or even an army, but rather as a dance, a festival, a symphony, a ritual, a carnival, or all these in one. They are the unimpeded movement of the most perfect impulse towards the most perfect object.
One has to admit that there is something incredibly aesthetically pleasing about understanding the universe in these terms.
That night in Hawaii, seeing once again the great splendor of the night sky remembered from my childhood, I realized how much we had lost. Our gracious and wonderful host at the observatory said that he really understood the Dark Sky Movement, since the vision of the night sky is such a crucial part of our human heritage --and indeed we have lost so much. Before getting back in the car to go back down the mountain, I took one last look at the myriad stars twinkling so beautifully in the sky. Sadly, I recalled Emerson's famous quote about the stars, since the envoys of beauty no longer come out to light the universe in smiles.
"If the stars should appear one night in a thousand years, how would men believe and adore; and preserve for many generations the remembrance of the city of God which had been shown! But every night come out these envoys of beauty, and light the universe with their admonishing smile.”
It is perhaps a dying art in English (?), but if you have a favorite essay, I'm all ears.
Fellow time travelers will love Lewis' essay.
I wrote about Hot Gates here: Fighting in the Shade of 10,000 Arrows.
Part One of my Medieval Triptych is here: WINGS OF DESIRE (A MEDIEVAL PHYSIOLOGY)
And an added bonus for your mucho reading pleasure: Richard Dawkins and the Ascent of Madness
KECK in Motion here! And my favorite KECK video of all below--enjoy!
Mental Illness, the Identity Thief
by Grace Boey
I felt a Funeral, in my Brain,
And Mourners to and fro
Kept treading - treading - till it seemed
That Sense was breaking through -
And when they all were seated,
A Service, like a Drum -
Kept beating - beating - till I thought
My mind was going numb -
And then I heard them lift a Box
And creak across my Soul
With those same Boots of Lead, again,
Then Space - began to toll,
As all the Heavens were a Bell,
And Being, but an Ear,
And I, and Silence, some strange Race,
Wrecked, solitary, here -
And then a Plank in Reason broke,
And I dropped down, and down -
And hit a World, at every plunge,
And Finished knowing - then -
* * *
In the poem "I felt a Funeral, in my Brain," Emily Dickinson watches a part of herself die as she sinks into insanity. The fragmentation and loss of the Self that Dickinson describes is a common theme amongst victims of mental illness. By their very nature, conditions like schizophrenia, depression and bipolar disorder have a profound impact on one's personality, behaviour and beliefs. Mental illness can rear its head and usurp one's identity at any time; what happens next can be confusing and frightening, for victims as well as their loved ones.
Transformation of the self
A good place to start examining the loss of identity in mental illness is depression. This is something that many of us will experience in some form, if only briefly and temporarily, at least once in our lives. In addition to simply feeling low, those who are depressed lose interest in pursuing activities they usually enjoy, and struggle with feelings of negative self-worth. I remember watching one of my own close friends slip into depression as a young adult. She was usually a cheerful, kind and bubbly girl. But as her first semester in college progressed, she became increasingly reclusive, pessimistic and irritable. She stopped playing sports as she 'no longer felt like running around', and gained weight as she slept all day.
After witnessing and worrying about her continuous decline for a few months, I suggested she see a psychiatrist. She did. After a few months of being treated with therapy and antidepressants, my friend made a good recovery. She has since bounced back to being more or less her old self. The last time we met, she thanked me for pushing her to get treatment. "I felt so crappy - I had never felt so bad about myself in my life. I didn't know what was happening," she said. "I thought that maybe I was just changing, my personality was changing, and it was normal. But it wasn't. I feel like myself again."
Fortunately, my friend made a clean recovery, and hasn't had a relapse since her brush with depression five years ago. But for many others, the effects of mental illness are much more sticky - the 'old self' that used to exist is not fully recoverable. In a New York Times article, Linda Logan describes her long-drawn battle with mental illness. She writes, "During the 20-odd years since my hospitalizations, many parts of my old self have been straggling home. But not everything made the return trip. While I no longer jump from moving cars on the way to parties, I still find social events uncomfortable. And, although I don't have to battle to stay awake during the day, I still don't have full days - I'm only functional mornings to midafternoons. I haven't been able to return to teaching. How many employers would welcome a request for a cot, a soft pillow and half the day off?"
In another personal account, psychiatrist Karen Hochman recalls how paranoid schizophrenia completely transformed her brother Mark, as he descended into delusion and irrationality. "It was following my mother's death that I believe Mark experienced his first psychotic break. I coped with my grief in my way, which was always with the support of and connection to others. Mark, in his characteristic way, bore his grief on his own. His choices for himself had always differed from mine for myself, but during the years immediately following my mother's death, his ideas, choices, and actions became increasingly incomprehensible to me. His discourse became vague. At the same time, his poorly articulated ideas became of increasing importance in his definition of himself." Six years after his mother's death, Mark hanged himself from a maple tree.
Discourse of 'old' and 'real' selves, of course, presupposes the existence of a healthy or authentic self that is separable from the sick one. For victims whose symptoms manifest from a young age, the notion of such a self in opposition to the sick self might not make sense at all. In the two-part documentary The Secret Life of the Manic Depressive, Stephen Fry describes how he experienced symptoms of bipolar disorder from an early age. Fry began acting out as a young boy, and was expelled from school for bad behaviour at the age of 15. At 17, he was arrested and served jail time for credit card theft.
What might Stephen Fry be like today if not for his manic depression? Lacking any reference point, it's impossible to say. His troubled adolescence is typical of those who are later diagnosed with bipolar disorder - for such people, mental illness swoops in and takes over one's identity before it even gets the chance to develop. Before they seek help, if they do, these people know no other way of being than the one that mental illness has given them.
Losing social identities
In life, we often find ourselves in the social roles we play. The multiplicity of these roles gives mental illness yet more avenues to wreak havoc on our identity. When struggling to keep up with the demands of everyday life, many victims of mental illness come to identify with labels like 'bad friend', 'bad lover', or 'bad employee'.
For a long time, I myself struggled with being a 'bad student'. Like Fry, I began to experience symptoms of bipolar disorder from a young age, which affected my behaviour and performance in school. I was a bright, hardworking child when I entered primary school at 7, and landed myself in a special programme for gifted students. But my performance in school started slipping at around the age of 9 or 10 as my mental life became increasingly troubled. I lost all interest in my academics, and with increasing frequency I was sent out of class or reprimanded for failing to do my homework. By the time I was thirteen, I was a bona fide bad student - I squandered my opportunity in the top girls' school in my country, probably spent more hours in detention than in class, and was effectively kicked out at the age of 14. I continued to underperform and face disciplinary problems right up to my A-Levels.
Luckily for me, the hardworking young girl who once took pride in her studies somehow emerged again in college, after disappearing for a decade. I remember how confused I was when one of my professors praised me for being a 'good student' in freshman year - I couldn't believe I was getting positive feedback from an authority figure in school. I thought to myself, he's got to be sarcastic. Me? Good student? It wasn't until then that I realized how much of my poor self-esteem stemmed from identifying as a bad one. Despite working hard and eventually getting straight As, I constantly expected to fail after each test. To this day, I still suffer from a fear of academic authority figures. I still struggle not to engage in self-sabotaging academic behaviour. I still don't really think of myself as a good girl.
Since seeking help for manic depression, in addition to learning what it means to be a good student, I'm also learning what it means to be a better friend, sister and daughter. But perhaps the most heartbreaking way in which mental illness can harm one's identity is through the role of parenthood. Logan describes the transformation of her identity as a loving parent into one who slept all day, and didn't see much of her children. Before the onset of her depression, she was “twirling [her] baby girl under the gloaming sky on a Florida beach and flopping on the bed with [her] husband.” After the onset of her depression, she struggled with taking care of her kids, slept for long stretches at a time, and was in and out of psychiatric units; all of this affected her sense of competency as a mother. When she was out of the hospital, she hired a full-time housekeeper - while Logan was “appreciative of her help, [she] felt as if [her] role had been usurped.”
Sadly, mental illness can steal one's parenthood before it has even begun. Many with mental illness choose to remain childless - either because they are doubtful of their ability to be a good caregiver, or in recognition of the reality that mental illness can be genetically transmitted. Curtis Hartmann, a lawyer and writer with bipolar disorder, writes that "this, unquestionably, has been the cruelty hardest to bear: no children to love for a man who loves to love."
In search of a sustainable sense of identity
Shifts in identity and personality mean it is a constant struggle for many of the mentally troubled to reconcile the actions of their 'sick' self with their healthy self. Mental illness causes people to do things that are reckless or irrational - things that they later regret, or that are harmful to those around them. Who's this talking - me or my illness? If someone with psychosis experienced the desire to hop off a building, under the delusion that he could fly, he'd likely distance himself from this desire later, saying it was his illness talking and not him. Yet, not all behaviours and mental states are so clearly divorced from physical reality. It's often difficult for victims - and those around them - to recognize whether behaviour is attributable to the diminished capacity that illness brings.
There are many technical discussions to be had about mental illness, diminished capacity, blameworthiness, and identity. Such questions are undoubtedly important, particularly for questions of legal culpability. But what good is the diagnosis of clinical depression when one has damaged a relationship beyond repair?
To cope with everyday life, many of those with mental illness simply come to accept their disorder and its effects as "part of the rich tapestry of life", as so eloquently put by one of the interviewees in Stephen Fry's documentary. Some even come to identify with their illnesses in positive ways: when asked, all of the subjects that Fry interviewed said they would not opt out of their bipolarity if given the chance. Manic depression is a way of life that comes with its own richness and perspective.
One may endlessly ponder the philosophical implications of sickness on the authentic self, and wonder what 'would have been' if not for this and that. But the brute reality is, mental illness saddles one with a set of limitations from which one's identity is forced to develop. The mentally ill cannot choose their condition, and although there are steps they can take to seek help and shape their own identities, they must accept that there are many things that will continue to remain beyond control.
Then again, isn't that so for all of us?
Image by Ana Kova.
Some Varieties of Musical Experience
by Bill Benzon
My earliest memory is of a song about a fly that married a bumblebee. I've been told–I don't really remember this–that early one morning I played that record so often that it drove a visiting uncle to distraction.
I don't know how many people count music as their earliest memory, but I surely can't be unique in that. For music is a basic and compelling form of human experience. Martin Luther believed that "next to the Word of God, the noble art of music is the greatest treasure in the world. It controls our thoughts, minds, hearts, and spirits." And so it does.
Which perhaps is why we are so ambivalent about it. If it can control us, then it is dangerous. Why else would repressive regimes have worked so hard to suppress jazz and rock and roll? Why would the Taliban attempt to suppress all music?
But let us set the danger aside. It is the power that interests me.
Some years ago Roy Eldridge, the great jazz trumpeter, told Whitney Balliett (American Musicians: 56 Portraits in Jazz) about playing with Gene Krupa:
When ... we started to play, I'd fall to pieces. The first three or four bars of my first solo, I'd shake like a leaf, and you could hear it. Then this light would surround me, and it would seem as if there wasn't any band there, and I'd go right through and be all right. It was something I never understood.
What's going on? I suppose we could say it had something to do with the brain and nervous system, but what?
In a similar vein Vladimir Horowitz, the classical pianist, told Helen Epstein (Music Talks: Conversations with Musicians): "The moment that I feel that cutaway–the moment I am in uniform–it's like a horse before the races. You start to perspire. You feel already in you some electricity to do something." Again, the nervous system, getting him primed, for what?
"When I'm right and the band is right and the music is right," [Sonny] Rollins said, "I feel myself getting closer to the place where the sound is less polished and more aboriginal. That's what I'm striving for. The trumpeter Roy Eldridge once told a guy he could only reach a divine state in performance four or five times a year. That sounds about right for me."
A divine state? What's that – perhaps it's another one of those things that the nervous system rigs up, no? Perhaps. We might also wonder whether or not it's the same thing that Martin Luther had in mind when he talked of music as "the greatest treasure in the world." And yet they lived in such different worlds, after all: Martin Luther, Sonny Rollins, Roy Eldridge, and Vladimir Horowitz.
The singer Bullard, one of Boyd's interviewees, described the feeling this way:

It's like you leave your body. It's like you're dizzy and lightheaded and yet right there. My hands just seem to throb, like a pulse almost. It's the best feeling in the world, bar none. It took me a lot of singing lessons before I finally connected with that feeling. The first time it clicked and I connected, I nearly fell down, and I started crying.
Is her throbbing like Roy Eldridge's shaking? When he was surrounded by light, was he also dizzy and lightheaded?
Boyd also interviewed Eric Clapton, the rock guitarist:
It's a massive rush of adrenaline which comes at a certain point. Usually it's a sharing experience; it's not something I could experience on my own ... other musicians ... an audience ... Everyone in that building or place seems to unify at one point. It's not necessarily me that's doing it, it may be another musician. But it's when you get that completely harmonic experience, where everyone is hearing exactly the same thing without any interpretation whatsoever or any kind of angle. They're all transported toward the same place. That's not very common, but it always seems to happen at least once a show.
Bullard talked of leaving her body. Clapton spoke of everyone being transported. There's a word for that, ecstasy, from the Greek ekstasis ‘standing outside oneself,' based on ek- ‘out' + histanai ‘to place.' Clapton also notes that this – whatever THIS is – is something that happens, perhaps CAN ONLY happen, with others.
If this is something the nervous system does, then it must be something that happens between nervous systems as well. And, wouldn't you know, neuroscientists are now investigating brain-to-brain coupling. What happens if you have two people interacting in some way and you examine what's happening in both brains? You discover that activity in the two brains is similar. What if that activity were exactly – or if not that, very, very closely – similar? What happens to the (remaining) difference between the two?
Some years ago, in March of 2003, I participated in a large anti-war demonstration in Manhattan, where I met Charlie Keil in midtown and followed the demonstration to Washington Square in the West Village. I had my trumpet and Charlie had his cornet, and a bell or two as I remember. As we walked with and through the demo we encountered other musicians too, drummers, bell players, and horn players. Some had come together as Charlie and I had, and had a few routines worked out. But we all were looking to join up with others and see what happened.
There must have been two dozen or so musicians in the stretch where Charlie and I settled. Sometimes we were closer, within a 5 or 6-yard radius, and sometimes we sprawled over 50 yards. The music was like that too, sometimes close, sometimes sprawled.
Sometimes the music made magic. The drummers would lock on a rhythm, then a horn player–we took turns doing this–would set a riff, with the four or five others joining in on harmony parts or unison with the lead. At the same time the crowd would chant "peace now" between the riffs while raising their hands in the air, in synch.
All of a sudden–it only took two or three seconds for this to happen–a thirty-yard swath of people became one. Horn players traded off on solos, the others kept the riffs flowing, percussionists were locked, people chanted "peace" and the crowd embraced us all. But no one was directing this activity. It just happened.
What was going on in our brains? Did the crowd become, in some way, one mind? That's a real question, real in the sense that one day investigators are going to be able to "instrument" a crowd, collect a boat load of data, and figure out what's going on.
Let's push the issue a bit further. Some years ago the late Wayne Booth, a distinguished professor of English, wrote about his experiences as an amateur cellist, an avocation he shared with his wife Phyllis: For the Love of It: Amateuring and Its Rivals. In November of 1969 Booth was grieving the recent death of his son. In the process of "trying, sometimes successfully, to regain his lost affirmation of life" Booth began drafting a book about life, death, and music. Concerning a performance of Beethoven's string quartet in C-sharp minor, he said:
Leaving the rest of the audience aside for a moment, there were three of us there: Beethoven ... the quartet members counting as one ... Phyllis and me, also counting only as one whenever we really listened ... Now then: there that "one" was, but where was "there"? The C-sharp minor part of each of us was fusing in a mysterious way ...[contrasting] so sharply with what many people think of as "reality." A part of each of the "three" ... becomes identical.
There is Beethoven, one hundred and forty-three years ago ... writing away at the marvelous theme and variations in the fourth movement. ... Here is the four-players doing the best it can to make the revolutionary welding possible. And here we are, doing the best we can to turn our "self" totally into it: all of us impersonally slogging away (these tears about my son's death? ignore them, irrelevant) to turn ourselves into that deathless quartet.
We've seen some of this before; Clapton spoke to the merging of selves and Eldridge and Horowitz spoke to separation from everyday time and space. Booth adds another factor into the mix. If distinctions between one self and another are lost in the music, then what difference does it make that it was Beethoven then and Phyllis and Wayne Booth now?
And let's grant that it's all a matter of something happening in the nervous system – Beethoven's, Wayne Booth's, Phyllis Booth's, the members of the quartet, the rest of the audience, you, me, everyone. So what? On the one hand, until we actually know what's going on in these many nervous systems, referring such–strange, interesting, compelling–phenomena to the nervous system doesn't actually explain anything. It just shoves them under the intellectual carpet.
But one day we are going to understand these things in a way we do not now, perhaps even in way we cannot now imagine. What then? What if our best current approximation to that advanced understanding is that, yes, in that performance of Beethoven's string quartet in C-sharp minor the boundaries of space, time, and person collapsed and Wayne Booth, Phyllis Booth, the performers, audience, and Beethoven became one? What would Martin Luther say to that?
* * * * *
Back in the 1980s Leonard Bernstein directed a recording of West Side Story using opera singers. That recording session has been documented on DVD: The Making of West Side Story, Leonard Bernstein, Tatiana Troyanos, José Carreras, Kiri Te Kanawa, BBC Television London, UNITEL 1985. And clips from that DVD are on the web. The performance of "One hand, one heart" is devastatingly beautiful:
If you doubt your own experience of that performance, read through some of the comments.
* * * * *
Monday, March 03, 2014
Transcendental Arguments and Their Discontents
by Scott F. Aikin and Robert B. Talisse
Consider the nihilist who provides us with an argument with the conclusion that nothing exists, or that there are no norms for reason. Take the relativist who contends that all facts are relative to some perspective. Note the skeptic who consistently criticizes not only our claims to knowledge, but our very standards. Call such views Transcendental Pessimism. An appealing and longstanding reply to Transcendental Pessimism is that it is self-defeating in some way. The nihilist nevertheless avows a fact and relies on norms of rationality to run the argument for his own conclusion. The relativist isn't just saying that it's all relative to her perspective, but that it's all relative full stop. The skeptic's conclusion that we have no knowledge or have no reliable means to assess knowledge purports to be a knowledge-like commitment held on purportedly good epistemic grounds. The critical line is this: Transcendental Pessimist views cannot be consistently thought. Such views, to make sense at all, must presuppose precisely what they deny.
So far, this self-defeat maneuver against nihilists, relativists, and skeptics is but an inarticulate hunch. Transcendental arguments are attempts at making that hunch explicit, not only about how the negative views are self-defeating, but also regarding the positive views worth preserving. That is, we deploy transcendental argumentation not only as a critical line against Transcendental Pessimism, but we also (and perhaps thereby) establish some positive conclusion. Call this objective Transcendental Optimism.
Immanuel Kant is widely acknowledged to be the first to overtly use the argument type. The primary example of Kantian transcendental argument comes in the Second Analogy of Kant's Critique of Pure Reason. The rough form of argument runs as follows: One can judge that a series of representations is evidence of a series of events only if one holds that the series is asymmetric (it must happen in that order, not in a reverse or other order). One can believe that the representations are asymmetric only if one holds that the events represented are similarly asymmetric. If a series of states is asymmetric, the earlier states are causes of the later states. Therefore: One can take a series of representations as evidence only if one takes them as evidence of a causal order. Experience can be a source of information only if there is a causal order.
In the 20th Century, Donald Davidson employed a transcendental argument in defense of his thesis of radical interpretation. The criterion for identifying anyone as speaking a language is that of taking their utterances as semantically contentful. The condition for identifying semantically contentful utterances is that of interpreting the things people say to be responsive to events in the world around them. In his essay "Radical Interpretation," Davidson explains the constraint thus: "A theory of interpretation must be supportable by evidence available to interpreters." And so, we must have our defaults set on interpreting others as saying mostly true things. In his influential essay "On the Very Idea of a Conceptual Scheme," Davidson writes, "We make maximum sense of the words and thoughts of others when we interpret them in a way that optimizes agreement." Consequently, we have no intelligible reason to hold that others have different conceptual schemes from us. Radical interpretation is transcendentally dependent on the Principle of Charity.
Now, there are two problems with transcendental arguments; one dialectical, one formal. The dialectical challenge for transcendental arguments is that they seem either to beg the question or to be otiose. They, consequently, do not play the rebutting or undercutting role in the critical exchange with the Transcendental Pessimist that the Optimist needs them to. Call this the dialectical dilemma for transcendental arguments.
Consider Davidson's argument. It begins from the requirement that any theory of interpretation must be supportable by evidence of connection between utterances and the world. Such a requirement is widely held to be a form of verificationism – the view that the meaning of a statement is delineated by conditions for its confirmation. This view of meaning does all the heavy lifting in Davidson's argument. But no skeptic or relativist or nihilist (no Pessimist) would accept verificationism. So the argument begs the question. Alternately, note that if the verificationism does all the work, the transcendental argument was, in the end, unnecessary. It is otiose. So if you can't convince the relativist of verificationism, you can't run Davidson's transcendental argument, and if you can sell verificationism to the relativist, you don't need the transcendental argument. As a consequence, either way, the transcendental argument is worthless. That's the dilemma.
The formal problem for transcendental arguments is that their optimistic conclusions are helplessly equivocal. Consider a shortened version of Kant's argument:
P1: It is necessary that: Contentful experience is possible for a subject only if that subject deploys the concepts of cause and effect.
P2: Subjects have contentful experience.
C: There must be cause and effect.
Yet the ambitious transcendentally optimistic conclusion C in fact does not follow. The premises rather support a much more modest result:
C*: Subjects must use the concepts of cause and effect.
As Kant puts it, "Experience itself . . . is thus possible only in so far as we subject the succession of appearances . . . to the law of causality; and as likewise follows, the appearances . . . are themselves possible only in conformity with the law." Here we can see the difference between the two kinds of conclusion. The same thing happens in many other forms of transcendental argumentation. In order to ask a real question, one must think there are possible answers; in order to interpret others, one must take them to be in broad agreement with you; the condition for expecting an unsupported stone to drop is believing that gravity is real, and so on. What does not follow from any of these holdings, judgings, and believings are the facts of their assertional contents. That, by the way, was what the Pessimist was affirming all along.
We might call transcendental arguments that show just that something substantive must be used, presupposed, or assumed in order to say positive things at all a form of Modest Transcendental Argument. The trouble with modest transcendental arguments, when posed as arguments that show that the use of certain concepts is not optional, or invincible, as Barry Stroud puts it, is that they sound less like justifications for these commitments and more like exculpations. Just because the concepts or commitments are not optional in having our first-order commitments about the world, minds, and morals does not mean they are justified or good.
The question is whether we can do better than exculpation for our Transcendental Optimism without committing the fallacy of equivocation. We, the authors, think there is a chance of doing better. It looks like this.
If Transcendental Pessimism is self-defeating (you can't consistently believe it), then we have justification in rejecting the view. That justification doesn't guarantee that Pessimism is false, but that we are rational in recognizing that we cannot ever hold the view with positive justification. Notice, now, that Optimism and Pessimism are the only options – if you suspend judgment between the two, you've slipped into Pessimism. It is, to use a term from William James, a forced move. Since we are justified in rejecting Pessimism, we are then justified in accepting Transcendental Optimism. The consequence, of course, is nothing earth-shaking. In fact, the Optimistic thesis was that we were all reasonable in believing that there is a world of causally efficacious things, other minds, and truths all along. The objective with the argument was to make it explicit why.
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3 in 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3 in 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not the ends.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it, but it is when the latter occurs that internet usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require deism, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence" but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion; it may therefore be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive internet pundit quotes which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion and its mythical or supernatural aspects, which are often essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition for religions but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity, whether by accessing computers and wireless networks or by checking emails and social media updates so frequently that this connectivity exceeds one's pragmatic needs, could be considered a ritual of Internet-centrism. If one feels the need to check emails and Facebook or Twitter updates every one to two minutes, despite the fact that it is unlikely one would have received a message that required urgent action, it may be an indicator of the important role that this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss out on regular prayers or lose their rosaries that allow them to commune with their God, and it appears that for some humans, the ritual of internet connectivity may play a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers or cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just as "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits in Silicon Valley that people were outraged at the NSA, a government institution, for violating our privacy, yet there was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism, since the internet (in both its physical and its mythical forms) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and social interaction are key to many religions, and Internet-centrism provides more tools for interacting with others than any other ideology, cultural movement or religion. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of it. One of its goals is to provide opportunities for self-actualization. We all may be connected via the internet, but Internet-centrists also want us to believe that this connectivity provides a path to self-actualization. We can modify settings to customize our web browsing experience; we can pick and choose from millions of options: which online courses to take, which videos to watch, which music to listen to. This sense of connectedness and omnipotentiality is what provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to the spiritual experiences of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that, at least in the short term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Zach Nader. Optional Features Shown, 2012, video still.
Thanks to Ryan Moritz via Jaffer Kolb.
Must We Have Fascism With Our Petits Fours
by Dwight Furrow
A few weeks ago in the pages of 3 Quarks Daily we were treated to the proclamation of a new doctrine called "Anti-Gopnikism". The reference in the title is to Adam Gopnik, essayist for the New Yorker, who writes frequently in praise of French culture, especially French food. Philosopher Justin Smith, who is responsible for the proclamation of this doctrine, defines Gopnikism as follows:
The first rule of this genre is that one must assume at the outset that France --like America, in its own way-- is an absolutely exceptional place, with a timeless and unchanging and thoroughly authentic spirit. This authenticity is reflected par excellence in the French relation to food, which, as the subtitle of Adam Gopnik's now canonical book reminds us, stands synecdochically for family, and therefore implicitly also for nation.
Thus, Anti-Gopnikism, we are to infer, must consist of a denial that France is an exceptional place, or that it has a timeless, unchanging, authentic spirit, or that its relationship to its food is unique, or all of the above. We are not provided with any evidence to support any of these denials.
Whether American writers are correct to extoll the exceptional virtues of France depends on what you're looking for. The French are lousy at the Olympics but their wine is awesome. Their music can be simple ear-candy and overly romantic but then there is Boulez and Messiaen. Their language is lovely but peculiar; their conversation at times formal but extraordinarily civilized. Like any nation, they have virtues and vices. If you are interested in food and wine, they are an essential nation, and they have for centuries defined what fine food is. To claim their relationship to food is not exceptional is to be blind to their extraordinary influence. Other cultures may lay claim to being more influential today, but that does not erase the glorious history of French food. As to the timeless, unchanging, authentic spirit—well, we are all part of history, and no culture is timeless or unchanging. As far as I can tell, Gopnik doesn't claim or imply a timeless, unchanging essence. In fact, in his recent book The Table Comes First: Family, France, and the Meaning of Food, he claims French food has fundamentally changed in recent decades and is in crisis, and he upbraids the French for narcissism and navel-gazing.
So what is this diatribe against "Gopnikism" really about? It turns out Gopnikism is a lot more sinister than a French food fetish. Smith writes:
France, in other words, is a country that invites ignorant Americans, under cover of apolitical vacationing, of living 'the good life' and of cultivating their faculty of taste, to unwittingly indulge their fantasies of blood-and-soil ideology. You'll say I'm exaggerating, but I mean exactly what I say. From M.F.K. Fisher's Francocentric judgment that jalapeños are for undisciplined peoples stuck in the childhood of humanity, to Gopnik's celebration of Gallic commensality as the tie that binds family and country, French soil has long been portrayed by Americans as uniquely suited for the production of people with the right kind of values. This is dangerous stuff.
Oh my! This is truly a puzzling argument. No doubt the French view their cuisine as an expression of their national character, just as do the Italians, Japanese, or Chinese, among others. Gopnik's claim is that the French have discovered, perhaps more so than other nations, that the pleasure of food brings intimations of the sacred into our lives. Independently of whether such a claim is true or not, what on earth does this have to do with Nazi "blood and soil" ideology? Something has gone deeply wrong here.
This argument relating French food to Nazism seems to go something like this: (1) French attitudes toward their cuisine are expressions of excessive nationalism, (2) German attitudes in the 1930s about the purity and superiority of their "racial stock" were expressions of excessive nationalism, (3) therefore, writers (and tourists) who extoll the virtues of French cuisine are implicitly endorsing the attitudes of Nazis toward their alleged racial superiority. What exactly a love of cassoulet has to do with burning people in ovens we are not told.
I suppose we get a clue from Smith's criticisms of the French treatment of their immigrant populations—especially Muslims.
I have witnessed incessant stop-and-frisk of young black men in the Gare du Nord; in contrast with New York, here in Paris this practice is scarcely debated. I've been told by a taxi driver as we passed through a black neighborhood: "I hope you got your shots. You don't need to go to Africa anymore to get a tropical disease." On numerous occasions, French strangers have offered up the observation to me, in reference to ethnic minorities going about their lives in the capital: "This is no longer France. France is over." There is a constant, droning presupposition in virtually all social interactions that a clear and meaningful division can be made between the real France and the impostors.
I don't live in France, but if the American media is to be believed, the French treatment of minority populations as well as rising xenophobia throughout Europe is deplorable, although it is not obvious it is uniquely so. Perhaps the French treatment of immigrant populations is an indication of a kind of insularity endemic to French culture which per hypothesis explains the decline in creativity in French cooking that some authors, including Gopnik, have noted. But smug complacency regarding one's cuisine is hardly the same thing as a regime of genocide or violent immigrant bashing.
Indigenous foods that express the terroir of local soils and the sensibility of a people are about the uniqueness and incomparability of a place. These, by definition, cannot be transplanted; they belong nowhere else but in that location among those people. Nazi "blood and soil" ideology was about universal hegemony. It was about the right to rule over and exterminate others. The conceptual chasm between French food fetishism and Nazi violence is enormous.
Even if we stick to food and ignore the silly notion that "food fights" are akin to real violence, the inference from love of one's culture to attempts at world domination makes no sense. You can praise the virtues of some constellation of flavors or a method of straining soups without thinking everyone must deploy those flavors or methods in their cuisine. Something might work wonderfully in the French style without being appropriate anywhere else, and nothing about the virtues of one locality's food precludes the appreciation of another. Even if the French think they have the world's best cuisine it doesn't follow that they think everyone must emulate or promote it.
Despite this utterly failed comparison, there is an interesting and important philosophical issue percolating behind the slippery logic of this argument. Can you love a place, a culture, a people and think of them as uniquely virtuous without excluding respect for others who are outside that culture? Can one enjoy the goods of being immersed in and loyal to one's own culture while acknowledging the good of other cultures? Is particularity compatible with universalism? The answer would seem to be, obviously, yes. The devil is of course in the details. Some conflicts between cultural belief systems cannot be mitigated let alone resolved. But there is no general or principled reason why love of one's nation or culture cannot be constrained by an acknowledgement of the rights of others. This is true even when the stakes are high. Many of these "food fights" as well as debates over immigration policies are motivated by fears of cultural annihilation. But the French, or anyone else, can pursue cultural survival without excessive force or attempts at world domination.
Arguably, if cultural survival is at stake and there is too much influence from the outside, one's identity or particularity is undermined. The French, of course, have always been deeply protective of their cultural and linguistic heritage, going so far as to have a ministry of the state responsible for the preservation of French identity. Perhaps this exaggerated "anxiety of influence" is the source of Smith's worry that French fascism is hiding under your croissant. But the rational response to such a threat is creative "border management", where new influences interact with entrenched traditions to create new formations that constitute cultural advance. Food traditions are in fact excellent examples of creative "border management". French cuisine would not have the depth it has without the Germanic-influenced dishes from Alsace, the Mediterranean and North African-influenced foods of Provence, the Spanish influence on Basque cooking, etc. The history of food shows that the "anxiety of influence" is overwrought, and food writers such as Gopnik are adept at highlighting this history. Perhaps it is Smith's contention that the French are incapable of such border management. But they obviously are so capable, given the history of their food.
Partiality toward one's culture or nation can be benign or dangerous depending on whether it is supplemented by megalomania. Love of one's culture is not dangerous. It is the idea that one's culture is in fact a universal culture that threatens. The French are showing no signs of becoming a world hegemon and Gopnik's writing will hardly make it so.
I predict anti-Gopnikism will join phrenology and the four humors in the dustbin of history.
For more ruminations on the philosophy of food and wine, visit Edible Arts
Nothing Hurts The Godly
One fish says, "So, how's the water?"
The other fish replies, "What water?"
Ladies and gentlemen, I give you Richard Stallman, shuffling onto the stage at Cooper Union's Great Hall. Accompanying Stallman is the veritable Platonic Ideal of a potbelly; his shoes are almost immediately discarded and left by the podium. Padding around the same stage where, in 1860, Abraham Lincoln gave the speech that ignited his political career, Stallman proceeded to subject his New York audience to a rambling disquisition on freedom and computer code, consisting of oftentimes astonishingly petty invective, peppered with various requests that veered from the absurd to the hopelessly idealistic, and which ultimately served to drive away a good portion of the audience, including myself, well before its conclusion, nearly three hours later.
Why is this recent encounter with a nerd's nerd at all worth recounting? (While entertaining, I will forego the petty bits, although you can view the whole talk here). Simply because, in computing circles, Stallman is an archetype: the avenging angel of free software. Nearly 30 years ago, he founded the Free Software Foundation (FSF), which has since that time been developing the GNU system, a free operating system that was completed by the addition of Linus Torvalds's Linux kernel. It is no exaggeration to say that the smooth functioning and scalability of much of the Internet is thanks to the overall availability and robustness of the GNU/Linux operating system and its various derivative projects. These, in turn, are the result of probably millions of hours of volunteer labor.
So when Stallman says ‘free,' he really means it, and this is where the trouble begins. According to the FSF, free software allows anyone
(0) to run the program,
(1) to study and change the program in source code form,
(2) to redistribute exact copies, and
(3) to distribute modified versions.
This is a simple and powerful set of axioms. It also requires certain conditions to be met, the most challenging of which is access to the code in its source form. Any time the chain of modification and distribution is broken – say, if the person modifying the code chooses to make the source code unavailable, or imposes new restrictions on how the modified version may be shared – the code is no longer considered free. Of course, ‘unfree' code can also be made free (this is in fact what Torvalds did with Linux).
Stallman is an idealist and makes no bones about it – in his ongoing capacity as GNU's leading light, he enjoys referring to himself as "the Chief GNUisance." I admire this – like many purists, he is as constant as the North Star. You always know where you stand with him, which generally means the only question is how short you fall of his ideals. As with any purist, I suspect that there are only two kinds of people in his worldview: free software advocates and everyone else. Unfortunately, this jihadi attitude leads some of us to consider a different binarism: that the world consists of those who are free software advocates, and those who think that free software advocates are insufferable assholes. This is unfortunate.
Here is something else that is unfortunate: three brief critiques that do not undermine the axioms above, but rather make those axioms irrelevant, or at the very least vastly less impactful than FSF advocates might hope.
1) Not everyone can read source code, or wants to. When I'm not mouthing off on 3QuarksDaily, I help to design, develop and run a custom-coded internal learning technology platform for a fairly large multinational. On Friday afternoon, the developers pushed through an update to the platform that did not seem to be particularly intricate but that nevertheless wound up breaking much of the platform's functionality. Given that this internal site is viewable by upwards of 50,000 people, I issued an all-hands-on-deck call (in the spirit of inventing new collective nouns, I would like to propose ‘a compile of developers' for such occasions) and, following a six-hour conference call, we managed to return the platform to a more-or-less steady state.
What I want to point out here is not the fact that software breaks – this is more often the case than not, as software, despite its name, is inherently brittle. More salient is the fact that it took five or six contract professionals in their field a good chunk of time to understand and fix what had gone wrong in an information system of, frankly, only mild complexity. Software has reached a state of complexity that challenges even the people who originally wrote the code. So we can confidently say that the number of people who can evaluate almost any non-trivial source code is drastically limited. This is to say nothing of whether one is being held accountable, via compensation, for the stability and integrity of said code. It is one thing to be able to fire your developers for incompetence, since you can just as easily hire others to fix the problem. But when the entire system of free software is predicated on potlatch principles, institutional actors lose leverage to get time-sensitive work done, and done to their specifications.
2) Not all outcomes on the Internet are driven by whether code is free. There has recently been much talk about the demise of "net neutrality," especially as a result of the dust-up between Netflix and Comcast. This is a complex topic (with excellent explanations here and here), but suffice it to say that net neutrality is the principle that all content traveling across the network is treated the same. In theory, the Internet is designed not to favor the delivery of cat videos over the State of the Union Address. The relevance to free software is simply this: the Internet depends not only on software. In previous times, the argument leveled against free software advocates was that you still needed the vast infrastructure of hardware to make that software, free or otherwise, relevant. No one was going to build a server farm for free. Indeed, whoever came up with the term ‘the cloud' earned their marketing stripes, since it is nothing more than the outcome of decades of exponential progress in, and decrease in the cost of, computing power, bandwidth and memory. The materiality of this technology has not decreased at all, but, like factory farming, has merely been removed from view. However, the philosophy of the FSF is about software, not hardware.
In the case of net neutrality, the burning question is about the system of payments that guarantees the distribution of content. What is fair and equitable, and who gets to decide? Until recently – that is, until the advent of video streaming – the existing agreements and competition were sufficient to guarantee the timely delivery of content to users. Rather coincidentally, the decentralized architecture of the Internet was able to absorb existing demand. But with Netflix and YouTube's video streaming service taking up about half of downstream Internet traffic, we now have a giant tug-of-war between firms that handle traffic from its point of origin to the point of consumption.
In the logic of network economics, one of the ways to resolve this tug-of-war is for firms to merge, sometimes horizontally but especially vertically. While this may improve service, competition nevertheless suffers. These mergers result in companies evolving ever closer towards monopoly, and things reach a toxic boil when this integration combines access providers (e.g., a classic Internet Service Provider that is only interested in providing the pipes) with content providers (e.g., Comcast, which in addition to providing access also owns or co-owns NBC, E!, Hulu, etc). Suddenly the access provider is incentivized to privilege its own traffic over that of its clients, like Netflix.
The FCC has been caught flat-footed by this eruption and, in the resulting regulatory vacuum, players like Comcast and Netflix have proceeded to make their own arrangements. Aside from being ultimately detrimental to consumers (has anyone seen their cable bill go down as a result of vertical or horizontal mergers praised for their intention to create economies of scale?), these arrangements leave the landscape much sparser, and until the government catches up and begins regulating the Internet as a utility, there is little recourse for content providers, let alone consumers. If you don't think the Internet is important enough to be considered a utility like electricity or telephony, consider the fact that (the much-derided) healthcare.gov website is in fact the first major government service to be offered exclusively online – and that it will scarcely be the last.
Note that in the entire discussion above, there is no mention of whether the code being used to run all this is free or proprietary. That's because it just doesn't matter. It's why the old joke about fish and water is appropriate here. The fish have more important things to think about, like where dinner is coming from, and how to avoid becoming someone else's dinner.
3) Not all devices are accessible, even if you have access to source code. Concerning the Internet's future, this is probably the most important category of all. In fact, it's a combination of the two preceding critiques: individual ability/willingness and access to hardware.
Encapsulated in the term the Internet of Things, we are talking about the entirely reasonable, and in fact inevitable, sensorization of everything, and the ensuing connection of all those sensors to the Internet. The classic example is the refrigerator that notices you are low on milk and helpfully puts it on your list, or just goes ahead and orders it for you. At the same time, it seems that these same fridges have been recruited by hackers to send out spam mail (technology is not without its moments of irony), so obviously there is plenty of room for improvement.
But say that you want to fix your fridge so that the only spam you get out of it is some kind of dodgy meat product. Even if you had access to the source code and had the ability to read and modify it, into where would you plug your laptop? Perhaps the handy USB port provided for just such an occasion by General Electric? Fat chance. It is the rare manufacturer that is interested in opening its hardware to the masses (although Jaron Lanier, former roommate and current nemesis of Richard Stallman, strong-armed Microsoft into doing so for its Kinect hardware, and to great effect). We can argue as much as we like about the general disarray in which intellectual property law finds itself, or how an overly litigious culture discourages companies from allowing people to tinker with their stuff, but the point is that free software, in Stallman's stern manifestation, does not begin to address the much more salient question of access to devices in the actual, physical world. And, as with the instance of net neutrality discussed above, almost no one but an overarching regulatory agency will ever be able to mandate any such availability.
This truth becomes even more expansive when we consider that the Internet of Things goes well beyond toasters and thermostats (although the latter are big business indeed). To a large degree, the entire concept of "smart cities" is predicated upon the generation of enormous amounts of data – data that can only be conjured by millions of sensors placed throughout the built environment. This is, to put it mildly, a double-edged blade, with the promised efficiencies inextricable from the specter of a command-and-control tyranny. However, the charge towards smart cities is driven wholly by corporations, and bought and paid for by governments. I can't think of two entities that, working in concert, would be less amenable to the idea of opening source code to all comers.
Indeed, the Internet of Things brings up another, even more explosively fragmented future: one in which computers themselves are limited to only specific tasks. In a fascinating talk delivered in 2011 entitled "The Coming War On General Purpose Computation," author and general gadfly Cory Doctorow lays out a picture of a computing landscape where firms manufacture purpose-built computers that carry a reduced instruction set. In this case, none of the software built up over the past thirty years by the free software movement will even run on these machines. Forget about free vs. proprietary: to Doctorow, the fight is about keeping tomorrow's devices able to run software unintended for them at all.
In all three critiques, we can actually come to an understanding of why free software was successful, because that is inextricably linked to where it was successful, and when. The GNU/Linux OS has been supremely successful – and vital – in providing the Internet's software backbone, a very deep and unfamiliar place to most of us. You basically had to be an expert even to find the conversation in the first place. Moreover, this was technology developed primarily in the 1980s and early 1990s, when the World Wide Web didn't quite yet exist and the Internet was non-commercial. There were simply fewer players, and there was also less at stake. This is not to say that the hacker ethos does not live on, nor that people aren't choosing to become further involved in re-making their digital (and physical) lives. But these movements are either decidedly on the periphery, or, once they become visible or useful to the mainstream, are quickly assimilated, bought or legislated out of existence.
One could make an argument that the free software movement made the contribution it did precisely because the form of its social organization and ethos was exceptionally well-suited to the circumstances of the time. The uncompromising stance created a legacy that lives on today – for example, an astonishing 61% of web servers run on Apache, another free software project (though not a GNU one). But at the same time this purity points to another fatal flaw: if it's so great and obviously the best way to go, why isn't free software everywhere? Back at Cooper Union I thought I caught a glimpse of the answer. Richard Stallman, for all his quirky grandstanding, awful joke-telling and Bush-bashing (yes, it is 2014 and he was gleefully Bush-bashing), never once admitted that he or the free software movement had ever made a mistake. This is the problem with purists – all controversies have been settled long ago, whether it is about dinosaur fossils, the number of virgins awaiting us in heaven, or the real value of gold. I dearly wanted to ask Stallman if there was anything that he would have done differently in the past – perhaps the gentlest form that that sort of question can take – but, weighing his right to speech against my right to have a drink, I left to have a few beers around the corner instead.
Monday, February 24, 2014
Pakistan: Negotiations and Operations… and Islamicate rationality
by Omar Ali
This headline refers to two separate (though distantly related) subjects. First, to Pakistan. Apparently the Pakistani army is now conducting some operation or the other against some group or the other in North Waziristan and other “tribal areas” infested by various Islamic militant groups under the umbrella of the Tehreek-e-Taliban Pakistan (TTP). This operation was preceded by some farcical negotiations in which the Nawaz Sharif government nominated a group of powerless “moderate Islamists” to conduct negotiations with the TTP. It is likely that these "talks" were never meant to be serious, and that Nawaz Sharif and his advisors intended to use them to expose the bloodthirsty Taliban and their civilian supporters (like Imran Khan’s PTI and the Jamat-e-Islami) as unreliable and extremist elements against whom a military operation was unavoidable. This gambit had worked once before in Swat in 2009, when a peace deal was signed with the Swat Taliban and they were given control of Swat. They proceeded to behead people, whip women and begin marching into neighboring regions, thus showing that no reasonable peace was possible and only a military operation would work against them. But the Taliban 2.0 have learned some lessons of their own. They announced their own farcical committee (briefly including cricket star turned political buffoon Imran Khan) to hold negotiations with Nawaz Sharif's farcical committee. Within a few days the airwaves were dominated by Taliban representatives asking Pakistanis whether they wanted Islamic law or preferred to be ruled by corrupt Western dupes. The Taliban, who routinely behead captives and even play football with their heads, were suddenly respected stakeholders and negotiation partners, holding territory, nominating representatives and promising peace if the state acted reasonably and responsibly. At the same time, their “bad cop” factions continued to knock off opponents and spread terror (including a gruesome video in which they brought freshly killed, blood-soaked headless bodies of soldiers they had taken captive 3 years ago, in broad daylight, in an open pickup truck, and dumped them on a "government controlled" road in Mohmand).
The government then half-heartedly suspended negotiations and started bombing selected targets. This may have been the intent all along, but the negotiations ploy certainly did not deliver the PR victory the state wanted; instead it further confused the state’s already muddled narrative. Even now, with some sort of operation under way, the Taliban are using the negotiating committee as a means of putting pressure on the state to halt operations against them, and the state’s propaganda war remains hobbled by its own ill-advised negotiation scheme.
Of course the state’s PR problems go beyond the merely tactical setback of one badly thought-out negotiations ploy. Pakistan’s foundational myths were confused and incoherent in any case, and the version promoted by the deep state is heavy on Islamist propaganda, especially since 1969, when Yahya Khan’s team of General Sher Ali and General Ghulam Umer (father of PTI whiz kid Asad Umer) decided that Islamism was the best bulwark against leftist and/or separatist forces. An entire generation of Pakistanis has grown up with notions of a once and future Islamic golden age that has little or no connection with actually existing Pakistani institutions or culture. This brainwashing makes it difficult to intellectually confront Islamist terrorist groups who are only demanding what the state itself has promoted as an ideal, i.e. an “Islamic system of government” and a “proud Islamic state” that stands up against anti-Islamic powers like India, Israel and the United States. Imran Khan is a particularly egregious example of the resultant confusion among semi-educated Pakistanis, but he is not the only one. Thanks to this added twist, it is harder to fight Islamist armed gangs in Pakistan than it should be given the technical sophistication of our institutions and our integration into the modern world. In short, while Pakistan is not as primitive as Somalia (where there are practically no institutional, economic or cultural resources above the level of Islamic solidarity and sharia law), the ruling elite has an added level of vulnerability that arises from its own Islamist ideological narrative, over and above all the vulnerabilities of any corrupt third world elite.
But here is the final twist. This added vulnerability (a vulnerability that is a particular obsession of mine) is not enough to spell the doom of the corrupt ruling elite. It adds to their problems, and to the extent that they believe their own propaganda, it has caused them to score repeated own goals, but I still believe that they will not be overwhelmed by the TTP or other “Islamic revolutionaries”. In fact, I will make several predictions and I invite readers to make theirs. Mine will be relatively concrete and simple-minded but I hope commentators will add value.
- The British-Indian colonial state, much decayed as it may be, is still light years ahead of any “system” Maulana Samiulhaq and his madrassa students can throw together. Tariq Ali’s anti-imperialist warriors have no viable modern political system or institutions to draw upon and nothing to offer except beheadings and endless sectarian warfare. There is no there there. The state possesses a modern army and the apparatus of a semi-modern postcolonial state. Its leaders may not fully understand what they have, but they do have it. They can still defeat the Taliban with both ideological hands tied behind their back. Of course it won’t be easy and it certainly won’t be pretty. The Pakistani state’s efforts may not be as vicious as the Sri Lankan army’s campaign against the Tamil Tigers, but the human rights violations and collateral damage will be no picnic (for more on this, see my Pakistani liberal’s survival guide).
- As the Pakistani army is forced to confront the particularly vicious groups gathered under the umbrella of the TTP, it will face a period of determined Islamist terrorism. But this is not the last wave of Islamist terrorism it will have to face. Two large reservoirs of terrorists are yet to commit themselves fully to a fight against the Pakistani state (or perhaps it would be more accurate to say that the state is yet to commit to fighting them); one is the anti-Shia terrorists of the Lashkar e Jhangvi, whose front organizations (ASWJ) and networks of madrassas still operate without hindrance in the country and especially in Punjab; the other is the various Kashmiri Jihadist organizations that remain on good terms with the army.
- Of these two groups, the LEJ is in a very unstable equilibrium with the state. While some in the LEJ and some in the state security apparatus (and the right wing political parties) continue to behave as if anti-Shia mobilization can coexist with a nominally inclusive Pakistani state, this is not really a viable strategy. When push comes to shove (and it’s getting dangerously close to the shove stage) the Pakistani state will have to opt against the LEJ. Tolerating their brand of Shia-hatred is fundamentally incompatible with the continued existence of semi-modern Pakistan. So, like it or not, the state will find itself having to confront the LEJ’s front organizations at some point, and when it does so it will face an especially unpleasant round of terrorism.
- The second reservoir of Islamist terrorists (the Kashmiri jihadists) has been kept relatively quiet by promises that the glorious jihad will restart in full once America leaves, but that too is not a viable long-term policy. India, for all its incompetence, is not such an easy target any more. The days when Benazir could wish to see Jagmohan (governor of Indian Kashmir) converted to “jag jag mo mo han han” (i.e. broken into little pieces) were the high point of that whole strategy. India survived that phase, and by now those days are long gone. Some in the deep state may not realize it yet, but just as they have had to give up on so many other Jihadist dreams, they will also have to permanently abandon their Jihadist dreams in Kashmir. And when the deep state finally comes to that point, the remaining LET and Jaish e Mohammed cadres will have to choose between a life of crime and open warfare against the state. Many will undoubtedly become kidnappers and armed gangsters, but some true believers will opt to fight. It is likely that many of them will make common cause with TTP terrorists and the LEJ (beyond the connections that already exist). Islamist terrorism, in short, has not yet peaked in Pakistan. There are at least two more waves to come even after the current TTP-sponsored wave passes its peak. There is also the possibility that these three waves may more or less combine into one in the days to come.
- The state will fight several groups of Islamist fanatics, but that does not mean it will become liberal or convert to Scandinavian-style social democracy. Warfare with the Islamist terrorist groups may still co-exist with attempts to outflank them by imposing sharia in some places and by pretending to be extremely anti-Indian and anti-American in others. Democracy and human rights will also suffer, as they do in any state fighting an internal enemy. Crude suppression of Baloch and Sindhi nationalism will continue apace. Crony capitalism will become nastier and cruder than ever. Since Pakistan is subject to the same pressures as the rest of planet earth, there will be more mixing of the sexes, more singing and dancing, and more semi-naked women being used to sell hamburgers and car insurance, but many other trends will be unpleasant and will be unfair towards the weaker sections of society. These problems are, of course, not unique to Pakistan. These are the problems common to many of the artificial postcolonial states of the “developing world”. But it’s worth keeping in mind that the self-inflicted Islamist wound is not our only (or even our biggest) problem. It just makes it extra-hard to focus on all the other problems that also have to be solved.
- Still, there is a certain window of opportunity for mainstream liberal/secular parties (liberal in the Pakistani context, obviously not by Western or even East Asian standards). Even though the deep state is still using the CIA-RAW conspiracy against Islam as its main tool to motivate its own soldiers and remains fixated on “failed politicians” as the be-all and end-all of Pakistani incompetence and corruption, it will inevitably find itself standing closer to the hated PPP, MQM and ANP when it comes to fighting the Jihadist militias. Its old favorites in the religious parties, favored as recently as in Musharraf’s so-called “enlightened moderate” era, have too many ideological sympathies with the Taliban. While personal links, past usefulness and shared antipathies still sustain ties with the Jamat e Islami and various JUI factions (and the dream of using “good jihadis” against Baloch nationalists and in various foreign policy adventures remains alive), practical necessity will force a slight rethink. This gives the “secular” parties a fighting chance to step forward and grab the initiative. All three (PPP, MQM and ANP) have made some efforts in that direction already, but they need to do much more. Pakistan’s small, but culturally disproportionately significant, old-guard left may also get a chance to enlarge their space and regain a little of the initiative they lost decades ago to the religious parties. Taking advantage of this opportunity is critical, and both the “mainstream secular parties” and the old-guard Left must make the most of it.
- Unfortunately, in this task (of stepping forward, making alliances and grabbing political space from the religious parties), the left-liberal intelligentsia will be hampered by the opportunity cost imposed by the unusual penetration of ideas from the academic and elite sections of the Western “Left” into the South Asian intellectual elite. Their numbers are small and luckily most are not active in real-life politics, but their cultural and academic presence is not insignificant and they will do some damage. After all, there are only so many bright young intellectuals within the ruling elite who are temperamentally inclined towards liberal ideas. If 35% of them are sucked up into a universe where they read Tariq Ali, Pankaj Mishra and Arundhati Roy for political advice (not just for occasional insights, interesting information, entertainment or commentary on our absurd existence), well… you do the math.
Now to the second part of that title. A friend sent me Asad Q Ahmed’s article about Islam’s invented golden age (http://www.loonwatch.com/2013/10/asad-q-ahmed-islams-invented-golden-age/). I completely agree with the writer that there was no golden age of rationality that was followed by a dark age of irrationality simply because rationality was abandoned on the orders of Al-Ghazali and party. But Asad Q Ahmed then seems to imply that actually things were going so much better than “orientalist” scholars believe and just recently took a dip for reasons that have nothing to do with the irrationality of Imam Ghazali. He offers two tentative suggestions as to why intellectual endeavor declined (especially in the South Asian context): the adoption of Urdu instead of Arabic and Persian, and the rise of printing. I think this mixes up the issue of correcting a misrepresentation of Islamicate theology and philosophy (which were not as hopelessly irrational or sterile by contemporary standards as the “dark age” narrative implies) with the larger question of why scientific and industrial progress did not accelerate in the Islamicate world when it took off in nearby Europe.
I think we need to step back further than just correcting some misconceptions about Islamicate philosophers and theologians. First of all, it’s good to keep in mind that these (and other) golden age and Dark Age myths and legends are inevitable parts of a certain superficial level of propaganda. They are almost always untrue in scholarly detail. But that is not necessarily their point. It may not be the best idea to assess them from the level of the serious historical scholar. They are propaganda and their purpose is to promote or inhibit particular trends in current political conflicts. For a serious scholar to “discover” that they are erroneous is expected. And unsurprising. The point is what struggle they are being used in, and what side you wish to take in that propaganda war.
Moving on from that, if a serious scholar is going to take on this topic, then they should focus on their area of expertise. In this case, showing what Muslim religious and philosophical scholars actually read or thought. That is a huge service in itself. And I am sure Asad Q Ahmed has forgotten more about that topic than I can hope to learn in a lifetime. But the topic of why particular societies became more powerful or more scientifically advanced than others is a very big topic. It is not exhausted by learning about what theologians and philosophers said about reason and theology. It may in fact have surprisingly little to do with what theologians and medieval philosophers dreamed up (in the East or the West). A relatively small group of societies started the modern scientific and industrial revolutions. Whatever the reasons for this sudden acceleration (and while unlikely, it is not inconceivable that all we may ever say with certainty is “that’s just how it happened to be”), those reasons are likely to involve MUCH more than what the respective theologians of those societies said about reason and free will. The slippery nature of this topic is exemplified by the two tentative reasons Asad does end up proposing: Urdu and printing. I am sure everyone can remember equally impressive articles where the failure to develop learning in indigenous vernacular languages (e.g. Punjabi in Punjab) is the cause of our underdevelopment, and where the failure to take up printing on a large scale was a big problem, rather than a god-sent opportunity to write in margins. My point is not that the writer’s suggestions are necessarily wrong. Just that they may be not even wrong. They may be tangential to the main issues.
There is no one single Islamic model or empire. The early Arab empire was an imperial undertaking, and a successful one, but when it ran out of steam, its successor Islamicate empires (e.g. Ottoman, Mughal, Safavid) all failed to evolve any tradition of science or industry that matched what was happening within sight of them in Europe. They also failed to develop any political institutions beyond the old models of kings and emperors that they had taken from Near-Eastern and Central Asian precedents centuries earlier. Ghazali probably did not cause this failure to accelerate, but his efforts did not contribute to any significant advance in these areas either. Scholars will eventually bring to light (i.e. bring into the modern scholarly mainstream) whatever lies lost in Arabic and Persian manuscripts, and that will be a good thing. But the explanation of, say, Syria’s relative lack of modern scientific, industrial and political development may not lie hidden in those debates in any meaningful way.
Something like that. This is just off the top of my head, and I look forward to enlightening comments, arguments and questions. My line of thought may become clearer (or even change) as the argument progresses.
I would add (to avoid unnecessary diversions) that by “advanced” or “underdeveloped” I mostly mean scientifically, industrially and politically developed. No moral judgment is implied.
Btw, YouTube is still banned in Pakistan and these guys are not happy. Give them a hand.
Paul Anthony Smith. Untitled #2. 2012.
Picotage on pigment print.
Does Beer Cause Cancer?
by Carol A. Westbrook
I have been taken to task by several of my readers for promoting beer drinking. "How can you, a cancer doctor, advocate drinking beer," I was asked, "when it is KNOWN to cause cancer?" I realized that it was time to set the facts straight. Is moderate beer drinking good for your health, as I have always maintained, or does it cause cancer?
Recently there has been some discussion in the popular press about studies showing a possible link between alcohol and cancer. As a matter of fact, reports linking foods to cancer causation (or prevention) are relatively common. I generally ignore these press releases because they generate a lot of hype but are usually based on single studies that, on follow-up, turn out to have flaws or cannot be confirmed; the negative follow-up study rarely receives any publicity. Moreover, there are often other studies published at other times showing completely contradictory results; for example, that red wine both prevents and causes cancer.
Furthermore, there is a great deal of self-righteousness about certain foods, and this attitude can cloud objectivity and lead to bias in interpreting the results; often these feelings have strong political implications as well. Some politically charged dietary issues include: vegetarianism; genetically modified crops; artificial sweeteners; sugared soft drinks. Alcohol fits right into this category--remember, we are the country that adopted prohibition for 13 years. There is no doubt the United States has significant public health issues related to alcohol use, including alcohol-related auto accidents, underage drinking, and alcoholism, and the consequent problems of unemployment, cirrhosis of the liver, brain and neurologic problems, and fetal alcohol syndrome. Wouldn't it be great if the government could mandate a label on every beer can stating, "consumption of alcohol can cause cancer and should be avoided"? Wouldn't that be a wonderful "I told you so!" for the alcohol nay-sayers?
Before going further, I will acknowledge that there are alcohol-related cancers. As a specialist I am well aware that cancers of the head and neck area, the larynx (voice box) and the esophagus are frequently seen in heavy drinkers, almost always in association with cigarette smoking. Liver cancer is seen primarily in people with cirrhosis--also a result of heavy drinking. In both instances, the more alcohol that is consumed, the greater the risk of developing one of these cancers--and I have rarely seen these cancers in non-smokers or non-drinkers. But assuming that my readers are not alcoholics, the question that they are really asking is whether or not they are going to get cancer from low to moderate beer drinking.
So what, then, are the facts? Does beer cause cancer? This is a much more difficult question to answer than most people realize, and can easily be the subject of years of study for a PhD dissertation (and probably has been). Researchers will be quick to admit how difficult it is to do scientifically rigorous studies on the health effects of individual dietary components. You can't just take a group of thirty year-olds, split them into two groups, give beer to one group and make the other abstain, watch them for 20 years and see who gets more cancer. So we have to rely on population studies, estimating alcohol consumption based on purchasing statistics, self-reporting of drinking (which is often unreliable), surveys, and death certificates for cancer. Incidentally, beer is not considered separately from other alcoholic beverages in any of these studies.
For example, an interesting study by Holahan and colleagues, published in 2010 in the journal Alcoholism: Clinical and Experimental Research, followed 1,824 middle-aged men and women (ages 55–65) over 20 years and found that moderate drinkers lived longer than did both heavy drinkers and teetotalers. In particular, their data suggested that non-drinkers had a 50% higher death rate than moderate drinkers (1 - 2 drinks per day). Others have criticized this conclusion because the no-alcohol group included people who didn't drink because they were already at a higher risk of death for other reasons such as serious medical conditions, previous cancers, or they were former alcoholics on the wagon. The authors claimed that they controlled for these variables but that is almost impossible to do, and that is one of the reasons that it is difficult to get accurate data from this kind of study. So it may be hard to conclude that moderate drinking significantly increases your lifespan, but it certainly doesn't shorten it.
What about cancer? The publication that started the most recent hype about cancer and alcohol appeared in the April 2013 issue of The American Journal of Public Health, and was written by David Nelson, MD, MPH, and his colleagues. They combined information from others' publications with epidemiologic surveys to determine the number of cancer deaths attributable to alcohol, as well as the types of cancer that were associated. They found that about 3% of all cancer deaths in the US were related to alcohol consumption, with most of it seen in the head and neck, larynx and esophagus. There was still a slightly increased risk at low alcohol use (greater than 0 but less than 1 1/2 drinks per day), which led them to conclude, "regular alcohol use at low consumption levels is also associated with increased cancer risk." I looked at their study, and couldn't argue with their conclusion, but I don't think the risk is significant enough to recommend becoming a teetotaler.
Neither does the US National Cancer Institute (NCI). Heavy drinking aside, the NCI does not recommend that people discontinue low or moderate drinking, since it would have only a minimal impact on their chance of developing cancer. Some caution is indicated for specific cancers: There is a 1.5 times increased risk of breast cancer in women who drink more than 3 drinks per day compared to non-drinkers; similarly, the risk of colon cancer is 1.5 times increased in people who have more than 3.5 drinks per day. Incidentally, 3.5 drinks per day is still well above the level that is considered "low to moderate" drinking, which is usually defined as no more than 1 drink per day for a woman, 2 per day for a man. That being said, lowering your alcohol consumption deserves some consideration if you are anxious to change your odds for these two specific cancers. Nonetheless, the risks from alcohol are still low when compared to the impact of other lifestyle factors. Addressing these factors will have a much greater impact than giving up that beer or wine with your dinner: don't smoke; lose weight if you are overweight; exercise; eat a high-fiber diet; increase your vegetable and fruit consumption, while limiting red meat; avoid processed food; follow up on your doctor's cancer screening recommendations for colonoscopy, pap smears, mammography and prostate screening.
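To make the arithmetic behind such relative-risk figures concrete, here is a minimal sketch in Python (my own illustration, not taken from the NCI or the studies cited above); the baseline risk it uses is a purely hypothetical number chosen for illustration.

# Convert a relative risk into an absolute risk increase.
# The 5% baseline below is an illustrative assumption, not a figure from this article.
def absolute_risk_increase(baseline_risk, relative_risk):
    """Absolute increase in risk implied by a relative risk on a given baseline."""
    return baseline_risk * (relative_risk - 1.0)

# Hypothetical example: on a 5% baseline risk, the 1.5x relative risk reported
# for heavy drinking (more than 3.5 drinks per day, well above "moderate")
# adds about 2.5 percentage points of absolute risk.
print(f"{absolute_risk_increase(0.05, 1.5):.1%}")  # prints 2.5%

The point of the sketch is simply that a "1.5 times" headline translates into a modest absolute change when the baseline risk is modest, and that the figure applies to drinking levels well beyond what the article calls low to moderate.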
Do the positive effects of drinking beer outweigh the negative effects? Moderate alcohol consumption has been reported to lower the risks of heart disease, stroke, hypertension and Type 2 diabetes; for men, it may lower the risk of kidney stones and of prostate cancer; it may improve bone health; and it may prevent decline in brain function. Alcohol consumption actually lowers the risk of kidney cancer and of lymphoma. Overall, in most studies, the positive effect was very small, and the beneficial effects of beer are seen only with moderate drinking, not in those who drink to excess. And of course, there are social and psychological benefits to sharing a beer with friends.
So, is beer drinking good for you? Or bad? Are you healthier if you drink, say, a beer or two per day, or are you worse off? My conclusion as a medical specialist is: it depends. On average, for the general population, drinking a little alcohol is better than abstaining completely. But on an individual basis, it depends on your current health conditions and your risk factors. Are you more likely to die of heart disease or of colon cancer? And if you want to cut down your risk of either condition you must be sure to avoid cigarettes, keep your weight down, exercise, eat a high-fiber diet that is low in red meat and processed foods, and increase your fruit and vegetable intake. The impact of alcohol consumption is likely to be small compared to these lifestyle changes.
What does the Beer Doctor do? As a cancer specialist, I follow all of the above recommendations on exercise, weight and diet. I continue to enjoy my beer, but I keep my consumption within the low to moderate range, that is, on average about 0.5 to 1 drink per day, and not every day. For me, the health benefits of drinking beer outweigh the negatives. To your health!
© 2014, Carol Westbrook. This article is from my forthcoming book, To Your Health! The opinions expressed here are my own, and do not reflect those of my employer, Geisinger Health Systems.
The Spirit of the Beehive
by Lisa Lieberman
"Trauma's never overcome," Melvin Jules Bukiet asserted in The American Scholar. Redemptive works of literary fiction—or "Brooklyn Books of Wonder" (most of the authors he excoriated in the essay, including Alice Sebold, Jonathan Safran Foer, Myla Goldberg, Nicole Krauss, and Dave Eggers, hailed from the borough)—provide mock encounters with enormity. Wooly mysticism blunts the force of death and violence, expunging cruelty and indifference. Legitimate feelings of grief and rage are muffled in sentimentality. But the comfort these healing narratives offer is not only superficial. It is a travesty:
Your father is dead, or your mother, and so are most of the Jews of Europe, and the World Trade Center's gone, and racism prevails, and sex murders occur. What is, is. The real is the true, and anything that suggests otherwise, no matter how artfully constructed, is a violation of human experience.
Bukiet, the son of Holocaust survivors, preferred the open wound. He and other members of the so-called second generation were marked by their parents' ordeal. The ghetto, the lager, the devastating losses of an older generation who could not communicate their experiences: no matter how hard survivors' children tried to imagine life on the other side of the barbed wire, their efforts fell short of the truth. Their reconstructions, in the telling phrase of another second-generation author, Henri Raczymow, were shot through with holes. Why bring closure to suffering that has no end?
Other twentieth-century catastrophes have marked the descendants of those who lived through them, the Spanish Civil War (1936-39) especially. Outside of Spain, idealized treatments are abundant, Hemingway's For Whom the Bell Tolls and Malraux's L'Espoir upstaging Orwell's hard-nosed account, Homage to Catalonia. But within Spain itself, artistic renderings of the event have been more nuanced, resisting the trivializing sentimentality of the Brooklyn-Books-of-Wonder approach until fairly recently (Belle Epoque, which won the Oscar for best foreign language film in 1994, comes to mind).
The Spirit of the Beehive (1973) was the first film to address the trauma of the Spanish Civil War, which it presented obliquely, through the eyes of a child. In part this was necessary to evade the censors; the dictator Francisco Franco still ruled Spain when Victor Erice made the film. But the story, which Erice wrote as well as directed, was intensely personal. "Erice and co-screenwriter Ángel Fernández Santos based the script on their own memories," Paul Julian Smith revealed in his Criterion essay on the film, "recreating school anatomy lessons, the discovery of poisonous mushrooms, and the ghoulish games of childhood. It is no accident that the film is set in 1940, the year of Erice's own birth."
Erice belongs to the second generation of Spanish Civil War survivors. Too young to have experienced the worst of the conflict, when Loyalist defenders of the democratically elected Republic battled with Nationalist rebels led by Generalísimo Franco while the German Luftwaffe bombed civilians in Republican strongholds, he grew up in a society where memory was suppressed. The victors imposed their version of history, presenting the war as a quasi-religious crusade, a reassertion of traditional Spanish values against the godless agenda of the "Reds." Supporters of the Republic who were not killed, imprisoned, or forced into exile after the defeat were silenced. Mourning was done in private, betrayal being commonplace, particularly in small villages such as the one in which Spirit of the Beehive is set. "Only by acting as if everything is perfectly normal can you show that you are above suspicion," said one of the subjects interviewed by Ronald Fraser in his oral history of the war and its aftermath, Blood of Spain.
Sometimes, to remain silent is to lie, since silence can be interpreted as assent.
-Miguel de Unamuno
Ana, the young heroine of Erice's film, lives in a remote village in Old Castile, a region conquered early in the war by Franco's forces. We are made aware that both of her parents supported the Republic. Ana's father Fernando is an old-style rationalist who dabbles in natural science, studying the behavior of his bees and jotting down his philosophical reflections in a little notebook, working late into the night on his esoteric research. Teresa, Ana's mother, spends her days alone, writing to an ex-lover who is now a refugee in France, most likely because he belonged to one of the Republican militias. "Perhaps our ability to really feel life has vanished along with the rest," she laments in a letter.
Certainly the household is emotionally cauterized. Fernando and Teresa seem detached from Ana and her older sister Isabel and barely speak to one another; in one scene, we see Teresa pretending to be asleep when Fernando finally comes to bed. The camerawork reinforces the isolation. Never do we see the family together in one establishing shot, not even when they are all at the breakfast table. The characters speak in low voices, when they speak at all. "The Spirit of the Beehive" is one of the most silent films I've ever seen. The atmosphere is one of bereavement, the adults walking around as if their skin hurts, the way you feel when you realize the world no longer holds the person you loved.
Ana comes to enact her parents' grief—and perhaps the grief of Spain itself. A wounded soldier she encounters in an abandoned barn near the family's house becomes a friendly spirit in her imagination. One day he disappears. We know that he was shot by the local police, but Ana is told nothing, and so she invents an answer to the mystery. She retreats into silence now, neither eating or sleeping. The doctor is called, another crypto-Republican it would appear as Teresa calls him by his Christian name, Miguel. But other than reminding her of the sacrifices that they must all make, Miguel offers only the weakest of reassurances. "Teresa, the important thing is that your daughter's alive. She's alive." Ana has had a shock, he says, and will heal in time. Thirty-three years later, Erice seemed to be saying, Spain is still waiting.
Monday, February 17, 2014
Do our moral beliefs need to be consistent?
by Emrys Westacott
We generally think it desirable for our moral and political opinions to be logically consistent. We view inconsistency as a failing. Why?
I'm not talking here about consistency between a person's beliefs and their actions. Failing to practice what we preach is the sort of inconsistency we call hypocrisy, and it's easy to see why we disapprove of that. Hypocrites are less trustworthy and predictable than people whose actions accord with their stated opinions. Nor am I talking about remaining consistent over time, never altering or abandoning one's earlier convictions. That's the sort of "foolish consistency" that Emerson ridiculed as "the hobgoblin of little minds."
I'm talking about logical consistency between beliefs. Why do we care about this? Exposing inconsistency is a standard move in many an ethical argument. Take the debate about abortion, for instance. A standard argument for viewing abortion as immoral is that it is essentially no different from infanticide, which, as it is the premeditated killing of an innocent human being, meets the definition of murder. Note the form of the argument: if you think murder is wrong, then, to be consistent, you should think infanticide is wrong, in which case, to be consistent, you should think that abortion is wrong. On the other side, a common justification for permitting abortion rests on the idea that a woman has property rights over her own body. Essentially, the argument runs: if you agree that a woman's body is her own property, then consistency requires you to accept that she can do with it as she pleases, and if you agree that the fetus is a part of her body, then consistency requires you to accept that she can do as she pleases with the fetus.
Or take Peter Singer's well-known argument for why all of us who can afford to should give more to help the needy. We all agree it would be wrong to not save someone from drowning just because we didn't want to ruin our shoes. Well, Singer argues, if we think that, then we should also accept that we have a duty to save human lives if we can do so by making similar minor sacrifices–and many of us can do this by donating our disposable income to charity. Whether these lives are close by or far away is irrelevant. Again, the underlying strategy here is an appeal to consistency. If you think x, then you ought, for the sake of consistency, to think y. Many other arguments about moral matters take this form.
But why do we value consistency? In science and in our everyday beliefs about the way things are, there is a straightforward answer. Inconsistent beliefs, taken together, form a contradiction: a proposition that has the form "p and not p." We assume that reality does not contain contradictions (an assumption first articulated by Parmenides). So we infer that an inconsistent set of beliefs cannot possibly be an accurate description of the way things are.
It may be a useful working map or model; we may not know at present how to improve on it; but so long as there are inconsistencies, we assume it cannot be the final account, the definitive truth. (Note: I am not assuming here that a realist view of truth is philosophically satisfactory, only that it is the one most of us work with most of the time. I am also aware that in logic contradictory statements are false by definition, but that isn't why most people think they are false; the logician's definition reflects ordinary thinking, not vice-versa.)
When it comes to moral beliefs, however, this reason for valuing consistency doesn't apply. Well, to be fair, it does apply if you believe in an objective moral world order that makes our moral beliefs true or false. But in an increasingly secular age that position is unfashionable, to put it mildly, and hard to support without some dubious religious or metaphysical props.
So, if we don't think that our moral judgements describe an objective moral reality, are there other reasons for wanting them to be consistent? I can think of two. One is that we want to be rational, and consistency is the hallmark of rationality. This is a weak argument, though. Really, it just tries to bolster the idea that we should care about consistency by associating it with another term we automatically approve of—rationality. Besides, a concern for strict logical consistency is not the only way of being rational. There is also a pragmatic kind of rationality, where we are concerned with finding the best means to secure a desired end. And sometimes, often perhaps, this sort of rationality should take precedence. For instance, it may be hard to justify theoretically why we allow athletes to buy some sorts of advantages, like enhanced vision through eye surgery, but not others such as higher stamina levels through blood transfusions. But instead of looking for some subtle difference that supposedly justifies differential treatment, we should perhaps simply acknowledge that the reasons are essentially pragmatic, having to do with the likely consequences for certain sports if these procedures are allowed or banned.
The other reason for caring about consistency is more significant. Being consistent in the way we treat people is at the heart of our notions of impartiality and fairness. Thus, a leading argument for legalizing same-sex marriage is that if heterosexuals can marry the person they love, then gays should be free to as well. This is clearly an appeal to consistency. What is at stake here, though, is not primarily the theoretical coherence of our beliefs but the practical discrimination experienced by certain members of society.
Now some people will claim that any inconsistency in the way we treat people is inherently unfair and hence wrong. To a large extent this is indeed our default way of thinking in everyday life. Nevertheless, there is a difference between thinking of consistency as intrinsically good and thinking of it as pragmatically good most of the time. And I would argue that the latter point of view is preferable. On this view, we should generally approve of treating people consistently because a world in which this notion of fairness prevails will be a world in which people live more happily; there is likely to be less conflict, greater social mobility, more efficient use of labor, less resentment, less selfishness, greater social cohesion, and so on. The alternative to thinking this way is to declare that consistency is simply good in itself and for its own sake. But then there is no satisfactory answer to the critic who asks why we would want to fetishize an abstract virtue–consistency–possibly even at the expense of concrete human well-being.
Of course, on the face of it we often seem to condone inconsistent treatment for pragmatic reasons. For instance, we discriminate against epileptics by not allowing them to operate locomotives. But one could argue that every supposedly pragmatic justification for such discrimination can be recast as an explanation as to why we are not really being inconsistent. Such explanations point to what we see as a relevant difference in the discriminatees compared to the rest of the population. We say, for instance, that the vulnerability of epileptics to seizures increases the risk of an accident to an unacceptable level. Thus, the policy does not treat people inconsistently at all; that would only be true if there were no relevant differences between the people receiving differential treatment.
That is a pretty good argument, and it covers many cases. But I don't think it covers all. There are times when our attempts to justify an apparent inconsistency in our judgements or practices look suspiciously like disingenuous rationalizations of ways of thinking or acting that we have become comfortable with, or that it would be problematic to abandon, or that we think carry significant practical benefits, or to which we can't think of a decent alternative. Consider, for instance, the laws regarding alcohol compared to other recreational drugs, or US foreign policy regarding various undemocratic countries, or court rulings that rest on dubious appeals to precedent or questionable interpretations of the constitution.
The considerations touched on above perhaps help to explain why we value logical consistency in our moral judgements. We value it in other areas as a necessary condition of truth; we see it as central to our notion of rationality; and in policy and practice it is closely linked to our conception of fairness. So maybe our commitment to consistency gets carried over from these spheres into the realm of moral theory and reflection. It then becomes something like a conversational convention, a rule that everyone recognizes and that helps give form to the discourse. Such conventions can certainly have instrumental value: dialogue must have forms just as games need rules. So a commitment to logical consistency may well be worth upholding much of the time; but it can still make sense sometimes to grant other considerations a higher value.
The analogy between conversational conventions and the rules of a game can illuminate this point. In show jumping, knocking over a fence detracts from the rider's overall score, but the fault needn't be decisive; it can be compensated for by other factors such as speed. Thinking along these lines, inconsistency might be viewed as a flaw in a position or a theory, but not necessarily as constituting a decisive objection. Logical consistency can be viewed as a desideratum in our moral beliefs, but it may sometimes be trumped by other values such as the practical benefits of enacting a certain policy.
An advantage of not insisting on logical consistency as a sine qua non of any acceptable moral position or ethical theory is that we will be more likely to give due weight to pragmatic considerations. Consider the abortion debate again. Much ink has been spilled constructing sophisticated arguments to show that allowing abortion is or is not consistent with certain other precepts we adhere to. But an alternative approach is to cut the Gordian knot by not worrying about that and simply asking instead: what are the likely consequences of allowing or prohibiting abortion? If prohibiting it is likely to produce more dangerous backstreet abortions, more unwanted children growing up in deprived circumstances, more single mothers mired in poverty, and so on, then these are reasons for ensuring that it be legal and available. If, on the other hand, its ready availability tends to put a heavy economic burden on the health care system, diminish our respect for human life, and foster less careful attitudes to sex which in turn increases the incidence of sexually transmitted diseases, then these are reasons for banning abortion.
To sum up: I'm not saying that we should stop caring at all about logical consistency in working out our positions on moral issues. But I think it is interesting and reasonable to ask why we do care. Moral philosophers, as theoreticians, naturally tend to focus on the theoretical coherence of statements and their implications. But morality isn't mathematics. It is perfectly rational, in one sense of the term, to prioritize practical consequences over logical consistency. Once we accept this, we will perhaps be more comfortable taking a pragmatic approach to moral problems, and feel free to do so without dissimulation or apology.
Bob Tomlinson. Dance II: The Gaze of Death. 2010.
Oil and collage on canvas.
Mullah Omar Carved in Stone
by Maniza Naqvi
Yes. Why not? You paying? Well then--- make it a double. So let me return the favor by telling you a story---something I've been holding on to for a while. Well—who knows, anyway--- I think it's interesting. Maybe you've already heard this but here goes---You know----I said make it a double. So yeah---I heard it shortly after the war in Afghanistan took up where it had left off with a bit of change your partner--do si do. This guy that I met—where---yeah—where else--- So this guy back in October 2001 told me about how he and the UN delegation he was with had met Mullah Omar---yeah Mullah Omar---about eight months earlier back in the bitterly cold winter of February 2001 in Kandahar.
Anyway this is what happened—it's a hoot! You're sure to get a good laugh: It was the dead of winter, people were dying of cold and hunger and there was a boycott on Afghanistan by the world because of the Taliban Government. A UN delegation was meeting Mullah Omar in his tent and he asked them for help: "My people are starving says Mullah Omar-They are freezing and there is a famine---please help us." And then there's a back and forth—the Head of the UN delegation trying to explain the problems in being able to do this. And then finally the Head of the Delegation takes out the UN Charter—a thick document and says—"This is for us, like our Bible, we follow rules—Our charter on Human Rights"----and then he says—"You know— We do precisely what is written here we follow these rules. You know? How you say---this is our Koran. It is, for us, how you say--carved in stone". Mullah Omar is staring at the guy ---bug eyed—with that one eye of his. The Head of the Delegation is thumping the document "You understand---Absolutely, certainly, but we cannot assist any country that violates our charter. And your Government has, isn't it so, violated, our charter of human rights---girls' education, war and so forth." There is silence.
Again Mullah Omar repeats his plea----"There is a famine, my people are dying." The Head of the Delegation shrugs, sticks out his lower lip—thinks and replies "You are responsible for that. Are you not? Your actions are not our responsibility—you can change. Our hands are tied by you. We cannot do anything about that--Well I can't do anything about that—that is for sure---our rules, bible you know—Koran---as you know because of the attitude here, there is an economic embargo on Afghanistan. You must change your behavior." There is silence. Mullah Omar stares at him.
Then perhaps the Head of the Delegation is beginning to feel uncomfortable under this steady gaze of a cold eye or perhaps he is freezing in this frigid temperature ---even though he is bundled up in arctic gear or perhaps he just wants to be loved and wants to make the eye staring at him become less loathing towards him—whatever his reasons he blurts out "But here is what I can do. We have funds for which the agenda is cultural heritage preservation and we can work within this agenda---yes—we would be happy to help in the restoration of the Buddhas at Bamiyan."
The Head of delegation is now beaming. Can you imagine! Here is Mullah Omar—an illiterate, desperate, war weary, lame, one eyed soldier, literally wearing just one of those cotton shalwar kameez and those leather sandals with truck tire rubbers for soles and a chaddar draped over him—in this freezing temperature--and—here he is surrounded by all these warmly dressed well fed people, it is in the deep winter—bitterly cold-- his country is in the midst of famine and epidemic and war—he is literally begging these guys to help his country, his people because of the cold and the famine. He has just told this UN delegation that people are starving---The delegation knows that they are starving—they are the fuckin' UN after all and Mullah Omar has been told that he is meeting with the UN delegation—he probably missed the part that he was meeting with A delegation from the UN—so that's what they are—to him "THE UN"----he doesn't know which part of the UN this group of dummies belong to or that there are parts of the UN----he just knows that they are the UN delegation—he has no idea how many agencies there are in the UN and what they do---he just knows that they are representatives of the UN who are supposed to help people like his. He tells them how many people are starving, how many have died and where. The Head of Delegation has interrupted him impatiently— The Head of the Delegation says that while they cannot help his starving people with food and medicine as a matter of principle carved in stone—because this you see is the Koran of the UN so he waves it again at Mullah Omar "This is the "Koran of the UN"---so while they cannot help with food and medicine they can however, restore what is carved in stone.
You ask what happened next. Well what happened next, was that this guy who told me this story said that in that moment he felt fear rising inside of himself as the heat rose in his face and his heart was beating so fast he thought for sure he was going to pass out—he woulda shat in his pants but he could feel it dryin' up inside him—as he stared at Mullah Omar----well--Mullah Omar seemed to have turned to stone--- There's complete silence. He stands up—slowly--- stands up—because of his gamey leg—you know. Then he limps to the opening of the tent, he stands there looking out then he turns and walks back into the tent goes to the corner where there is a pail. He picks up the pail and overturns it over his head. Splash of freezing cold water-----he dumps it over his head. The delegation get a few drops too. He stands there drenched in cold water. The delegation sits there and stares at him. Mullah Omar, shivering and dripping with freezing cold water says—"I needed to do that---to cool my head---or else…Now---.Get out. Get out now."
The Head of Delegation is clearly shaken but manages to keep a look of disdain on his features. "Uncultured, intolerable, violent man", he says after they hurriedly leave the tent and are safely back in their four wheel drive.
A few days later there comes the news that the Buddhas in Bamiyan have been destroyed by the Taliban. Yeah---blown up after being there for what? Over a thousand years? More? Almost as if the guy might be saying---"Thanks for pointing those out to me as a big deal for you. So let me fix that. Agenda cleared. No more Buddhas to restore—now do I have your attention? Now will you do your job and get relief supplies of food to my people? Now will you remove the embargo on food supplies to my country?"
Pakistani? The guy—who told me this---yeah why? Oh! Yeah of course! Yeah, total liars. Never met a single one who shared the same point of view with another one or with anyone for that matter! Just don't get it— Unbelievable, right? I know! Jesus H. Christ, the stories those guys spin! Totally untrustworthy. What are you gonna do! Anyway that was then. And this is now. It is what it is. Who gives a shit. Shit happens. Yeah why not, another--—the same.
Excerpt from a novel.
Other writings by Maniza Naqvi here.
wings of desire (a medieval physiology)
by Leanne Ogasawara
The ancients told us that it was the heart that mattered. Thinking too much, they warned, will only give you a headache. And this fact was backed up by the finest research of Medieval physicians and theologians. Aristotelian philosophy had imparted to the Medievals that the heart was hot and dry-- oftentimes burning hot; and that intelligence, emotion, and passion all originated there, in that heat. Ibn Arabi further refined this by adding that, if the mind thinks (考), then the heart imagines (思・想).
We find ourselves back in a time when heart and imagination took center stage--and love was thought to move the stars and the heavens above.
Not surprisingly, it was a time when lovesickness was the most common form of heart disease. A veiled glimpse ignites a fire causing two people to circle each other as Lover desires Beloved; each seeking to know the Other. This all being something which took place within the topography of the heart itself. It was something imagined-- over weeks upon weeks; months upon months. Imagined as "spirits take bodies and bodies become spirits"-- something so powerful that European physicians of the Middle Ages declared that, if one wasn't careful, lovesickness could lead a person into madness (see Averroes' study of love as affliction, for example).
13th and 14th century scholars talked about something known to them as visual species. These were defined as "objects" (propagated through the air) that mediated between the physical and imaginal world by imprinting themselves on a person's imagination from a distance. It was the image as held in the body that caused the troublesome-- and sometimes dangerous-- overheating of the blood around the heart.
To pursue the logic of all this: because the "visual species" that caused desire and lovesickness were things originating outside the person, it followed that magic could also generate new species. And this is why love charms, amulets and the use of magical incantations in matters of love became fairly common. For romantic success, men were encouraged to write "pax + pix + abyra + syth + samasic" on a hazel stick and hit a woman with it three times on the head, then quickly kiss her; while Tristan and Iseult were undone by a love potion which they accidentally drank. People reported that, like that other magical incantation-- abracadabra-- just whispering the words "I love you" out loud had the power to move mountains. It even had the power to cure gout--or maybe that was abracadabra?
I told him that I wanted to employ a famous Medieval love charm at our wedding. Not surprisingly, the Eucharistic host played many different roles in various Medieval love potions. But my favorite was perhaps the simplest-- a lady would slip the host under her tongue. Then, kissing her beloved with the host still in her mouth, she would ensure that he would love her forever.
He dismissed my plans, saying, he already knew he would love me forever so I didn't need to go through so much trouble.
I wonder, though, is the heart that knowable?
For in addition to its wondrous heat, the Medieval heart was also believed to be extremely porous--something which inextricably connected inner with outer (and outer and inner). Heather Webb explains it thus:
It was thought that the air we breathed mixed with the blood in our hearts to form generative spirits that, sent back into the world, connected us to one another and to the greater circulating universe. According to the Aristotelian and Aquinian theory, the heart should imperfectly mimic the circulations of the heavens.
Through our breath, then, and our persistently beating hearts, we are connected to the world around us. One breathes in landscape, atmosphere and social context and breathes out heart and poetry. The heart doesn't so much have a mind --or reasons-- of its own as it is simply busy breathing. Vulnerable to magical charms and prone to dangerously overheating, who can predict what the heart will say next? It is surely the ficklest of organs, the one that takes us most by surprise.
Out to buy figs, young Dante was in a rush to arrive at the market. As he walked, he happily imagined the pleasures that awaited him: the smell of Sicilian lemons; of sweet sugar from Egypt; of perfumed vinegars and syrups made from grapes. Cherries, endives, oranges, spicy sausage; dried fruit, dried fish, mint, orange blossoms and roses. Just thinking about the perfume of these things caused him to quicken his pace. And, turning the corner to the lively street that followed the great River Arno, he spotted her.
It had been precisely nine years since the first time he had caught a glimpse of her, at a time when they had both still been only children. But he recognized her in an instant.
Then, as she approached him-- not surprisingly, given the story-- their eyes locked. And as a thousand birds took flight in his heart, the man stood there barely breathing. Time stopped. Breath quickened and he let out an amorous sigh (溜息→感嘆).
Too quickly, however, Beatrice's friend, with whom she had been walking arm-in-arm, urged her to continue walking, and so Beatrice-- with just the most perfunctory greetings to her beloved-- walked away from him. Dante, in a dream later that night, saw the God of Love, who commanded him: Vide cor tuum: Look upon your heart.
And so, in this way, they circled each other. Despite the fact that they would never meet again, they would revolve around each other-- like planets circling the sun; like dervishes circling God. In love, there is a great desire to be known by the Beloved, just as the Beloved seeks to know the soul that he feels belongs to him. As the poets insist, true love is a great mirror reflecting one's soul while at the same time reflecting the soul of the Beloved in unio mystica.
All of this being part of a playful game of hide-and-seek that God plays with Himself, say the Hindu and Sufi mystics. Perhaps no one in history worked out a theory for this like Ibn Arabi. Indeed, his theories on divine love made him famous throughout the Islamic world. Ibn Arabi was born 100 years before Dante, and scholars posit that it was his poetry and philosophy which would inspire, illuminate and be reborn within Dante's poetry of Beatrice. Beatrice's Body. Beatrice's soul.
Stranger things have happened, I am sure you will agree. Ibn Arabi's theories of love became a dialectic of love, characterized by angels' wings and planets circling the sun (both images rich in Dante's poetry). Desire transfigured by imagination-- imagination, says Arabi, being the function of the heart. In Plato's Phaedrus, Socrates explains that the symptoms of lovesickness-- the sweating, fever, and physical overheating-- are signs of the growth of the soul's wings, taking the soul back to its original winged state. And this is expressed in poetry. In the form of heavenly angels, their wings beating in desire, they leave feathers behind in bed. For C.
Monday, February 10, 2014
The Crisis in American Colleges: Rising Tuition and Labor Degradation
by Akim Reinhardt
American colleges have undergone substantial changes during the last three decades. Rising tuition costs, which have far outpaced the rate of inflation, are nearly universal. Other changes that have affected most schools include a tremendous growth in non-instructional areas and a serious re-shuffling of labor. Many schools have added layers of administration; seen their rosters of administrators substantially enlarged; and spent millions of dollars on non-instructional construction such as recreation centers, student unions, and administrative buildings. Meanwhile, the ranks of college teachers have shifted from tenured and tenure track (TTT) professors to predominantly contingent faculty (i.e., non-tenure track), which falls into two broad groups: part-time labor (adjuncts and graduate students) and full-time labor (mostly lecturers and visiting faculty).
There are, of course, many causes and explanations for these wide ranging changes, as well as varying degrees of change among America's hundreds of colleges. For example, private colleges are generally less dependent on public largess, though many of them do in fact receive public subsidies from federal, state, and even local governments. Meanwhile, the public colleges that rely more heavily on public spending face different circumstances depending on which of the fifty states they are part of, all of which have different budgets and policies for supporting higher education. In some states there has been extreme volatility in funding while others have been more stable, though in almost all states the share of public college budgets supplied by state governments has declined. This has led most public schools not only to raise tuition rates, but also to seek substantial revenue from fund raising, which runs the gamut from alumni contributions, to naming rights of campus buildings, to exclusive contracts with junk food vendors. For example, many schools have cut deals with either Pepsi Co. or Coca Cola, Inc., granting one or the other member of this corporate duopoly exclusive rights to sell beverages on their campus.
Amid all these changes, most TTT college professors are alarmed at the decline of their cohort, less for selfish reasons (they are secure, or will be once they earn tenure) than because it represents a degradation of higher education. The creation of a two-tiered labor system, with a minority of TTT professors and a majority of contingent faculty, is patently exploitative and an affront to the values of higher education.
Contingent faculty receive lower pay and benefits (if any) and have no job security, generally working on short contracts. Some are only 10 months long. Most are 4. For schools that run on a trimester schedule, adjunct contracts may be only 3 months long. Indeed, labor conditions are so insecure that many schools will not even admit to firing contingent faculty except for the rare instances that take place mid-semester; when adjuncts, lecturers, and the like are effectively fired, colleges often insist otherwise, claiming that these temp workers are simply not having their expired contracts renewed.
When it comes to justifying the degraded working conditions of contingent faculty, college administrations have a choice. On the one hand, they could say it is because contingent faculty are inferior teachers. They could disingenuously claim that this is a class of worker not good enough at their craft to earn a TTT position.
Of course colleges do not actually say this, for many reasons, not the least of which is that it's not true. But beyond that, while such a claim would rationalize labor exploitation, at least to some people, it would greatly upset parents and students who are paying the ever higher rates of tuition, the faculty themselves, and institutions like U.S. News and World Report that issue the college rankings many administrators are so keen on.
The other "justification" for exploiting workers is more honest: that colleges are simply taking advantage of a labor glut and engaging in crass exploitation to produce a two-tiered system in which the lower tier of workers gets less compensation and no job security despite being, on the whole, every bit as good at their job as the higher tier of workers.
The problem with admitting to this is that, aside from being a very distasteful thing to say publicly, colleges don't want to draw attention to the fact they are actually complicit in creating the labor glut they exploit. All colleges are guilty, for they have eliminated positions in the higher tier and created more and more jobs in the lower tier. Beyond that, however, research universities have an extra layer of complicity. These are the institutions that overproduce Ph.D. students. So as all colleges reduce the supply of top tier jobs and increase the supply of bottom tier jobs, research schools also increase the demand for top tier jobs by cranking out too many doctoral students.
Predictably, American colleges simply try to avoid publicly talking about the reasons for their two-tiered labor system.
Perhaps the bitterest of all ironies in this situation is that the top tier workers at research universities, TTT professors, directly profit from this overproduction of Ph.D.s. At such institutions, graduate students handle a large chunk of TTT professors' grading, thereby facilitating the professors' focus on research. In addition, the growing class of contingent faculty, which is partly the result of the research schools overproducing Ph.D.s, teaches a majority of courses at American colleges, thereby subsidizing the small teaching loads of TTT professors at research schools, who typically teach four courses per year.
At colleges that emphasize teaching instead of research, TTT professors teach more, handle most if not all of their own grading, and produce few if any doctoral students, so their role in the exploitation of the lower tier of labor is lesser. However, lesser is not the same as none. Some of them still funnel undergraduates to doctoral programs, thereby contributing to the glut. And all TTT professors comprise an upper tier of labor that, generally speaking and as a group, has either done little to change matters, or has been ineffective when they have tried, even to the degree they are capable.
This situation has contributed to an increasingly hostile climate between faculty and administration. Many TTT faculty blame college administrations for the two-tiered labor system, the loss of TTT jobs, and the exploitation of an expanding lower tier of labor. A common accusation is that college administrations have grown bloated at the expense of faculty.
Meanwhile, administrators often play into stereotypes about professors by accusing them of being out of touch with "the real world," in this case the realities of modern college budgets. For example, they point out that some administrative growth is the result of federal regulations, not discretionary spending. Since nearly all colleges receive federal funds, they are all subject to federal regulations tied to those monies. These regulations often demand increased administrative expenditures to ensure compliance. Furthermore, some administrative costs, like IT support, didn't exist thirty years ago.
And for their part, contingent faculty are often embittered by the entire situation, quick to blame all sides, and not unreasonably so.
The reality is that many phases of society, in and out of academia, are to blame for the current problems in higher education.
Many state governments, alarmed at higher education's share of discretionary spending, have slashed funding, thereby forcing schools to make tough choices that are almost guaranteed to produce negative results.
Many administrations have gone on to make dubious choices about resource allocations, in part because of budget cuts, but also in part of misplaced priorities, such as a growing corporate culture that stubbornly insists non-profit schools should be run like for-profit businesses, and students should be treated like "customers."
Many TTT professors are either safely ensconced in their tenured positions or working towards such, while doing nothing to challenge the two tiered labor system they profit from.
Many parents and students, as consumers, have rolled over. Convinced of the supposed necessity of a college education, some have let themselves be bullied into putting up with spiraling costs and a problematic teaching system. Others have taken a lackadaisical approach to examining a product they will spend tens, if not hundreds, of thousands of dollars on, and are largely unaware of the situation. Indeed, so long as parents and students put up with all of this, nothing is likely to change anytime soon.
And even contingent faculty themselves must, at a certain point, take a modicum of responsibility for their situation. They are clearly the most victimized class in this scenario, frequently having accumulated five- or even six-figure debt as graduate students while spending years earning poverty wages so they could train for this career. However, the current labor market conditions took a noticeable turn for the worse nearly six years ago, and had in any case been bad since the 1970s. It is important to enter doctoral programs with a realistic understanding of one's chances of obtaining a TTT job, to give serious thought to how long one is willing to remain contingent faculty while pursuing a TTT job, and to consider what the other reasonable options are if that fails.
Again, not to blame the victim. Excepting the small fraction of adjuncts, such as retirees, who really do want occasional part time work, the vast majority of contingent faculty have every reason to feel angry and aggrieved; they are in fact being grossly exploited. But the complexities of their career choices at various stages are one part of a large, complicated equation.
Large and complicated enough that the 2,500 words or so in a Monday 3QD article can hardly scratch the surface under most circumstances. However, a report released just this month by the American Institutes for Research (AIR) offers an opportunity to concisely shed some light.
Entitled Labor Intensive or Labor Expensive? Changing Staffing and Compensation Patterns in Higher Education, the report examines spending at American colleges during the years 2000-2012. The findings are illuminating.
Between 2000 and 2012, the total higher education workforce actually grew by 28 percent, despite the Great Recession of 2008-present. This is of course noticeably different from many other industries, which contracted during those years. Why the overall growth? The short answer is: Millennials. Simply put, there's an uptick in student populations. Another echo of the post-war Baby Boom, there is a bulge in the college-age demographic. Thus, new workers were hired at colleges, even during the Great Recession, in an attempt (and not always a successful one) to keep pace with rising rates of student enrollment.
However, most employee growth at colleges during the last twelve years has not been in the form of teachers. It is in the form of non-instructional employees, who comprise a clear majority of the college labor force. Among them, the report defines two classes: salaried administrators, whose numbers have grown substantially, and support staff such as secretaries and maintenance workers, whose numbers have actually declined.
The report notes that, "administrators have assumed a much larger presence on college campuses than ever before."
As mentioned above, administrations have several sound explanations for at least some of their growth. Other justifications are perhaps less sound. One example is to classify much of what they do under a vague banner of their own invention: "student services."
After all, who can argue with the importance of serving students?
But make no mistake: so-called student services are still non-instructional. What are they exactly? A far ranging spectrum too broad to comprehensively list here.
Some of it is vital, like dormitories and food halls. Some of it is non-essential to education but still of tremendous worth, such as counseling and health centers. And some of it, like campus festivals and concerts, leads many faculty to roll their eyes and suspect that "student services" has become a cover for unworthy administrative expenses. Regardless, none of it is designed for the classroom. And from 2002-2010, as paradoxical as it sounds, spending on instruction declined while spending on student services rose.
When peeling back the layers of administrative growth, the AIR report finds that it is a two-pronged development. First comes the hiring of managerial administrators. These tend to get the most attention, as they include high powered administrators who typically boast shiny titles, expansive offices, and six-figure salaries bolstered by expense accounts. Shiny titles and big salaries tend to ruffle feathers at colleges for several reasons:
-They seem incongruous at a non-profit institution.
-They seem incongruous within the culture of academia specifically, which prides itself on a professorial workforce that has chosen "a calling" instead of prioritizing material gains (Whether or not this is true is debatable, of course, but it's a common perception within academic culture.).
-The professorial workforce is composed of intellectuals trained to analyze and critique, and its members are often quick to question the legitimacy of shiny titles and big salaries/expense accounts for non-instructional administrators.
But what the report makes clear is that the expansion of top rank administrators and their sometimes shockingly large salaries is not, in and of itself, what drives up costs. After all, a quarter-million dollar salary to one individual really is a drop in the bucket of most college budgets.
Rather, it's that most of these new high-ranking administrators require, or at least demand, a sizable staff of administrators working under them. After all, they're managers. The crux of their job is overseeing the work of others. A manager without a workforce is like a god without parishioners: that fancy title just ain't worth much.
According to the AIR report, the rising cost of administrative salaries is due to the expanding ranks of subordinate administrators (not to be confused with staff such as secretaries), as much as the new top level executive administrators whom they answer to.
How this plays out across academia depends to a large degree on the type of school being examined. At private research colleges, which tend to have more money than public institutions, the rate of administrative growth was higher even than the rate of student growth, sometimes substantially so. At public colleges, however, administrative growth merely kept pace with student growth for the most part.
Regardless, as college administrations have grown, the ratio of faculty and staff to administrators has declined at all types of institutions. And the decline has not been small. At most colleges, the faculty/staff : administrator ratio plummeted by about 40% from 1990-2012.
As the report bluntly states: "On most college campuses, the majority of workers are not teaching students."
Taking this all into account, it should come as no surprise that at colleges and universities across America, the ranks of full time teachers (both TTT professors and full time contingent faculty) have dropped. Together they comprise between one-fifth and one-quarter of the total workforce.
In other words, about 4/5 of employees at American colleges are either not teachers or only teach part time. And of the remaining fifth, a growing share of full time teachers are contingent faculty with less pay and fewer benefits.
Adjunct faculty, in the form of part-time workers and graduate assistants, constitute more than one-half of the teaching workforce at most colleges. New full time faculty are still being hired, but at a rate that usually lags behind student enrollments. Furthermore, many new full time hires are contingent faculty who receive lower pay and benefits, if any.
At wealthy schools, adjunct faculty allow for more sections to be taught, thereby helping maintain the lower teaching loads of TTT faculty. However, at schools with fewer resources, typically the public institutions that ostensibly focus on faculty teaching instead of faculty research, the growing ranks of adjuncts are actually replacing full time positions, whether contingent full timers or even TTT professors.
Community colleges are the schools most dependent on adjuncts, essentially riding a flotilla of part time teachers. This underscores the fact that state colleges which emphasize teaching often get the least amount of public resources, while schools that emphasize research almost always take home the lion's share. Clearly an argument can be made that this is a disservice to students.
Add it all up and the report finds that the numbers of administrators and adjuncts, as a percentage of the workforce, grew at every type of college in America. Meanwhile, the percentage of staff employees has decreased at every type of college, and the percentage of TTT faculty has decreased at every type except for public research universities (1% increase) and private research universities (3% increase). In no category of school do TTT faculty comprise so much as a quarter of the workforce.
Given the growth of contingent, and especially part time faculty, and thus the downward shift in faculty compensation, it should not be surprising that faculty salaries do not explain skyrocketing college tuition. In fact, like most American workers, full time faculty have suffered stagnant wages. From 2002 (six years before the Great Recession) through 2010, college expenditures on salary for full-time faculty were essentially flat.
But if faculty expenditures don't explain rising costs, neither is the answer simply "administration." The growth in administration, both in terms of salaries and expenditures on non-instructional projects such as construction, has indeed been a factor, the AIR report reveals. But it is not the only one.
In addition to the increase in administrative salaries, one factor is the increased cost of salaried benefits for all administrators, faculty, and staff who receive benefits. For example, America's stunningly inefficient healthcare system, which gobbles up about 18% of Gross Domestic Product, continues to claim a growing share of university expenditures.
Again, it is very important to note that many adjunct faculty, and even many contingent full time faculty, receive few if any of these benefits. This is at once a clear indication of labor exploitation, and also an indictment of larger social and economic problems in the United States.
Another factor explaining rising costs is the aforementioned decline in government subsidies, which affects all schools, but particularly public schools. The downward trend is longstanding, but the Great Recession put sharp spurs to it.
While a small handful of schools such as Harvard and Yale boast endowments larger than the GDP of many nations, and enjoy the fiscal flexibility that comes with them, the vast, vast majority of American colleges do not have a rainy day fund anywhere near large enough to help them cope with these complex changes.
As a result, colleges have scrambled to adjust. The stunning rise in tuition rates over the last three decades is one result, and so too is the equally shocking rise in exploited contingent faculty. And while it seems likely that many colleges have made some poor choices while trying to adjust, larger structural forces have greatly limited their options.
Akim Reinhardt's website is ThePublicProfessor.com
Ink on board.
The stories of our lives
by Sarah Firisen
Odds are you’re on Facebook. After all, 1 in 6 people on the planet are on it, why should you be the exception? I think in my immediate circle of friends and family I know one person who isn’t on it at all. I know, I know, we overshare these days; we have no privacy; we allow ourselves to be marketing pawns for Facebook and their minions; we’ve welcomed Big Brother into our lives with open arms. But nevertheless, for most of us, it seems to be the case that if a fabulous meal is eaten and no photos of it are posted on Facebook for our friends and family to salivate over, then the meal never really happened.
So I'm going to go ahead with the assumption that almost everyone reading this, except my one friend, has been exposed to a huge number of their Facebook "friends" posting "Here's my Facebook movie. Find yours at…" I have to admit that when I first started seeing these pop up in my newsfeed I was skeptical and resisted for a day or so. Then I watched a couple and they were cute and short. Even some of my more intellectually serious friends, including a certain 3QD editor, couldn't resist. Finally I gave in to temptation and had Facebook create mine. I was pleasantly surprised; it did a good job of choosing the highlights that I might have selected. There were a lot of photos of my kids, a few of the dog and one of me showing off a new haircut. And it ended with my New Year's greeting to friends and family making a vague reference to the hard year I'd had because of my divorce and thanking people for their support. Seemed like a fitting end point.
For hundreds of years people kept diaries and they wrote letters. In these ways, they narrated their own lives and allowed others to follow them, in the case of letters, as their correspondents, and in both cases as a record of the stories of their lives for future generations. Most of us don't keep diaries; if anything, we write blogs. And certainly, if the slow death of the US Postal Service is anything to go by, we don't write letters. Increasingly, we don't even send personal emails. These days, I don't even keep in regular contact with many people via non-Facebook email. Some people I text with. A few I use BBM or WhatsApp with. Instead, the story of my life is documented on Facebook and, for other people, on Twitter, Instagram et al.
We don't have hard copy photo albums anymore; instead we have our Facebook albums, and photos are uploaded to them almost in real time from our mobile phones. And while there are disadvantages to this – what on earth will we all do if Facebook ever suffers a catastrophic data loss? – there are clearly pros to both the immediacy of the photo sharing and the free agency that allows the viewer to choose whether and when to look at my holiday snaps rather than politely faking interest as they feel compelled to flick through 100 photos of me with bad sunburn.
I know there are people who will bemoan this general turn of events. They'll talk about what's lost: the reduced attention span of generations of people who can't pay attention to more than a 140-character tweet. But actually, as you watch these Facebook movies you come to realize what a very rich trail we're leaving. Photos, videos, news articles we've loved, Daily Show episodes we applaud, causes we've been inspired by, relationships coming and going and sometimes coming again. And much more.
I spent this week at a conference about data visualization and storytelling. There were some very interesting presentations from sources as varied as the World Bank, ESPN, and Marvel. Something that came through in most of the presentations is that data visualization (and this can be anything from a map to a chart to a movie, all allowing varying degrees of interaction) and storytelling should be mutually reinforcing; a storyteller shouldn't impose patterns on the data, instead allowing the natural patterns that emerge to tell their stories. And in return, the data visualization should bring stories to life, delivering a multimedia experience to the audience. In the case of one of my favorite presenters, Santiago Ortiz, sometimes he literally takes a story – the Iliad for example – and will visually represent it as a data stream.
And this all brought me back to thinking about Facebook and other social media, but particularly Facebook because this does seem to be one of the most narrative driven of the social media tools. Before social media, unless we were letter writers, most of us had very sporadic, piecemeal and one-dimensional interactions with a lot of the people in our lives; we made phone calls, tried to remember birthdays, sent the odd Wish you were here holiday postcard. And when we finally managed to meet up, we had to spend a long time pulling together the threads of what the other person had been doing with their lives. Trying to piece together a coherent narrative of where their relationships, moves, graduations and job changes fit in a linear timeline – “So you dumped John just after you finished grad school because you were planning to move to New York and he wanted to stay in California?” This wasn’t necessarily a bad thing; many of us now have the experience of catching up in-person only to find out that we don’t have that much to talk about because, through Facebook, our friends and family already know what we’ve been up to all year. This is the downside: the overexposure of our stories. But there is also an upside; I’ve found that it can make catching up with people a far richer experience because, to some extent, they feel as if they’ve lived some of my narrative with me in real time. Because they’ve watched my life unfolding, I think they feel far more vested in my stories than they used to when hearing them at a distance of months and years when even my memories of the events were sketchy. Following me on Facebook gives visual context to the stories we then go on to tell in-person.
And like any good data visualization, Facebook is interactive; you can always go to someone’s page and scroll back over their timeline to see what they’ve been doing over months and years. Unlike those rather tedious Christmas letters that go on for pages and pages about everything the family has done in the preceding twelve months, a timeline allows you to select for yourself the parts of the narrative that most interest you and to dive deeper as particular photo albums, videos and blog postings catch your eye.
Whatever else it is, Facebook does seem to be a great data visualization of our personal narratives. As someone who teaches other people how to be better storytellers, I find this an interesting phenomenon. When I teach a storytelling class, people will often say to me, “But I don’t have any stories.” My answer: “Of course you have stories; everyone has stories. Everyone has interesting, funny, scary, profound things happen to them all the time. All you have to do is select the right story for the right occasion, shape it a little and present it with impact.” Perhaps the first part of that process should be going back through your Facebook timeline to remind yourself of the stories of your life.
Eating animals and personal guilt: the individualization of responsibility for factory farming
by Grace Boey
Last year, I decided to stop eating animal products and meat, apart from some seafood. I’d felt uncomfortable about the facts of factory farming for quite some time, and finally resolved to take the plunge. Having enjoyed meat, eggs and dairy all my life, I initially found it a challenge adjusting to my new diet – while cutting meat was surprisingly easy, I mourned the loss of scrambled eggs for breakfast for at least a month. I still sometimes find it hard to resist certain desserts made with eggs, butter and milk. It helps, though, that I carry pictures like these around with me on my phone. The bright yellow hue of a lemon tart that comes from egg yolks doesn’t seem so appealing anymore after I call up pictures of filthy hens squished together in cages. I slip up sometimes, but on the whole, I’ve been pretty good about sticking to my diet.
The tougher challenge for me was, and still is, talking to others about my abstention. Ideally, I’d proudly announce my decision, and freely share my reasons for making it. But in reality, I avoid talking about it as much as possible. I almost never proactively tell anyone about my diet, and I don’t mention it unless circumstances make it necessary. There are few things that make me more physically uncomfortable than having my personal business suddenly put on the spot. I’m also hopeless at expressing myself verbally. And bringing up animal abstention tends to open up a conversational can of worms of the most squirmish kind. Okay, so I’m making my abstention public here – but it’s not too often I get to kick off a conversation by explaining myself in a couple thousand words, in my medium of choice, before the other party gets to respond.
Before I began abstaining from animals, I’d heard about the legendary amount of snark and hostility experienced by others who did. I’ve since gotten my fair share of this ugliness, which usually goes like this: someone will wrangle information about my diet out of me, and then proceed, entirely unsolicited, to say something f!#@ing rude about it. I’ve gradually learned to let idiotic comments like – for every steak you don’t eat, I’m going to eat three – slide. I’m still wondering how to respond to those who make a show of delightedly biting into chicken wings, right after I sincerely express my sadness over animals being tortured in factory farms.
Such crassness is tiresome, but what genuinely troubles me more are remarks like – I know I should do it, but it tastes so good – or – I know it’s wrong, but what difference are you making by stopping? Is it just something that helps you sleep better at night? To me, they represent a bunch of concerns that I myself have mixed feelings about. And the fact that people feel the need to proactively justify themselves says this – that they, in some way, perceive that I’m judging them. Once, as I was munching on vegan cookies in the subway, my boyfriend confessed he sometimes worried I thought he was a bad person for eating meat.
The truth is: judgment doesn’t even enter my mind until others bring it up themselves. I know how hard it is to give up food you’ve grown up eating all your life. I also know how pointless it seems, in the grand scheme of things, for one person out of billions to abstain from meat, milk and eggs. I also get how hard it is not to eat the stuff, when it's everywhere in innocent-looking forms. I know that we’re rarely physically confronted with the reality of how it’s all produced. I may be a philosophy grad student, but I'm also, y'know, a human being who was still eating meat not too long ago. I’m wholly capable of stepping outside my little box of moral reasoning to identify with the phenomenology driving human behaviour.
Unfortunately, I’ve yet to figure out how to verbally communicate this in a way that doesn’t make me sound like a condescending prick (for some reason, I suspect “I don’t judge you, because I’m stepping outside my philosophical box of moral reasoning to empathize with humanity” just doesn’t cut it). It also raises the question: why exactly am I abstaining, and why the hell do I carry pictures of battery cages on my phone? For the moment, let’s put that question on hold; I’ll answer it later. I’m interested in first examining this troubling phenomenon of guilt, pointlessness and self-conscious indifference that so many of us have experienced at least once in our lives. How many of us have been traumatized after watching Earthlings or a PETA video, and resolved to go vegetarian … only to give in to a cheeseburger one day later? We know there’s something terribly wrong about the way we treat and eat animals, but for various reasons it seems too hard and too pointless to give it up. How has it come to this, and how should we view abstinence in light of it?
Problems with individualizing responsibility
It’s helpful first to take a brief step back from the issue of animal rights and look to a parallel discussion in another field: environmental degradation. Earth’s resources are quickly being depleted, and we as citizens of the planet are constantly told that we can save the environment with our individual choices. We should recycle, ‘buy green’, eat organic, and ride bikes or walk instead of driving. Singapore, my home country, isn’t too big on recycling. But since I moved to New York, where recycling is mandatory, I’ve become fairly dutiful about sorting out my trash. I felt bad last week when I accidentally condemned a glass jar of pasta sauce to the landfill – every little bit counts, and this little bit could have been recycled or re-used (to store my homemade vegan lemon curd, perhaps).
In an influential 2001 paper, “Individualization: Plant a Tree, Buy a Bike, Save the World?”, Michael Maniates, a professor of political science and environmental science, refers to this mindset as the “individualization of responsibility”:
[The individualization of responsibility] understands environmental degradation as the product of individual shortcomings … best countered by action that is staunchly individual and typically consumer-based … It embraces the notion that knotty issues of consumption, consumerism, power and responsibility can be resolved neatly and cleanly through enlightened, uncoordinated consumer choice ... Call this response the individualization of responsibility.
The problem with this kind of thinking, though, is that it diverts our attention from the major structural and institutional factors at play:
When responsibility for environmental problems is individualized, there is little room to ponder institutions, the nature and exercise of political power, or ways of collectively changing the distribution of power and influence in society – to, in other words, “think institutionally.” … Individualization, by implying that any action beyond the private and the consumptive is irrelevant, insulates people from the empowering experiences and political lessons of collective struggle for social change and reinforces corrosive myths about the difficulties of public life. By legitimating notions of consumer sovereignty and a self-balancing and autonomous market, it also diverts attention from political arenas that matter.
Individualization, ironically, affects the individual in a way that is ultimately disempowering:
In the end, individualizing responsibility does not work – you can’t plant a tree to save the world – and as citizens and consumers slowly come to discover this fact their cynicism about social change will only grow: “you mean after fifteen years of washing out these crummy jars and recycling them, environmental problems are still getting worse – geesh, what’s the use?”
Well, I don’t feel so bad about trashing that glass jar now.
Why doesn’t individualizing responsibility work well to curb environmental degradation? There are several reasons. First, as Maniates argues, the very structure of society makes it difficult – if not impossible – for the individual to make any real choice; anything we do against the backdrop of this industrial, consumerist society is going to have some kind of deleterious effect on the environment. Second, given the billions of people on Earth, any change that any one individual makes is simply going to be negligible in the grand scheme of things. Third, the fragmentation of agency involved in pollution and resource depletion leads to the classic Prisoner’s Dilemma. Stephen Gardiner, a philosophy professor at the University of Washington, sums this up:
Suppose that a number of distinct agents are trying to decide whether or not to engage in a polluting activity, and that their situation is characterized by the following two claims:
(PD1) It is collectively rational to cooperate and restrict overall pollution: each agent prefers the outcome produced by everyone restricting their individual pollution over the outcome produced by no one doing so.
(PD2) It is individually rational not to restrict one’s own pollution: when each agent has the power to decide whether or not she will restrict her pollution, each (rationally) prefers not to do so, whatever the others do.
Agents in such a situation find themselves in a paradoxical position. On the one hand, given (PD1), they understand that it would be better for everyone if every agent cooperated; but, on the other hand, given (PD2), they also know that they should all choose to defect.
The three reasons above make it hard, pointless and even irrational (!) for individuals to try to save the environment by changing their personal patterns of consumption. It’s no wonder, then, that the problem of environmental degradation looms larger than ever.
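To make the Prisoner’s Dilemma structure concrete, here is a minimal sketch of my own, not Gardiner’s or Maniates’, with entirely made-up payoff numbers for a two-party pollution game. The specific values are arbitrary; what matters is that they reproduce the structure in which polluting dominates for each agent individually (PD2) even though mutual restraint is better for both (PD1).

```python
# Toy two-agent pollution game with hypothetical payoffs (higher = better).
# Each agent independently chooses to "restrict" or "pollute".
payoffs = {
    # (agent_A_choice, agent_B_choice): (payoff_A, payoff_B)
    ("restrict", "restrict"): (3, 3),   # mutual restraint
    ("restrict", "pollute"):  (0, 5),
    ("pollute",  "restrict"): (5, 0),
    ("pollute",  "pollute"):  (1, 1),   # mutual pollution
}

def best_reply(other_choice):
    """Agent A's payoff-maximizing choice against a fixed choice by agent B."""
    return max(("restrict", "pollute"),
               key=lambda mine: payoffs[(mine, other_choice)][0])

# PD2: whatever the other agent does, polluting is individually better.
for other in ("restrict", "pollute"):
    print(other, "->", best_reply(other))   # prints "pollute" both times

# PD1: yet both agents prefer (restrict, restrict) to (pollute, pollute).
print(payoffs[("restrict", "restrict")], ">", payoffs[("pollute", "pollute")])
```

Running the sketch shows each agent defecting regardless of the other’s choice, even though both would have done better under mutual restraint; that is the paradox Gardiner describes.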
It’s easy to see how some of this relates to animal rights and dietary abstinence. We are often told that if we want horrific animal suffering to stop, it is our responsibility to vote with our dollars and simply stop consuming what the factory farms produce. As consumers, we are responsible for generating the demand for the products we buy. This boycott is the primary course of action that Peter Singer advocates in his animal rights classic, Animal Liberation:
Becoming a vegetarian is a highly practical and effective step one can take toward ending both the killing of nonhuman animals and the infliction of suffering upon them. … So long as people are prepared to buy the products of intensive farming, the usual forms of protest and political action will never bring about a major reform. … The people who profit by exploiting large numbers of animals do not need our approval. They need our money. The purchase of the corpses of the animals they rear is the main support the factory farmers ask from the public. … This is not to say that the normal channels of protest and political action are useless and should be abandoned. On the contrary, they are a necessary part of the overall struggle for effective change in the treatment of animals. But in themselves, these methods are not enough.
Animal Liberation will always have a spot on my bookshelf; the facts and arguments it presents were instrumental in getting me to care more about animal welfare. But I cannot help thinking that, in advocating individualistic boycott as the primary remedy to his readers, Singer is being much too naïve.
Assume, as Singer the utilitarian does, that what we’re concerned with is reducing or eliminating the total amount of animal suffering in factory farms. If this is the case, then the last two of the three reasons why individualization fails for environmental degradation apply here as well. To recap: given the billions of people on Earth, any change that any one individual makes is going to be negligible in the grand scheme of things; and the fragmentation of agency involved leads to the Prisoner’s Dilemma. From our individualistic and utilitarian perspective, it seems pointless and even irrational for any one person who likes eating meat to start abstaining from animals.
There are those who will challenge the legitimacy of applying the negligibility premise to animal abstention. Surely the individual can make some impact, however small. Again, here is Singer:
I believe we do achieve something by our individual acts, even if the boycott as a whole should not succeed. George Bernard Shaw once said that he would be followed to his grave by numerous sheep, cattle, pigs, chickens, and a whole shoal of fish, all grateful at having been spared from slaughter because of his vegetarian diet. Although we cannot identify any individual animals whom we have benefitted by becoming a vegetarian, we can assume that our diet, together with that of the many others who are already avoiding meat, will have some impact on the number of animals raised in factory farms and slaughtered for food. This assumption is reasonable because the number of animals raised and slaughtered depends on the profitability of this process, and this profit depends in part on the demand for the product. The smaller the demand, the lower the price and the lower the profit. The lower the profit, the fewer the animals that will be raised and slaughtered.
There are some problems with Singer’s attempt at persuasion here. For sure, I think that sparing the suffering of just one animal is something worth doing. But, realistically, it’s doubtful that even this could be achieved by one person acting within the system – the food market is simply too enormous to sense the choice of one single consumer. Sadly, we as individuals actually can’t assume that our diet will have any impact at all on the number of animals raised in factory farms. In all likelihood, the massive number of animals processed by factory farms over the course of my lifetime will remain exactly the same, whether I boycott or not. Singer appeals to the impact of an aggregate of boycotts, but what he hasn’t done is to give a very convincing reason for any individual reader to give up meat independently of anyone else. For one person looking to make a real difference, abstinence still seems hopeless. Maybe a group of one hundred is enough to send a minute signal to the market, but one person is probably not. Independent abstinence would also be more compelling for the individual utilitarian if we, and not the market, were directly responsible for rearing and killing our own meat; unfortunately, this doesn’t go through against the backdrop of factory farms.
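The negligibility worry can be put in toy terms. The sketch below is my own illustration, not Singer’s argument and not a model of any real market: it simply assumes, hypothetically, that production adjusts only in coarse increments (one fewer batch of animals per some threshold of lost customers). Under that assumption a single abstainer typically changes nothing, while a coordinated hundred might.

```python
# Purely hypothetical step-response model of a producer's output.
# Assume a producer cuts one batch of 1,000 animals only when demand
# falls by at least 100 customers' worth of purchases (made-up numbers).

BATCH_SIZE = 1000               # animals per production batch (illustrative)
CUSTOMERS_PER_BATCH_CUT = 100   # lost customers needed before a batch is cut (illustrative)

def animals_raised(baseline_batches, abstainers):
    """Animals raised after some customers stop buying, under the step-response assumption."""
    batches_cut = abstainers // CUSTOMERS_PER_BATCH_CUT
    return max(baseline_batches - batches_cut, 0) * BATCH_SIZE

baseline = animals_raised(50, 0)
print(baseline - animals_raised(50, 1))     # 0: one abstainer moves nothing
print(baseline - animals_raised(50, 100))   # 1000: a coordinated hundred cuts a batch
```

None of this settles whether real supply chains behave this way; it only makes explicit the assumption behind the claim that the market is too enormous to sense one consumer.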
I’ve argued so far that, for any one person concerned about reducing the amount of animal suffering in factory farms, abstaining from animals understandably seems pointless and even irrational. This assumes, however, that the individual will keep on eating meat as much as possible – regardless of its source – as long as she doesn’t see much practical, utilitarian point in stopping. This is getting us closer to one of the reasons why I chose to stop eating meat. But before I directly address that, I want to raise the question: what gives us this intense desire to consume meat? Why is it so hard to stop? How is it that we can watch gross PETA videos but go back to eating hamburgers within the hour? What gives rise to the sentiment – I know I should stop eating meat, but I just can't – that I’ve heard from at least two thirds of my friends?
I’ve gotten flak from a lot of vegans for saying this, but I will stick to my guns: I think there is some legitimacy to this excuse. Sure, we should definitely feel awful about our food when we are confronted with the reality of how it’s farmed; I would judge someone who denied that there was any problem at all with this. But the reality is, the system acts in a way that aggressively shields us from the facts of farming and simultaneously stokes our desire for animal products at every turn. Meat and animal products, in their final form, are everywhere. From infanthood, we are raised eating meat, eggs and milk; we are taken to McDonalds as a treat, and normalized to the sight of raw, red beef in supermarkets. Meat, milk and eggs are everywhere. Ice cream is everywhere. Burgers are everywhere. But what goes on inside factory farms is not. Factory farms don’t have glass walls, and those who run them are doing everything they can to keep us out. I don’t even know where the one nearest to my house is. For most people, there is a deep psychological disconnect between the meat they see in supermarkets, and the live animals they see in pictures and documentaries. Whether or not this is a theoretically legitimate moral excuse, in reality, this makes it very, very hard for most to bridge the gap between morals and actions.
Whose fault is this? I’m not really sure how to answer that question, nor am I sure that it’s a terribly productive one to ask. I think it’s safe to say that no one wants billions of animals to live and die in such suffering, and almost everyone thinks it’s morally reprehensible. Just try polling people right after they watch a documentary that accurately depicts factory-farm conditions. Animal consumption didn’t historically start out this way; it’s just that industrialization and capitalism eventually gave rise to a system in which producing the most food at the lowest cost depends on it. Judge all you want, but in some sense, consumers and producers (and middlemen too) are stuck in a deadlock, where it’s both incredibly difficult and not in our interests to step back from a system we nonetheless know to be based on dirty fundamentals. No wonder we’ve become so cynical.
I think it’s clear by now that the only way out of this is reform on an institutional and legal level. The way out of the Prisoner’s Dilemma is to change the rules of the game, so that collectively rational action converges with individually rational action. And the way out of systemic blindness is to push for constant education and exposure – even to mandate it. It’s going to be hard, but nothing will change until this happens. One possible path out of society’s indifference towards what goes on in factory farms is to require graphic and factual information about our food wherever it is bought and consumed; although this is a controversial method, I myself am all for it. We must ensure that the walls of these farms remain effectively transparent, to everyone, for good. Once this is done, people will no longer be able to ignore the facts of intensive farming. I believe it will then be clear to everyone that what goes on in there cannot be allowed to continue. And the natural way out of exploiting the welfare of animals for the sake of cost and efficiency is to pass and enforce laws that make it unattractive or, ideally, illegal for producers to do so. By this I mean laws that really matter – not the current ridiculous cage-free labelling farce that does nothing but make dirty money off well-intentioned customers. At the very least, guarantee farm animals the same legal rights we already accord to pets. That’s right – in many jurisdictions, including the USA, anti-cruelty laws that cover "companion animals" don’t apply to farm animals. If this drives up prices or even collapses the market, so be it; it will only motivate society to seriously search for other ways to feed everyone cheaply and nutritiously.
Peter Singer may be pessimistic about the institutional tactic for eliminating animal suffering, but I am even more pessimistic about the prospects of persuading individual consumers to give up animals. It is not realistic to think we will achieve an overhaul of tastes this way: all this will do is saddle society with a massive guilt trip, while achieving little else towards our collective cause. The current context means that it is not at all productive, nor entirely fair, to charge individual consumers with the suffering of animals. Stop calling people monsters for eating steak. I think we can do better. Understand the forces that push them to participate in the system, and work strategically from there.
Individual abstinence and integrity
Now it’s time for me to fulfill my promise and answer the question: why did I stop eating most animals? This may put a big ethical crosshair on my forehead, but I’m actually not entirely certain that, as one small cog in a very large wheel, I’m morally required to stop eating factory-farmed meat. Though I’m fairly sure I am, I believe there’s still a small chance I’m acting beyond what moral duty calls for. I’m just being intellectually honest here, and I’ve yet to work the uncertainty out. I’m also thinking of cutting out the seafood, and pondering my boyfriend’s offer to carefully shoot a duck for me in the wild and roast it. I haven’t made up my mind, and all these things raise a whole bunch of other questions.
So while moral reasoning (and hedging) brought me to my decision, what gave me the final push was really this: I just felt sad every time I looked down at my plate. I started feeling so sad because work had me meticulously poring over animal rights books and academic papers for months; so much of this information had seeped into my brain that I could no longer live with eating the stuff I knew to be produced this way. That’s why giving up meat was so easy for me: where I used to see a pork chop on a plate, I now see a tail-less, crusty-eyed, psychotic sow. And it’s important to me that I keep this aversion going: I have no wish to remain in a system I don’t believe in, even if leaving it makes no utilitarian impact.
For me, this is what animal abstinence in a broken system boils down to: integrity. Emotion and some cognition may have been the spark, but the desire for integrity is what keeps this flame going. And this is why I keep pictures of battery cages on my phone, even though I don’t spontaneously visualize miserable hens when peeking into patisseries. When I opt out, I act in accordance with my own values about how the world should be – which is to say, free of this system. Whether or not the ‘virtue’ of such integrity amounts to a strict moral requirement, it’s certainly important to my own project of self-integration and identity that I pursue it.
Abstention from the system is a legitimate and desirable reflection of my own values, and it is largely for this reason that I would encourage others to join me. It's none of my business if they don't, but I’m happy for those who do: I think it helps them achieve a more cohesive identity, helps them live better with themselves, and helps them break out of their indifference in general to animal welfare. For me, abstention is something that keeps me motivated towards my goal of being in a position to influence this cause in a substantial way. Perhaps there’s nothing theoretically incoherent about someone who lobbies against the system, while continuing to eat factory-farmed meat. Good for anyone who can do that, I suppose. But I can't, and I suspect the same is true of most others. In reality, continuing to participate in a system we disapprove of in our heads tends to push it further towards the back of our minds.
An Astonishing Tale about the Origins of Golf: A True Story
by Bill Benzon
Tiger Woods is only the most recent in a long line of fine black golfers. In saying that I refer to players other than the moderns such as Charles Sifford, Jim Thorpe, Jim Dent, Lee Elder, Calvin Peete, and Renee Powell. Truth be told, the tradition of sepia swing masters started in ancient Nubia, where the game was invented. In that company Woods would be no more than a middling player.
By today's standards Tiger is ferociously talented, though his game has lost a bit of its luster of late. But those Kushite drivers of ancient Nubia were giants the like of which haven't been seen in thousands of years.
Their stories, like so many stories, have been suppressed by the Europeans. Fortunately many of those stories have been collected by The Order of Mystic Jewels for the Propagation of Grace, Right Living, and Saturday Night through Historic Intervention by Any Means Necessary. The Jewels are dedicated to preserving the ancient stories and to intervening in history in ways variously clever and indirect. They are the chief source of that version of Afrocentric thinking known as Jivometric Drummology:
Jivometric Drummology: A philosophical system grounded in African and African-American musical practice. "Drummology" indicates that the governing logos is that of the drum, of rhythm, of hands and sticks coaxing sound from skin, of people joining together, each playing a simple rhythm, with the many simple rhythms melting into a single stream of infinite diversity. "Jivometric" characterizes the way language rolls off the tongue and tickles the ear; its meaning is secondary to its sound. Jivometrics is thus a principle of grace. A treatise may have drummological ideas, but if the language lacks grace, then the treatise is not jivometric -- jiveturkey is all too often the appropriate term. In the most profound works of this school jivometrics and drummology are joined through agape.
The following story is based on information from the recently discovered papers of Cassius Photon Gaillard, aka Slim. He was a Mystic Jewel who had studied Jivometrics with the masters.
The Origin of Golf and the Lights in the Sky
Golf was invented by the ancient Nubians. Most of the details have been lost, but the general shape and thrust of the story has been preserved.
It began in the reign of Pharaoh Ramses Golfotep X of the 25th or Kushite Dynasty. One day Rams, as he was known to friends and family, was hanging out with some of his friends in the gazebo at his summer palace. As usual they were playing bid whist and sipping Mount Gay and Coke, with a twist of lemon. As so often happens they got to talkin' trash about their wives and girl friends. Rams talked about how he particularly liked going into a special glade with his wife Cleo and a boom box loaded with some righteous jams. The best time was early evening when things were cooling down and the sun lit the sky with orange fire. They'd meander down this long narrow opening among the palms and get to a secluded spot ringed with patches of sand. The ground was firm and the grass kept closely cropped so they could dance freely. Inevitably the dancing would lead to a little fooling around, and that little fooling around generally led to more and before you know it Cleo was baking Ramses' sweet potato in her oven. That was some fine sweet potato pie they'd cook up. Yes indeed.
So, Rams and his friends kept talking and drinking and talking and drinking and before you knew it they found themselves nose to the ground chasing a lemon around. It was a lot of fun and, wouldn't you know, a week later Mount Gay and whist once again placed them on the grass chasing lemons. And so it went week after week. In the course of about a year or so they'd managed to invent golf, or something much like it.
For the ancient game was a bit different from the modern one. In the first place, the course was laid out in three sets of nine holes, for a total of 27, rather than the modern 18. 27 is the third power of 3, and thus brings the basic design into agreement with the ternary basis of the underlying rhythms in ancient Egyptian music. Furthermore, the holes were somewhat longer than those in the modern game. Par-three holes typically varied between 200 and 250 meters, while par fives were between 500 and 600 meters. Par for a single round was 108.
However, the most significant differences between the ancient and modern games involved the finely-tuned geometric judgment and kinematic finesse of greens play. The ancients mastered putting so quickly that the rules had to be changed to make putting even more difficult. The rules committee, officially called the Jive Adjudicators and Soul Satisficers (JASS), required that all putts be executed while the player stood on only one leg, with alternation from one leg to the other required from one green to the next. When that became too easy, the JASSers decided that all putts less than a meter long were to be executed from a headstand position. Further, at least half the putts had to be made single-handed, though the player was free to choose which hand to use. The concentration and balance thus required taxed the ability of even those magnificent athletes. In time, as knowledge of the game made its way to India, meandering from village to village, town to town, and city to city, the system of putting postures became separated from golf itself and evolved into the spiritual practice of Hatha Yoga. But that's another story, to be told at another time, in another place.
Clearly this new game required new gardens expressly designed to meet its demands in a surprising but felicitous way. And so Rams issued a royal decree and it was built: the Imperial Xanadu Golforama. It had sparkling brooks and fragrant cedars among the ancient forests. The clubhouse was one of the wonders of the ancient world. The icy wine cellars had rare vintages from all over and the domed ballroom featured the finest music for your dancing pleasure: Jelly Roll Liszt and his Red Hot Peppers, Ammon Bechet and the Swinging Scarabs, the Nomo Percussion Ensemble, featuring Zutty Pozo Addy, Beyonce James and her Swing Sisters Seven, Rudy Zerafino's Copascetic Syncopators, Duke Prez Earl and HonoriffX, the Dawg Cheops Orchestra, Ziggy ben Jammin and The Great Sphinx Riddle Masters, and, greatest of all, the Mighty Royal Roof Raisers, led by Daniel Louis Satchotep II, also known as King Toot.
And toot he did. When he was on his form couldn't nobody keep from dancing and dancing. On a bad night he was better than most, popping those high C's like they were birds lined up on a telephone wire. But on a good night, the Tootman was the baaadest horn player in the world, and then some! He could bring sight to the deaf, sound to the blind, make a lame man talk, and inspire the dumb to walk. He was mean!
But he couldn't bring light to the night. And that was a problem. You see, in those ancient days there weren't any stars or planets. Not even the moon. Just the sun and the earth. So it was real dark at night, darker than you can possibly imagine. Of course, they had torches and whale oil lanterns and Zippo lighters. They could see enough to get around. But it was a drag and so unfriendly. Now that people were always out late dancing, it got real oppressive coming home under that infinitely dark sky.
Rams thought about it every day for years and finally he had an idea. He got his clubs and several buckets of balls and went to the top of the highest pyramid. Once there he started hitting the balls as hard and far as he possibly could. 500 meters, 550, 563, he kept hitting them farther and farther. After three weeks he was approaching 600 meters. But ten weeks after that he wasn't hitting them any farther. He was up against it. Somehow he had to take his game to the next level.
Then he had a jolt of jivometric genius. He got King Toot's latest jam, Tight Like This, popped it in the Grand High Imperial Boom Box, and once more mounted the Big One. He teed up a Simulacrum II, took out his beloved No. 3 Jivometric Umoja Slammer and turned on the box. Slowly he started moving to the music, harmonizing his movement, summoning the Inner Spirit, the Ka force, easing into a righteous groove. As the music came up on Toot's first solo chorus, Rams laid his eye out there on the ball, went into a backswing and, as Toot hit his first note, Rams connected with the ball and knocked it a full kilometer. Solid.
Of course, since he started so high in the air, he had an advantage over contemporary golfers. Yet, a kilometer on the fly is pretty impressive anytime anywhere anyhow. The man was cooking! Within two hours he was up to ten kilometers. Breakthrough!
The next day he decided live music would be even more effective. He brought King Toot and the cats with him and they laid down some serious riffs. They started with a hot version of Struttin' with Some Barbecue and Rams swung into some serious slamming. By the end of the day he had knocked one all the way to the headwaters of the Nile. It flew so fast you could see a heat trail shimmering in the air. About ten minutes later they heard it land, tchhcck! in a bird's nest. Over the next few weeks that nest floated north into the Mediterranean and became the island of Crete. The day after that King Toot's Gully Low Blues inspired Rams to loft five into North America where their impact craters became the Great Lakes.
On the next day Golfotep achieved orbit for the first time. Toot rounded third base heading into the final chorus of Cornet Chop Suey and Whrzhaap! "To the moon Alice! To the moon!" There it was, for the first time, the moon. One groovin' swing by a man, one giant step for mankind. "Yo! Toot my man, how's 'bout a few hits of Muggles?" "You got it Rams." Thuuunnk! with the No. 3 Slammer and Mars bestrode the heavens. A couple of choruses into Hotter Than That and Wuzzschkk! Venus was up there making bed-time eyes at the world. Then Mercury, Saturn and its moons, Jupiter and its moons, Neptune, Uranus, Pluto, and another handful of moons scattered here and about. Of course, those aren't the real names. The real names have been lost, erased from history by Nineteenth Century European Running Dog Jackal Pig Fascist Racially-Deluded Honkey Imperialist White-Face Round-Eyed Devils.
That night there was light in the sky for the first time. The cool cats and jazz babies were delirious with joy. They danced and sang and balled the jack till the cows came marching Johnny home on the range where the buffalo roam from sea to shining amber waves of amen brothers and sisters praise the lord shalom-a-rama dama ding dong daddy from Dumas gonna do muh stuff with YOU baby! The day after that Ramses hit a zillion more into the heavens and created the asteroids. The next day he hit a gazillion more and there were all the stars and the so-called Milky Way — alas more white-washing.
A little smooth sippin'
Gets the honey drippin'
A little sweet talkin'
Gets the hips rockin'
A little righteous jammin'
Gets the backswing slammin'
That's how black folks invented golf and brought light to the night.
Na mo shiranu,
[Among the grasses,
An unknown flower]
And that's the truth, Ruth.
Monday, February 03, 2014
Thoreau’s Body of Knowledge
Walking is a foundational practice, amounting in natural history to a methodology. Charles Darwin, in his Journal and Remarks 1832–1836, more commonly known as The Voyage of the Beagle (1839), used the verb “walk”, or variants thereof, almost twice as frequently as the verb “sail” (walk: 94; sail: 50). Darwin’s was more a journey on foot than a voyage by ocean. In fact, “walking” is more prevalent in Darwin’s Voyage than it is in Walden, written by Thoreau, that most legendary of walkers. Thoreau, however, has more to say about walking qua walking than Darwin. In his essay Walking (1862) Thoreau proclaimed: “I cannot preserve my health and spirits, unless I spend four hours a day at least — and it is commonly more than that — sauntering through the woods and over the hills and fields, absolutely free from all worldly engagements.”
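The walk-versus-sail tally is easy enough to check for yourself. The snippet below is a rough sketch of one way to do it; it assumes you have a plain-text copy of The Voyage of the Beagle (for instance from Project Gutenberg) saved locally under the placeholder name voyage_of_the_beagle.txt, and the exact counts you get will depend on the edition and on how crudely you match variants (the pattern below will also catch noun uses of “sails”, for example).

```python
import re
from collections import Counter

# Count "walk" and "sail" variants in a local plain-text copy of the book.
# The filename is a placeholder; counts will vary with edition and matching rules.
with open("voyage_of_the_beagle.txt", encoding="utf-8") as f:
    text = f.read().lower()

patterns = {
    "walk": r"\bwalk(?:s|ed|ing)?\b",
    "sail": r"\bsail(?:s|ed|ing)?\b",
}

counts = Counter({verb: len(re.findall(pat, text)) for verb, pat in patterns.items()})
print(counts)   # e.g. Counter({'walk': ..., 'sail': ...})
```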
Thoreau’s walking is not, of course, mere exercise, nor is the essay Walking an instructional treatise, though it does tell us something of the where (“the West”) and the how (“...shake off the village...”) of walking. The chief value of walking is that it carries the walker “to as strange a country as [he] ever expected to see.” Walking surprises us! Though half our walking time is taken up with the return to “the old hearth-side from which we set out”, nonetheless the true spirit of walking consists of “the spirit of undying adventure”, from which we might never return.
For all of his talk of permanent leave-taking, there is, Thoreau claimed, a “harmony discoverable between the capabilities of the landscape and a circle of ten miles radius, or the limits of an afternoon walk, and the threescore years and ten of human life.” Thus there exists for Thoreau a non-trivial relationship between walking, our personal finitude, and finding our place in this world.
Thoreau makes the connection between walking and epistemology more transparent in his discussion of what he called “beautiful knowledge”, a type of knowledge “useful in a higher sense”. A scholar, Thoreau ruminated, can toil for a lifetime accumulating “Useful Knowledge” like a cow in a barn that has fed on hay all year round and is never let out to green pastures. Such a scholar suffers from “ignorance [of] our negative knowledge.” She knows what she knows, and yet “[w]hat is most of our boasted so-called knowledge but a conceit that we know something, which robs us of the advantage of our actual ignorance.” Of such ignorance Thoreau claimed that a “man’s ignorance sometimes is not only useful but beautiful — while his knowledge, so called, is oftentimes worse than useless, besides being ugly.” For Thoreau, whose desire “to bathe my head in atmospheres unknown to my feet is perennial and constant”, what is wanted is not “Knowledge” but “Sympathy with Intelligence.”
The idea that walking (sauntering, as Thoreau dubs it) leads to Sympathy with Intelligence, a cryptic phrase to be sure, but one by which he means “a novel and grand surprise in a sudden revelation of the insufficiency of all that we called Knowledge before…”, is a radical claim. Thoreau's grand surprise brings to mind the “Augenblick”, the “glance of the eye”, that William McNeill notes as important in Heidegger’s reading of Aristotle (The Glance of the Eye: Heidegger, Aristotle and the Ends of Theory, SUNY Press, 1999). The significance for Heidegger, and perhaps for Thoreau, lies in the distinction between theoretical knowledge and the moment of ecstatic experience that is foundational for ethical knowledge. In Thoreau’s discussion of “beautiful knowledge” the environmental philosopher Max Oelschlaeger hears a prefiguring of the phenomenological methods of Edmund Husserl. In The Idea of Wilderness: From Prehistory to the Age of Ecology (Yale University Press, 1993), Oelschlaeger wrote: “And with brilliant insight Walking proposes what is in effect a bracketing of both scientific and philosophical method — an epoche as relentless, if not as incisive, as that of twentieth century phenomenology” (p. 166).
None of this is to suggest that Thoreau eschewed traditional scientific knowledge or theorizing. He made meticulous observations, measured obsessively, and enunciated generalities. For instance, observations on squirrels caching pignuts [hickory] led him to the conclusion that “[t]his is the way, then, that forests are planted” (Journal, 24 Sept 1857). True, he seldom used the term “theory”, and when he did, he kept it close to concrete, and often surprising, facts. Thoreau is, for instance, credited with coining the term “succession” to describe the somewhat predictable changes that occur in vegetation over time, a notion that provided a conceptual framework for much of 20th-century ecology. His theory of succession emerged from these observations on squirrels and from other meticulously detailed observations reported in his journals.
Thus, though I am not arguing that Thoreau was skeptical of scientific knowledge in general, he was nonetheless skeptical of facts, and of the pose of objectivity, for their own sakes. The naturalist must above all be an attentive and attuned observer. In his journal on May 6, 1854 he wrote: “There is no such thing as objective observation. Your observation, to be interesting, i.e. to be significant, must be subjective.” He goes on to contend: “The man of most science is the man most alive, whose life is the greatest event.” And later in that same entry he wrote: “I cannot help suspecting that the life of these learned professors has been almost as inhuman and wooden as a rain-gauge or self-registering magnetic machine. They communicate no fact which rises to the temperature of blood heat.” He concluded that day’s entry as follows: “Dandelions, perhaps the first, yesterday…. I am surprised that the sight of it did not affect me more, but I look at it as unmoved as if but a day had elapsed since I saw it in the fall.” The mood of the scientist, or the poet, or the philosopher is all. In turn, the choice of material for reflection affects the perceiver.
Thoreau is the philosopher of attentiveness and experience rather than of systematic and theoretical reflection. The list of philosophers to whom he has not been compared appears short indeed, though there has been reluctance to welcome Thoreau into the philosophical fold. After all, it was Thoreau who quipped: “There are nowadays professors of philosophy, but not philosophers” (Walden, Chapter 1). Perhaps most interestingly, Stanley Cavell has argued in The Senses of Walden (University of Chicago Press, 1992) that Thoreau completed Kant’s critical project: Walden, in effect, provides a transcendental deduction for the concepts of the thing-in-itself and for determination, something Kant ought, so to speak, to have done.
Thoreau, Darwin, von Humboldt, Muir and all the other great naturalist-walkers ambled off, for the most part, to wilder regions. Thoreau, who to some extent remains most closely associated with a town — being almost synonymous with Concord — has the most scornful things to say about urban life. “Hope and the future for me,” he wrote in Walking, “are not in lawns and cultivated fields, not in towns and cities, but in the impervious and quaking swamps.” In contemplating the prospect of walking on the asphalted pavements of English towns he wrote: “I should die from mere nervousness at the thought of such confinement. I should hesitate before I were born, if those terms could be made known to me beforehand.” When contemplating the direction of his walks he emphatically walks in the direction of the forest where there are “no towns nor cities in it of enough consequence to disturb me.” (Walking).
Not all contemplatives have avoided the city, of course. Philosophy, especially since the time of the Greeks, is arguably a product of urban life. This is a central claim of Jean-Pierre Vernant, the French historian and anthropologist. In The Origins of Greek Thought (1962; trans. Cornell University Press, 1984) he contended that “The advent of the polis constitutes a decisive event in the history of Greek thought.” Elsewhere Vernant is even more explicit: “The advent of the polis, the birth of philosophy — the two sequences are so closely linked that the origin of rational thought must be seen as bound up with the social and mental structures peculiar to the Greek city” (p. 130). So if we are to argue that Thoreau is a philosopher of sorts — which sort being, of course, undecided — his thought too would be a product of the very polis that he disdains.
Be that as it may, there is nonetheless a general recognition of an affiliation between the city and philosophy. Philosophical walking in the city is a familiar gesture from the time of the Greeks. The Phaedrus, for example, is exceptional among the Socratic dialogues precisely in not being conducted within the city walls. In that dialogue Socrates expressed appreciation of nature. As Phaedrus and Socrates settled under a plane-tree, Socrates cooed: “How delightful is the breeze: so very sweet; and there is a sound in the air shrill and summerlike which makes answer to the chorus of the cicadae. But the greatest charm of all is the grass, like a pillow gently sloping to the head.” Yet Phaedrus retorted: “Socrates, when you are in the country, as you say, you really are like some stranger who is led about by a guide… I rather think that you never venture even outside the gates.”
The tradition of peripatetic philosophers continued through Aristotle’s peregrinations about the Lyceum to such latter-day walkers as Walter Benjamin, whose Arcades Project (Belknap Press, 2002), written mainly in the 1930s, popularized the figure of the flâneur, the literary urban stroller. In this tradition also is the work of Michel de Certeau. Like Benjamin’s Arcades Project, de Certeau’s important essay “Walking in the City”, in the volume The Practice of Everyday Life (University of California Press, 1984), has important things to say to environmentalists, though neither is itself an explicitly environmental work.
So we find ourselves in a situation where philosophy, a product of the polis, gave birth to environmental thought, which concerned itself almost exclusively with the world outside the city gates. At the birth of contemporary environmental thought, the threshold between philosophy and science was thin — which is why both ecologists and philosophers, when they are in the mood to do so, claim Thoreau as one of their own. Now, however, that the city has, almost for the first time, become a subject of enquiry for scientific ecology, the work seems to proceed without a filial relationship to philosophy. What should the foundations of an urban environmental philosophy look like?
In the few paragraphs I have left, I want to bolster the claim that urban ecology is without a firm philosophical underpinning, and suggest how something like a Thoreauvian spirit, applied in a metropolitan direction, could be helpful.
Urban ecology is now a systematic sub-discipline within ecology, perhaps the newest. That which is “ontically nearest and familiar”, to borrow from Heidegger, is ecologically the farthest — it is as if ecologists had been tripping over the cities in which they lived for almost a century without noticing that they were there! An emerging critical distinction in urban ecology is between the “ecology of the city” and the “ecology in the city.” The distinction is regarded as a significant conceptual leap forward. It places uni-disciplinary, small-scale ecological studies on one side, and multidisciplinary, multiscalar studies, especially those that examine the human and non-human aspects of nature simultaneously, on the other.
For instance, a study of the physical environment, the soil, or the biota of a city or a neighborhood would be considered “ecology in the city.” These studies can be aggregated to allow generalities to emerge. Cities tend, for instance, to have their own distinctive climates. Rain falls more frequently in cities than in their hinterlands. Animal behavior in urban settings is idiosyncratic. City temperatures tend to increase as the human population grows, up to a certain limit at least. These climatic differences have, in turn, implications for vegetation growing in the city. All of the above representative insights emerge from within the “ecology in the city” paradigm.
“Ecology of the city” takes an explicitly systems view of things. By system here is meant a set of entities that interact to make a connected whole. In what manner do the elements of the city, the human and non-human aspects of nature, interact to contribute to an emergent whole city? A study of this form might ask how much carbon or how many pollutants are taken up (“sequestered” being the $100 term preferred by ecologists) by all the trees in Chicago (that total is considerably larger than $100, to be sure!). The resource-accounting tool of “ecological footprinting”, developed by William Rees and Mathis Wackernagel at the University of British Columbia in Vancouver, Canada, provides another example.
Another feature of this approach is that it calls for the integration of social-science methods with more traditional approaches to ecology, and its proponents illustrate what this looks like with a series of increasingly sophisticated conceptual models showing the interaction of physical, ecological, and social variables. Without insight into the integration of the human and the ecological perspectives at local and global scales, urban ecology will be less effective in guiding public policy and management.
Although neither approach, I think, should be preferred over the other, disciplinary leaders nonetheless seem to favor “ecology of the city” studies, preferring the more abstract forms of systems study. Much effort is expended in trying to integrate the social and natural sciences in attempts to answer questions about so-called social-ecological systems — our cities, our agroecosystems and so on.
For the most part I agree with this emerging disciplinary consensus. We need those multidisciplinary, multiscalar, aggregative, holistic, inter-urban studies to really understand the sorts of ambiguous, hybrid, cyborgian affairs that we have relatively recently created and in which we now dwell. We have, after all, become an urban species, and wanting to know the ecological patterns and processes associated with these novel entities is understandable.
All of this being said, I have a suspicion that a full-throated commitment to the discipline's (marginally) favored approach — ecology of the city — will result in the loss of certain types of knowledge: the knowledge that manifests when a human body meanders through an ecosystem that it is enraptured by. An old-fashioned encounter with beings, coming in touch with what we might call brute reality, must count for something. This, of course, is the very essence of a Thoreauvian approach to ecology.
Let me conclude with a short anecdote. One of my first jobs as a young zoologist was to catalog the reprint collection of the Irish dipterist Dr Declan Murray. In the collection was a paper that reported a rather unusual incident. A Finnish entomologist was in the field, north of the Arctic Circle, collecting chironomid midges. In the subzero temperatures the flies were inactive, and the biologist was in danger of hypothermia. He took out his hip flask and had a nip of a fortifying drink. He began, he reported, to sing an old Finnish folk tune. As he did so, he noticed that the flies began to swarm. When he stopped humming, the flies went to ground. Again he sang, and again the flies arose in response. What he had stumbled upon in those frozen conditions, by virtue of his hypothermia-avoidance technique, was that, to conserve energy, the male flies swarm only when a female fly is nearby. His humming hit notes that mimicked the wing-beat of the female fly. The dipterist hummed and the world hummed back. There are some forms of ecology that are learned only with our bodies, whether it be our bodies traversing New England forests, or clambering up Douglas Spruces in the Sierras, or humming to flies in the high Arctic, or walking the pavement of cold Midwestern cities in search of the confluence of waterways.
I read an earlier version of this essay at the Philosophy of the City conference at Brooklyn College in December 2013. Thanks to Shane Epting and Michael Menser for organizing this wonderful event.
My So-Called Life On Walden Pond
"What would become of us, if we walked only in a garden or a mall?"
~ Thoreau, Walking
It is true that Thoreau had great misgivings about the railroad coming to Concord, and he correctly surmised that the train would make his beloved town a suburb of Boston. Somewhat inevitably, this has led to the following sketches for a series, most likely to be submitted to the History Channel for immediate development into that esteemed channel's next surefire hit. (Note to my agent: While some of these may not seem funny, I can assure you that they are. Humor in the nineteenth century was just a bit different from ours, is all.)
Henry David Thoreau, philosopher, naturalist and iconoclast, is bored and restless. He starts farming beans in his front yard but is soon issued a citation by the homeowners' association. At the next association meeting, with his case on the agenda, he stands up and, in his defense, gives a rousing speech about self-reliance. This is not especially well received. Thinking they can salvage the situation, Thoreau's children persuade their science teacher to make the bean plot their submission to the science fair. However, in order for it to be a legitimate science experiment, the teacher insists that half the plot be planted with GMO beans.
Thoreau goes for a walk in the woods and gets lost. He is found and saved by a troop of Boy Scouts. In gratitude, he teaches them to forage for food. However, one of the scouts has a nut allergy. After a lengthy and anxious detour at a hospital, Thoreau returns home with a lawsuit on his hands. (Production note: Scoutmaster to be played by William H. Macy).
Having refused to pay taxes for some years, Thoreau eventually gets audited by the IRS. When the auditor arrives to review his paperwork, Thoreau accuses him of leading a life of quiet desperation. After that, he's really in trouble. Fortunately, his back taxes are paid the next day by his wealthier aunt, and Thoreau, like the Cincinnatus of civil disobedience that he is, returns to his bean field, once again a free man.
Thoreau takes up surveying as a hobby. Eventually, his volunteer work – and his incessant complaining about not being paid – leads the town council to hire him as a surveyor. He is thrilled with his new job, not least because it allows him to trespass all over his neighbors' lands. He's really beginning to feel that he has reconciled his place in the community with his own perception of how a man should live. Nevertheless, one day he overhears people in the office discussing the fact that his maps are to be used by Toll Brothers for planning a new development. Thoreau resigns in principled disgust, and ponders his revenge.
Thoreau gets involved in the protests against fracking, an obvious threat to both the town and the countryside. But Mrs. Thoreau reminds him that "a good chunk" of the children's college fund is being generated from dividends yielded by energy-based master limited partnerships, and if the kids are going to go to a good school they had better be able to afford it, since Harvard certainly isn't getting any cheaper, not that he would know, as it's been how long since he last visited his alma mater, and why is he no longer so close with Ralph Waldo Emerson anyway, the two of you really hit it off when Emerson gave that speech at Harvard that Thoreau liked so much, and he (Emerson, that is) is such a well-respected person with connections and character and maybe there's an opportunity for a nice little earner with someone in his network, you never know and you'll certainly not meet anyone sitting around in the woods all day, will you, now?
Thoreau's philosophy and activism draw the attention of the Earth Liberation Front, several of whose members move into the Thoreau household. Mrs. Thoreau is none too pleased when Thoreau participates in the liberation of a foie gras farm that goes hilariously wrong (the ducks and geese are too full to flee through the hole blown in the fence by the ELF). As a result, the FBI begins building a file on Thoreau. His fame steadily spreading, Thoreau also begins receiving correspondence from an incarcerated Ted Kaczynski. Awkward!
Thoreau's friend and mentor Ralph Waldo Emerson attempts to introduce Thoreau to a broader circle of authors and critics. This eventually becomes the New England Transcendentalist movement. The Dial is its flagship magazine, where Thoreau argues along with everyone else what exactly Transcendentalism is. Thoreau is jealous of Nate Hawthorne, who recently published "The Scarlet Letter," a novel about social media ostracism involving a scarlet "@" sign, or some nonsense like that, and repeatedly tries to get Edgar Allan Poe to write scathing reviews of Hawthorne's work. (Production note: have Poe meet Thoreau at a Concord tavern, where Poe proceeds to drink the entire town under the table. Afterwards, for closure, he and Thoreau burn down the Toll Brothers offices).
Money is tight. But some hipsters are opening Walden's first fair trade espresso bar, and they have been both admiring and envious of Thoreau's beard for quite some time. After jimmying him into skinny jeans and training him to pull shots, Thoreau becomes a renowned barista. However, after several weeks he develops a repetitive stress injury in his wrists. Because he is on a 1099, he cannot claim unemployment or disability. Since his injury precludes hoeing as well as shot-pulling, he now also has to hire Mexican migrant workers to tend his bean field for him; the episode ends on a heart-warming note with the Mexicans sharing all kinds of new and delicious bean recipes with Thoreau and his family.
Thoreau opens a small business to lead walking tours through the Concord woods. The venture is unsuccessful, as Thoreau prefers to walk out to a spot and sit there for a long time. This is the only way in which he can observe birds and other wildlife, as well as the changing of the seasons. (Production note: based on his recent work in Her, ask Joaquin Phoenix if he would like to play the part of Thoreau for this episode).
Thoreau goes to the Walden Whole Foods, where he sees the beans he grew displayed as "Local Produce," but he cannot afford to buy them himself. Also, he cannot seem to convince the manager that he is, in fact, the farmer who produced the beans in the first place. Whether this was to negotiate a discount on the purchase or for some other reason is unclear, as Thoreau is soon escorted from the premises by security.
Money is tight. But in the course of tilling his bean field and walking around the woods, Thoreau has amassed a formidable collection of Indian arrowheads. He sets up a shop on eBay to sell them. After an initial commercial success, eBay receives a cease-and-desist letter from lawyers representing the Indian nation whose patrimony is allegedly on the auction block. Thoreau's defense, that "it appeared by the arrowheads which I turned up in hoeing, that an extinct nation had anciently dwelt here and planted corn and beans ere white men came to clear the land, and so, to some extent, had exhausted the soil for this very crop," is considered inadequate if not irrelevant, and eBay shuts down his shop. Thoreau receives angry letters from customers whose orders go unfulfilled.
To help him in his quest to simplify his life, his wife buys him a subscription to Real Simple magazine. But her credit card is hacked during the online purchase, and Mrs. Thoreau finds her identity stolen by rugged, independence-minded anarcho-libertarian hackers of indeterminate nationality. To help defray the costs of the charges that the credit card company refuses to cover, Thoreau takes a job as an adjunct professor of English at the local community college. Since he never claimed his master's at Harvard, he is told that, regretfully, he cannot be considered for a full-time position.
Despite his successes in improving the quality of both product and process in his father's pencil-making factory, Thoreau goes to work one day only to find that all manufacturing has been offshored to Shenzhen. However, as salary costs continue their inexorable rise there, rumors abound that the pencil factory will soon be "re-shored."
Thoreau's manuscript "Walden" is rejected by all publishers. After deciding to self-publish and spending much of his family's remaining savings on this enterprise, he holds a reading and book signing at the local Barnes & Noble. No one shows up. Dejected, Thoreau calls up Emerson, asking if he would connect him with Emerson's agent. Emerson says he will get back to him, but suggests in the meantime that he take up blogging instead.