Monday, March 23, 2015
Fatwas and fundamental truths
by Mandy de Waal
A South African literary event called 'The Time of the Writer' was to have been a moment of celebration for local writer Zainub Priya Dala. The author's debut novel, What About Meera, was due to be launched at the Durban festival.
Instead, Dala was nursing injuries after being held at knifepoint, struck with a brick and called [Salman] "Rushdie's Bitch!" The attack – which shocked and outraged SA's literary community – happened one day after Dala had expressed an appreciation of Rushdie's work.
"Dala was followed from the festival hotel and was harassed by three men in a vehicle who pushed her car off the road," a statement by Dala's publishers read. "When she stopped, two of the men advanced to her car, one holding a knife to her throat and the other hitting her in the face with a brick while calling her ‘Rushdie's bitch'. She has been treated by her doctor for soft-tissue trauma, and has reported the incident to the police."
The author – a therapist who counsels autistic children – said through her publishers that she believed the attack stemmed from her voicing support for Rushdie's writing style. Dala was at a school's writing forum and was asked which writers she admired. She offered a list of writers including Arundhati Roy, and said that she "liked Salman Rushdie's literary style." After she expressed this appreciation of Rushdie, a number of teachers and students stood up and walked out in protest. The next day Dala was attacked.
After discovering what had happened to Dala, Rushdie tweeted: "I'm so sorry to hear this. I hope you're recovering well. All good wishes." Dala's response? "Thank you. I have my family and children around me and am recovering."
The SA literary site www.bookslive.co.za stated that "the assault counts as an extension of Rushdie's complicated history with South Africa." BooksLive explained that Rushdie "was famously ‘disinvited' from a literary festival in 1988, after the Ayatollah Khomeini's fatwa was issued against him and his novel, The Satanic Verses."
Rushdie had been invited to South Africa 27 years earlier by a leading investigative newspaper to give a public lecture on censorship. He was due to share a platform with Booker prize winners Nadine Gordimer and JM Coetzee.
As news of the invitation spread, the paper received threats of violence. The Africa Muslim Agency demanded that the invitation be withdrawn, and The Islamic Missionary Society stated that "there was every likelihood that [Rushdie] would be assaulted." The Islamic society warned that blood would flow. "There are secret Muslim hit squads who have vowed to avenge the honour of the Holy Prophet Muhammed," it stated.
After long, careful and painful negotiation by multiple parties involved in the event, the invitation was withdrawn, an outcome that JM Coetzee condemned. "Islamic fundamentalism in its activist manifestation is bad news. Religious fundamentalism in general is bad news. We know about religious fundamentalism in South Africa. Calvinist fundamentalism has been an unmitigated force of benightedness in our history," Coetzee told a meeting in Cape Town.
"Wherever there is a bleeding sore on the body of the world, the same hard-eyed narrow-minded fanatics are busy, indifferent to life, in love with death. Behind them always come the mullahs, the rabbis, the predikante [ministers], giving their blessings," Coetzee added.
"There is nothing more inimical to writing than the spirit of fundamentalism. Fundamentalism abhors the play of signs, the endlessness of writing. Fundamentalism means nothing more or less than going back to an origin and staying there. It stands for one founding book and thereafter no more books," he said.
"As the various books of the various fundamentalisms, each claiming to be the one true book, fantasise themselves to be signed in fire or engraved in stone, so they aspire to strike dead every rival book, petrifying the sinuous, protean, forward-gliding life of the letters on their pages, turning them into physical objects to be anathematised, things of horror not to be touched, not to be looked upon," said Coetzee.
In the wake of this awful attack on freedom of speech and on a promising young writer, how does one show support for Dala? As the anchor, author and journalist Imran Garda eloquently tweeted, we support Dala by buying her book. By championing the "endlessness of writing" – her writing – we add to the roar of writers around the world who condemn this heinous act.
Mandy de Waal is a writer and journalist based in Cape Town, South Africa. Follow her on Twitter: @mandyLdewaal
Fireflies and Fiery Fatherly Love: An Excerpt from What About Meera by ZP Dala
South Africa: Clash of the Booker titans in The Guardian.
All The Wrong Places
by Lisa Lieberman
Hollywood, California, Summer 1941
I believe that the person you are when you're eight years old is the person you really are.
I was creeping up on Geoffrey as he sat meditating on the lawn—not that I could be invisible, my girl's body draped in my mother's mink coat—but Geoffrey was in one of his trances. I could have danced naked in front of him and he'd have continued to stare into the void.
Sometimes I did go naked; lots of people did at Walden Lodge in those days. My father was known as a bohemian and bathing suits were optional around the pool, although you had to dress for dinner in the lodge. Winters could be chilly even in Southern California, but there were always a few diehards who went skinny dipping regardless of the weather. Starlets who'd do anything to get a part in one of Father's pictures. Englishmen, like Geoffrey, who'd gone to boarding schools where they made you bathe in cold water, year-round. He got used to it, found it invigorating. "Manly," as my brother Gray put it, the arch tone in his voice laced with affection.
"Gray, darling. How would you know?" said Vivien, my mother, in the same tone, minus the affection.
I paused to kick off Vivien's high heels, which kept sinking into the earth. Barefoot, I moved stealthily over the silky grass, stalking my prey. The air smelled of citrus, the overripe sweetness of oranges that had fallen on the ground and were beginning to rot in the sun. We picked as many as we could, but there were always fruits we couldn't reach.
Years later, when I was in Sicily filming a B-movie with Adrian, beautiful, wounding Adrian, we stayed in a pensione in Taormina. Three months with my love in Italia! The movie was forgettable but I finagled a print from the director, mostly because of my scenes with Adrian. The Italian actress they got to dub my dialogue had this wonderful, husky voice. It's a treat watching us in Italian, where you don't have to pretend to follow the plot.
The pensione had a swimming pool set in a terraced garden that reminded me of Father's, complete with lemon trees. For breakfast, they served us juice made from blood oranges. I couldn't get over the ruby red pulp. That was Sicily, always surprising you with its vibrancy. Of course, I was passionately in love at the time and everything seemed bright and intense—especially in contrast to England, where Gray and I had been living for several years by that time on account of the blacklist. I swear it had rained every single day we'd been in London. I'd grown accustomed to the dreariness, everything subdued, even the kitten I found near our flat in Soho, a pitiful blue Persian with copper eyes.
"Her name is Fog," I informed my brother, "and we're keeping her." Not that he would have denied me anything at that point in our bleak exile. I was seventeen when we arrived and had just given up my newborn son for adoption. I was desperate for something to love. As was he, poor Gray, although being seventeen, I thought only about my own sorrows.
Geoffrey was wearing a khaki jacket over baggy shorts, one of those belted safari outfits with multiple pockets. He looked like an insect, a grasshopper, maybe, his spindly legs folded awkwardly beneath him, Indian fashion. That's what they called it then, Indian fashion, and I imagined him as an American Indian, sitting cross-legged on the ground. But Geoffrey was being the other kind of Indian, the Hindu kind. Every morning he did an hour of yoga, followed by a dip in the pool, au naturel. He was before his time, a visionary. I'll give him that. Walden Lodge is now a fashionable spa where celebrities go to lose weight and detox. Clothing is optional, I've heard, and yoga is all the rage.
I drew the mink coat over my head like a hood and tied the sleeves around my neck, to free up my paws for pouncing. With a snarl, I launched myself at Geoffrey, catching him squarely in the middle of his chest and knocking him backward onto the ground.
"Ouf!" he gasped. "I've just been attacked by a . . . what kind of creature are you, Cara child?"
"I'm a cheetah. I'm extremely fast. You didn't have a chance," I consoled him as I brushed him off and helped him resume his yogi pose.
"A cheetah?" He still sounded a bit winded. "Are cheetahs native to this region? If so, it's the first I've heard of it." Geoffrey once told me that he'd been picked on at school for being a know-it-all.
"Very well. I'm a puma, then. I'm still pretty fast and I've been known to eat humans. In one sitting."
He extracted his monogrammed cigarette case from a pocket. "Do you mind? I always smoke at times like this," he said. "Calms the nerves."
Strange, now that I think of it. Those were his exact words when I found him standing over Vivien's body.
* * * * *
Lisa Lieberman's debut historical noir has just been published in hardcover by Five Star. Fans of Lisa's film reviews will get a kick from All The Wrong Places, a mystery set in exotic European locales which pays tribute to the films of the forties and fifties, capped off with a thrilling finale straight out of Hitchcock. Order it from your favorite independent bookseller or buy it online from Amazon.
Monday, February 02, 2015
Literature and Philosophy in the Laboratory Meeting
by Jalees Rehman
Research institutions in the life sciences engage in two types of regular scientific meet-ups: scientific seminars and lab meetings. The structure of scientific seminars is fairly standard. Speakers give PowerPoint presentations (typically 45 to 55 minutes long) which provide the necessary scientific background, summarize their group's recent published scientific work and then (hopefully) present newer, unpublished data. Lab meetings are a rather different affair. The purpose of a lab meeting is to share scientific work-in-progress with one's peers within a research group and to update the laboratory heads. Lab meetings are usually less formal than seminars, and all members of a research group are encouraged to critique the presented data and work-in-progress. There is little need to provide background information because the audience of peers is already well-acquainted with the subject, and it is not uncommon to show raw, unprocessed data and images in order to solicit constructive criticism and guidance from lab members and mentors on how to interpret the data. This enables peer review in real time, so that, hopefully, major errors and flaws can be averted and newer ideas incorporated into the ongoing experiments.
During the past two decades that I have actively participated in biological, psychological and medical research, I have observed very different styles of lab meetings. Some involve brief 5-10 minute updates from each group member; others use a rotation system in which one lab member presents the progress of their ongoing work in a seminar-like, polished format with publication-quality images. Some labs have two-hour meetings twice a week; other labs meet only every two weeks for an hour. Some groups bring snacks or coffee to lab meetings; others spend a lot of time discussing logistics such as obtaining and sharing biological reagents or establishing timelines for submitting manuscripts and grants. During the first decade of my work as a researcher, I was a trainee and followed the format of whatever group I belonged to. During the past decade, I have been heading my own research group and it has become my responsibility to structure our lab meetings. I do not know which format works best, so I approach lab meetings like our experiments: developing a good lab meeting structure is a work-in-progress which requires continuous exploration and testing of new approaches. During the current academic year, I decided to try out a new twist: incorporating literature and philosophy into the weekly lab meetings.
My research group studies stem cells and tissue engineering, cellular metabolism in cancer cells and stem cells, and the inflammation of blood vessels. Most of our work focuses on identifying molecular and cellular pathways in cells, and we then test our findings in animal models. Over the years, I have noticed that the increasing complexity of the molecular and cellular signaling pathways and of the technologies we employ makes it easy to forget the "big picture" of why we are even conducting the experiments. Determining whether protein A is required for phenomenon X, and whether protein B is a necessary co-activator which acts in concert with protein A, becomes such a central focus of our work that we may not always remember what compels us to study phenomenon X in the first place. Some of our research has direct medical relevance, but at other times we primarily want to unravel the awe-inspiring complexity of cellular processes. But the question of whether our work is establishing a definitive cause-effect relationship or merely uncovering yet another mechanism within an intricate web of causes and effects sometimes falls by the wayside. When asked to explain the purpose or goals of our research, we have become so used to directing a laser pointer at a slide of a cellular model that it becomes challenging to explain the nature of our work without visual aids.
This fall, I introduced a new component into our weekly lab meetings. After our usual round-up of new experimental data and progress, I suggested that each week one lab member give a brief 15-minute overview of a book they had recently finished or were still reading. The overview was meant to be a "teaser" without spoilers, explaining why they had started reading the book, what they liked about it, and whether they would recommend it to others. One major condition was to speak about the book without any PowerPoint slides! But there weren't any major restrictions when it came to the book itself: it could be fiction or non-fiction, published in any language of the world (though ideally also available in an English translation). If lab members were interested and wanted to talk more about the book, we would continue to discuss it; otherwise we would disband and return to our usual work. If nobody in my lab wanted to talk about a book, then I would give an impromptu mini-talk (without PowerPoint) about a topic relating to the philosophy or culture of science. I use the term "culture of science" broadly to encompass topics such as the peer review process and post-publication peer review, the reproducibility of scientific findings, retractions of scientific papers, science communication and science policy – topics which have not traditionally been considered philosophy of science issues but still relate to the process of scientific discovery and the dissemination of scientific findings.
One member of our group introduced us to "For Whom the Bell Tolls" by Ernest Hemingway. He had recently lived in Spain as a postdoctoral research fellow and shared some of his own experiences of how his Spanish friends and colleagues talked about the Spanish Civil War. At another lab meeting, we heard about "Sycamore Row" by John Grisham, and the ensuing discussion revolved around race relations in Mississippi. I spoke about "A Tale for the Time Being" by Ruth Ozeki and the difficulties that the book's protagonist faced as an outsider when her family returned to Japan after living in Silicon Valley. I think the book which got nearly everyone in the group talking was "Far From the Tree: Parents, Children and the Search for Identity" by Andrew Solomon. The book describes how families grapple with profound physical or cognitive differences between parents and children. The PhD student who discussed the book focused on the "Deafness" chapter of this nearly 1,000-page tome, but she also placed it in the broader context of parenting, love and the stigma of disability. We stayed in the conference room long after the planned 15 minutes, talking about being "disabled" or being "differently abled" and the challenges that parents and children face.
In the weeks when nobody had a book they wanted to present, we used the time to touch on cultural and philosophical aspects of science, such as Thomas Kuhn's concept of paradigm shifts in "The Structure of Scientific Revolutions", Karl Popper's principle of the falsifiability of scientific statements, the challenge of reproducibility of scientific results in stem cell biology and cancer research, or the emergence of PubPeer as a post-publication peer review website. Some of the lab members had heard of Thomas Kuhn's or Karl Popper's ideas before, but by coupling them to a lab meeting, we were able to illustrate these ideas using our own work. A lot of 20th-century philosophy of science arose from ideas rooted in physics. When undergraduate or graduate students take courses on the philosophy of science, it isn't always easy for them to apply these abstract principles to their own lab work, especially if they pursue a research career in the life sciences. Thomas Kuhn saw Newtonian and Einsteinian theories as distinct paradigms, but what constitutes a paradigm shift in stem cell biology? Is the ability to generate induced pluripotent stem cells from mature adult cells a paradigm shift or "just" a technological advance?
It is difficult for me to know whether the members of my research group enjoy or benefit from these humanities blurbs at the end of our lab meetings. Perhaps they are just tolerating them as eccentricities of the management and maybe they will tire of them. I personally find these sessions valuable because I believe they help ground us in reality. They remind us that it is important to think and read outside of the box. As scientists, we all read numerous scientific articles every week just to stay up-to-date in our area(s) of expertise, but that does not exempt us from also thinking and reading about important issues facing society and the world we live in. I do not know whether discussing literature and philosophy makes us better scientists but I hope that it makes us better people.
Monday, January 19, 2015
Why don't more people kill themselves?
by Emrys Westacott
Option A: You live 34,748 days. Your final four weeks are spent in and out of hospital, alternating between discomfort and semi-consciousness, entirely dependent on family members and health care providers for assistance with every basic function.
You die in hospital or in a nursing home. The cost of home care, hospital services, and medications over this period depletes your estate by thousands of dollars.
Option B: You live 34,720 days–that is, 28 days less. The 28 days you give up are those last four weeks just described. You die at home. The money you save helps put a grandchild (or great grandchild) through college.
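A quick check of the arithmetic, using an average year of 365.25 days (the day counts are those given above; the year length is my own rounding assumption):

\[
34{,}748 \div 365.25 \approx 95.1 \ \text{years}, \qquad 34{,}748 - 34{,}720 = 28 \ \text{days} = 4 \ \text{weeks}.
\]

Both options end at age 95; Option B simply forgoes the final four weeks.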
To my mind, this is a no-brainer. Option B is clearly preferable. In both cases you live until you are 95, a good long life. Everything significant that you were able to enjoy or accomplish will have happened. All you miss out on if you choose Option B is a few days of humiliation, discomfort (occasionally rising to out-and-out pain), guilt about the burden you are imposing on others, and anxiety about how your final pitiable condition might affect the way you are remembered. I assume most people will share my view that B is the better option. So the question arises: Why do the final days of so many people resemble Option A rather than Option B?
This question was prompted by two very good bestselling books that I read during the recent holidays: Atul Gawande's Being Mortal, and Roz Chast's Can't We Talk About Something More Pleasant? Gawande, a physician, addresses an increasingly important problem. Due to the tremendous progress made in medicine over the last century, dying is often a much more complex and protracted process than it used to be. Doctors today have the know-how and the technology to keep us alive a lot longer after we are stricken with illness or old age. Unfortunately, says Gawande, doctors, other care providers, and family members often unthinkingly opt for whatever will prolong life without considering sufficiently whether what is being prolonged is really worth living from the perspective of the person who has to live it.
Our worst nursing homes are luxury hotels compared to the old workhouses and almshouses where people used to spend their final days, but they are nevertheless dreaded. Innovative assisted living arrangements make an honest attempt to eliminate some of the most objectionable aspects of nursing homes, particularly the lack of independence granted to residents. But all the same, loss of autonomy, and the blighting of even small pleasures by continual discomfort, seem to be the fate that awaits many of us if we take our time shuffling off our mortal coil.
In Can't We Talk About Something More Pleasant? Roz Chast, the well-known New Yorker cartoonist, documents in graphic form the final stages of her parents' lives. It's a grim, funny, bitter, honest, entertaining book. Well into their nineties, her parents move into a "nice and clean and sickeningly expensive" assisted living institution which her father calls a "hellhole." After her father falls and breaks his hip he is transferred to a "pretty depressing" nursing home, where he spends three weeks depressed, disoriented, developing awful bed sores, and racking up more bills. In July of his final year he says he wants to "pack it in." His bed sores are so deep that morphine is the only thing keeping him from screaming in pain. He makes regular trips to the hospital to have dead tissue removed – an agonizing procedure. Finally, in mid-October he dies.
Chast's mother, Elizabeth, becomes depressed following her husband's death. (They had been married for 69 years.) She revives a little, though not for long, when she bonds with Goodie, her round-the-clock nurse who helps her with dressing, feeding, and toilet functions. Goodie is wonderful, but the high cost of private nursing comes on top of the cost of the assisted living home, and is paid for out of her mother's savings. Gradually Elizabeth's mind starts to crumble. In April the family celebrate her ninety-seventh birthday: she wonders if she's turning 100. In the months that follow she exists in what Chast describes as a state of "suspended animation," sleeping most of the time, completely incontinent, doing nothing when awake except lying in bed – and burning through her savings. In July she starts receiving hospice care in addition to the care Goodie provides. By the end of August she hardly ever speaks or opens her eyes. She dies at the end of September.
A refreshing feature of Chast's book is that she's not afraid to voice her concerns about money–not just her worry that her parents' savings will give out, but also her self-interested awareness that the dollars spent on care will be dollars she won't be inheriting. Gawande prefers not to discuss money matters head on, which I think is a weakness of his book, although it is a failing that seems to be common among physicians. But the final pages of Being Mortal connect up with Chast's book since Gawande there describes in poignant detail the end of his own father's life. Suffering from cancer, with a tumor at the top of his spinal column, his father endures pain and the usual frustrations and humiliations that visit anyone whose body starts to fail them. Gawande's father is fortunate, though. His mind remains sharp; he stays engaged with the world; his pain is controlled with medication; and until the final few days the significant pleasures he is capable of experiencing outweigh his sufferings.
To return to my question: Why do so many people seem to end up dying just the way they say they don't want to? Why not choose an induced death in advance of those ghastly final few days or weeks, or in some cases months?
Obviously, cases differ, so there will be various factors at work depending on the circumstances. Some people have religious beliefs that forbid suicide. Usually the rationale for this taboo is that choosing when one dies means "playing God", for God alone has the right to take away life. Some have secular objections to suicide, seeing it as some sort of moral failure. Some are unable to secure the means to end their life. They may have gone past the point where they are mentally or physically capable of doing what needs to be done. If they are capable, they may know of no doctor willing to help, nor have anyone close to them whom they can ask for assistance. This is more likely to be the case where assisted suicide is illegal, although the illegality of assisted suicide is not usually an insurmountable obstacle to ending one's life.
But there remain many who have no principled objection to suicide, who have it within their power to provide themselves with the means, but who nevertheless suffer and linger through to the bitter end. Why?
If the choice between Option A and Option B presented above seems like a no-brainer that may be because it is presented in objective terms, as if we have a perspective beyond life from which we are able to survey and compare the two options. But of course that isn't the point of view we have as we approach the end of life. We don't know exactly how soon or how quickly we will lose our independence; we don't know how much pain we will suffer; perhaps most significantly, we may still entertain hope that things will improve, or at least not get any worse. Gawande's book brings this point out forcefully: when hope and evidence arm wrestle, hope usually comes out on top.
Nor should we underestimate the simple will to live. A New York Times article last year by Nina Bernstein told a horror story involving a 91-year-old man who, in spite of Herculean efforts by his daughter, spent a desperately miserable final year being shunted between hospitals and nursing homes. Yet he apparently never lost the desire to live, and just a few weeks before he died he thanked his daughter for helping to keep him alive.
Finally, there are mighty institutional forces and financial interests that push people into the sort of care that prolongs life without much concern for whether the life being sustained is worth living. Medical personnel often have an ingrained preference for whatever sustains life: even Gawande, a physician who has clearly reflected deeply on these matters, reports occasionally finding this default attitude within himself. And of course doctors, hospitals, nursing homes, and pharmaceutical companies can all make a lot of money keeping people breathing.
For all that, I still find it surprising that more people don't choose to kill themselves when they get close to the very end. I'm not talking here about avoiding entirely the inevitable decline of one's faculties as one advances into old age, which is what Ezekiel Emanuel has in mind in his provocative article "Why I Hope to Die at 75," published last year in The Atlantic. Emanuel states his case lucidly:
"living too long . . . renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic."
But I can understand why someone might not share Emanuel's attitude. At 75 life may still hold the prospect of many important pleasures: time spent with children and grandchildren and perhaps great grandchildren, witnessing their growth and accomplishments, study, travel, the arts, and the simple pleasures of friendship and domestic life.
What surprises me is that more people don't choose to avoid the final unpleasantness of pain, incapacity, and indignity. Even Emanuel says he won't induce his own death; he opposes legalizing euthanasia or physician-assisted suicide. His article simply explains why after 75 he will accept only palliative care, eschewing all curative treatments. His rejection of suicide is surprising since it seems that if he sticks to his policy there is still quite a good chance that he might live "too long" by his own lights, leaving behind just the sort of memories of him that he hopes not to bequeath.
I predict, though, that in the years to come increasing numbers of people will choose to induce death as they approach the very end. Indeed, I believe we are currently in the early stages of a sea change in attitudes, rather like the one that has seen the prejudice against homosexuality diminish to the point where gay marriage is now legal in many countries and most of the US–a reality that virtually no-one would have predicted thirty years ago.
Right now assisted suicide is legal in Switzerland, Germany, Japan, Colombia, and Albania, and in four US states: Montana, New Mexico, Oregon, and Vermont. In modernized countries the trend is clearly towards expanding this right. The philosophical arguments in favor of granting the right to an induced death are powerful, as are most arguments that rest on John Stuart Mill's harm principle – the rule that individuals should be allowed to do what they want as long as their actions don't harm others. The religious objections will weaken as generations for whom religion is less important enter old age. The traditional stigma attached to suicide will also lessen as we all become familiar with examples of people who, both for their own sake and for the sake of those they love, make the rational decision to shorten by a little a life that is clearly coming to an end.
A 2011 BBC documentary, Terry Pratchett: Choosing to Die, showed Peter Smedley, a retired hotelier, ending his life by drinking a barbiturate in the company of his wife, Pratchett, and two staff from Dignitas, the Swiss organization that helps terminally ill people to die in circumstances of their own choosing. You can see parts of it on YouTube. Needless to say, the documentary was controversial, and of the thousands of comments posted to YouTube, many are critical of both Smedley and Dignitas. Just why the spectacle of assisted suicide arouses such animus is itself an interesting question. Compared to the drawn-out process described by Chast, Peter Smedley's end appears as easy and dignified as death can be.
Suicide has often been described as cowardly or selfish. But those who choose to bring an already complete life to a dignified close so that they avoid a short period of pointless pain and humiliation, cease to be a burden on others (including the medical system), and materially benefit their loved ones, should be applauded for their courage and selflessness. They are an example to us all.
 Nina Bernstein, "Fighting to honor a father's last wish: to die at home" [http://www.nytimes.com/2014/09/26/nyregion/family-fights-health-care-system-for-simple-request-to-die-at-home.html]
Monday, October 06, 2014
by Carol A. Westbrook
I gave a signed copy of my new book about beer, "To Your Health!", to a couple of favorite bartenders and a bar owner, all of whom had been featured in a story or two in the book. A few weeks later I asked each one how he had enjoyed it. And each admitted he hadn't yet opened the book, but assured me he had put it in the bathroom. After my initial shock, I recognized that I was being paid the highest compliment. For a non-reader, the bathroom is the place of honor for reading material. A stack of books or magazines in the bathroom means, "this is valuable to me, and I am going to read it some day."
What a different world than the one in which I live! In my world, books hold a place of honor and, more importantly, books are read. I love books. When I was a kid, the Tooth Fairy left us books. My first Tooth Fairy book was "Harold and the Purple Crayon," by Crockett Johnson, which today remains my favorite children's book. I loved getting books from the Tooth Fairy, and treasured every one.
Because we were a Catholic family of four children, all of whom attended parochial school, we didn't have much money to spare, but books were always there. My father got many of these books for free, since they were demos at his place of work--he did PR for the Chicago Public Schools. We were fortunate to have a steady supply of children's books long after we had our permanent teeth.
Reading was a joyful activity in our family. We children taught each other to read long before we started first grade (there was no kindergarten at St. Hyacinth's School). I remember showing my younger brother how to sound out the letters in words; I was seven and he was three. Family vacations were always preceded by a trip to the library, to stock up on a dozen or so books to take along as we lounged at the lake or drove on our interminable car trips.
I was the bookworm of the family. In fourth grade I breezed through the classics on our classroom bookshelves--"Black Beauty," "Oliver Twist," and "Tom Sawyer." I doubt these books would be considered suitable for a 10-year-old today (even if they could read them), since they feature abuse of both animals and children.
My true passion, though, was science fiction. My favorites were "Elevator to the Moon" by Stanley Widney, "Have Space Suit, Will Travel" by Robert Heinlein, and "Space Cat" by Ruthven Todd. By age 12, I had read all the young adult science fiction in our local library, so I was allowed to take the "el" train downtown by myself to the main Chicago library. There I discovered a world of books.
In high school I discovered science, and at the same time I discovered the John Crerar Library, a technical library that was on the campus of the Illinois Institute of Technology, an hour on the "el." I was impressed by the modern campus, designed by Mies van der Rohe, and dedicated to the study of science. At the Crerar Library I would spend hours in the stacks, finding books and articles for my current science fair project. Merely having those books around me made me feel like a true scientist.
Also in high school, I took a course in journalism. I learned the joys of writing a concise sentence, and the precision and accuracy of the English language. I decided I would have to learn touch-typing. I had to take the class in the summer, in public school, since the nuns at our high school would not allow the "college track" girls to register for "secretary track" classes. Remember, this was 1966. For a nice Catholic girl, attending public school was an education in itself. And learning to type gave me a voice. I begged my parents to buy me an electric typewriter and they obliged. It got me through college and then med school. I have it to this day, though it has been supplanted by my laptop.
The years passed, and I had three children of my own. I started reading to them when they were too young to understand all the big words; we read together at bedtime, going through C.S. Lewis' "Narnia" books, Madeleine L'Engle's "A Wrinkle in Time" series, T.H. White's "The Once and Future King," numerous Robert Heinlein stories...too many others to remember. We read at bedtime, we read after dinner, we had books on tape for long car trips. All of my children were bookworms, too.
My children taught each other to read, just as I did with my siblings. My oldest son was an early reader. He attended the University of Chicago Lab School, which was later attended by the Obama girls. In nursery school he read books to his classmates, and he taught his younger sister to read; both eventually went on to study science and medicine. My second son was a late reader, but he had us all conned because he would memorize every book that was read to him, and "read" it back to us. As he grew older and became an actor, he retained this remarkable ability to memorize lines for plays. Ironically, in spite of being a late reader, he majored in English. The kids and I continue to read and recommend books to each other, and we are especially on the lookout for science fiction.
When I moved out of our old house in Chicago to Cambridge, Massachusetts, I found the box containing the children's books, our old friends that we read aloud to each other. When I opened it I was shocked--all that was left were small scraps of paper, and insect larvae. The books had been devoured by bookworms. Yes, there really ARE bookworms, and they do eat paper. I cried.
Cambridge was wonderful. There were so many bookstores I felt I was in paradise! Sadly, many of the bookstores closed, one by one, and I have since moved out of Cambridge. I'm writing books and blogs of my own now. But I still love to read for pleasure. Yes, I have my Nook and my Kindle and my iPhone Kindle Reader app. But I still prefer books. I like the feeling of the book in my hand, the weight of the paper. I like to read the flyleaf and the front pages, and the comments and bios on the back cover. When I read, I feel that I am inside the book, physically, with the story, and back on vacation as a child.
If you are reading this blog, you are probably a reader, too. No doubt you have stories like mine--I'd love to hear them! I am writing this to remind you to keep books in your life, and give them to your children and grandchildren. Make them bookworms. Buy books and keep bookstores open. And don't just keep the books in the bathroom. Read them!
Some day I will have grandchildren of my own, and I will read to them. For now, I only have grand dogs and grand kittens, and they don't enjoy books. My grandkids will get books from their grandmother, and I will read to them, perhaps on Skype. The first book will be "Harold and the Purple Crayon."
Monday, July 21, 2014
Buddhist Musings in Ramadan
by Jalees Rehman
Ramadan is the month of fasting and a time for spiritual growth among Muslims. The traditionalist approach to "spiritual growth" is for Muslims to complement their fasting with additional prayers at night and regular reading of the Quran throughout the month. My own approach is somewhat different: I tend to complement my fasting with the reading of writings and scriptures from other philosophies or faith traditions, including atheist and humanist teachings. This year, I decided to study the Dhammapada (in the translation of Gil Fronsdal), one of the most widely read and revered writings of the Buddhist faith.
I was inspired to learn more about Buddhism because I was reading the remarkable novel "A Tale for the Time Being" by Ruth Ozeki, who is not only a brilliant author but also an ordained Zen Buddhist priest. The first-person narrator of the novel is a 16-year-old Japanese girl, Nao, who is bullied by her classmates. Nao's parents had moved from Japan to Silicon Valley but were forced to return to Japan when the dot-com bubble burst. Nao's father loses his job and the family is forced to live in poverty. The family's poverty and the fact that Nao is seen as an alien "transfer student" lead to her being ostracized at school. But her classmates go even further and begin psychologically and physically torturing her, leaving scars and scabs all over her body.
Nao is invited to spend the summer with her 104-year-old great-grandmother Jiko, who is a Buddhist nun. In the following scene, Jiko takes a bath with Nao and notices the scars:
"I waited. Old Jiko liked to take her time, and she was really good at it because she'd been practicing for so many years, so as a result, I was always waiting for her, and you'd think that waiting would be annoying for a young person like me, but for some reason I didn't mind. It wasn't like I had anything better to do that summer. I sat there on my little wooden stool, naked and hugging my knees and shivering, not from the cold but in anticipation of the scalding heat of the water, so when, instead, I felt her fingertip touch a small scar in the middle of my back, I was startled. My body stiffened. The light was so dim, how could she see my scars with her bad eyes? I figured she couldn't, but then I felt her finger move across my skin in a pattern, hesitant, pausing here and there to connect the dots.
"You must be very angry," she said. She spoke so quietly, it was like she was talking to herself, and maybe she was. Or maybe she hadn't said anything at all, and I'd just imagined it. Either way, my throat squeezed shut and I couldn't answer, so I shook my head. I was so ashamed, but at the same time, this enormous feeling of sadness brimmed up inside me, and I had to hold my breath to stop from crying.
She didn't say anything else. She washed me gently, and for the first time I just wanted her to hurry up and finish. After we were done, I got dressed quickly and said good night and left her there. I thought I was going to throw up. I didn't want to go back to my room, so I ran halfway down the mountainside and hid in the bamboo forest until it got dark and the fireflies came out. When Muji rang the big bell at the end of her fire watch to signal the end of the day, I snuck back into the temple and crawled into bed.
The next morning I went looking for old Jiko and found her in her room. She was sitting on the floor with her back to the door, bent over her low table. She was reading. I stood in the doorway and didn't even bother to go in. "Yes," I told her. "I'm angry, so what?"
Once Nao is able to speak about her anger to Jiko, Nao's healing process can begin. The story makes frequent references to Buddhist teachings, quoting from Buddhist texts as well as allowing the reader to gradually imbibe important spiritual concepts. To better understand these concepts, I decided to read the Dhammapada. I first began with the chapter on "Anger" where I was struck by the following verses:
"The one who keeps anger in check as it arises,
As one would a careening chariot,
I call a charioteer.
Others are merely rein-holders."
How often do we let our anger chariot determine our paths? I can remember countless times when I have passively held the reins instead of taking control of this chariot.
I will just leave you with one more excerpt from the Dhammapada, but I advise you to read it (and, of course, Ozeki's novel!) in its entirety:
From the chapter "The Sage":
"As a solid mass of rock,
Is not moved by the wind,
So a sage is not moved
By praise or blame."
Monday, May 26, 2014
FC Bayern Munich: Too Jewish for the Nazis
by Jalees Rehman
Konrad Heitkamp was taken aback by the extraordinary ordinariness present in the lobby of the Zurich hotel. In November of 1943, life in Zurich seemed unperturbed by the fact that the countries surrounding Switzerland were embroiled in one of the most devastating wars in the history of the world. Heitkamp realized that as the coach of the FC Bayern München soccer team, he was one of the privileged few who could bask in this oasis of normalcy for a few days before he would have to head back home to Munich. He surveyed the lobby and began waving his hand at some of his players standing across the vestibule. Hopefully, the Gestapo men watching him thought of this as an innocuous gesture, a soccer team coach acknowledging the arrival of his players and performing a headcount. But he could not bank on it.
The Gestapo must have known that for the past weeks, Heitkamp and his players had been looking forward to the friendly match against the Swiss national soccer team because it would give them a chance to finally see their friend Kurt Landauer again. Before the team embarked on their trip to Zurich, the Gestapo had ordered all Bayern München players to attend a special "education" session at the Gestapo headquarters in Munich. The team was informed that the Gestapo would accompany them on their brief trip to Switzerland. The Gestapo explicitly forbade the team members to have any contact with German emigrants in Switzerland.
The Nazis were always wary of any potential contact between Germans and German emigrants, who were seen as traitors and collaborators with the Allied forces. But FC Bayern München was a special thorn in the flesh of the Nazi machinery. Nazis routinely referred to FC Bayern München as a "Judenclub" ("Jew Club"), because German Jews had held some of its key leadership positions. The club won its first German national soccer championship in 1932 under the leadership of the Jewish club president Kurt Landauer and the coach Richard Dombi, an Austrian Jew. Only a few months later, in January 1933, Hitler came to power and soon all leaders of Jewish origin were forced to give up their leadership positions.
Kurt Landauer was one of the first to resign from the club presidency. He even lost his job as the manager of a Munich newspaper's advertising department, and was only able to find work in a textile shop owned by a Jewish family. In the wake of the anti-Semitic pogroms on the night of 9 November 1938 (Kristallnacht, or Reichspogromnacht), this shop was attacked and devastated. Landauer was arrested and sent to the Dachau concentration camp. After a brief period of internment, he was released, and he used this opportunity to emigrate to Switzerland, where he survived the Holocaust. Most of his siblings were less fortunate and were murdered by the Nazis.
Konrad Heitkamp and his wife Magdalena are about to walk to their hotel room when a bellhop appears in front of them and hands Heitkamp a note. It is a message from Kurt Landauer. Heitkamp tries to suppress his excitement, but it is already too late. Before he can even read the note, a man taps him on the shoulder and says "Gestapo. Give me the note. We know who it is from and we absolutely forbid you to have any contact with that man. We are watching you!"
For the remainder of the trip, the Gestapo closely walls off Heitkamp and his players, making it impossible for them to have any contact with Landauer. But the players still manage to embarrass the Nazis and the Gestapo. Immediately after the whistle is blown to start the game, the FC Bayern München players run up to the area of the soccer field in front of Kurt Landauer and greet their former president from afar.
The book "Der FC Bayern und seine Juden: Aufstieg und Zerschlagung einer liberalen Fußballkultur" (FC Bayern and its Jews: The Rise and Destruction of a Liberal Soccer Culture) by the German soccer historian Dietrich Schulze-Marmeling describes the prevalent culture of tolerance at FC Bayern München in the years prior to the Nazi takeover of Germany. Many members, players and leaders of the club were Jewish, but the question of ethnicity or religion was not even a real issue for the club. All that really mattered was whether or not you were a member of the club. Once the Nazis came to power in 1933, they tried to install their henchmen at leadership positions of all institutions, including sport clubs. TSV 1860, the other big Munich soccer club, immediately acquiesced to the new Nazi masters, allowing SA men to take control of the club from 1934 onwards. Players and members of FC Bayern München, on the other hand, staving off Nazi leadership up until 1943. The Nazis were often frustrated by the recalcitrant "Judenclub" which resisted and delayed the implementation of Nazi ordinances.
I have been an FC Bayern München fan all my life. My childhood home in Munich was just a ten-minute walk away from the club's headquarters at the Säbener Strasse. It is not difficult to be proud of the club's achievements. In 2013, the club won every major trophy that it was eligible for – the Bundesliga championship, the German soccer federation cup (DFB-Pokal), the European Champions League and the 2013 FIFA Club World Cup – thus underscoring its dominance as the world's best soccer club. But none of these victories made me as proud of my club as finding out how it defied anti-Semitism and the Nazis.
Image: The headquarters of FC Bayern München in the Säbener Strasse (photo by J. Rehman)
Reference: Schulze-Marmeling, D. (2011). "Der FC Bayern und seine Juden: Aufstieg und Zerschlagung einer liberalen Fußballkultur". Werkstatt GmbH
Monday, April 28, 2014
Does Literary Fiction Challenge Racial Stereotypes?
by Jalees Rehman
A book is a mirror: if a fool looks in, do not expect an apostle to look out.
Georg Christoph Lichtenberg (1742-1799)
Reading literary fiction can be highly pleasurable, but does it also make you a better person? Conventional wisdom and intuition lead us to believe that reading can indeed improve us. However, as the philosopher Emrys Westacott has recently pointed out in his essay for 3Quarksdaily, we may overestimate the capacity of literary fiction to foster moral improvement. A slew of scientific studies have taken on the task of examining the impact of literary fiction on our emotions and thoughts. Some of the recent research has centered on the question of whether literary fiction can increase empathy. In 2013, Bal and Veltkamp published a paper in the journal PLOS ONE showing that subjects who read excerpts from literary texts scored higher on an empathy scale than those who had read a nonfiction text. This increase in empathy was predominantly found in the participants who felt "transported" (emotionally and cognitively involved) into the literary narrative. Another 2013 study, published in the journal Science by Kidd and Castano, suggested that reading literary fiction texts increased the ability to understand and relate to the thoughts and emotions of other humans when compared to reading either non-fiction or popular fiction texts.
Scientific assessments of how fiction affects empathy are fraught with difficulties and critics raise many legitimate questions. Do "empathy scales" used in psychology studies truly capture the psychological phenomenon of "empathy"? How long does the effect of reading literary fiction last and does it translate into meaningful shifts in behavior? How does one select appropriate literary fiction texts and control texts, and conduct such studies in a heterogeneous group of participants who probably have very diverse literary tastes? Kidd and Castano, for example, used an excerpt of The Tiger's Wife by Téa Obreht as a literary fiction text because the book was a finalist for the National Book Award, whereas an excerpt of Gone Girl by Gillian Flynn was used as a ‘popular fiction' text even though it was long-listed for the prestigious Women's Prize for Fiction.
The recent study "Changing Race Boundary Perception by Reading Narrative Fiction" led by the psychology researcher Dan Johnson from Washington and Lee University took a somewhat different approach. Instead of assessing global changes in empathy, Johnson and colleagues focused on a more specific question. Could the reading of a fictional narrative change the perception of racial stereotypes?
Johnson and his colleagues chose an excerpt from the novel "Saffron Dreams" by the Pakistani-American author Shaila Abdullah. In this novel, the protagonist is a recently widowed, pregnant Muslim woman, Arissa, whose husband Faizan was working in the World Trade Center on September 11, 2001 and was killed when the building collapsed. The excerpt from the novel provided to the participants in Johnson's research study describes a scene in which Arissa is traveling alone late at night and is attacked by a group of male teenagers. The teenagers mock her and threaten her with a knife because of her Muslim headscarf (hijab), use racial and ethnic slurs, and make references to the 9/11 attacks. The narrative excerpt does not specifically mention the word Caucasian, but one of the attackers is identified as blond and another has a swastika tattoo. They do not believe her when she tries to explain that she was also a victim of the 9/11 attacks and instead refer to her as belonging to a "race of murderers".
The researchers used a second text in their experiment, a synopsis of the literary excerpt from Saffron Dreams. This allowed Johnson and colleagues to distinguish the effects of the literary narrative style, with its inner monologue and description of emotions, from the effects of the content alone. Samples of the literary text and the synopsis used by the researchers can be found at the end of this article (scroll down) for those readers who would like to compare their own reactions to the two texts.
The researchers recruited 68 U.S. participants (mean age 36 years, roughly half of them female, 81% Caucasian, reporting seven different religious affiliations, none of them Muslim) and randomly assigned them to the full literary narrative group (33 participants) or the synopsis group (35 participants). After the participants read the texts, they were asked to complete a number of questions about the text and its impact on them. They were also presented with 18 male faces that the researchers had designed with special software so that they appeared ambiguous in terms of Caucasian or Arab characteristics. For example, the faces combined blue eyes with darker skin tones. The participants were asked to grade each face as being:
1) Arab
2) mixed, more Arab than Caucasian
3) mixed, more Caucasian than Arab
4) Caucasian
The participants were also asked to estimate the genetic overlap between Caucasians and Arabs on a scale from 0% to 100%.
Participants in the narrative fiction group were more likely to choose one of the ambiguous options (mixed, more Arab than Caucasian or mixed, more Caucasian than Arab) and less likely to choose the categorical options (Arab or Caucasian) than those who read the synopsis. Even more interesting is the finding that the average percentage of genetic overlap between Caucasians and Arabs estimated by the synopsis group was 33%, whereas it was 57% in the narrative fiction group.
Both of these estimates are way off. The genetic overlap between any one human being and another human being on our planet is approximately 99.9%. Even much of the 0.1% variation in the human genome sequences is not due to 'racial' differences. As pointed out in a Nature Genetics article by Lynn Jorde and Stephen Wooding, approximately 90% of total genetic variation between humans would be present in a collection of individuals from any one continent (Asia, Europe or Africa). Only an additional 10% genetic variation would be found if the collection consisted of a mixture of Europeans, Asians and Africans.
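Restating those published figures as rough arithmetic (my own back-of-the-envelope arrangement of the numbers above, not a calculation taken from the study):

\[
\text{variable fraction of the genome} \approx 0.1\%, \qquad 0.90 \times 0.1\% \approx 0.09\% \ \text{(found within any one continent)}, \qquad 0.10 \times 0.1\% \approx 0.01\% \ \text{(additional, between continents)}.
\]

In other words, the genetic overlap between any Arab and any Caucasian individual is on the order of 99.9%, nowhere near the 33% or 57% estimated by the study participants.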
It is surprising that both groups of study participants heavily underestimated the genetic overlap between Arabs and Caucasians, and that simply reading the fictional text changed their views of the human genome. This latter finding is also a red flag that informs us about the poor state of general knowledge of genetics, which appears to be so fragile that views can be swayed by nonscientific literary texts.
This study is the first to systematically test the impact of reading literary fiction on an individual's assessment of race boundaries and genetic similarity. It suggests that fiction can indeed blur the perception of race boundaries and challenge our stereotypes. The text chosen by the researchers is especially well-suited to defy stereotypical views held by the readers. The protagonist's Muslim husband was killed in the 9/11 attacks and she herself is being harassed by non-Muslim thugs. This may challenge assumptions held by some readers that only non-Muslims were the victims of the 9/11 attacks.
Reading the narrative text seemed to have effects on the readers that went far beyond the content matter – the story of a Muslim woman who shows considerable courage while being threatened. The faces shown to the study participants were those of men, and the question of genetic overlap between Caucasians and Arabs was a rather abstract question which had little to do with Arissa's story. Perhaps Arissa's story had a broader effect on the readers. The study did not measure the impact of the narrative on additional stereotypes or assumptions held by the readers, such as those regarding other races or sexual orientations, but this is a question that ought to be investigated.
One of the limitations of the study is that it assessed the impact of the story only at a single time-point, immediately after reading the text. Without measuring the effect a few days or weeks later, it is difficult to ascertain whether this was a lasting effect. Another limitation of this study is that it purposefully chose an anti-stereotypical text, but did not test the opposite hypothesis, that some fictional narratives may potentially foster negative stereotypes.
One of my earliest memories of an English-language novel about Muslim characters is the spy novel "The Mahdi" by the British author A.J. Quinnell (the pen name of Philip Nicholson), written in 1981. The basic plot is that (spoiler alert) US and British intelligence agencies want to manipulate and control the Muslim world by installing a 'Mahdi', the long-awaited spiritual and political leader of Muslims foretold by Muslim tradition. The ridiculous part of the plan is that the puppet leader is accepted by the Muslim world as the true incarnation of the Mahdi because of a green laser beam emanating from a satellite. The beam incinerates a sacrificial animal in front of a crowd of millions of Muslims at the Hajj pilgrimage and convinces them (and the rest of the Muslim world) that God sent this green laser beam as a sign. This novel portrayed Muslims as gullible idiots who would simply accept the divine nature of a green laser beam. One can only wonder what impact reading an excerpt from that novel would have had on the perception of race boundaries by study participants.
The study by Johnson and colleagues is an important contribution to the research of how reading can change our perceptions of race and possibly stereotypes in general. It shows that reading fiction can blur the perception of race boundaries, but it also raises a number of additional questions about how long this effect lasts, how pervasive it is and whether fiction might also have the opposite effect. Hopefully, these questions will be addressed in future research studies.
Image Credit: Saffron Woman by N.M. Rehman (generated from an attribution-free, public domain photograph)
Dan R. Johnson, Brandie L. Huffman & Danny M. Jasper (2014), Changing Race Boundary Perception by Reading Narrative Fiction, Basic and Applied Social Psychology, 36:1, 83-90, DOI: 10.1080/01973533.2013.856791
Excerpt of the literary fiction sample from "Saffron Dreams" by Shaila Abdullah
This is just an excerpt from the narrative sample used by the researchers, which was 3,108 words in length (pages 57-64 from the book):
"I got off the northbound No. 2 IRT and found out almost immediately that I was not alone. The late October evening inside the station felt unusually weighty on my senses.
I heard heavy breathing behind me. Angry, smoky, scared. I could tell there were several of them, probably four. Not pros, perhaps in their teens. They walked closer sometimes, and other times the heavy thud of spiked boots on concrete and clanking chains receded into the distance. They walked like boys wanting to be men. They fell short. Why was there no fear in my heart? Probably because there was no more room in my heart for terror. When horror comes face-to-face with you and causes a loved one's death, fear leaves your heart. In its place, merciful God places pain. Throbbing, pulsating, oozing pus, a wound that stays fresh and raw no matter how carefully you treat it. How can you be afraid when you have no one to be fearful for? The safety of your loved ones is what breeds fear in your heart. They are the weak links in your life. Unraveled from them, you are fearless. You can dangle by a thread, hang from the rooftop, bungee jump, skydive, walk a pole, hold your hand over the flame of a candle. Burnt, scalded, crashed, lost, dead, the only loss would be to your own self. Certain things you are not allowed to say or do. Defiant as I am, I say and do them anyway.
And so I traveled with a purse that I held protectively on one side. My hijab covered my head and body as the cool breeze threatened to unveil me. I laughed inwardly as I realized I was more afraid of losing the veil than of being mugged. The funny part of it is, I desperately wanted to lose my hijab when I came to America, but Faizan had stood in my way. For generations, women in his household had worn the veil, although none of them seemed particularly devout. It's just something that was done, no questions asked, no explanations needed. My argument was that we should try to assimilate into the new culture as much as possible, not stand out. Now that he was gone, losing the hijab meant losing a portion of our time together.
It had been just 41 days. My iddat, bereavement period, was over. Technically I was a free woman, not tied to anyone, but what could I do about the skeletons in my closet that wouldn't leave me alone?"
Excerpt of the Synopsis used by the researchers as a comparator:
This is the corresponding excerpt from the synopsis used by the researchers. The full-length synopsis was 491 words long:
"The scene starts with Arissa getting off the subway train. She is being followed. Most commuters have already returned home, so it is not the safest time to be traveling alone. Four people are walking behind her. Initially confused by the lack of fear in her heart, she realizes that it is the consequence of losing someone so close to her. It is ironic that she is wearing her hijab, a Muslim veil. She wanted to get rid of it when she came to America, but her husband, Faizon, insisted she keep it. Following his death, keeping the hijab was a way of keeping some of their time together. It has been 41 days since the attack, and Arissa's iddat, bereavement period, is over. She is a free woman, but cannot put aside her grave feelings of loss."
Monday, March 17, 2014
Why Amazon Reminds Me of the British Empire
by Emrys Westacott
"Life—that is: being cruel and inexorable against everything about us that is growing old and weak….being without reverence for those who are dying, who are wretched, who are ancient." (Friedrich Nietzsche, The Gay Science)
A recent article by George Packer in The New Yorker about Amazon is both eye-opening and thought-provoking. In "Cheap Words" Packer describes Amazon's business practices, the impact of these on writers, publishers, and booksellers, and the seemingly limitless ambitions of Amazon's founder and CEO Jeff Bezos whose "stroke of business genius," he says, was "to have seen in a bookstore a means to world domination."
Amazon began as an online book store, but US book sales now account for only about seven percent of the seventy-five billion dollars it takes in each year. Through selling books, however, Amazon developed, perhaps better than any other business, two strategies that have been key to its success: it makes full use of sophisticated computerized collection and analysis of data about its customers; and it makes the interaction between buyer and seller maximally simple and convenient. It also, of course, typically offers lower prices than its competitors. Bezos' plan to one day have drones provide same-day delivery of items that have been stocked in warehouses near you in anticipation of your order is the logical next step in this drive toward creating a frictionless customer experience.
Amazon's impact on the world of books has been massive. Over the past twenty years the number of independent bookstores in the US has been cut in half from four thousand to two thousand, and this number continues to dwindle. Because Amazon is by far the biggest bookseller, no publisher can afford to not use its services, and Amazon exploits this situation to the hilt. Publishers are required to pay Amazon millions of dollars in "marketing discount" fees. Those that balked at paying the amount demanded had the ‘Buy' button removed from their titles on Amazon's web site. Amazon used the same tactic to try to force Macmillan to agree to its terms regarding digital books. And of course Amazon's Kindle dominates the world of e-books, another major threat to traditional publishers and booksellers.
The argument for viewing Amazon in a positive light is not difficult to make.
They offer the customer a bigger selection of books than anyone else, usually at lower prices. Buying online as a returning customer with a registered credit card is laughably easy. Any wannabe writer can self-publish with Amazon, and those whose books sell receive a much higher percentage in royalties. In opening up this opportunity to all, and in basing its advertising and promotional decisions on computer analysis of customer behavior rather than on some self-styled expert's opinion, Amazon eliminates the unnecessary middlemen, professional tastemakers, and elitist gatekeepers that have controlled—and constrained—publishing for so long, replacing them with the dynamic democracy of the digital market place.
For all that, more than one person I know reacted to Packer's article by pledging to avoid buying stuff from Amazon in future, at least as far as and for as long as this is possible (which judging from the way things are going may not be too far or very long). Why this reaction? Well, when I told my daughter about Packer's article her immediate response was to say that Amazon sounded a bit like the British Empire. Which set me thinking.
What parallels can be found between the premier online retailer and the largest empire in history? I see similarities in three areas: beliefs and attitudes; practices; and impact on affected populations. Let's consider these in turn.
According to Packer's account, the prevailing attitude among those in charge at Amazon is arrogance. Here is where I think the echoes of imperialism are most apparent. British imperialists typically viewed themselves as superior to those they displaced or ruled on various counts: birth, race, heritage, education, culture, morals, religion, ability, and character, all resulting in and backed up by superior political and military power. The proof of this superiority could be seen on any map of the world that showed the extent of Britannia's rule. The Amazon execs are indifferent, of course, to such things as birth or pedigree; what matters to them is being smart. But thinking of themselves as smart is the basis for a particular kind of arrogance which they seem to share with other successful types in places like Silicon Valley and Wall Street. The way one top exec is described to Packer by a colleague is revealing: he's said to be "the smartest guy in the room at a company where everyone believes himself to be just that."
This fetishism of smartness is certainly not confined to techies, but it assumes a specific and perhaps especially intense form among them. Obviously, there are many different ways of being intelligent. One can excel at abstract reasoning, creative problem-solving, learning languages, understanding people, remembering information, noticing patterns and connections, interpreting works of art, manipulating people and events, mastering a practical skill, recognizing opportunities, artistic creativity, witty repartee—the list is virtually endless. So there are many people out there who are smart in various ways. But at any particular time and place, certain kinds of intelligence will be especially valued. It might be the ability to track an animal, or plan a battle, or discourse fluently in Latin, or demonstrate erudition, or make accurate and discriminating observations, or solve technical problems using mathematics and logic. These are all forms of smartness that at different times have been applauded and rewarded. And of course one kind of smartness is to recognize just what kind of smarts the present or immediate future will reward.
Today we live in an age when science enjoys cultural hegemony and most educated people earn a living by processing information. Naturally enough, therefore, certain kinds of smartness are now much in demand and are rewarded accordingly. Prominent among these is fluency in computer science and technology. The market value of knowledge and skills in this area has been greatly enhanced by the growth of the internet since this has expanded to an unprecedented degree the potential customer base or audience for any online enterprise.
The fetishism of smartness at places like Amazon is thus, naturally enough, oriented towards technological fluency and business acumen. But it seems to be accompanied by a moral subtext. Our success is not due to chance or luck; it's due to our intelligence; therefore it's deserved. On the face of it, this might seem dissimilar to the attitude of a British imperialist who, after all, could hardly claim credit for being born British (Cecil Rhodes supposedly said that "to be born English is to win first prize in the lottery of life"). But it is similar insofar as the British attributed their success in conquering and ruling much of the world to their possession of certain qualities—intelligence, industry, organization, moral and cultural superiority. The similarity extends also to the contemptuous attitude felt and sometimes expressed toward those who suffer as a result of this success. One former Amazon employee cited by Packer says that execs at Amazon view the older publishers as "antediluvian losers" and describe whole sections of the print world as the "Rust Belt media." Imperialists like Winston Churchill, who as a military officer in Africa cheerfully helped to destroy the settlements, property, and whole way of life of native populations, regularly referred to those populations as "primitive," "backward," "barbarous," "ignorant," "savage," and "improvident."
In the eyes of both, what legitimizes this contempt—and reinforces the arrogance—is the conviction that they are on the side of history. As Jeff Bezos said to Charlie Rose: "Amazon is not happening to bookselling. The future is happening to bookselling." The attitude is a form of Social Darwinism. Countries with superior military power and political organization will naturally dominate people who are lacking in these. ("Whatever happens, we have got / The Maxim Gun, and they have not.") Businesses that know how to use the latest technology effectively will inevitably send to the wall those that still rely on dated methods that are less efficient: that's the way capitalism functions. The ultimate and unarguable proof of superiority is real world success: the subjugation of native populations; the growth of market share. Might is right.
Seeing themselves as being aligned with the forces of inevitable historical change is accompanied, naturally enough, by the belief that they are agents of progress, that the changes they help bring about are desirable. Obviously, this self-perception can be self-serving; but that doesn't make it foolish. There is an idealistic strain in enterprises like Amazon, Google, or Facebook that is not simply a piece of self-deception or a marketing strategy. Amazon really does make books available to people who lack a local bookstore (although in some cases, of course, this lack may be largely due to the local bookstore being put out of business by Amazon). Their constantly expanding inventory–Bezos' eventual goal is to warehouse copies of every book ever written–means that it is now much easier than ever before to buy obscure and out of print titles. Electronic self-publishing makes it easier and cheaper for all writers to put their work out in the public domain. British imperialists also saw themselves as benefiting the world. Churchill, reflecting on what the British had achieved in Africa, thought that future historians would judge them to be "a people, of whom at least it may be said, that they have added to the happiness, the learning and the liberties of mankind." Cecil Rhodes was bracingly blunt: "I contend that we are the first race in the world, and the more of the world we inhabit the better it is for the human race."
Moving from attitudes to actions, we should first of all be fair to Amazon. They don't massacre by the thousand those who resist their growing power; they don't torch villages in acts of punitive reprisal; they don't use gunboats to force the Chinese to keep buying opium from British drug traffickers. But within the parameters of legal business operations, they do seem to be pretty ruthless. Some of their success is undoubtedly due to their clever use of up-to-date methods, from automated, individual-oriented advertising to warehouses staffed by non-unionized workers who are already being replaced by robots. But according to Packer their success in bookselling is also largely due to a strategy whereby they "created dependency and harshly exploited its leverage." Refusing to sell books by publishers who won't cough up a sufficiently large "marketing discount" fee is a case in point. This is, in effect, a legal extortion racket. To be sure, it isn't as crude as the way the British persuaded the Chinese to sign the Treaty of Nanking, which required China to hand over twenty-one million dollars, grant all sorts of trading concessions, and cede control of Hong Kong (the British method was to threaten Nanking with gunboats). But the underlying mentality isn't so different. Where one isn't constrained by moral considerations, all that remains is a power struggle; and all that ultimately matters in that struggle is who wins. As Quirrell says in Harry Potter and the Philosopher's Stone, echoing Machiavelli, Hobbes, and Nietzsche: "There is no good and evil, there is only power and those too weak to seek it."
Of course, Jeff Bezos is hardly the first capitalist to play hardball, so it wouldn't make much sense to single out his company as singularly ruthless in its business strategies. The ethics of Amazon are pretty much the ethics of any big business striving toward monopoly status. What is troubling, though, about the mindset described by Packer is the seeming indifference to, or even satisfaction over, the negative impact of the company's actions on significant numbers of people. Packer reports that among "people who care about reading, Amazon's unparalleled power generates endless discussion, along with paranoia, resentment, confusion, and yearning." This could equally stand as a description of those who found themselves powerless to resist British rule. But in both cases, the view from the seat of power is that those who aren't with the program either don't recognize what's in their best interests or deserve to disappear.
"Innovate or die." "Move fast and break things" Such mantras are associated with the technological revolution, but there is nothing essentially new here. They express the essential spirit–and reality– of capitalism that Marx describes in The Communist Manifesto. Those who find themselves surfing the waves of innovation naturally enough sing the praises of the new. So much is understandable. It feels good to be a winner, doubly good if you sense the wind of history at your back, and triply good if you believe you're making the world a better place. British imperialists felt good on all three counts, yet we are now critical of their attitude in large part because of their indifference to the individuals, communities and cultures they affected and in many cases destroyed. They could have done with more humility and more humanity. The same goes for the Amazon execs described by Packer. What is unbecoming, even ugly, in both groups is the callousness drifting into contempt toward those who, also understandably, lament the destruction of something they cherish, whether it be a secure job (like working in a bookstore), a respected occupation (like print publishing), a skill that is no longer marketable (like editing), a pleasure that may soon no longer be available (like browsing in used bookstores) or, indeed, an entire form of life.
Monday, March 03, 2014
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3 in 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3 in 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not an end.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it; it is only when such mythical powers are ascribed to the internet that mere usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require belief in deities, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence", but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion, so it may be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of a general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive quotes from internet pundits which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion and its mythical or supernatural aspects, which are often essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition for religions but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity, whether by accessing computers, hunting for wireless networks, or checking emails and social media updates far more often than any pragmatic need requires, could be considered a ritual of Internet-centrism. If one feels the need to check emails and Facebook or Twitter updates every one to two minutes, despite the fact that it is unlikely one would have received a message that required urgent action, it may be an indicator of the important role that this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss out on regular prayers or lose the rosaries that allow them to commune with their God, and it appears that for some humans, the ritual of internet connectivity may play a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers or cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just like "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine that is promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits in Silicon Valley that people were outraged at the NSA, a government institution, for violating our privacy, yet there was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism, since the internet (in both its physical and its mythical forms) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and interacting socially are key to many religions, and Internet-centrism provides more tools than any other ideology, cultural movement or religion for us to interact with others. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of Internet-centrism. One of its goals is to provide opportunities for self-actualization. We all may be connected via the internet, but Internet-centrists also want us to believe that this connectivity provides a path for self-actualization. We can modify settings to customize our web browsing experience, and we can pick and choose from millions of options: which online courses to take, which videos to watch, which music to listen to. The sense of connectedness and omnipotentiality is what provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to the spiritual experiences of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that, at least in the short term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Monday, February 03, 2014
Haiku and Landays in Science
by Jalees Rehman
Summer grasses –
That's all that remains
Of warriors' dreams.
(Matsuo Bashō)
My favorite scientific experiments are those which resemble a haiku: simple and beautiful with a revelatory twist. This is why the haiku is very well suited for expressing scientific ideas in a poetic form. Contemporary haiku poets do not necessarily abide by the rules of traditional Japanese haiku, such as including a word which implies the season of the poem or the 17-syllable (5-7-5) structure of three lines. Especially when writing in a language other than Japanese, one can easily argue that the original 5-7-5 structure was based on Japanese sound units rather than English syllables and that there is no need to apply this syllable count to English-language haiku. Even the reference to seasons and nature may not apply to a modern-day English haiku about urban life or, as in my case, science.
Does this mean that contemporary haiku are not subject to any rules? In the introductory essay to an excellent anthology of English-language haiku, "Haiku in English: The First Hundred Years", the poet Billy Collins describes the benefit of retaining some degree of structure while writing a haiku:
Many poets, myself included, stick to the basic form of seventeen syllables, typically arranged in three lines in a 5-7-5 order. This light harness is put on like any formal constraint in poetry so the poet can feel the comfort of its embrace while being pushed by those same limits into unexpected discoveries. Asked where he got his inspiration, Yeats answered, "in looking for the next rhyme word." To follow such rules, whether received as is the case with the sonnet or concocted on the spot, is to feel the form pushing back against one's self-expressive impulses. For the poet, this palpable resistance can be a vital part of the compositional experience. I count syllables not out of any allegiance to tradition but because I want the indifference and inflexibility of a seventeen-syllable limit to balance my self-expressive yearnings. With the form in place, the act of composition becomes a negotiation between one's subjective urges and the rules of order, which in this case could not be simpler or firmer.
The seventeen syllable limit – like any other limit or rule in poetic forms – provides the necessary constraints that channel our boundless creativity to create a finite poem. It is a daunting task to sit down with a pen and paper, and try to write a poem about a certain topic. Our minds and souls are flooded with a paralyzing plethora of images and ideas. But, as Collins suggests, if we are already aware of certain rules, it becomes much easier to start the process of poetic filtering and negotiation.
What is the essence of a haiku? In the same essay, Collins offers a very elegant answer:
Whether they are the counting or the non-counting type, poets are likely to agree that at the heart of the haiku lies something beyond counting, that is, its revelatory effect on the reader, that eye-opening moment of insight that occurs whenever a haiku succeeds in drawing us through the keyhole of its details into the infinite, or to put it more ineffably, into the "Void of the Whole." No one would argue that any tercet that mentions a cloud or a frog qualifies as a real haiku; it would be like calling an eleven-line poem about courtly love a sonnet. A true haiku contains a special uncountable feature, and every serious devotee of the form aims to achieve that with every attempt.
The revelatory surprise, the "Aha moment", is what characterizes a true haiku. I have experimented with the haiku form, trying to capture scientific concepts or the process of scientific discovery. Many poets do not give titles to their haiku, but I feel that the title can be very helpful to create a poetic tension and provide a context that may be difficult to incorporate within the haiku verses. A haiku – like every good poem – should not require explanatory lines by the poet, but I think that one can make some exceptions here in the context of experimenting with haiku.
Scientific images or phrases are not always self-evident, so I include brief annotations for the haiku I have written, which may be helpful for people who are not routinely exposed to scientific research.
Grainy threads in cells,
powerhouses of life are
harbingers of death
I have been studying mitochondria for a number of years, but I still marvel at their Janus-like role. They are active sites of biosynthesis and produce the universal energy molecule of cells (ATP), thus ensuring the growth and survival of cells. At the same time, mitochondria can initiate a cell's suicide program (apoptosis), forcing a cell to die. You can read about some of our mitochondrial research on lung cancer here.
Ceci n'est pas une
pipette, porting microdrops
for my macrodreams
Many of us have spent hours, days and months repetitively pipetting hundreds of samples for PCR reactions, ELISA assays or other tests, and sooner or later most of us wonder about the meaning of these Sisyphean tasks.
in science are tested only
to be rejected
If I received a dollar for every wonderful scientific idea I have had that turned out to be wrong, I would not have to write any more grants to support my lab.
Haiku have become an integral part of English-language poetry, but there is another poetic form that may soon gain popularity. The journalist and poet Eliza Griswold recently teamed up with the photographer Seamus Murphy, traveled to Afghanistan and collected landays, which are commonly composed by Afghan women in their native language Pashto. Landays are a form of folk poetry, couplets consisting of a line with nine syllables followed by one with thirteen syllables. Griswold worked with native Pashto speakers to translate the landays into English. In her brilliant essay published in the June 2013 issue of Poetry Magazine, Griswold provides us with glimpses into the lives of Afghan women and the hardships that they face on a daily basis. The essay also contains translations of landays, which have become a form of lyrical resistance for Afghan women, allowing them to voice their anger and frustration. Illiterate women compose, share and recite these poems, often anonymously and behind closed doors, in a society that marginalizes women. The narratives about Afghan women and the translations of landays, which preserve their characteristic wit and sarcasm, are accompanied by haunting photographs that convey the beauty of war-torn Afghanistan and its people.
Here is a description of landays from Griswold's essay:
A landay has only a few formal properties. Each has twenty-two syllables: nine in the first line, thirteen in the second. The poem ends with the sound "ma" or "na." Sometimes they rhyme, but more often not. In Pashto, they lilt internally from word to word in a kind of two-line lullaby that belies the sharpness of their content, which is distinctive not only for its beauty, bawdiness, and wit, but also for the piercing ability to articulate a common truth about war, separation, homeland, grief, or love. Within these five main tropes, the couplets express a collective fury, a lament, an earthy joke, a love of home, a longing for the end of separation, a call to arms, all of which frustrate any facile image of a Pashtun woman as nothing but a mute ghost beneath a blue burqa.
Examples of landays collected by Griswold:
You sold me to an old man, father.
May God destroy your home, I was your daughter.
I tried to kiss you in secret but you're bald!
Your bare skull thumped against the wall.
I dream I am the president.
When I awake, I am the beggar of the world.
In April of 2014, Griswold and Murphy will also release the book "I Am the Beggar of the World: Landays from Contemporary Afghanistan" which will contain a more comprehensive collection of landays.
Landays have not yet caught on as a poetic form in the English language, but this landmark work by Griswold might change that. I think that landays might be a great opportunity for scientists to describe their experiences with the scientific enterprise.
My landays revolve around the work and lives of academic scientists:
I work alone in the lab each night,
conducting all our experiments for your career.
Sirens of tenure captivate us,
chained to hallowed halls of academic freedom.
Journals can make or break our careers,
careers can make or break us, we can make or break journals.
These landays attempt to approximate the 9-13 syllable count of the couplets, but as with haiku, the nature, structure and themes of landays written in English will likely differ from those of the original Pashto landays.
It does not really matter what poetic form or structure scientists choose to express themselves, but my personal experience has been that poetry is a wonderful way to share science. Writing haiku or landays about science has forced me to think about what aspects of my scientific work I really treasure. What started as a playful exercise with words has become a journey.
Monday, December 02, 2013
The 400 Blows
by Lisa Lieberman
The opening credits sequence of The 400 Blows (1959) takes us for a drive along the empty streets of Paris on a gray morning in early winter. Bare trees, a glimpse of the weak sun as we make our way toward the Eiffel Tower: a lonely feeling settles over us and never really leaves. This world, the world of François Truffaut's childhood, is not the chic 1950s Paris of sidewalk cafés, couples strolling along the Seine, and Edith Piaf regretting nothing.
Eleven-year-old Antoine Doinel is in school when the film begins. We see him singled out for misbehavior by a teacher. He may not be a model student, but he's no worse than any of the other boys. Nevertheless, an example must be set pour encourager les autres. Draconian punishment of a potential ringleader is a time-honored means of enforcing discipline among the troops. Antoine is sent to the corner, kept in during recess, assigned extra homework. Even so, the teacher's authority is subverted. Small insurrections break out in the classroom when his back is turned. Exasperated, he threatens reprisals. "Speak up, or your neighbor will get it."
We begin to suspect that we are not in 1950s Paris. We are in Paris during the German occupation—the era when Truffaut was actually growing up. The somber mood, the furtive acts of rebellion and retaliation, as when some of the students, led by Antoine, destroy a pair of goggles belonging to the class snitch.
There are other clues. A scene that evokes the hunger, when wartime rationing was in effect. Antoine spends a night on the streets, afraid to go home after he's been caught in a lie. As dawn approaches, he steals a bottle of milk from a caddy he spots on the curb in front of a shop and drinks it ravenously. Later, Truffaut draws our attention to a notice about exterminating rats on the wall of the police station where Antoine is locked up after his stepfather turns him in for a petty theft. Equating Jews with vermin was de rigueur in Vichy propaganda, a standard feature of the newsreels shown before the movies that the future filmmaker sneaked into when he was supposed to be in school.
Truffaut's stepfather really did hand him over to the police. He was subsequently sent to a reform school on the outskirts of the city, the Paris Observation Center for Minors, a grim institution where corporal punishment was employed to keep the delinquents in line. Antoine is sent to an Observation Center in Normandy, near the coast. The routine is strict, militaristic. We see the young offenders marching two-by-two under the watchful gaze of the warden. No deviation passes unnoticed. Antoine is slapped for taking a bite of bread before he is given permission to eat, the blow delivered casually and without rancor. A simple transaction: one violation of the rules earns a slap.
More serious infractions, such as running away, earn a beating. A boy is returned to the institution, his face bruised and bloody, dragged past the other juveniles by his captors and locked in a cell. Truffaut suffered the same fate for attempting to escape and ended up spending several months in solitary confinement. He also underwent a series of psychological assessments. In the film, Antoine is warned by another boy not to let his guard down in his interview with the "spychologist." Anything he does or says in her presence will be noted in his dossier, his source cautions, together with "what everyone thinks of you, including your neighbors."
The Kids in the Cage
This scene, though not strictly autobiographical (in reality, the Center's psychologist became Truffaut's staunchest ally), is in keeping with the wartime undercurrents running throughout the picture. Harder to decipher is an incongruous detail the filmmaker inserted into an outdoor sequence at the reform school, where we see the warden locking his own small children in a cage, presumably for their own protection, as the young offenders pass close by for their daily exercise. Granted, the cage is a rather pretty structure, filigreed metal painted white, but the image echoes a key moment in the police station, when Antoine was taken out of the basement cell he shared with a male inmate to make way for some newly-arrested prostitutes.
The idea of an eleven-year-old boy being locked up with these immoral women was so unthinkable that he was removed to a cage the size of a phone booth for his protection. Film scholar Adam Lowenstein draws a connection between the image of the kids in the cage and the work of French director Georges Franju, whose horror films exerted a powerful influence on Truffaut. Franju liked to slip uncanny images into his work, "forcing a recognition with the disturbing historical events that haunt it." The past, in Franju's cinematic vision, was not safely past; events such as the German occupation and postwar purges, the round-ups of French Jews and their deportation to the death camps, continued to inform the present in myriad ways, not all of them conscious. Indeed, Truffaut said in an interview that he intended the kids in the cage as a tribute to Franju.
The persistence of past trauma in present-day awareness was also a central preoccupation in the films of Truffaut's colleague and mentor Alain Resnais. His documentary, Night and Fog (1955), was released during the Algerian war (1954-62), when French soldiers were accused of "doing over there what the Germans had done over here," as Albert Camus bluntly put it. The narrator's final words, scripted by Mauthausen survivor Jean Cayrol, stand as commentary on France's dirty war in the colony.
We pretend it all happened only once, at a given time and place. We turn a blind eye to what surrounds us, and a deaf ear to humanity's never-ending cry.
The bleakest moments of The 400 Blows seem freighted with political significance. Let us return to that notice on the wall of the police station about rat exterminations. The term used in the notice, deratissages, closely resembles the euphemism the French army employed when referring to their anti-terrorist raids on Algerian villages: rat hunts or ratissages. These operations entailed razing the village to the ground, rounding up suspected terrorists, and forcibly resettling the remaining inhabitants in barbed wire-enclosed camps. Some two million Algerians were expelled from their homes and interned under harsh conditions by French authorities, resulting in tens of thousands of deaths from starvation, disease, or exposure.
Evidence of such inhumane policies, on top of the Gestapo tactics decried by Camus—torture, hostage-taking and indiscriminate reprisals against civilians, summary executions—was impossible to ignore in the late 1950s, when Truffaut was making his film. No less troubling were the French government's efforts to suppress debate on the Algerian campaign at home. When the journalist and former Resistance leader Claude Bourdet published an editorial in 1957 critical of the war, he was arrested at his home in Paris, handcuffed and brought to the Fresnes Prison, strip-searched, and questioned for the better part of a day. Fresnes Prison was where the Gestapo had interrogated members of the Resistance; Bourdet himself had been tortured there in 1944 before being sent to a concentration camp, and he did not hesitate to draw a parallel between the two experiences. "When the doorbell rings at 6 a.m. and it's the milkman, you know you are in a democracy."
Discipline and Punish
The curtailing of personal freedom in the interest of security and public order would become the focal point of Michel Foucault's investigations into the disciplinary mechanisms permeating modern society. Working as a cultural attaché in the French foreign mission in Hamburg, he may well have seen The 400 Blows when it came out. The picture made quite a splash at the 1959 Cannes film festival, earning Truffaut the award for Best Director and a nomination for the top prize, the Palme d'Or, and it was Foucault's job to promote French cultural productions. Movies also happened to be one of the few distractions Foucault permitted himself, beginning in his student days at the École Normale.
Imagine the as yet unknown scholar, putting aside his work on the manuscript of Madness and Civilization (1961) to take in Truffaut's picture. He would have appreciated the "spychologist" line; Foucault himself had been subjected to psychiatric evaluations after his first suicide attempt. The film's spontaneity, an affront to the mannered traditions of French cinema—a tradition Truffaut dismissed as "cinéma de papa"—would have appealed to the iconoclastic philosopher. And it's tempting to regard the image of the kids in the cage as the proverbial grain of sand, the nucleus of the book that many consider the pearl in Foucault's oeuvre, Discipline and Punish (1975).
Toward the end of Discipline and Punish, Foucault introduces a walk-on character, Béasse, a thirteen-year-old orphan brought before the authorities in 1840 for vagabondage. The judge viewed the boy as a delinquent because he had no home and no steady employment. Idleness was a punishable offense under nineteenth-century French jurisprudence. Béasse understood his situation differently, however:
I don't work for anybody. I've worked for myself for a long time now. I have my day station and my night station. In the day, for instance, I hand out leaflets free of charge to all the passers-by; I run after the stagecoaches when they arrive and carry luggage for the passengers; I turn cart-wheels on the avenue de Neuilly; at night there are the shows; I open coach doors, I sell pass-out tickets; I've plenty to do.
The Béasses of this world, Foucault lamented, could not withstand the disciplinary system of "civilization" and "order" and "legality" that defined freedom as a crime, and yet the boy's joyful exuberance could not be suppressed entirely.
Hearing his sentence of two years in a reformatory, Béasse ‘pulled an ugly face, then, recovering his good humor, remarked: "Two years, that's never more than twenty-four months. Let's be off then!"'
The 400 Blows is punctuated with moments of joyful exuberance, but the ending suggests that there is no evading the regimen of the Observation Center. Antoine escapes, and we follow him as he makes his way to the ocean. He runs along the beach, dashes into the surf, then turns back. Where can he go? The camera zooms in on Antoine's expression, the final shot a freeze frame of his face. That lost look will stay with us for a long time.
Monday, November 11, 2013
Tapping into the Creative Potential of our Elders
by Jalees Rehman
The unprecedented increase in mean life expectancy during the past centuries and a concomitant drop in the birth rate have resulted in a major demographic shift in most parts of the world. The proportion of fellow humans older than 65 years of age is higher than at any time before in our history. This trend of generalized population ageing will likely continue in developed as well as in developing countries. Population ageing has sadly also given rise to ageism, prejudice against the elderly. In 1950, more than 20% of citizens aged 65 years or older in the developed world participated in the labor force. That percentage has now dropped below 10%. If the value of a human being is primarily based on their economic productivity – as it so commonly is in societies driven by neoliberal capitalist values – it is easy to see why prejudices against senior citizens are on the rise. They are viewed as non-productive members of society who do not contribute to economic growth and instead represent an economic burden because they soak up valuable dollars required to treat the chronic illnesses associated with old age.
In "Agewise: Fighting the New Ageism in America", the scholar and cultural critic Margaret Morganroth Gullette ties the rise of ageism to unfettered capitalism:
There are larger social forces at work that might make everyone, male or female, white or nonwhite, wary of the future. Under American capitalism, with productivity so fetishized, retirement from paid work can move you into the ranks of the "unproductive" who are bleeding society. One vile interpretation of longevity (that more people living longer produces intolerable medical expense) makes the long-lived a national threat, and another (that very long-lived people lack adequate quality of life) is a direct attack on the progress narratives of those who expect to live to a good old age. Self-esteem in later life, the oxygen of selfhood, is likely to be asphyxiated by the spreading hostile rhetoric about the unnecessary and expendable costs of "aging America".
Instead of recognizing the value of the creative potential, wisdom and experience that senior citizens can share with their communities, we are treating them as if they were merely a financial liability. The rise of neoliberalism and the monetization of our lives are not unique to the United States, and it is likely that such capitalist values are also fueling ageism in other parts of the world. Watching this growing disdain for senior citizens is especially painful for those of us who grew up inspired by our elders and who respect their intellect and the guidance they can offer.
In her book, Gullette also explores the cultural dimension of cognitive decline that occurs with aging and how it contributes to ageism. As our minds age, most of us will experience some degree of cognitive decline, such as memory loss or a deceleration in our ability to learn or process information. In certain disease states such as Alzheimer's dementia or vascular dementia (usually due to strokes or 'mini-strokes'), the degree of cognitive impairment can be quite severe. However, as Gullette points out, the dichotomy between dementia and non-dementia is often an oversimplification. Cognitive impairment with aging represents a broad continuum. Not every form of dementia is severe and not every cognitive impairment – whether or not it is directly associated with a diagnosis of dementia – is global. Episodic memory loss in an aging person does not necessarily mean that the person has lost his or her ability to play a musical instrument or write a poem. However, in a climate of ageism, labels such as "dementia" or "cognitive impairment" are sometimes used as a convenient excuse to marginalize and ignore aged fellow humans.
Perhaps I am simply getting older or maybe some of my academic colleagues have placed me on the marketing lists of cognitive impairment snake oil salesmen. My junk mail folder used to be full of emails promising hours of sexual pleasure if I purchased herbal Viagra equivalents. However, in the past months I have received a number of junk emails trying to sell nutritional supplements which can supposedly boost my memory and cognitive skills and restore the intellectual vigor of my youth. As much as I would like to strengthen my cognitive skills by popping a few pills, there is no scientific data that supports the efficacy of such treatments. A recent article by Naqvi and colleagues, which reviewed randomized controlled trials – the 'gold standard' for testing the efficacy of medical treatments – did not find any definitive scientific evidence that vitamin supplements or herbs such as Ginkgo can improve cognitive function in the elderly. The emerging consensus is that, based on the currently available data, there are two basic interventions which are best suited for improving cognitive function or preventing cognitive decline in older adults: regular physical activity and cognitive training.
Cognitive training is a rather broad approach and can range from enrolling older adults in formal education classes to teaching participants exercises that enhance specific cognitive skills such as short-term memory. One of the key issues with studies that investigate the impact of cognitive training in older adults has been the difficulty of narrowing down which aspect of the training is actually beneficial. Is it merely being enrolled in a structured activity, or is it the challenging nature of the program which improves cognitive skills? Does it matter what type of education the participants are receiving? The lack of appropriate control groups in some studies has made it difficult to interpret the results.
The recent study "The Impact of Sustained Engagement on Cognitive Function in Older Adults: The Synapse Project", published in the journal Psychological Science by the psychology researcher Denise Park and her colleagues at the University of Texas at Dallas, is an example of an extremely well-designed study which attempts to tease apart the benefits of merely participating in a structured activity from those of receiving formal education and acquiring new skills. The researchers assigned subjects with a mean age of 72 years (259 participants were enrolled, but only 221 completed the whole study) to a 14-week program in one of five intervention groups: 1) learning digital photography, 2) learning how to make quilts, 3) learning both digital photography and quilting (half of the time spent on each), 4) a "social condition" in which members participated in a social club involving activities such as cooking, playing games, watching movies, reminiscing and going on regular field trips, but without acquiring any specific new skills, or 5) a "placebo condition" in which participants were provided with documentaries, informative magazines, word games and puzzles, and classical-music CDs, and asked to perform and log at least 15 hours a week of such activities. None of the participants carried a diagnosis of dementia, and all were novices in digital photography and quilting. Upon subsequent review of the activities in each of the five intervention groups, it turned out that each group spent an average of about 16-18 hours per week on the aforementioned activities, with no significant difference between the groups. Lastly, a sixth group of participants was not enrolled in any specific program but was merely asked to keep a log of their activities and served as a no-intervention control.
When the researchers assessed the cognitive skills of the participants after the 14-week period, they found that the type of activity participants had been enrolled in had a significant impact on their cognition. For example, the participants in the photography class showed a much greater degree of improvement in their episodic memory and visuospatial processing than those in the placebo condition. On the other hand, cognitive processing speed increased most in the dual-condition group (photography and quilting) as well as in the social condition. The general trend was that the groups which placed the highest cognitive demands on the participants and also challenged them to be creative (acquiring digital photography skills, learning to make quilts) showed the greatest improvements.
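For readers who want to see what such a between-group comparison looks like in practice, here is a small Python sketch that simulates random assignment to six conditions and then asks, with a one-way ANOVA, whether the type of activity mattered at all. The group labels mirror the study's design, but every number below (effect sizes, group size, noise) is invented for illustration; this is not the study's data or analysis code.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical mean improvements on a memory score, chosen purely for illustration.
conditions = {
    "photography": 0.6,
    "quilting": 0.4,
    "dual": 0.5,
    "social": 0.2,
    "placebo": 0.1,
    "no_intervention": 0.0,
}
n_per_group = 37  # roughly 221 completers split across six groups

# Simulate post-minus-pre improvement scores for each randomly assigned group.
samples = {name: rng.normal(loc=effect, scale=1.0, size=n_per_group)
           for name, effect in conditions.items()}

# One-way ANOVA: did the type of activity matter at all?
f_stat, p_value = stats.f_oneway(*samples.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A planned contrast: productive engagement (photography) versus the placebo condition.
t_stat, p_pair = stats.ttest_ind(samples["photography"], samples["placebo"])
print(f"photography vs placebo: t = {t_stat:.2f}, p = {p_pair:.4f}")

With only about 37 people per arm, modest true differences can easily fail to reach statistical significance, which is exactly the power concern raised below.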
However, the study has some key limitations. Since only 221 participants were divided across six groups, each individual group was fairly small. Repeating the study with a larger sample would increase its statistical power and provide more definitive results. Furthermore, the cognitive assessments were performed soon after completion of the 14-week programs. Would the photography group show sustained memory benefits a year after completing the program? Would the participants remain engaged in digital photography long after completion of the respective courses?
Despite these limitations, the study carries an important take-home message: cognitive skills in older adults can indeed be improved, especially if they are exposed to unfamiliar terrain and asked to actively acquire new cognitive skills. Merely watching educational documentaries or completing puzzles (the "placebo condition") is not enough. This research will likely spark many future studies that help define the specific mechanisms by which acquiring new skills leads to improved memory function, and perhaps studies that tailor cognitive training to the individual. Some older adults may benefit most from learning digital photography, others from acquiring science skills or participating in creative writing workshops. This research also gives us hope that we can break the vicious cycle of ageism, in which older citizens are marginalized because of cognitive decline and the marginalization itself further accelerates that decline. By providing opportunities to channel their creativity, we can improve their cognitive function and ensure that they remain engaged in the community.
There are many examples of people who have defied the odds and broken the glass ceiling of ageism. I felt a special sense of pride when I saw my uncle Jamil's name on the 2011 Man Asian Literary Prize shortlist for his book The Wandering Falcon: he was nominated for a ‘debut' novel at the age of 78. It is true that the inter-connected tales of The Wandering Falcon were inspired by his work and life in the tribal areas of the Pakistan-Afghanistan borderlands when he was starting out as a young civil servant, and that he completed the first manuscript drafts of these stories in the 1970s. But these stories remained unpublished, squirreled away and biding their time for nearly four decades. They would have withered away in this cocooned state had it not been for his younger brother Javed, who prodded the long-retired Jamil into digging up, reworking and submitting those fascinating tales for publication. Fortunately, my uncle found a literary agent and publisher who were not deterred by his advanced age and recognized the immense value of his writing.
When we help older adults tap into their creative potential, we can engender a new culture of respect for the creativity and intellect of our elders.
- Gullette, Margaret Morganroth. Agewise: Fighting the New Ageism in America. University of Chicago Press, 2011.
- Naqvi, Raza, et al. "Preventing cognitive decline in healthy older adults." CMAJ 185 (July 9, 2013): 881-885. doi:10.1503/cmaj.121448
- Park, Denise C., et al. "The Impact of Sustained Engagement on Cognitive Function in Older Adults: The Synapse Project." Psychological Science, published online November 8, 2013. doi:10.1177/0956797613499592
Monday, September 30, 2013
Food and Power: An Interview with Rachel Laudan
All photos courtesy of Rachel Laudan
Rachel Laudan is the prize-winning author of The Food of Paradise: Exploring Hawaii’s Culinary Heritage, and a co-editor of the Oxford Companion to the History of Modern Science. In this interview, Rachel and I talk about her new book, Cuisine and Empire: Cooking in World History, and her transition from historian and philosopher of science to historian of food.
Rachel Laudan: I can remember when there was no such discipline as history of science! In fact, moving to history of food was a breeze. After all, the making of food from plant and animal raw materials is one of our oldest technologies, quite likely the oldest, and it continues to be one of the most important. The astonishing transformations that occur when, for example, a grain becomes bread or beer, or (later) perishable sugar cane juice becomes seemingly-eternal sugar have always intrigued thinkers from the earliest philosophers to the alchemists to modern chemists. And the making of cuisines is shaped by philosophical ideas about the state, about virtue, and about growth, life, and death.
A lot of food writing is about how we feel about food, particularly about the good feelings that food induces. I'm more interested in how we think about food. In fact, I put culinary philosophy at the center of my book. Our culinary philosophy is the bridge between food and culture, between what we eat and how we relate to the natural world, including our bodies, to the social world, and to the gods, or to morality.

EH: Your earlier book, The Food of Paradise, necessarily dealt with food politics and food history. So many cultures were blended into local food in Hawaii. I treasure that book -- almost a miniature of what you’re doing in Cuisine and Empire.
RL: Well, thank you. It came as a surprise to me that I had a subject for a book-length treatment of something to do with food or cooking -- as interested in the subject as I certainly was. The only genre I knew was the cookbook, and I am not cut out to write recipes.

The book was prompted by a move to teach at the University of Hawaii in the mid 1980s. I went reluctantly, convinced by the tourist propaganda that the resources of the islands consisted of little more than sandy beaches and grass-skirted dancers doing the hula.
I couldn't have been more wrong. These tiny islands, the most remote inhabited land on earth, have extraordinarily various peoples and environments. They were an extraordinary laboratory for observing the encounter of three radically different cuisines inspired by totally different culinary philosophies.
EH: It wasn’t all that long ago -- going on 18 years -- but you were a pioneer in the approach you took. It was history, not a compendium of anecdotes. And it was a treatment of culinary philosophies. Was there anything to tell you it would be so well received?
RL: Not at all. Mainland publishers were interested only in a book with exotic tropical recipes. I wanted to use the recipes as illustrations of how three cuisines were merged into a fusion cuisine called Local Food. Readers were welcome to cook from them, but that wasn’t their point.

The University of Hawaii Press, after some anguishing about whether a mainlander could write a book about the politically touchy subject of foods in Hawaii, took the manuscript. So I was bowled over when it won the Jane Grigson/Julia Child prize of the International Association of Culinary Professionals.
EH: Any publisher might have had more confidence, originally, in your cultural sensitivity, if they’d seen how many cultures you had by then participated in. And the list has grown. You’ve really gotten around.
RL: I have had the luck to have been successively immersed in four distinct cultures: those of England, the United States mainland, Hawaii, and Mexico. Growing up in Britain, I ate the way that many foodies today dream about: local food, entirely home cooked, raw milk from the dairy, home preserved produce from the vegetable garden. I never saw the inside of a restaurant until my teens. When I was 18, before I went to college, I spent a year teaching in one of the first girls' high schools in Nigeria, something that I later realized taught me a lot about the food of that part of the world. In addition, I have lived, shopped and cooked for periods of months in France, Germany, Spain, Australia, and Argentina.
EH: Were you always teaching?
RL: Not always. My husband Larry Laudan and I left academia of our own free will when we were in our 50s, thinking it would be exciting to try something different. We thought lots of others would do the same, but no. It turns out that is unusual.
EH: Unusual, I’ll say! How did you make the shift not only to a new field, but to a more independent life as a scholar and writer?
RL: At the time, I decided to put in cold calls to people I thought were doing interesting work: Joyce Toomre, Barbara Wheaton and Barbara Haber, who were working on Russian, French, and American food history in Cambridge, Mass.; Alan Davidson, founder of the Oxford Symposium of Food and Cookery in England; Gene Anderson, the anthropologist and historian of Chinese cuisine; and the food writer Betty Fussell and the nutritionist Marion Nestle in New York. They could not have been more encouraging, inviting me to speak and join their groups, calling from England, and introducing me to others, including Elizabeth Andoh, expert on Japanese cuisine, and Ray Sokolov, then working for the Wall Street Journal, who had just published Why We Eat What We Eat, which examined long-distance exchanges of food. I was buoyed by this sense of community as I jumped fields and left academia.
EH: You weren’t even thinking whether the history of food was a serious area of study, were you?
RL: Not at all. I’ve always believed that if you can show people you are on to an important problem and have things to say about it, they will listen. Soon after I began working on food I spent a year as a research fellow at the now-defunct Dibner Institute for the History of Science and Technology at MIT. There, to the horror of many, I proposed a seminar on the European culinary revolution of the mid-seventeenth century, when the spiced, sugared main dishes and the acidic, often bread- or nut-thickened sauces of the Middle Ages were abandoned. They were replaced by a rigid separation of salt and sweet courses and sauces based on fats, as well as by airy drinks and desserts. This was the beginning of high French cuisine.
I argued that this was due to the replacement of Galenic humoral theory by a new theory of physiology and nutrition deriving from the work of Paracelsus and accepted by the physicians in the courts of Europe. Once it became clear that my theory could account very precisely for the change in cuisine, they were all ears. A scholarly version won the Sophie Coe Prize of the Oxford Symposium on Food and Cookery and was published in the pioneering food history journal, Petits Propos Culinaires. And a popular version was later published by Scientific American.
EH: I am moved and impressed that you left academe with a plan. Many people would have just waited by the phone rather than build a new network. Yet your central concerns, as an independent scholar, remained the same as when you were teaching, and have come to full fruition in Cuisine and Empire. Food and technology need to be considered together, do they not?
RL: Indeed they do. Food, after all, is something we make. Plants and animals are simply the raw materials. We don't eat them until we have transformed them into something we regard as edible. Even raw foodists chop, grind, mix, and allow some heating. So I could bring to food history the hard-won conclusions of historians of technology.
EH: What are historians of technology mainly concerned with?
RL: Well, historians of technology are not primarily concerned with inventions. The infamous light bulb was useful only as part of a whole electrical system. Similarly soy sauce, say, or cake, have to be understood as part of whole culinary systems or cuisines. When these are transferred, disseminated, copied, they change the world.
And, perhaps most important, new ideas prompt changes in technology. They cause cooks, for example, to come up with or adopt new techniques. As the shift to French high cuisine shows, if people change their minds about what healthy food is, they will change their cuisine. When they adopt new religious beliefs, Buddhism or Christianity, say, they abandon meat cooked in the sacrificial fire for enlightenment-enhancing foods such as sugar and rice in the case of Buddhism, or for periods of fasting in the case of Christianity. When they reject monarchy as a political system, as happened in republican Rome, the early Dutch republic, and the early United States, they reject the extravagant dining associated with reinforcing kingly or imperial power.
So a large part of the book is dedicated to laying out the culinary philosophy underlying each of the world's great cuisines. When that culinary philosophy is transformed, so is the cuisine.
EH: Ah! Just one reason I am so excited about Cuisine and Empire is that I cannot think of anyone else who could take all this on, even if they thought to.
RL: My background in history of science and technology was a big help. It had become clear that science was not simply one damn experiment and discovery after another but was shaped by great traditions of scientific inquiry -- atomism or Newtonianism or uniformitarianism, to turn to my specialty, geology. And I had explored the parallels between science and technology as cognitive systems, arguing that technology too was not just one invention after another but was shaped by traditions of knowledge that, for example, specified materials, techniques, and ways of handling them in, say, the evolution of gearing, or interchangeable parts, or jet engines.
My experience in Hawaii had already suggested that there were far-reaching traditions in food too. So I asked: "If even the history of the foods of Hawaii has to be told in terms of the cross-oceanic, cross-continental expansion of a few great culinary traditions, might not that also be true of world food history?"
Cuisine and Empire answers that with a resounding yes. It's possible to capture most of food history in the last 20,000 years by talking about the expansion of about a dozen different cuisines.
EH: I will be thinking about this book for years and years. I’m already starting to wonder what broad cultural assumptions, that I’ve never thought to identify, much less question, I must bring with me when I cook... These are assumptions about science and technology, too, because science exists within culture. Despite how well prepared -- I want to say uniquely prepared -- you were for writing Cuisine and Empire, it was a tremendously ambitious project, was it not?
RL: It was ridiculously ambitious.
EH: Now, this is a question everyone who writes will understand. Did it ever seem so huge and unwieldy you wanted to chuck it?
RL: More times than I care to admit. What was I writing about? Farming? Cooking? Dining? What were the big turning points? And what about all the regions such as Central Europe and Southeast Asia that got short shrift? On the other hand I had the wonderful gift of time to take on a big project and I didn’t want to fritter it away. So I gritted my teeth, kept re-working my organization, telling myself I was as well prepared as anyone.
EH: How so?
RL: On the practical side, I had grown up on a working farm. And I learned early on that cooking was just as important as farming. One of my earliest memories was the day my father decided he would make bread with the wheat he had grown. At the time, there was no internet to look up how this might be done. He put the grain in a mortar and pounded it with a pestle. Nothing but flattened grains, even though many of the archaeologists in our part of the world assumed without experimenting that that was how it was done. He screwed the meat mincer onto the side of the large kitchen table and put the grains through that. Nothing but little lumps. Finally, he put a handful of grains on the flagstone floor and attacked them with a hammer. Fragments scattered all over the kitchen, but still no flour. With barns full of wheat, we could have starved because we did not know how to turn wheat into flour to make bread.
Later I had the chance to shop and cook in Europe, Australia, the USA and Mexico so I had a pretty good grip on a variety of cuisines. In Nigeria and Hawaii, I had experienced cuisines based on roots, not grains. At the University of Hawaii, I taught a wildly popular hands on world history of food, learning a huge amount from my students, almost all of them of Asian ancestry. And in Mexico, women taught me what my father couldn’t, namely how to grind grains into flour.
On the intellectual side, in the course of my academic life I’d also taught social history, an eye-opener about what life, including diet, was like for ordinary people until very recently. And at the University of Hawaii, with its polyglot population, I’d had a chance to talk with many of the pioneers of world history.EH: Unlike when you were writing The Food of Paradise, was there also a wave to catch? In the form of other like minded scholars and writers at work?
RL: A wave? If there was, it was more in world history than in food history, which in spite of the efforts of some fine scholars, did not really become mainstream until a few years ago. World historians such as William McNeill, Philip Curtin, Alfred Crosby and Jerry Bentley -- the latter my colleague at Hawaii -- were drawing on decades of detailed historical scholarship to see if they could trace big patterns of disease, warfare, enslavement, ecological change, and religious conversion.
Why shouldn't I jump into the fray and see if there were big patterns to be traced in food? Surely it was just as important in human history as their topics. I'd always loved making sense of masses of complicated data. Now here was a real challenge.
EH: Rachel, I expect lots of readers for your book. Which other books do you think it will be on the night table with? I’m thinking particularly of Michael Pollan and Bee Wilson -- is there a cogent comparison? I note Paul Freedman blurbed your book, by the way -- along with Naomi Duguid, Anne Willan, and Dan Headrick. Gee, good company!
RL: Well, if mine ends up on the night table with these books, I will be tickled pink. And I think it complements them nicely. Michael Pollan's recent book, wonderfully written as always, is a long meditation on contemporary cooking. I differ from him in not drawing a sharp distinction between cooking and processing. Processing (pre and post industrial) and cooking are on a continuum of stages in food preparation. Bee Wilson's delightful book is also about cooking and full of wonderful historical insights as befits a historian. But whereas she treats themes such as knife, fire, and measure, I organize by the origin, spread, and transformation of cuisines. In my wildest dreams, I would like to think of this as the historical counterpart to Harold McGee’s On Food and Cooking.
EH: Readers will be intrigued by your historical treatment of “processing.” It’s become a bad word -- code for turning food into non-food. I regularly read your blog, so I know you mean it a certain way that looks at the very big picture, including labor economics. But the food you personally like is emphatically not processed…
RL: Not if you limit “processed” to what many call junk food. I’ve never acquired a taste for fast-food hamburgers or soft drinks, have never eaten Wonder Bread or its siblings, and cook at home six nights out of seven. Picky is what I am. At the same time though, I think that we hinder our understanding of food if we don’t understand that all our food, with the exception of a few fruits, has been transformed, that is, processed, before we eat it. The foods that humans eat are one of their greatest creations, one of their greatest arts in that dual sense of technique and aesthetics, and we should celebrate that they are artifacts, not bemoan it. Like all human creations, some foods are better than others, and should be judged as such, but they are all creations.
EH: So there! How do cuisines speak to you personally -- as someone who loves food and cooking? If a cuisine does reveal a culture, then would tasting and analyzing it be as telling as listening to a poem or seeing a drama?
RL: Absolutely. Every time you go into the kitchen, you take your culture with you. As you plan a meal for guests, say, you bring to it assumptions about how to mesh their preferences with yours, about how much it is appropriate to spend on the meal, about how to accommodate their religious or ethical food rules, and about what they believe to be healthy and delicious.
I like to play a little game with myself when I go to a different country or meet someone from a different background. Knowing the history of that place or the heritage of that person, can I guess what the cuisine will be like? Or conversely, if presented with a meal, can I read it, dissecting, say, the noodles, the condiments, and the meat to tell a story about how it evolved over the centuries? And the answer is almost always yes.
EH: What holds a cuisine together?
RL: Again it was Hawaii that gave me the clue. It was not the local plants and animals because Hawaii had almost nothing edible before humans arrived. It was systems of belief or ideas or culture. The Pacific Islanders all valued taro, which had a place in their traditional religion, they all had a variant of the same herbal medicine. The Asians (apart from the Filipinos) had all been touched by Buddhism with its veneration of rice, and all subscribed to some form of humoral theory. And the Anglos came from a Christian tradition that placed high importance on raised bread and they followed modern nutritional theory.
EH: You have empires in the title, but you haven’t mentioned them yet. Where do they fit in?
RL: Empires have been the most widely spread form of political organization and as such the major theater in which cuisines have been created and disseminated. It's not a case of one empire, one cuisine, though. Because aspiring leaders always copy and adapt the customs of what they see as successful rivals, cuisines were copied and adapted from one empire to another. In the ancient world, for example, Persian cuisine was copied and adapted by the Indians and the Greeks, and then the Romans copied and adapted Greek cuisine.
EH: So cuisines spread from empire to empire. Is it a coherent story all around the world?
RL: Amazingly, yes. Beginning with the first states, interlinked barley-wheat cuisines underpin all the early empires. Then in the next phase, Buddhism transforms the cuisines of eastern Asia, followed by the Islamic transformation of cuisines from Southeast Asia in the east to parts of Africa and Spain in the west (and the shaping of the Catholic cuisines of medieval Europe), and Catholic cuisines transform the cuisines of most of the Americas in the sixteenth century. Protestant critiques open the way to modern cuisines in Europe, with the rest of the world quick to make similar changes. Protestant-inspired high French cuisine becomes world high cuisine, and Anglo cuisines create a middle way between high and humble cuisines, a middle way that is copied from Japan to Latin America in the late nineteenth century. Although there are countless wrinkles, exceptions, and idiosyncrasies, at the core is a simple, coherent story of a few big families of cuisine and three major stages.
EH: If empires spread cuisines, does the reverse apply? Does food affect the success of empires, or smaller states? I have read in Jared Diamond about food affecting the success or failure of a whole society – the Norse colony in Greenland, whose people starved rather than eat fish, for instance. What about embracing a culturally new food for political reasons?
RL: Certainly most people in the past believed that food could affect the success or failure of a whole society. At the end of the nineteenth century, for example, leaders around the world looked at what seemed to be the unstoppable expansion of the Anglo world, that is, the British Empire and the United States of America.
One explanation was that Anglo strength derived from a cuisine based on white wheaten bread and beef served at family meals. Unlike alternative explanations such as the special characteristics of Anglos or their upbringing in bracing climates, this offered a strategy for countering this expansion. If you could persuade your subjects or citizens to abandon corn or rice or cassava, and shift to bread or pasta, if you could persuade them to eat more meat, if you could persuade them to eat as families, then they might become stronger.
EH: Well, I’m naïve, then. “Eating as a family” is not a given across cultures? Please tell me more.
RL: The importance of the family meal as the foundation of society and the state is so deeply ingrained in the American tradition that it’s hard to appreciate just how American it is, perhaps inherited from Dutch settlers. Of course many meals were prepared in the home throughout history, though institutional food was more important than we realize. Just think of the courts, the military, the religious orders, as well as prisons, boarding schools, poor houses, and so on. Just think of the pictures of dining in the past and how rarely it is a family that is depicted. Who you ate with reflected rank rather than family ties.
But even when prepared in the home, the meal was often very different from that depicted in Norman Rockwell’s “Freedom from Want.” The children might eat in the nursery, as in nineteenth-century middle class England. Or the father might eat in a different place and at a different time from the wife, as in Japan. Or the father might eat food prepared by different wives on different days, as in Nigeria. Or the meal might include unrelated apprentices and farmhands. So to many societies, the idea of the communal family meal as offering both physical and moral/social nourishment was a novelty.
EH: And the shift to bread, pasta, and meat?
RL: Even in the United States, there were concerted efforts to persuade southerners, particularly in the Appalachians, to abandon corn bread for biscuits of wheat flour. And Brazilians, Mexicans, Venezuelans, Colombians, Indians, and Chinese debated, and often put in place policies to bring about this change. The most successful efforts were in Japan where the diets of the military and of people living in cities were changed to add more meat, more fat, more wheat, and to introduce family meals.
EH: Ah! Taking on the strength of the aggressor, or of the dominant culture! I wonder who’s doing that right now, and with regard to whose food… I’m fascinated with the cover of Cuisine and Empire. I know it’s a Japanese print. I wanted it to be the Jesuits, but that’s centuries off the mark.
RL: It’s a print in the Library of Congress collection by the Japanese artist, Yoshikazu Utagawa, made in 1861 just a few years after the forcible opening of Japan to the West. It shows two Americans, great big fellows, one of them baking bread in a beehive oven and the other preparing a dish over a bench top stove. I chose it because it so nicely illustrates the themes of the book. It puts the kitchen at the center. And it shows the keen interest that societies took in observing, and often copying, the cuisines of rivals.
EH: The kitchen at the center of history -- a beautiful phrase. The book launches very soon.
RL: I believe the official launch date is in November. Copies, though, will be available this week.
EH: Well, mine will arrive today or tomorrow. Thank you so much for this fascinating preview and discussion. I’m already thinking how to incorporate 20,000 years of causality into the book party menu.
A different version of this interview, emphasizing gastronomy in history, is available at The Rambling Epicure.
Read Rachel’s article for SaudiAramco World on the Islamic influence on Mexican Cuisine
Read Rachel’s personal blog, “A Historian’s Take on Food and Food Politics,” at http://www.rachellaudan.com/

Live in or around Boston? Come with me to a talk by Rachel Laudan the evening of October 28 at BU!
Monday, August 05, 2013
European Crime Fiction - Mini Reviews
by Ruchira Paul
"There was a desert wind blowing that night. It was one of those hot, dry Santa Anas that come down through the mountain passes and curl your hair and make your nerves jump and your skin itch. On nights like that every booze party ends in a fight. Meek little wives feel the edge of the carving knife and study their husbands' necks ... Anything can happen." —Red Wind, Raymond Chandler
It is not just the Santa Ana that inflames a fevered mind; the sirocco that raises a dust storm, the arctic wind which howls over frozen fjords and the gentle Mediterranean breeze that rocks tethered boats too can fan murderous intentions. From slums to manicured suburbs the world over, sudden ill winds blow in the depths of the human heart when it comes to crime and crime fiction.
My devotion to mystery / detective stories began early - around age nine or ten - and as was common among English-speaking Indian children of my generation, it followed the usual trajectory of Enid Blyton, Conan Doyle and the formidable Agatha Christie. British mysteries dominated the shelves of Indian book stores and libraries at the time. My first encounter with American crime fiction took place in my teen years when I began rooting through Ellery Queen's mystery magazines and the Perry Mason books in my uncle's paperback collection. The hardboiled American gumshoe caught my attention in college - the down-at-the-heel, smoking, drinking, quietly desperate philosopher-avenger was a far cry from the polished and well-mannered British crime busters. The first such charming prototype appeared in the form of Ross Macdonald's Lew Archer and I was hooked. Macdonald provided the gateway into the vast world of American crime fiction. His hypnotic storytelling led me to Dashiell Hammett, Raymond Chandler and James Cain of the pulp fiction era, and later to dozens of newer writers, some of whom continue to write to this day. Thus began a life-long habit. No matter what else I read - high, low or middle brow - after a while I go back to a good mystery book for a dose of adrenaline-induced relaxation.
I began sampling European crime writings only recently. Among the writers featured here, I have greatly enjoyed some and not so much the others. (Britain is excluded from "Europe" for the purpose of this post. The long tradition of excellent British crime lit warrants a separate review of its own.) All good mysteries dwell not just on the whodunnit aspect of murder and mayhem but also the whydunnit. The dark broodings of the human mind are as crucial to the story line as nefarious criminal acts. In that respect the good writers on both sides of the Atlantic succeed. But unlike American detective stories, few European crime novels feature lone wolf protagonists. Even when an investigator acts alone, he or she is part of a team and an official action plan. Detectives rarely use their guns and when they do, they do so reluctantly. One similarity between American and European crime novels is that the main characters are usually male, middle aged and with a couple of exceptions, tend to have troubled personal lives.
Good writers of any genre bring alive the local flavor of the place in which they set their stories. The mood in the Scandinavian mysteries is generally bleak. Cold rains, dark nights, icy roads and muddy slush routinely figure in the atmospherics, as do characters who keep their private thoughts private and their conversations laconic. Even when a story is set in the long days of arctic summers, the thinly populated landscapes and the quiet lives of the inhabitants evoke a sense of loneliness. In Italy, France, Spain and Greece in contrast, the stories bustle with people, traffic jams and voluble interactions. Then there is nourishment. From the sparse mention of food in the Scandinavian novels, one may be led to believe that the northern detectives' sustenance derives solely from alcohol, nicotine and caffeine. Their Mediterranean counterparts on the other hand, savor their food and drink and even in the midst of gruesome happenings, the writers take the trouble to describe the content of the investigative officer's lunch plate, occasionally stopping to share a recipe.
Fred Vargas (France) Inspector Jean-Baptiste Adamsberg
Shoes of corpses with the feet still in them; a three-hundred-year-old superstition that drives a modern-day murder and mutilation spree; an old man kills his wife for stifling him with monotonous household routines but then continues to live by her rules after she is dead. These are some cases that Inspector Jean-Baptiste Adamsberg encounters in his capacity as the commissaire of a police department in Paris. Adamsberg is astute, introspective and attentive to his surroundings, if not so much to his personal life. In his idle moments he may be given to weighing questions such as whether seagulls mewl in different languages in France and England. He wears two watches set 90 minutes apart, but tells time by the hour when his one-armed neighbor goes into the garden to take a leak. He recognizes the unique talents of the officers in his squad but is also keenly aware of their failings and personal predilections. In the midst of pressing professional demands, Adamsberg can be coaxed/bullied into delivering kittens for his neighbor's cat.
If all this sounds slightly goofy for a mystery novel, readers can rest assured that real crimes do occur in Fred Vargas' stories and the perpetrators are duly apprehended by old fashioned police work. On the way one also learns that a man once ate a wardrobe (a thekophagist, if you must know)! Fred Vargas is the pseudonym for Frédérique Audoin-Rouzeau, a biological archeologist. The brilliant Ms Vargas is a very engaging story teller.
Andrea Camilleri (Italy) Salvo Montalbano
The small town of Vigata in Sicily has its fair share of criminal activities and DS Salvo Montalbano is responsible for getting to the bottom of it all. Montalbano's task is made especially perilous by the involvement of the powerful local Mafia in almost all unsavory events. A workaholic, Montalbano is an aging bachelor with a long-time, long-distance, long-suffering girlfriend who is routinely stood up at carefully planned romantic occasions. Living alone in a house by the sea, the detective is given to flashes of insight into complex cases while sipping coffee early in the morning or having a drink late at night on his verandah facing the ocean. Montalbano knows Vigata well and possesses a lively imagination. Those qualities come in handy in making the right connections between seemingly unrelated events such as a modern-day robbery and the accidental discovery of a pair of fifty-year-old skeletons found in a sealed cave. His crusty demeanor and long years as a criminal investigator have not made him cynical. He is made queasy at autopsies, not so much by the physical detritus of violent death as by imagining the suffering that preceded it. In a melancholic moment he is likely to see the parallel between the death dance of a seagull and the brutal dying moments of a ballet dancer. Throughout police procedures that do not always unfold strictly by the book, we hear Montalbano rant against the corruption of Italy's politicians, its judiciary and business establishments.
Andrea Camilleri began writing at a late age and became a best selling author with the Montalbano series. His stories have plenty of action, twists and turns and interesting local flavor. I could have done with a little less buffoonery from some of the characters (perhaps some things translate badly from Italian to English). Camilleri is great fun to read.
Jean-Claude Izzo (France) Fabio Montale
Jean-Claude Izzo's protagonist Fabio Montale is an ex-cop who reluctantly gets involved in helping friends who are victims of crime. The contemplation of life and his surroundings - the ruthless underbelly of the port city of Marseilles - leaves him feeling despondent and fatalistic. Of Italian ancestry, Montale sees himself as somewhat of an outsider in France although he has lived there all his life. He tries to take a balanced view of the struggles, aspirations and prejudices of both the natives and the immigrants (North African Muslims, mostly); the anger and suspicion that boil over the social and cultural divide alarm him terribly.
Izzo was an excellent writer. (He died in 2000.) Like his main character, he was a lifelong resident of Marseilles. Some writers make the physical features, history, architecture and the underlying vibes of a place such an integral part of the narrative that a city or region becomes as much a character in their stories as the human actors. Raymond Chandler's Los Angeles, Carl Hiaasen's hilarious rants against the despoilers of South Florida, Elmore Leonard's gritty city of Detroit come to mind. Izzo was passionate about his birthplace. His Marseilles trilogy is as much about crime as it is about his beloved city - its storied past, uncertain present and what Izzo (through the eyes of Montale) feared would become its bleak future. I thoroughly enjoyed Total Chaos, the first book in the trilogy. I picked up Chourmo soon thereafter but did not like it as much. I chose not to read Solea, the last in the series, for the same reason that initially made me eager to read Izzo a second time - I knew it too was likely to be another love letter to Marseilles.
Jussi Adler-Olsen (Denmark) DS Carl Morck
Homicide detective Carl Morck first appears in Jussi Adler-Olsen's The Keeper of Lost Causes just after he has been "promoted" to the post of chief and sole employee of Department Q, located in the basement of his precinct in Copenhagen. His new job is to take care of cold cases. Morck knows that he has been sidelined without actually being fired and that the new job is a pointed reprimand for dereliction of duty. In his last operation, one of his colleagues got killed and another was paralyzed in a deadly encounter during which Morck neglected to draw his gun. Depressed, isolated and licking his wounds, Morck asks for an assistant and is assigned the freshly hired Hafez el-Assad, a recent Syrian immigrant with no experience in law enforcement. The cheerful and energetic Assad proves to be adept at cleaning the basement offices, cooking oily snacks and ferreting out information from uncooperative secretaries. When an old case starts to break open, the newly formed team of Morck & Assad begins the hunt. During the confounding and action-filled events that follow, it becomes clear to Morck and the reader that the unflappable Assad is not who he claims to be - he is probably not from Syria and his real name certainly is not Hafez el-Assad. His shrewd grasp of the criminal mind and lethal skills with weapons point to a more "professional" past than Assad is willing to own up to.
The son of a psychologist, Adler-Olsen spent part of his childhood living on the premises of psychiatric institutions where his father was employed. His books are described as psychological thrillers. A very good writer, Adler-Olsen constructs complex plots and vivid characters, including the minor ones. The unlikely Morck-Assad pairing is handled cleverly and with considerable humor, making for a successful launch of the Department Q series.
Maj Sjöwall and Per Wahlöö (Sweden) Inspector Martin Beck
A young woman from Nebraska is found dead in a canal in Sweden; an American detective named Kafka from Lincoln provides background information on the victim; the case of the Laughing Policeman turns out to be not so jolly. Veteran police inspector Martin Beck handles the cases with patience and without flamboyance. A serious man of a somewhat dour temperament, Beck hates driving, is susceptible to violent winter colds, suffers from frequent dark moods and doesn't much like going home. A father of two young children in a lackluster marriage, he is nonetheless unwavering in his loyalty to police work. Colleagues trust him and he has no qualms about seeking help from others. A case may drag on for months but Beck pursues the slimmest of leads with doggedness until it reaches a satisfactory conclusion.
Sjowall and Wahloo are widely recognized as the pioneers of modern Swedish crime fiction. Author Henning Mankell (Inspector Kurt Wallander) credits them for his own interest in the genre. Beginning in the 1960s, the couple wrote several books together until the death of Wahloo in 1975. Their popularity paved the way for other Swedish crime writers, turning the focus to human interactions and motivations rather than mechanical sleuthing. Sjowall & Wahloo's style was matter-of-fact but not without empathy. Over many books the recurring characters are fleshed out well. Marxist in their leanings, Sjowall & Wahloo wrote their novels during the era of the Vietnam War and widespread student protests the world over. Except for the occasional passing reference to prejudices against immigrants and Sweden's indigenous Sami population, there is not much evidence of heavy-handed politics in their writings.
Arnaldur Indridason (Iceland) Inspector Erlendur
Arnaldur Indridason's well-written series features the lumbering, lonely and stoic Inspector Erlendur (like all Icelanders, he goes only by his first name) of Reykjavik. Erlendur nurses an ancient guilt and new sorrows but doesn't let them get in the way of his professional duties, at which he is very good. The mood in the books is bleak - persistent gloomy weather, with the backdrop of an even gloomier personal life for the main character. The tightly knit stories unfold at a fast but not frantic pace, pausing occasionally to cast a glance at Erlendur's dispiriting personal life. The author avoids getting snared in excessive navel gazing and contrived scenarios. The violence too remains within digestible limits. Free of gimmicks, Indridason's books are classic crime fiction - complex but not convoluted. Worth reading.
Karin Fossum (Norway) Inspector Konrad Sejer
Another Scandinavian whose books are described as psychological thrillers, Karin Fossum is deft at what she does. The books center around a quiet little town outside Oslo, a seemingly unlikely place for brutal murders. But murders do take place even in idyllic places like Elvestad. The experienced Inspector Konrad Sejer and his young assistant Jakob Skarre of the local police department are at the helm of the investigations which they conduct quietly, reassuringly and shrewdly. Fossum's low key writing style is civilized and compassionate. The creepiness of some of the crimes, many involving children, therefore comes as a surprise. Her focus is not just on the murderer and the murdered but also on those who must stand by and watch. We learn that the unexpected can happen when placid lives are thrown into turmoil.
A very good writer, Fossum sometimes dwells a bit too long on the fragile workings of the human mind. She comes across as vaguely moralistic but not judgmental. I have read two of her books and will probably check out a couple more.
Michael Dibdin (Italy) Inspector Aurelio Zen
Michael Dibdin was British by birth, died in the US and lived in Italy for a while. His popular Aurelio Zen books feature the capable but crotchety inspector from Venice who lives with his mother in Rome. An experienced and dedicated crime fighter, Zen is not above the occasional deception, pulling rank and intimidation of witnesses to ensure results. Aurelio Zen mysteries are set in different Italian cities and Dibdin does a good job of capturing the character of each place, its inhabitants and the protagonist's dyspeptic view of life everywhere he finds himself.
Petros Markaris: (Greece) Inspector Costas Haritos
Petros Markaris' straightforward police procedural stories are narrated in the first person by Inspector Costas Haritos of Athens. The politically incorrect (but not unsympathetic) Haritos spends his days dealing with ambitious superiors, undependable subordinates and pestering reporters. He loves his daughter, and his relationship with his wife of many years is often contentious but always reconcilable. After a hard day at the office, he likes to read dictionaries for relaxation. The job requires Haritos to drive up and down the congested streets of Athens. We are told the names of scores of Athenian streets that he covers in his beaten-up Mirafiori. But we learn very little about the layout of the city, its sights and sounds, other than the traffic jams and road rage that Haritos must negotiate to get to his destination. The reviews point to Markaris' popularity in several European countries. I wasn't terribly impressed.
Manuel Vázquez Montalbán (Spain) Pepe Carvalho
I could not finish the only book by Manuel Vazquez Montalban that I tried. Montalban is a well-regarded author and he wrote much more than detective novels. My curiosity about him was piqued when I came across his name in one of Andrea Camilleri's books. Camilleri's fictional detective DS Montalbano (see above) is a fan of his real-life Spanish namesake. Apparently, so is Camilleri himself. But I found the novel starring ex-cop Pepe Carvalho less than compelling. It was distracting to keep up with its various threads - national and international crime and politics, Carvalho's lively appetite for food and sex, his travels. Others may find him more readable, or maybe I picked the wrong book.
(The list was gathered from the recommendations of friends and from book reviews. Naturally, the mix contains well known writers deemed worthy of translation by Anglophone mystery fans. I had initially planned to give all ten writers equal billing. But as the word count began to rise, I decided to describe five in more detail than the rest. I have left out the two best known Scandinavian authors - Stieg Larsson (Sweden) and Jo Nesbo (Norway); I have never read Larsson, and Nesbo, the most "American" of the lot, is probably familiar to readers.)
Monday, July 08, 2013
The Great Spy's Dream
by James McGirk
I asked Patrick if there was anything particularly useful he could pass on to me “about the CIA.” “The first thing to remember is that nobody connected to the Agency calls it the CIA. It’s plain CIA.”
—Harry Mathews, My Life in CIA.
“The reason why these agencies are coming out of the shadows is that they want to tell their story to the extent that they can,” says Peter Earnest, the founding director of the International Spy Museum in Washington, D.C. As to how an intelligence agency should go about telling its story when so much of it is concealed from the public eye, that part is easy, he says: “you simply don’t tell people the parts that are classified.” The problem with leaving holes in a story, however, particularly one as juicy as that of government espionage, is that those holes create a vacuum, and that vacuum fills with rubbish – sinister, exceedingly compelling rubbish that supports an entire ecosystem of strange scavengers. The question is: are these scavengers a bug, a feature, or simply a sideshow to the story being told?
Given that bamboozlement is essentially an operational mandate for an intelligence agency, one wonders whether there might be something else going on. John le Carré called this addictive haze of paranoia the “Great Spy’s Dream.” Writing for the New Yorker in 2008, le Carré reflected on his first clandestine mission, a meeting with a Czechoslovakian double agent that was casually aborted when le Carré’s Browning automatic slipped from his waistband and dropped to the floor of an Austrian bar. Le Carré wonders whether his case officer might have invented the entire operation: “his composure astonished me. Not a word of rebuke.” Le Carré diagnoses a kind of delusional paranoia from the incident, “a condition that in the spook world, rather like a superbug in a hospital, is endemic, hard to detect, and harder still to eradicate.” He sees it contaminating the Iraq Dossier, pushing intelligence officers to produce the slam-dunk evidence for the Iraq War, and all because we, the public, want to believe in our spies, “no matter how many times they trip over their cloaks and leave their daggers on the train.” Yet something is going on out there.
Every American agency that employs someone other than a security guard to carry a gun has an unofficial fan club, with a character that is a funhouse reflection of its parent bureaucracy’s. The Defense Advanced Research Projects Agency (DARPA), the agency that built the Internet and invented stealth technology and god knows what else, attracts futurists with a sinister side, while the Bureau of Alcohol, Tobacco and Firearms attracts gun geeks and inveterate smokers, and the U.S. Border Patrol’s various fan clubs are slightly xenophobic and frankly downright hysterical. The web is riddled with chat-rooms, archives and clipping services discussing the minutiae of these agencies. They come in all flavors, though there is a definite paranoid crunch to most of them. A left-leaning paranoiac interested in intelligence might be drawn to Cryptome.org, a storehouse of sinister government documents that predates Wikileaks, while his or her rightwing counterpart might visit a site like AmericanBorderPatrol.com. Belonging to and participating in these sites must be a sort of wish fulfillment, particularly since the agencies with the most pull on the imagination belong to America’s intelligence community, especially Central Intelligence or CIA.
There has been an explosion of interest in all things spy-related since the end of the Cold War. Central Intelligence Agency now has an entertainment liaison to field the myriad requests from movie producers and journalists that come in, and there are online discussion boards devoted to every fragment of the clandestine experience, from tradecraft to getting into the agency. A former Russian spy, Anna Chapman – a pneumatic redheaded femme fatale who was part of a massive, and massively incompetent (or so the FBI would have us believe), spy network – was deported back to Russia after being caught red-handed encoding airport blueprints in computer graphics (a process called steganography) and has since evolved into the sort of politician/pin-up girl hybrid that was previously only possible in hopelessly corrupt but fun-loving places like Italy and the gentler former Soviet satellites. On top of this, or perhaps beneath all of this, the U.S. government seems to be deliberately manipulating the relationship between its clandestine agencies and the general public.
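For readers unfamiliar with the term, steganography simply means hiding a message inside an innocuous-looking file by nudging values nobody inspects closely, such as the lowest bit of each pixel in an image. The Python sketch below is a toy least-significant-bit scheme, offered only to illustrate the idea; it is not the encoding the FBI attributed to Chapman's network, and the cover data and message in it are made up.

# A toy least-significant-bit (LSB) scheme, for illustration only.
def hide(cover: bytearray, secret: bytes) -> bytearray:
    """Hide `secret` in the lowest bit of each cover byte (e.g. raw pixel values)."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return stego

def reveal(stego: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes written by hide()."""
    out = bytearray()
    for i in range(length):
        value = 0
        for j in range(8):
            value = (value << 1) | (stego[i * 8 + j] & 1)
        out.append(value)
    return bytes(out)

cover = bytearray(range(256)) * 4   # stand-in for an image's raw pixel bytes
secret = b"meet at the usual bench"
assert reveal(hide(cover, secret), len(secret)) == secret

Changing only the lowest bit alters each pixel value by at most one, which the eye cannot see; the weakness, as the Chapman case showed, is that anyone who knows where to look can read the message straight back out.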
Immediately following the Second World War, according to Tim Weiner in his Legacy of Ashes (2008), the Communist menace loomed large for the Western world, and it became clear to President Eisenhower, particularly after the devastating Korean War, that the United States and Western Europe simply did not have the manpower or resources to hold off an aggressive Soviet or Chinese state for very long, and that the only way to remain in the geopolitical catbird seat was to multiply the effect of their existing forces by rapidly escalating America’s nuclear arsenal and its clandestine forces. The former would function as the geopolitical equivalent of porcupine quills, turning the United States into something too prickly to take a bite out of no matter how delicious it may have been, while the latter would allow the United States to outmaneuver its enemies. Occasionally this meant covert undertakings, such as toppling governments (as in Guatemala and Iran) and funding modern art exhibitions and multi-megawatt radio stations playing contagiously cool American music, but mostly it meant gathering and analyzing intelligence, knowing an enemy’s moves before it knew them itself, in effect exerting control through narrative.
Though we know now that the Cold War was winding down in the 1980s, for those on its shadowy frontlines a secret war was roiling. According to Weiner, the agency was at its peak strength under Director Robert Gates, with several active agents on the Soviet side producing excellent intelligence for the American government, and a series of successful clandestine operations – operations being the more James-Bond-like side of intelligence – had produced real results: American funding and munitions were keeping the Soviets bogged down in Afghanistan, and NATO had infiltrated a top-secret Russian program that was using the hard-currency largesse accumulated during the oil shocks of the 1970s to purchase advanced Western computer technology and industrial equipment. Western intelligence agencies began inserting malicious programming code into electronic components that were being used to remotely control oil pipelines. In 1982 the pressure inside a remote stretch of pipeline in Siberia was gradually and undetectably increased by Western agents. Nothing showed up on Russian monitors until there was a massive explosion, one large enough to be mistaken for a tactical nuclear weapon.
On the information front, American intelligence agencies were also beginning to score victories – they had found evidence in Afghanistan and Southeast Asia that the Russians were continuing to test biological weapons and threatened to openly confront them. The Russians were determined to strike back. As chronicled in Thomas Boghardt’s amazing “Soviet Bloc Intelligence and its AIDS Disinformation Campaign” (Studies in Intelligence, 2009), the Russians began devoting a quarter of their operating budget to what Col. Rolf Wagenbreth, director of Department X of East Germany’s Stasi foreign intelligence bureau, described this way: “Our friends in Moscow call ‘dezinformatsiya,’ our enemies in America call ‘active measures’ and I, dear friends, call ‘my favorite pastime.’”
The most effective attempt at hijacking the narrative was a project named OPERATION INFEKTION, which claimed that HIV, the virus that causes AIDS, was created in a U.S. government laboratory. It remains an enduring example of the destabilizing damage that an intelligence agency can do through narrative alone, particularly a narrative created by knowing an enemy’s weaknesses intimately and striking at a sensitive issue. The idea remains a serious problem for social workers and humanitarian agencies to this day. For foreigners, the idea that there might be something sinister to the missions of mercy the United States and its allies were conducting – that the syringes they insisted on poking into the arms of their children might contain something other than the miraculous medicines they were being promised – was easy to believe, particularly as a virulent sexually transmitted disease was streaking through the populations of Africa, Southeast Asia and America’s presumably undesirable subcultures while leaving the majority of Westerners unscathed. And it fit in perfectly with the perception of Western culture as morally bankrupt in a sexually voracious way, and technologically advanced to the point of having near-magical power. After all, it was not so long ago that the American government had been busted testing mind-bending drugs on its own citizens, and the Tuskegee syphilis experiment, begun in the 1930s, was not so far away. Naturally the Soviet campaign drew upon all of this.
Thomas Boghardt describes how the Soviets attempted to pin AIDS on the Pentagon even before the HIV virus was isolated. Their first approach was to accuse the United States of conducting eugenics. Soviet writers cited prior instances of “American perfidy” – that is, Freedom of Information Act documents detailing experiments performed on U.S. citizens, such as MKULTRA (which tested the hallucinogenic drug LSD on unwitting soldiers and Harvard students, including a young Ted Kaczynski) and tests of aerosolized biowarfare systems that sprayed benign bacteria into the San Francisco and New York City subway systems. They pointed out America’s support of South Africa (then under Apartheid) and noted that AIDS seemed to be radiating out of East Coast cities such as Washington, D.C., Boston, and New York City – cities that not only had large ghettos but were also conveniently close to biological warfare laboratories. This time the story didn’t quite stick, but the next one did.
The Soviets tried again in 1983, using their government propaganda wing to seed a letter in the July 17th issue of a left-wing Indian newspaper called The Patriot. The anonymous letter claimed to have been written by a “prominent American anthropologist” and again cited well-established facts about AIDS, described U.S. testing on American citizens, then claimed that the U.S. had never abandoned bacteriological weapons research as it had claimed to in 1969, claimed that researchers at Fort Detrick created AIDS “by ‘analyzing samples of highly pathogenic viruses in Africa and Latin America,’” and concluded by citing well-known articles warning of the threat AIDS posed to developing nations.
The story languished for several years until it was cited in a KGB paper in 1985 (“Panic in the West, or What is Hiding behind the Sensation Surrounding AIDS,” in the 30 October 1985 issue of Literaturnaya Gazeta). By then the United States had increased pressure on the Soviet Union, accusing it of breaking the Geneva Convention with its continued bio-warfare programs and of blocking international AIDS relief. OPERATION INFEKTION began again in earnest, this time with the East Germans providing backup support, which included leaking information to an unwitting agent, a highly respected but also highly ideological scientist named Dr. Jakob Segal. Segal latched onto the story, became fixated on the idea of American-made AIDS, and spent the rest of his career spreading it at international conferences, writing peer-reviewed papers that cited American-made AIDS as fact, and essentially collaring anyone who would listen to him and forcing the idea down their throat. He converted masses of people, including an Austrian author named Johannes Mario Simmel, who wrote a best-selling novel (Along with the Clowns came the Tears) and a spin-off TV miniseries based on Segal’s ideas. According to Boghardt, the KGB called people like this – the uncompensated evangelists of propaganda – “subconscious multiplicators,” that is, when they didn’t just refer to them as “useful idiots.”
The damage wrought endures to this day. A 1992 survey found that 15 percent of Americans still believed that the Pentagon created AIDS, while a RAND Corporation and Oregon State University poll taken in 2005 found that 50 percent of African Americans “thought AIDS was manmade,” “25 percent believed AIDS was the product of a government laboratory,” and “12 percent believed it was created and spread by the CIA.” Respectable academics still sometimes debate the veracity of this myth. Accusations of eugenics-by-intentional-infection continue to pepper the opinion pages of third-world newspapers and first-world free papers. Humanitarian and social workers are occasionally killed because of this idea, even though in 2008 the KGB’s successor agency admitted it was a hoax (and had walked away from the story as early as 1992).
OPERATION INFEKTION was hardly the only disinformation campaign conducted against the CIA. According to Boghardt, other Soviet campaigns accused the agency of orchestrating the Jonestown Massacre and of running baby farms in Latin America to supply North Americans with organs for transplant. Yet though Soviet propaganda had an effect on the public perception of the clandestine community, the very worst damage ever done was self-inflicted.
Boghardt maintains in his article that the CIA conducted no equivalent to the U.S.S.R.’s active measures. That said, propaganda comes in three flavors. Black propaganda is entirely fabricated and includes stories like the laboratory-made AIDS one. Grey propaganda is true but willfully slanted; an example might be blaming the recent financial collapse on impoverished, irresponsible subprime mortgage holders – certainly they had some agency in the crisis, but to leave out the downright predatory behavior of banks, reckless valuations, and speculation gone berserk would be misleading, to say the least. The last category is white: stories that are meant to be unvarnished truth. White propaganda is by definition nearly impossible to refute and is by far the most damaging. There is a reason why the most persistent paranoid conspiracies about the United States government cluster around its greatest failures. That people believe the U.S. government was behind September 11th, that the CIA was complicit in the assassination of President John F. Kennedy, that it is concealing alien technologies or actively trying to control the minds of American citizens, is basically the result of people trying to fill the holes between the idea of omnipotent, mysterious government agencies and the gross arrogance and incompetence behind some of the U.S. government’s misreadings of intelligence and poorly conceived clandestine operations – for example the revelation of national security funding on campus in the 1960s by Ramparts magazine, the Church Committee’s 1975 revelations of domestic espionage and experimentation with LSD, the Pentagon Papers’ revelations of undeclared wars in Southeast Asia, and the Kennedy brothers’ assassination attempts against Cuban leader Fidel Castro.
For a clandestine agency to be effective it must command a sterling reputation both within and outside its organization. “No one else can understand it,” Colin Thompson, who had served in Laos, Cambodia, and Vietnam, told Tim Weiner in Legacy of Ashes. “It’s a mist you dip into and hide behind. You believe [you] have become an elite person in the world of American government, and the agency encourages that belief from the moment you come in. They make you a believer.” The CIA’s perceived invulnerability and prestige matter doubly when it comes to recruiting agents. “Contrary to popular jargon, a CIA agent is not the actual employee of the CIA but rather the hapless schlub who has been recruited by a CIA case officer to spy on behalf of the United States, usually in exchange for money,” writes former CIA case officer Lindsay Moran in Blowing My Cover (2005). To commit treason against their motherland, agents have to trust that the agency they are doing it for will be able to protect them.
The first half of Moran’s book is a meticulously observed description of the author’s own yearlong training by Central Intelligence. Moran describes how she learned the Recruitment Cycle, “the process of spotting, assessing, developing and enlisting foreign agents.” She learned to diagnose a potential recruit’s vulnerabilities and then “play upon those weaknesses and introduce ways in which ‘our organization’ might help;” and, if there were no weaknesses, to “wine and dine him [N.B. she notes earlier that most agents are men], ply him with alcohol and glimpses of the good life [and] if all went well, ultimately… weaken his resolve.” A classic article from the Central Intelligence Agency’s electronic archives, “More On The Recruitment of Soviets” (Studies in Intelligence, 1965), describes the traits to look out for in greater detail:
“[The] single, simple, self-evident explanation is that the enormous act of defection, of betrayal, treason, is almost invariably the act of a warped, emotionally maladjusted personality. It is compelled by a fear, hatred, a deep sense of grievance, or obsession with revenge far exceeding in intensity these emotions as experienced by normal, reasonably well-integrated and well-adjusted individuals… All [Soviet defectors] in the writer’s experience have manifest some behavioral problem – such as alcoholism, satyriasis, morbid depression, a psychopathic behavior pattern of one type or another, an evasion of adult responsibility – which was adequate evidence for an underlying personality defect decisive in their defection. It is only mild hyperbole to say that no one can consider himself a Soviet operations officer until he has gone through the sordid experience of holding his Soviet “friend’s” head while he vomits five days of drinking into the sink.”
The final stage of Lindsay Moran’s training as a CIA case officer was to travel to a nearby city, usually Richmond or Williamsburg, Virginia, where she had to spend hours driving around trying to detect and evade surveillance while securing meetings with promising “agents,” who were all retired case officers in the game literally for a free lunch. Moran would wine and dine her quarry, gradually prying from them the crucial details that would allow her to convince the agents to work for the U.S. government. The last stage of the recruitment cycle (for the case officer) is to have the agent sign a receipt after accepting payment, starting a paper trail and committing them to the agency (not to mention adding a not-so-subtle threat of blackmail to the equation). To Moran, the paper trail and the mountains of calcified government bureaucracy devoted to processing it constrain the real work of espionage. In the second half of the book, Moran is dispatched to Eastern Europe, where she learns that life as a real spy is nothing at all like the courtly diplomatic parlor games she learned to play in the United States. Instead it is a chaotic and increasingly morally ambiguous mess, with moments of extreme danger. Moran begins to question the agency after September 11th, wondering whether the massive resources spent on espionage, still hopelessly snarled in government bureaucracy, might be better spent elsewhere. She eventually quits the agency and becomes a journalist (she had earlier earned an MFA at Columbia University).
September 11th was a public disgrace for the Central Intelligence Agency and was quickly followed by another, even more heinous debacle – the misreading and exaggeration of the evidence of weapons of mass destruction in Iraq, which became the rationale for the war there. In the years that followed, the intelligence community in the United States was reorganized under a central bureaucracy, and the Central Intelligence Agency was bled of funds. At the same time as the agency was coming under attack from the government, there was a flood of entertainment products devoted to intelligence, much of it casting the agencies in a very good light, at least when compared with the depictions of secret government activity from only a few years prior. An intriguing comparison might be drawn between Chris Carter’s X-Files, which ran from 1993 to 2002 and imagined a sinister shadow government attempting to take over the world on behalf of UFO-borne aliens, and another FOX series, 24, which debuted in November 2001 and depicted a far more vulnerable if much more kickass version of the U.S. government. The X-Files began in the wake of the siege of the Branch Davidian compound at Waco, Texas (February 1993) and Ruby Ridge (1992), a time when suspicion toward the government was peaking. Many wondered why, after the end of the Cold War, the U.S. government needed any paramilitary federal agency at all. The bombing of the Oklahoma City federal building in 1995 restored a measure of sympathy. Meanwhile 24, with its ticking clock and frenetic pacing, began only a few months after September 11th. The government became a system on the verge of collapse, held together by a few courageous rogue agents.
Many branches of the U.S. military employ an entertainment liaison. These offices provide a point person for anyone wanting to make a movie about the U.S. military, offering what some might call a Faustian bargain: access to real equipment (like tanks and airplanes) and government facilities in exchange for a chance to edit the script, presumably redacting any especially unflattering depictions of the military. Central Intelligence now employs an entertainment liaison as well. Since the CIA vets any document its former officers publish after they leave the agency, it is quite shocking that Moran’s book contains such a detailed account of her training, particularly given her critiques of the agency’s cloying bureaucracy. Perhaps Moran’s book reflected a new sort of propaganda for the agency – not that she intended it to be propaganda, but her book told the CIA’s story in a new and particularly compelling way. It read almost like a police procedural, underlining how much expertise and teamwork goes into espionage, even if that meant the agency had to swallow an occasional swipe against its bloated administration. After all, those stories about rogue detectives flouting the orders of their blindered chiefs ultimately reinforce the idea of the policeman as a force for order and good.
Since September 11th there seems to have been an increasing appetite for depictions of the procedural side of intelligence, and these have been welcomed by the agencies. A fairly recent review in the declassified version of the CIA’s Studies in Intelligence journal actually wondered whether there might someday be a movie that depicts the intense anxiety and pressures associated with the analysis side of intelligence. In a way, there already is: Eye Spy Intelligence Magazine, which bills itself as the “world’s only independent publication about espionage and intelligence” and a “bridge between officialdom and the public,” has been in publication since May 2001 and has a circulation of 100,000. The International Spy Museum was founded the following year.
The International Spy Museum is the second museum devoted to espionage to open in Washington, D.C., and while it is not affiliated with the government in any official capacity, its connections to the clandestine community run deep. Museum Director Peter Earnest is a former Central Intelligence Agency officer who concluded a 36-year career in intelligence as the agency’s public relations director, and the museum was funded by Milton Maltz, a broadcasting tycoon who began his career at the National Security Agency (NSA), the U.S. government’s code-cracking and signals intelligence agency. Like all things espionage-related, Maltz’s funding of the spy museum may seem as benign or sinister as you could possibly wish it to be. On the one hand, Maltz’s company, The Malrite Company, is responsible for such benign and friendly entities as the Rock and Roll Hall of Fame, the Maltz Museum of Jewish Heritage and the Jupiter Theatre; the Spy Museum is “committed to the apolitical presentation of the history of espionage in order to provide visitors with nonbiased, accurate information…”; and the museum seems targeted toward a younger demographic. On the other hand, the history of espionage in the United States is a strange and brutal subject to whitewash, and since there are former spies on the board of directors, could this be a government agency’s back-channel way of manipulating the American public? It might even be a back channel created with the best of intentions.
It seems extremely unlikely that Maltz or the International Spy Museum has ulterior motives. But the National Security Agency does have a particularly weird relationship with the public. The NSA has long had a reputation for being the U.S. government’s most secretive agency, and it remains exceedingly secretive; for example, it refused an interview request for this piece, which would not be unusual were it not for a highly confrontational follow-up telephone call to the author. A brusque voice – the imperative “command voice” – demanded to know the author’s name, rank, academic affiliation, and standing in the pantheon of journalists. He was found wanting. Central Intelligence was delighted to answer questions but then delayed and didn’t send responses until long after the story was due. Yet for all of its clandestine camouflage, the NSA maintains the nation’s only official spy museum, the National Cryptologic Museum in Fort Meade, Maryland. (Though the National Cryptologic Museum is the only official spy museum open to the public, most agencies do have museums of their own, albeit in restricted areas the public is not allowed to enter, despite the fact that the public owns the collections, which are officially held in perpetual “public trust.”) The National Cryptologic Museum bills itself as “the National Security Agency's principal gateway to the public.”
The relationship between the public and its agencies is still being negotiated. Even declassified museum exhibits are fraught with the ley lines of clandestine intrigue and bureaucracy. The two hottest tickets on the museum circuit last year were Cold War space machines. Naturally the cute one attracted the most attention: museums all over the country fought to show the sweet porpoise-nosed Space Shuttle, while the KH-9 Hexagon, a spy satellite that the National Reconnaissance Office declassified last September for its 50th anniversary celebration, went almost unnoticed. The thing was a bristling tube with the approximate dimensions of a subway car. Its gargantuan twin cameras swept back and forth, exposing hourglass-shaped panoramas on drums of chemical film. When the drums were full, it disgorged “buckets” the size of garbage cans into the atmosphere to be snatched by the U.S. Air Force’s 6594th Test Group, i.e. the “Catch a Falling Star” squadron. The photos were known as Exemplar to those who actually saw them, and Cue Ball to those who only knew that such intelligence existed. As America’s aeronautic museums squabbled over the remaining Space Shuttles, the National Reconnaissance Office put Hexagon on display at the Smithsonian. At first they placed the satellite on public display for a single day before returning it to its crate and presumably whisking it back to a top-secret hangar not unlike the one in Raiders of the Lost Ark. (That is not entirely true: the satellite was later displayed at several Air Force bases off limits to the public.)
For a government office whose existence was classified for the first 31 years of its operation, this seems like the equivalent of barking at the moon. What was the point of letting the public glimpse the satellite for a single day? There was barely any attention paid in the media, so it couldn’t have been publicity they were after, and if they wanted to keep the project a secret, why declassify it in the first place? There is a clue on the 6594th Test Group’s website (which contains amazing footage of an aerial recovery). An invitation to the Smithsonian’s exhibit displays an enormous list of contractors. Whatever the public at large knew of Hexagon, there was still an audience of tens of thousands of people who had been involved with the project and who had been sworn, on pain of prison (or even execution), to stay silent about what was an awesome engineering, logistics and analysis achievement. In an investigation for the Washington Post (“Top Secret America”) exposing the vast expansion of the intelligence community since September 11th, Dana Priest calculated that 854,000 people hold top-secret clearance in the United States and that there are “1,271 government organizations and 1,931 private companies work[ing] on programs related to counterterrorism, homeland security and intelligence.” It seems possible that the sheer mass of these contractors and government employees pressing upon society – whether they’re clamoring for recognition, blabbing to their spouses and friends about their jobs, or attempting to recruit new members – may well be flooding the collective unconscious with ghostly stories about espionage. And besides, in a culture inundated with data and knowledge of all kinds, what could be more delicious than secrecy? Harry Mathews’s line about CIA is a lie, by the way, unless that is what they want us to think.
Monday, June 24, 2013
by Jalees Rehman
"The most radical revolutionary will become a conservative the day after the revolution."
The recent revelations by the whistleblower Edward Snowden that the NSA (National Security Agency) is engaged in mass surveillance of private online communications between individuals, by obtaining data from "internet corporations" such as Google, Facebook and Microsoft as part of a covert program called PRISM, have resulted in widespread outrage and shock. The outrage is understandable, because such forms of surveillance constitute a major invasion of our privacy. The shock, on the other hand, is somewhat puzzling. In recent years, the Obama administration has repeatedly demonstrated that it is willing to continue or even expand the surveillance policies of the Bush government. The PATRIOT Act was renewed in 2011 under Obama, and government intrusion into our personal lives is justified under the mantle of "national security". We chuckle at the absurdity of obediently removing our shoes at airport security checkpoints and at the irony of having to place Hobbit-size toothpaste tubes into transparent bags for a government that seems to have little respect for transparency. Non-US citizens who reside in or travel to the United States know that they can be detained by US authorities, but even US citizens who are critical of their government, such as the MacArthur Genius grantee Laura Poitras, are hassled by American authorities. Did anyone really believe that the Obama administration, with its devastating track record of murdering hundreds of civilians – including many children – in drone attacks, would have moral qualms about using the NSA to spy on individual citizens?
The Stasi analogy
One of the obvious analogies drawn in the aftermath of Snowden's assertions is the comparison between the NSA and the "Stasi", the abbreviated nickname for the "Ministerium für Staatssicherheit" (Ministry for State Security) in the former German Democratic Republic (GDR or DDR). Articles referring to the "United Stasi of America" or the "Modern Day Stasi-State" make references to the massive surveillance apparatus of the East German Stasi, which monitored all forms of communication between citizens of East Germany, from wire-tapping apartments, offices and phones to secretly reading letters. The Stasi "perfected" the invasion of personal spaces – as exemplified in the Oscar-winning movie "The Lives of Others". It is tempting to think of today's NSA monitoring of emails, Facebook posts or other social media interactions as a high-tech version of the Stasi legacy. A movie director may already be working on a screenplay for a movie about Snowden and the NSA called "The Bytes of Others". However, there are some key differences between the surveillance conducted by the Stasi and the PRISM surveillance program of the NSA. The Stasi was a state-run organization which was responsible for amassing the data and creating profiles of the monitored citizens. It did not just rely on regular Stasi employees, but heavily relied on so-called IMs – "inoffizielle Mitarbeiter" or "informelle Mitarbeiter" – informal informants. These informal informants were East German citizens who met with designated Stasi officers, reporting on the opinions and actions of their friends, colleagues and relatives and at times aiding the Stasi in promoting state propaganda. In the case of the PRISM program, the amassing of data is conducted by private "internet corporations" such as Facebook, Google and Microsoft, who then share some of the data with the state. Furthermore, instead of having to rely on informal informants as the Stasi did, "internet corporations" simply rely on the users themselves, who readily divulge their demographic information, opinions and interests to the corporations.
Corporate erosion of our privacy
It seems strange that the outrage ensuing after the PRISM revelations is primarily directed at the US government and the NSA, but not at the corporations which are invading our privacy. Criticisms of the role that private corporations have played in the PRISM program primarily focus on the fact that these corporations divulged the information to the government, but seem to ignore the fact that corporations such as Facebook, Google and Microsoft continuously invade our privacy and use our data for their own marketing goals or share it with their clients. Centuries of persecution and oppression by governments – monarchs, dictators or democratically elected governments – have sensitized us to privacy invasion by governments, but we seem to have a rather laissez-faire attitude when it comes to corporate invasion of our privacy. In fact, we associate the expressions "corporate espionage" or "corporate surveillance" with corporations spying on each other, but not necessarily with them spying on us. If we had found out that the US Postal Service kept track of how many letters we send to certain recipients, perhaps even scanned our personal letters for certain keywords and then used this information for its own marketing purposes or sold it to interested parties, most of us would have considered this an egregious violation of our privacy. Yet we know that "internet corporations" such as Google and Facebook routinely practice this form of privacy invasion. In our neoliberal world of unfettered capitalism, the state is increasingly answering to corporate interests while ignoring the concerns of citizens. We have to ask ourselves whether such an eviscerated state is the only threat to our civil liberties, or whether we need to be more sensitive to violations of our privacy and liberties by private corporations.
Long before the leak of the PRISM documents, critics such as Evgeny Morozov in "The Net Delusion", Rebecca MacKinnon in "Consent of the Networked" or Robert McChesney in "Digital Disconnect" warned us about the invasion of privacy by "internet corporations" which are collecting information about us. We do not have to pay to use Google and Facebook, but the reason why these for-profit corporations offer us "free" services is that they use and market the information we unwittingly provide them. This type of information-gathering is probably legal, because when we sign up for accounts, most of us agree to their terms and conditions. Even if new laws or regulations are enacted after the PRISM scandal to limit surveillance, they will likely only pertain to how government agencies manage information on individuals or how corporations convey such information to government agencies; it is unlikely that new laws will limit information gathering for corporate benefit.
Why is it that we tend to be so lenient towards "internet corporations"? One reason may be the mythopoesis surrounding the "internet". Instead of viewing Silicon Valley executives of "internet corporations" as capitalists who sell our privacy for profit, we envision them as benevolent, entrepreneurial hipsters who eat organic quinoa salads and donate some portion of their profits to philanthropic causes. Some of us may buy into the myth of the egalitarian nature of the "internet". The "internet" is not egalitarian, especially not when it comes to the sharing and marketing of information by corporations. For example, there is a fundamental asymmetry when Facebook collects data on its users but does not feel compelled to reveal exactly how it uses that information. Jeff Jarvis, a vocal supporter of "internet corporations", has already expressed concern that users may start questioning their blind trust in the "internet" as a consequence of the PRISM revelations, skillfully avoiding a discussion of corporate privacy invasion. Placing all the blame for privacy violations on the government may be the best strategy for corporations. Google's attempt to challenge the US government, asking for permission to disclose any data requests from the NSA, enables Google to portray itself as a knight in shining armor and to evade the far more uncomfortable discussion of corporate uses and abuses of amassed data.
Culture of sharedom
Evgeny Morozov's recent book "To Save Everything Click Here" provides an excellent insight into the mythos of the "internet". The physical internet consists of computers, routers and servers that are connected to each other, whereas the mythical "internet" is a cultural icon to which god-like powers are ascribed. Morozov refers to this ideology as "internet-centrism". The ideology of "solutionism", a term borrowed from the world of architecture and urban planning, refers to:
…an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions— the kind of stuff that wows audiences at TED Conferences— to problems that are extremely complex, fluid, and contentious.
"Solutionism" and "internet-centrism" can act in concert, creating a virtuous cycle in which the mythical "internet" is seen as a means to provide the ultimate solutions to the problems of humankind. This view of the "internet" and the afore-mentioned neoliberal awe of Silicon Valley entrepreneurs all may contribute to why privacy invasions by internet corporations are forgiven or ignored.
One additional cultural phenomenon that has allowed "internet corporations" to erode our privacy is that of sharedom, the incessant and growing desire to share our opinions and details of our personal lives with a broad audience. Just like "solutionism" or neoliberalism, sharedom is not a product of the "internet", but it has become a major fuel for the mythical "internet". Sharedom is just another word for nothing left to hide. Reality television, for example, is a manifestation of sharedom. The MTV reality TV show "The Real World" was first broadcast in 1992, when the "internet" was still in its embryonic stage. Millions of viewers could watch minute details of the lives of strangers living in a house together. One may view reality TV as a form of mass exhibitionism and mass voyeurism, but as Mark Greif has pointed out, one of the key aspects of reality TV was that it allowed viewers to "judge" the people they were observing. While reality TV only allowed a small group of people – selected from thousands of applicants – to "share" their lives with a broad audience, the "internet" gradually enabled everyone with an online connection to share their lives. We started living in transparent cages – Massive Open Online Cages (MOOCs) – and the "internet" permitted the audience to give instant feedback by passing online "judgments", such as leaving comments on social media posts or blog posts. This culture of sharedom was an unexpected bounty for "internet corporations", because it not only made us less cautious about our privacy but also supplied them with massive amounts of free personal data that could be marketed.
We often hear about the trade-off between privacy and security and the need for an optimal balance, which maximizes the privacy of the individual while maintaining the security of our society. This sounds like a reasonable argument, but it ignores the fact that this is not the only privacy trade-off. Corporations are interested in maximizing their profits and since individual data is a marketable commodity, their interest is to find a balance between maximal profit and maintaining some degree of privacy for users that makes them feel comfortable enough to share personal data that can be marketed. In addition to this trade-off between profits and privacy, the culture of sharedom also creates the trade-off between publicity and privacy. Jill Lepore has recently discussed the challenges of this trade-off in an essay in the New Yorker:
In the twentieth century, the golden age of public relations, publicity, meaning the attention of the press, came to be something that many private citizens sought out and even paid for. This has led, in our own time, to the paradox of an American culture obsessed, at once, with being seen and with being hidden, a world in which the only thing more cherished than privacy is publicity. In this world, we chronicle our lives on Facebook while demanding the latest and best form of privacy protection—ciphers of numbers and letters—so that no one can violate the selves we have so entirely contrived to expose.
Another form of trade-off is that of convenience versus privacy. Using a website such as Amazon to purchase products offers a lot of convenience: It remembers which products we have previously bought, it offers targeted recommendations for new or related products that may be of interest based on our profile, and it even remembers which products we recently browsed. The more we use Amazon, the more accurate their profile of our interests becomes, as evidenced by the accuracy of Amazon's recommendations for new purchases. All we have to offer Amazon in exchange for this convenience is a window into the privacy of our soul.
I remember coming across the expression "Faustian bargain" to describe how we exchange our privacy for the sake of convenience. When Goethe's Faust agreed to serve the devil Mephistopheles in the after-life, he was rewarded with youth and a beautiful lover. We may not approve of Faust's choice, but his deal at least merits some consideration. We currently sacrifice our privacy for the benefit of corporate profits and in exchange receive free shipping, targeted ads and coupons. No youth, no lovers. Our deal does not even rise to the level of a "Faustian bargain".
The recent study "Silent Listeners: The Evolution of Privacy and Disclosure on Facebook" conducted by researchers at Carnegie Mellon University monitored the public disclosure (information visible to all) and private disclosure (information visible to Facebook friends) of personal data by more than 5,000 Facebook users during the time period 2005-2011. The researchers identified two opposing trends. Over time, Facebook users divulged less and less personal information such as birthdates, favorite books or political information to the public. On the other hand, the researchers also noticed a trend of revealing more personal information to Facebook friends. Apparently, there was a growing awareness of how public disclosures can compromise privacy, but users were also emboldened to reveal more personal information when they deemed their audience to be trustworthy. As the researchers correctly pointed out, these "private disclosures" are always available to Facebook itself, third-party apps and to advertisers, referred to as "silent listeners" by the researchers. This is a key point when it comes to privacy settings on social media websites. Users are able to control how much information is displayed to other individuals and future laws and regulations may protect users by curtailing disclosures to government agencies, but information disclosures to the company that provides the service itself and its corporate clients are often beyond our control.
The poll "Teens, Social Media and Privacy" conducted by the Pew Research Center confirmed this lack of concern about third-party access to personal data in a group of 632 teenagers. Overall, 60% of teenagers said that they were either not at all concerned or not too concerned about third-party access (such as advertisers or third-party apps) to their personal information. Only 9% were very concerned about it. Individual comments made by teenagers in a Pew focus group further underscore this cavalier attitude towards corporate access to personal data:
Male (age 16): "It's mostly just bands and musicians that I ‘like' [on Facebook], but also different companies that I ‘like', whether they're clothing or mostly skateboarding companies. I can see what they're up to, whether they're posting videos or new products... [because] a lot of times you don't hear about it as fast, because I don't feel the need to Google every company that I want to keep up with every day. So with the news feed, it's all right there, and you know exactly."
Male (age 13): "I usually just hit allow on everything [when I get a new app]. Because I feel like it would get more features. And a lot of people allow it, so it's not like they're going to single out my stuff. I don't really feel worried about it."
Value of privacy
The revelations about how the government is using surveillance data obtained by "internet corporations" should prompt a broad debate of how we value privacy, especially because it is difficult to affix a price-tag on this intangible non-commodity. This debate will hopefully lead to greater transparency in regards to how governments access and handle personal information. However, it is important to also raise awareness of the potential abuse of personal information by private corporations. If we truly value our privacy, we need to develop methods that restrict government and corporate access to our personal data. In the process we will have to unravel our myths surrounding internet-centrism, solutionism and sharedom.
Image Credits: Automated envelope sealer used by the Stasi to close opened letters after review of the letter contents (image by Appaloosa - Wikimedia Commons), a Stasi surveillance post (image by Lokilech - Wikimedia Commons)
Monday, June 10, 2013
The Metropolitan Trilogy
by James McGirk
After writing a spate of reasonably successful—and very autobiographical—novels, James Ellroy and Martin Amis took the cities surrounding them and used them as test beds, experimenting with new voices and forms and populating this familiar terrain with doppelgangers and villains and foils and sexual obsessions. Amis wrote three novels devoted to northwest London (and the chicer parts of Manhattan) known colloquially as “the London Trilogy”, while Ellroy revisited the Los Angeles neighborhoods he had prowled as a burglar to write his “L.A. Quartet.” Both used cities to refine distinctive writing styles. Yet despite their precocity, these immense literary efforts remain tethered to a biological fact in each of the authors’ lives. A fact that pulses through the work and keeps it vital and exciting even though the novelists have essentially written the same novel over and over again.
James Ellroy’s mother was raped and brutally murdered when he was only ten years old, and the murder remains unsolved. At the time he was about as estranged from his mother as a ten-year-old could possibly be, and he claims to have been delighted that she died, because he was sent off to live with his father, an indulgent lowlife who passed away not long after. His dad gave him a copy of Jack Webb’s The Badge, and Ellroy became obsessed with a chapter about the murder of Elizabeth Short, better known as The Black Dahlia, a beautiful woman whose unsolved, grisly murder haunted Los Angeles ten years before Ellroy’s mother was killed.
Ellroy began his quartet by reconstructing Betty Short’s murder. The Black Dahlia is told from the point of view of a policeman as he investigates Short’s murder. After that Ellroy’s novels become much more ambitious. The second in the series, The Big Nowhere, is narrated by a god-like omniscience, following three characters as they get sucked into a series of strange murders and political intrigue. The third novel, L.A. Confidential, traverses eight years of Los Angeles history, ending on approximately the same day that Ellroy’s mother was killed. (Geneva Ellroy died June 22, 1958. The last chapter of L.A. Confidential is date-less but occurs after a series of scenes set in April and is titled “After You’ve Gone.”) Along the way, Ellroy experiments with techniques to compress information without sacrificing the velocity of his story (i.e. the pie crust), introducing documents, police reports, and newspaper clippings into his story. The final novel in the quartet, White Jazz, abandons traditional narrative completely. It’s impossibly dense with detail and takes the form of a reconstructed file, animated with clipped recollections, and ends with an epilogue that takes his enormous cast of characters and traces their lives up to the present day.
The prose changes from: “I never knew her in life. She exists for me through others, in evidence of the ways her death drove them” (The first words of The Black Dahlia) to “All I have is the will to remember. Time revoked/fever dreams—I wake up reaching, afraid I’ll forget. Pictures keep the woman young. L.A., fall 1958. Newsprint: link the dots. Names, events—so brutal they beg to be connected. Years down—the story stays dispersed. The names are dead or too guilty to tell.” (First words of White Jazz) The books are so similar: young men obsessed, assembling files, while an unknown killer does horrible things to beautiful women who sometimes live and often die, while the men around them do ugly, conflicted, heroic things.
Taken in one fat dose, the quartet reads as if Ellroy wanted to take Betty Short’s death, take the shock of it, and capture its reverberations through the corrupt police departments, chintzy Hollywood glitz, and lush underworld of the Los Angeles of his youth. Take all of it in, digest it and understand why—why his own life was jangled forever by his mother’s killing. (After White Jazz he went on to write two memoirs about his mother’s killing, My Dark Places and The Hilliker Curse: My Pursuit of Women.)
Martin Amis’ life was marred by tragedy, too. His cousin, Lucy Partington, vanished in 1973 (her remains were discovered in 1994, in a serial killer’s basement). And Amis dedicated several of his novels to his sister Sally, who lived a short and troubled life. But if there had to be a single biological idiosyncrasy underpinning the London Trilogy, it would be Amis coming to terms with being a writer. His father, Kingsley Amis, was probably the most important British novelist alive when Martin wrote the London Trilogy. Why else would he spread the apocryphal story about his father refusing to read his early novels? Or tell interviewers that Kingsley hurled the first novel in his unofficial trilogy, Money, across the room the moment a character named Martin Amis was introduced – in other words, the very instant Martin broke away from his father’s high modernist legacy and became postmodern… (Mark O’Connell’s superb essay, “The Arcades Project: Martin Amis’ Guide to Classic Video Games,” makes a convincing case for a second biological fact: an addiction to Space Invaders might be lurking beneath the experimentation in the London Trilogy.)
While Ellroy compresses more and more information as the quartet evolves, as if panning the silt stirred up by the Dahlia’s murder for news of his mother, Amis seems to be at war with the very idea of being a writer.
Like The Black Dahlia, Money is narrated by its protagonist, a film director aptly named John Self who (after a prologue by M.A.) tells us: “As my cab pulled off FDR Drive, somewhere in the early Hundreds, a low-slung Tomahawk full of black guys came sharking out of lane and sloped in fast right across our bows.” The story is relatively straightforward: Self spends obscene amounts of investors’ money and consumes grotesque amounts of food and alcohol trying to make a movie, as the entire earth—and even his own body—seem to revolt against his appetites.
Maybe the story about Kingsley throwing Money was true. The language is so florid it is neon purple, so completely the opposite of the flinty prose preferred in the 1980s and 1990s, and the entire book is such a contrarian gesture, such a slap in the face, that even if Amis Senior didn’t actually throw the book, perhaps he should have.
Martin Amis expands his scope in London Fields. “This is a true story but I can’t believe it’s really happening. It’s a murder story, too. I can’t believe my luck. And a love story (I think), of all strange things, so late in the century, so late in the goddamned day.” The narrating voice is now a writer, who is self-consciously writing (and even attempting to sell) the novel as the story unfolds, participating in events and gathering information, incorporating four distinct characters and an approaching apocalypse. His sentences remain florid, and the London neighborhood and even some of the characters are nearly the same, but the structure is so much more complicated. It is as if the story is being seen in cross-section, refracted in a box of mirrors.
And then in the last book of the trilogy, The Information, Amis abandons the outward gimmickry of postmodernism and borrows a trick from Moby Dick. “Cities at night, I feel, contain men who cry in their sleep and then say Nothing. It’s nothing. Just sad dreams. Or something like that… Swing low in your weep ship, with your tear scans and your sob probes, and you would mark them.” There is a presence narrating the story, an I, but it is pushed far into the background. Instead of intervening directly, the narrator cuts in squibs of information about astronomy (the way Herman Melville used chapters connecting whaling to every instant of human history). Amis expands the scope of his novel to the astronomical infinite, which, when refracted against the plot of his story (and against writing itself), reveals the one and only insight of postmodernism: that a discrete chunk of information can only describe relationships between other chunks of information. That information says “Nothing.”
Tom McCarthy’s Remainder (2007) was about a traumatized, wealthy amnesiac who remembers nothing of his life before, except for a tiny hairline fracture on a wall. He hires hundreds of people to rebuild his memory from that fracture but can’t quite do it, and the entire production spins apart in the end. Amis and Ellroy skipped the production company. They used familiar locations and recurring plots and character types to create an adventure playground, a safe, familiar, but challenging space where they could experiment with painful fragments of their memories, pick them up and examine every frightening facet, and then put them aside.
Ellroy would go on to write a memoir and then tackle a national counter-history propelled by the Kennedy assassination (his American Tabloid trilogy). Amis wrote a detective novel called Night Train and then spent a decade writing non-fiction. These novels belong to a category beyond the sophomore novel. They scour the prose of the authors’ intimately familiar innards and leave behind a machine capable of tackling something universal.
Monday, May 27, 2013
by Jalees Rehman
"Shorter sentences and simple words!" was the battle cry of all my English teachers. Their comments and corrections of our English-language essays and homework assignments were very predictable. Apparently, they had all sworn allegiance to the same secret Fraternal Order of Syntax Police. I am sure that students of the English language all over the world have heard similar advice from their teachers, but English teachers at German schools excel in their diligent use of linguistic guillotines to chop up sentences and words. The problem is that they have to teach English to students who think, write and breathe in German, the lego of languages.
Lego blocks invite the observer to grab them and build marvelously creative and complex structures. The German language similarly invites its users to construct composite words and composite sentences. A virtually unlimited number of composite nouns can be created in German, begetting new words which consist of two, three or more components with meanings that extend far beyond the sum of their parts. The famous composite German word "Schadenfreude" is now used worldwide to describe the shameful emotion of joy when observing harm befall others. It combines "Schaden" (harm or damage) and "Freude" (joy), and its allure lies in the honest labeling of a guilty pleasure and the inherent tension of combining two seemingly discordant words.
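For readers who like to see the Lego metaphor made literal, here is a toy sketch in Python; the glosses and the naive joining rule are simplifications invented for illustration, and real German compounding also uses linking elements (such as the Fugen-s) that this ignores.

# Toy illustration of German-style noun compounding: components snap together
# like Lego bricks, and the compound's meaning exceeds the sum of its parts.
# Glosses and the joining rule are deliberately simplified for this example.
glosses = {
    "Schaden": "harm",
    "Freude": "joy",
    "Hand": "hand",
    "Schuh": "shoe",
}

def compound(*parts: str) -> str:
    """Join noun components, capitalizing only the first letter of the result."""
    return "".join(p.lower() for p in parts).capitalize()

print(compound("Schaden", "Freude"), "=", " + ".join(glosses[p] for p in ("Schaden", "Freude")))
# Schadenfreude = harm + joy
print(compound("Hand", "Schuh"), "=", " + ".join(glosses[p] for p in ("Hand", "Schuh")))
# Handschuh = hand + shoe ("glove")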
The Lego-like qualities of German also apply to how sentences are structured. Commas are a German writer's best friends. A German sentence can contain numerous clauses and sub-clauses, weaving a quilt of truths, tangents and tangential truths, all combined into the serpentine splendor of a single sentence. Readers may not enjoy navigating their way through such verschachtelt (nested) sentences, but writers take great pleasure in envisioning a reader who unwraps a sentence as if opening a matryoshka doll, only to find that the last word of a mammoth sentence negates its foreshadowed meaning.
Even though our teachers indulged such playfulness when we wrote in German, they were all the more harsh when it came to our English assignments. They knew that we had a hankering for creating long sentences, so our assignments came back to us covered in red ink markings, indicative of their syntactic fervor. This obsession with short sentences and words took the joy out of writing in English. German was the language of beauty and poetry, whereas English became the language best suited for efficient communication. By the time I reached my teenage years, I began to lose interest in writing anything in English beyond our mandatory school assignments. I still enjoyed reading books in English, such as the books of Enid Blyton, but I could not fathom how a language of simple sentences and simple words could be used to create works of literary beauty. This false notion fell apart when I first read "Things Fall Apart" by Chinua Achebe.
The decision to read "Things Fall Apart" was not completely arbitrary. My earliest memories of this world are those of the years I spent as a child in Igboland. My family moved from Pakistan to Germany when I was one year old, but we soon moved on to Nigeria. Germany was financing the rehabilitation of the electrical power grid that had been destroyed during the Biafra War. My father was one of the electrical engineers sent from Germany to help with the restoration and expansion of the electrical power supply in the South-Eastern part of Nigeria – the region which was home to the Igbo people and which had attempted and failed to secede as the Republic of Biafra.
We first stayed in Enugu, the former capital of the transient Republic of Biafra, and then lived in the city of Aba. My memories of the time in Igboland are just sequences of images and scenes, and it is difficult to make sense of all of them: kind and friendly people, palm trees and mysterious forests, riding a tricycle in elliptical loops, visits to electrical sub-stations. We returned to Germany when I was four years old. I would never live in Igboland again, but recalling the fragmented memories of those early childhood years has always evoked a sense of comfort and joy in me. When I came across "Things Fall Apart" as a fourteen-year-old and learned that it took place in an Igbo village, I knew that I simply had to read it.
I was not prepared for the impact the book would have on me. Great books shake us up, change us in a profound and unpredictable manner, leaving footprints that are etched into the rinds of our soul. "Things Fall Apart" was the first great English language book that I read. I was mesmerized by its language. This book was living proof that one could write a profound and beautiful book in English, using short, simple sentences.
As the Ibo say: "When the moon is shining the cripple becomes hungry for a walk."
And so Okonkwo was ruled by one passion— to hate everything that his father Unoka had loved. One of those things was gentleness and another was idleness.
Living fire begets cold, impotent ash.
A child cannot pay for its mother's milk.
It wasn't just the beautiful language, aphorisms, Igbo proverbs and haunting images that made this book so unique. "Things Fall Apart" contained no heroes. The books that I had read before "Things Fall Apart" usually made it obvious who the hero was. But "Things Fall Apart" was different. Okonkwo was no hero, not even a tragic hero. But he also was no villain. As with so many of the characters in the book, I could see myself in them and yet I was also disgusted by some of the abhorrent acts they committed. I wanted to like Okonkwo, but I could not like a man who participated in the killing of his adopted son or nearly killed his wife in a fit of anger.
Guns fired the last salute and the cannon rent the sky. And then from the center of the delirious fury came a cry of agony and shouts of horror. It was as if a spell had been cast. All was silent. In the center of the crowd a boy lay in a pool of blood. It was the dead man's sixteen-year-old son, who with his brothers and half-brothers had been dancing the traditional farewell to their father. Okonkwo's gun had exploded and a piece of iron had pierced the boy's heart.
Achebe was not judging or mocking his characters, but sharing them with us. He was telling us about how real humans think and behave. As I read the book, I felt that I was being initiated into life. Life would be messy. Most of us would end up being neither true heroes nor true villains but composites of heroism and villainy. If I did not want to end up like Okonkwo, the ultimate non-negotiator, I needed to accept the fact that my life would be a series of negotiations: negotiations between individuals, negotiations between conflicting identities and negotiations between values and cultures. The book described a specific clash of cultures in colonial Africa, but it was easy to apply the same clash to so many other cultures. I tried to envision Okonkwo as an Indian farmer whose world began to fall apart when Arab armies invaded the Sindh. I imagined Okonkwo as a Native American, a Roman or a Japanese warrior, each negotiating his way through cultural upheavals. The history of humankind is always that of things falling apart and, importantly, that of rebuilding after the falling apart.
As soon as the day broke, a large crowd of men from Ezeudu's quarter stormed Okonkwo's compound, dressed in garbs of war. They set fire to his houses, demolished his red walls, killed his animals and destroyed his barn. It was the justice of the earth goddess, and they were merely her messengers. They had no hatred in their hearts against Okonkwo. His greatest friend, Obierika, was among them. They were merely cleansing the land which Okonkwo had polluted with the blood of a clansman.
I read "Things Fall Apart" to find my past, but it defined my future. It helped me recognize the beauty of the English language and prepared me for life in a way that no book had ever done before.
Notes: All quotes are from "Things Fall Apart" by Chinua Achebe
Image Credits: Stack of "Things Fall Apart" (by Scartol via Wikimedia Commons), Photo of a porcelain insulator with a bullet hole probably from the Biafra war, Photo taken from the Presidential Hotel in Enugu 1973.
Monday, April 01, 2013
by Jalees Rehman
Nietzschean, Heideggerian, fascist, anarchist, libertarian, brilliant genius, blabbering nutjob - these and many other labels have probably been used to describe Peter Sloterdijk, who is one of Germany's most widely known contemporary philosophers. He has achieved a rock-star status in the echelons of contemporary German thinkers, perhaps because none is more adept than Sloterdijk at fulfilling the true purpose of a public intellectual: instilling in his audience an insatiable desire to think. His fans adore him; his critics are maddened by him. Few, if any, experience indifference when they encounter the provocateur Sloterdijk.
Sloterdijk achieved fame in Germany after publishing his masterpiece "Kritik der zynischen Vernunft" (English translation: "Critique of Cynical Reason") in 1983, but his hosting of the regular late-night talk show "Das Philosophische Quartett" on the major German TV network ZDF for ten years turned him into a cultural icon and a household name. I realize that it might seem strange to non-Germans that philosophers instead of comedians can host TV talk shows; however, Sloterdijk would probably be the first to agree that there isn't much of a difference between a true comedian and a true philosopher. Not only do we Germans have TV philosophers, we even enjoy the TV gossip and cockfights that they indulge in. When the ZDF network decided to get rid of Sloterdijk and replace him with the younger, more handsome and less thoughtful philosopher Richard David Precht, the two engaged in reciprocal mockery and name-calling.
Unfortunately, Sloterdijk is not quite so well-known in the English-speaking world, and this may in part be due to the fact that much of his oeuvre has only recently been translated into the English language. It is no easy feat to translate his writings, in part because his playful mastery of German words is one of his signatures. Sloterdijk is a wonderful story-teller who weaves beautiful images and puns into his narration, many of which are unique to the German language. His story-telling also makes it difficult to understand some of his texts in the original German. One may be enthralled by his stories, but after reading a whole chapter or book, it is quite difficult to condense it into a handy "message" or "point". Sloterdijk is a professional digressor, going off on tangents that are entertaining and exciting, but at times quite frustrating. He shares his brilliant insights on a broad range of topics ranging from metaphysics to politics with his readers, but he also offers practical advice on how we can change our lives as well as bizarre and pompous statements.
One of his more recent books is called "Philosophische Temperamente: Von Platon bis Foucault", which can be translated as "Philosophical Temperaments: From Plato to Foucault" and it is not yet available in an English translation. In the 1990s, Sloterdijk assembled a collection of texts and excerpts by 19 philosophers (Plato, Aristotle, Augustine, Bruno, Descartes, Pascal, Leibniz, Hegel, Schelling, Fichte, Schopenhauer, Kierkegaard, Marx, Nietzsche, Husserl, Wittgenstein, Sartre, Foucault) which he felt ought to be studied. Sloterdijk was convinced that the best way to truly approach a philosopher was to read the primary texts instead of relying on secondary sources. He also wrote short prefaces for the 19 volumes, each containing 400-500 pages of texts by one philosopher. The prefaces were intended to serve as brief introductions, enticing the readers to delve into the main volume. These prefaces were not academic-style summaries of the lives or works of the philosophers, they were verbal portraits painted by Sloterdijk. They were subjective impressions of their philosophical moods and temperaments, which explains why the collection of these 19 prefaces was released under the title "Philosophical Temperaments".
As with so many portraits, they reveal more about the painter than the subject of the portrait. "Philosophische Temperamente" allows us to take a peek into Sloterdijk's own temperaments. These portraits are stand-alone essays, but what is most striking is that despite their brevity, they are packed with provocative insights. The whole book has only 144 pages, and few of the portraits are longer than seven pages. Even in these tiny portraits, Sloterdijk manages to digress, using a few core ideas of the philosopher as a starting point and then drawing parallels to our lives today. But it is precisely these kinds of digressions and parallels that remind us why these dusty classics of philosophy continue to be relevant for our lives.
This past decade has seen the rise of the TED-talk mentality. The idea of providing a forum for innovative thinkers to share their ideas with rich conference attendees, as well as the not-so-rich general public via a free internet broadcast, has become a hot fad. Now that we are inundated with thousands of TED-talks and TED-copycats, many of us have developed TED-fatigue. The expression "TEDtalking" may soon become a new form of insult, referring to the watering down and oversimplifying of complex ideas, the sharing of touching and life-changing personal stories, or the exuding of excessive positivity which fills the audience with vacuous joy and earns a heartfelt applause. I always thought of Sloterdijk as the prototypical anti-TEDtalker, because his writings do not attempt to leave the reader in a happy and cozy place. Sloterdijk likes to challenge us, evoking intellectual unease and restlessness in our minds and inviting us to disagree. His essays and books with all their digressions tend to be so long that I thought it was inconceivable for him to condense them into a 15 minute TED time slot. Sloterdijk does not offer any convenient prefab take-home messages or TED-style smug happiness.
After reading "Philosophische Temperamente", I have begun to reconsider my views on Sloterdijk and TED-talks. In these 19 mini-essays, Sloterdijk gives TED-talks without TEDtalking. His TED stands for "Tease Entertain Disagree" and instead of the traditional TED motto of "Ideas worth spreading", Sloterdijk presents us with "Ideas worth critiquing". Perhaps the organizers and presenters at TED-conferences could learn something from Sloterdijk's style.
Each mini-essay is a teaser which could potentially ignite discussions, not only about a specific philosopher, but also about the role of philosophy itself. The portrait of Augustine, arguably the least flattering in the book, suggests that he infused Western thought with a sense of debasing anti-humanist "masochism", the idea that humankind is worthless, were it not for the grace of God. This idea directly connects Augustine to contemporary debates revolving around the role of religion, which do not only apply to Augustine or Christianity, but to all religions. The other portraits offer similarly provocative statements.
Here are translations of a few short excerpts from the book:
The chapter on Plato is the longest in Sloterdijk's book, but it discusses far more than just Plato, ranging from the purpose of philosophy to the ills of contemporary fundamentalism.
„Der Fundamentalismus, der heute weltweit aus dem Mißtrauen gegen die Modernität entspringt, kann immer nur Hilfskonstruktion für Hilflose liefern; er erzeugt nur Scheinsicherheiten ohne Weiterwissen; auf lange Sicht ruiniert er die befallenen Gesellschaften durch die Drogen der falschen Gewißheit."
"The world-wide phenomenon of fundamentalism which in today's world is rooted in a distrust of modernity can only serve as futile aides for the helpless; it generates pseudo-certainties without the desire for further knowledge; in the long run it ruins the afflicted societies with the addictive drug of false certainty."
The portrait of Schopenhauer introduces him as the pioneering thinker who quit the "Church of Reason" ("Vernunftkirche").
„Von Schopenhauer könnte der Satz stammen: Nur die Verzweiflung kann uns noch retten; er hatte freilich nicht von Verzweiflung, sondern von Verzicht gesprochen. Verzicht ist für die Modernen das schwierigste Wort der Welt."
"Schopenhauer might have uttered the phrase: Only despair can save us. Yet he did not speak of despair, but of renunciation. Renunciation is the most difficult word for the modern world."
This passage from the chapter on Marx includes a fascinating statement about contemporary media:
„Telekommunikation läßt sich von Televampyrismus immer schwerer unterscheiden. Fernseher und Fernsauger schöpfen aus einer verflüssigten Welt, die kaum noch weiß, was widerstandsfähiges oder eigenes Leben wäre."
"It is becoming difficult to distinguish between telecommunication and televampirism. Television and Telesuction draw from a liquefied world that hardly knows the concept of an independent or resistant life."
It is difficult to translate Sloterdijk's neologism "Fernsauger", which literally means "tele-sucker" or "tele-suction device". In the original German, it is a beautiful play on the words Fernseher (television or tele-viewer) and the German word for a vacuum cleaner ("Staubsauger", literally a "dust-sucker").
„Was Sartre angeht, so blieb er zeitlebens seiner Weise, die bodenlose Freiheit zu leben, treu. Für ihn war das Nichts der Subjektivität kein herabziehender Abgrund, sondern eine heraufsprudelnde Quelle, ein Überschuß an Verneinungskraft gegen alles Umschließende."
"As for Sartre, he remained true to leading a life of boundless freedom. For him, the void of subjectivity was not an abyss that pulls us down. Instead, it was a spring, gushing upwards and resisting all forms of enclosure."
English-speaking readers will soon be able to read a translation of the complete book, to be published by Columbia University Press in May 2013. I have not yet seen the translation, but I suspect and hope that the nature of this particular Sloterdijk book will make it one of the most accessible introductions to Sloterdijk's thinking and to why we should continue to study the classic Western philosophers.
Monday, March 04, 2013
by Jalees Rehman
"For every rational line or forthright statement there are leagues of senseless cacophony, verbal nonsense, and incoherency."
The British-Australian art curator Nick Waterlow was tragically murdered on November 9, 2009 in the Sydney suburb of Randwick. His untimely death shocked the Australian art community, not only because of its gruesome nature – Waterlow was stabbed alongside his daughter by his mentally ill son – but also because it represented a major blow to the country's burgeoning art scene. He was a highly regarded curator who had served as a director of the Sydney Biennale and of international art exhibitions, and he was also an art ambassador who brought together artists and audiences from all over the world.
After his death, his partner Juliet Darling discovered some notes that Waterlow had jotted down shortly beforehand to characterize what defines and motivates a good art curator; he had given them the eerily prescient title “A Curator’s Last Will and Testament”:
1. Passion
2. An eye of discernment
3. An empty vessel
4. An ability to be uncertain
5. Belief in the necessity of art and artists
6. A medium— bringing a passionate and informed understanding of works of art to an audience in ways that will stimulate, inspire, question
7. Making possible the altering of perception.
Waterlow’s notes help dismantle the cliché of stuffy old curators walking around museums to ensure that their collections remain unblemished, and instead portray the curator as a passionate person who is motivated by a desire to inspire artists and audiences alike.
The Evolving Roles of Curators
The traditional role of the curator was closely related to the Latin origins of the word: “curare” means “to take care of”, “to nurse” or “to look after”. Curators of museums or art collections were primarily in charge of preserving, overseeing, archiving and cataloging the artifacts that were placed under their guardianship. As outlined in “Thinking Contemporary Curating” by Terry Smith, the latter half of the 20th century witnessed the emergence of new roles for art curators, both private curators and those formally employed as curators by museums or art collections. Curators not only organized art exhibitions but were given an increasing degree of freedom in terms of choosing the artists and themes of the exhibitions and creating innovative opportunities for artists to interact with their audiences. The art exhibition itself became a form of art, a collage of art assembled by the curators in a unique manner.
Curatorial roles can be broadly divided into three domains:
1) Custodial – perhaps most in line with traditional curating in which the curator primarily maintains or preserves art collections
2) Navigatory – a role which has traditionally focused on archiving and cataloging pieces of art so that audiences can readily access art
3) Discerning – the responsibility of a curator to decide which artists and themes to include and feature, using the “eye of discernment” described by Nick Waterlow
Creativity and Curating
The diverse roles of curators are characterized by an inherent tension. Curators are charged with conserving and maintaining art (and by extension, culture) in their custodial roles, but they also seek out new forms of art and experiment with novel ways to exhibit art in their discerning roles. Terry Smith’s “Thinking Contemporary Curating” shows how the boundaries between curator and artist are becoming blurry, because exhibiting art itself requires an artistic and creative effort. Others feel that the curators or exhibition makers need to be conscious of their primary role as facilitators and that they should not “compete” with the artists whose works they are exhibiting. This raises the question of whether the process of curating art is actually creative.
It is difficult to find a universal and generally accepted definition of what constitutes creativity because it is such a subjective concept, but the definition provided by Jonathan Plucker and colleagues in their paper “Why Isn’t Creativity More Important to Educational Psychologists? Potentials, Pitfalls, and Future Directions in Creativity Research” is an excellent starting point:
“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context.”
Using this definition, assembling an art exhibition is indeed creative – it generates a “perceptible product” which is both novel and useful to the audiences that attend the exhibition as well as to the artists who are being provided new opportunities to showcase their work. The aptitude, process and environment that go into the assembly and design of an art exhibition differ among all curators, so that each art exhibition reflects the creative signature of a unique curator.
Ubiquity of Curators
The formal title “curator” is commonly used for art curators or museum curators, but curatorial activity – in its custodial, navigatory and discerning roles – is not limited to these professions. Librarians, for example, have routinely acted as curators of books. Their traditional focus has been directed towards their custodial and navigatory roles, cataloging and preserving books, and helping readers navigate through the vast jungle of published books.
Unlike the key role that art curators play in organizing art exhibitions, librarians are not the primary organizers of author readings, book fairs or other literary events, which are instead primarily organized by literary magazines, literary agents, publishers or independent bookstores. It remains to be seen whether the literary world will also witness the emergence of librarians as curators of such literary events, similar to what has occurred in the art world. Our local public library occasionally organizes a “Big Read” event for which librarians select a specific book and recommend that the whole community read the book. The librarians then lead book discussions with members of the community and also offer additional reading materials that relate to the selected book. Such events do not have the magnitude of an art exhibition, but they are innovative means by which librarians interact with the community and inspire readers.
One of the most significant curatorial contributions in German literary history was the collection of fairy-tales and folk-tales by the Brothers Grimm (Brüder Grimm or Gebrüder Grimm), Jacob and Wilhelm Grimm. Readers may not always realize how much intellectual effort went into assembling the fairy-tales, many of which co-existed in various permutations depending on the region where the respective tales were being narrated. I own a copy of the German language edition of the “Children's and Household Tales” (Kinder- und Hausmärchen) which contains all their original annotations. These annotations allow the reader to peek behind the scenes and see the breadth of their curatorial efforts, especially their “eye of discernment”. For example, the version of Snow-White that the Brothers Grimm chose for their final edition contains the infamous scene in which the evil Queen asks her mirror, “Mirror, Mirror on the wall, Who is the prettiest in all the land?” She naturally expects the mirror to say that the Queen is the prettiest, because she just finished feasting on what she presumed were Snow-White’s liver and lungs and is convinced that Snow-White is dead. According to the notes of the Brothers Grimm, there was a different version of the Snow-White tale in which the Queen does not ask a mirror, but instead asks Snow-White’s talking pet dog, which is cowering under a bench after Snow-White’s disappearance and happens to be called “Spiegel” (German for “Mirror”)! I am eternally grateful for the curatorial efforts of the Brothers Grimm because I love the symbolism of the Queen speaking to a mirror and because I do not have to agonize over understanding why Snow-White named her pet dog “Mirror” or expect a Disneyesque movie with the title “Woof Woof” instead of “Mirror Mirror”.
The internet is now providing us access to an unprecedented and overwhelming amount of information. Every year, millions of articles, blog posts, images and videos are being published online. Older texts, images and videos that were previously published in more traditional formats are also being made available for online consumption. The book “The Information: A History, a Theory, a Flood” by James Gleick is quite correct in using expressions such as “information glut” or “deluge” to describe how we are drowning in information. Gleick also aptly uses the allegory of the “Library of Babel”, a brilliant short story written by Jorge Luis Borges about an imaginary library consisting of hexagonal rooms that is finite in size but contains an unfathomably large number of books, all possible permutations of sequences of letters. Most of these books are pure gibberish, because they are random sequences of letters, but amidst billions of such books, one is bound to find at least a handful with some coherent phrases. Borges' story also mentions a mythical “Book-Man”, a god-like librarian who has seen the ultimate cipher to the library, a book which is the compendium of all other books. Borges originally wrote the story in 1941, long before the internet era, but the phrase "For every rational line or forthright statement there are leagues of senseless cacophony, verbal nonsense, and incoherency" rings even more true today when we think of the information available on the web.
This overwhelming and disorienting torrent of digital information has given rise to a new group of curators, internet or web curators, who primarily focus on the navigatory and discerning roles of curatorship. Curatorial websites or blogs such as 3quarksdaily, Brainpickings or Longreads comb through mountains of online information and try to select a handful of links to articles, essays, poems, short stories, videos, images or books which they deem to be the most interesting, provocative or inspiring for their readers. They disseminate these links to their readers and followers by posting excerpts or quotes on their respective websites or by using social media networks such as Twitter. The custodial role of preserving online information is not really the focus of internet curators; instead, internet curators are primarily engaged in navigatory and discerning roles. In addition to the emergence of professional internet curatorship through such websites or blogs, a number of individuals have also begun to function as volunteer internet curators and help manage digital information.
Analogous to art curatorship, internet curatorship also requires a significant creative effort. Each internet curator uses individual criteria to create their own collage of information and themes they focus on. Even when internet curators have thematic overlaps, they may still decide to feature or disseminate very different types of information, because the individuals engaged in curatorship have very distinct tastes and subjective curatorial criteria. One curator’s chaff is another curator’s wheat.
Formal Education and Training in Internet Curation
There are no formal programs that train people to become internet curators. Most popular internet curators have a broad range of interests ranging from the humanities, arts and sciences to literature and politics. They use their own experience and expertise in these areas to help them select the best links that they then pass on to their readers or followers. Some internet curators are open to suggestions from their readers, thus crowd-sourcing their curatorial activity; others routinely browse selected websites or the social media feeds of individuals whom they deem to be the most interesting; others plug their favorite keywords into search engines to scour the web for intriguing new articles.
Internet curation will become even more important in the next decades as the amount of information we amass will likely continue to grow exponentially. Not just individuals, but even corporations and governments will need internet curators who can sift through information and distill it down to manageable levels without losing critical content. In light of this anticipated need for internet curators, one should ask whether it is time to envision formal training programs that help prepare people for future jobs as internet curators. Internet curation is both an art and a science – the art of the curatorial process is to creatively assemble information in a manner that attracts and inspires readers, while the science of internet curation involves using search algorithms that do not just rely on subjective and arbitrary criteria but systematically interrogate the vast amounts of information that are now globally available. A Bachelor’s or Master’s degree program in Internet Curation could conceivably train students in the art and science of internet curation.
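As a thought experiment on the "science" half of that claim, here is a minimal, purely illustrative Python sketch of keyword-based curation: ranking the items of a feed by how well they match a curator's chosen keywords. The feed titles, keywords and function names are invented for illustration; no existing curatorial website works this way, as far as I know.

# A minimal sketch of keyword-based internet curation (illustrative only).
# The feed items and keywords below are hypothetical examples.

def score_item(title, keywords):
    """Count how many of the curator's keywords appear in a title."""
    words = {w.strip(".,!?").lower() for w in title.split()}
    return len(words & keywords)

def curate(feed, keywords, top_n=3):
    """Return up to top_n titles ranked by keyword overlap, dropping items with no overlap."""
    ranked = sorted(feed, key=lambda title: score_item(title, keywords), reverse=True)
    return [title for title in ranked if score_item(title, keywords) > 0][:top_n]

if __name__ == "__main__":
    keywords = {"philosophy", "ecology", "curation", "borges"}
    feed = [
        "A new essay on the philosophy of science",
        "Ten cat videos you must watch today",
        "Borges and the Library of Babel revisited",
        "Why ecology needs better metaphors",
    ]
    for title in curate(feed, keywords):
        print(title)

The point of the sketch is simply that the "arbitrary criteria" can be made explicit and repeatable; the real curatorial art lies in choosing the keywords and in what the algorithm cannot see.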
In scientific manuscripts, it is common for scientists to cite the preceding work of colleagues. Other colleagues who provide valuable tools, such as plasmids for molecular biology experiments, are cited in the “Acknowledgements” section of a manuscript. Colleagues whose input substantially contributed to the manuscript or the scientific work are included as co-authors. Current academic etiquette does not necessarily acknowledge the curatorial efforts of scientists who may have nudged their colleagues into a certain research direction by forwarding an important paper that they might have otherwise ignored.
Especially in a world in which meaningful information is becoming one of our most valuable commodities, it might be time to start acknowledging the flux of information that shapes our thinking and our creativity. We are beginning to recognize the importance of people who are links in the information chain and help separate out meaningful information from the “senseless cacophony”. Perhaps we should therefore also acknowledge all the sources of information, not only those who generated it but also those who manage it or guide us towards it. Such a curatorial credit or Q-credit could be added to the end of an article. It would not only acknowledge the intellectual efforts of the information curators, but it could also serve as a curation map which would inspire readers to look at the individual elements in the information chain. The readers would be able to consult the nodes or elements that were part of the information chain (instead of just relying on lone cited references) and choose to take alternate curation paths.
I will try to illustrate a Q-credit using the example of Abbas Raza who pointed me towards a 3quarksdaily discussion of “Orientalism” and an essay by the philosopher Akeel Bilgrami. Even though I had previously read Edward Said’s book “Orientalism”, the profound insights in Bilgrami’s essay made me re-read Edward Said’s book. The Q-credit could be acknowledged as follows:
Q-Credit: Abbas Raza --> The 2008 3Quarksdaily Forum on Occidentalism --> “Occidentalism, the Very Idea: An Essay on Enlightenment and Enchantment” by Akeel Bilgrami published 2008 on 3Quarksdaily.com and 2006 in Critical Inquiry --> Bilgrami identifies five broad themes in Edward Said’s Orientalism
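For readers who like to think in code, a Q-credit chain could be represented by something as simple as the Python sketch below, which renders the arrow notation used in the example above. The QCredit class and its fields are hypothetical illustrations of the idea, not an existing standard or library.

# A minimal sketch of a Q-credit chain (illustrative only; "QCredit" is a made-up name).

from dataclasses import dataclass, field

@dataclass
class QCredit:
    """An ordered chain of curatorial nodes, from first curator to cited source."""
    nodes: list = field(default_factory=list)

    def add(self, node):
        """Append the next node in the information chain and return self for chaining."""
        self.nodes.append(node)
        return self

    def render(self):
        """Render the chain in the arrow notation used in this article."""
        return "Q-Credit: " + " --> ".join(self.nodes)

if __name__ == "__main__":
    chain = (QCredit()
             .add("Abbas Raza")
             .add("The 2008 3Quarksdaily Forum on Occidentalism")
             .add('"Occidentalism, the Very Idea" by Akeel Bilgrami')
             .add("Edward Said's Orientalism"))
    print(chain.render())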
The acknowledgement of information flux is already part of the Twitter netiquette. The German theologian Barbara Mack uses her Twitter handle @faraway67 to curate important new articles about history, science, music, photography, linguistics and literature. She sees the role of web curators as similar to that of music conductors, who do not compose original pieces of music but instead give an audience access to the original creative work. She says that “web curation is a relatively new field of dealing with information and good curation is an act of creativity which requires dedication and a keen sense for content.” She agrees that curators should indeed be given credit, “not only out of courtesy but to acknowledge their efforts of taking upon the challenge of bringing the vast information the web provides into a handy form for their followers to enjoy.”
Twitter curators such as Barbara Mack use abbreviations such as h/t (hat-tip) or RT (retweet) followed by a Twitter handle to acknowledge their sources. Contemporary Twitter netiquette suggests that followers who find curated links useful should acknowledge the curators' efforts before tweeting the links on.
One challenge that is intrinsic to Twitter (but may in an analogous fashion apply to other social media networks as well) is that each tweet can only contain 140 characters, which presently makes it very difficult to acknowledge the comprehensive curatorial information flux. If I decide to tweet about an interesting article on the philosophy of science, which I found in the Twitter feed of person X, the space limitations may make it impossible for me to give credit to all the preceding members of the information chain which had directed X’s attention to that specific article. The Q-credit system may thus be best suited for acknowledgements at the end of blog posts or articles, but not for social media messaging with strict space limitations.
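A small sketch makes the space problem concrete: if one tries to append hat-tips for an entire curation chain to a 140-character message, the earliest members of the chain are the first to fall away. The handles (apart from @faraway67, mentioned above), the link and the helper function are invented for illustration and do not describe any actual Twitter feature.

# A minimal sketch of squeezing curatorial credits into a 140-character message
# (illustrative only; the handles and the link are made up).

LIMIT = 140

def build_tweet(comment, link, chain):
    """Append as many h/t credits as fit within LIMIT, starting with the most
    recent curator and dropping the earliest members of the chain first."""
    base = comment + " " + link
    credits = ""
    for handle in reversed(chain):  # walk back from the most recent curator
        candidate = credits + " h/t " + handle
        if len(base) + len(candidate) > LIMIT:
            break  # no room left; earlier members of the chain are dropped
        credits = candidate
    return base + credits

if __name__ == "__main__":
    chain = ["@orig_author", "@first_reader", "@second_curator", "@third_curator", "@faraway67"]
    tweet = build_tweet("A fine essay on the philosophy of science", "http://example.com/essay", chain)
    print(len(tweet), tweet)

Run as written, the earliest node in the chain is silently dropped to stay under the limit, which is exactly the loss of provenance that a Q-credit at the end of a blog post would avoid.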
The Future of Internet Curation
The area of internet curation is still in its infancy and it is very difficult to predict how it will evolve. Managing online information will become increasingly important. Even though such managerial roles may not necessarily carry the title “internet curator”, there is little doubt that managing online information in a meaningful manner is one of the biggest challenges that we will face in the 21st century. I am quite optimistic that we will be able to address this challenge, but the first hurdle is to recognize it.
Image Credit: The Librarian by Giuseppe Arcimboldo (1527–1593)
1. “The Cambridge Handbook of Creativity” (2010) by James C. Kaufman and Robert J. Sternberg --> Chapter 3 “Assessment of Creativity” by Jonathan A. Plucker and Matthew C. Makel --> “Why Isn’t Creativity More Important to Educational Psychologists? Potentials, Pitfalls, and Future Directions in Creativity Research” (2004) by Jonathan A. Plucker et al. in EDUCATIONAL PSYCHOLOGIST, 39(2), 83–96
3. Book review of “The Information” at Brainpickings --> “The Information: A History, a Theory, a Flood” (2011) by James Gleick --> “Library of Babel” by Jorge Luis Borges as an allegory for the information glut
Monday, February 18, 2013
Silicon Valley, Literary Capital of the 21st Century
by James McGirk
Technology seeps into our imaginations, changes the way we think and the way we write. The novel may seem like a relic, a low-bandwidth version of virtual reality better suited to the 19th and 20th Centuries than today. But beneath its grim monochrome interface (a.k.a. “pages”) it glows like the neon-piped suits in Tron. Contemporary fiction is nearly as much a product of Silicon Valley as the integrated circuit.
Fiction, on a crass, fundamental level, isn’t much more than a container for a story. Most stories have already been told (by William Shakespeare—or at least it feels that way), so the challenge of writing fiction is to find a new way to contain a story. This experimental impulse is tempered by a reader’s ability to decode what is going on. As readers have grown more accustomed to following hyperlinks and leaping about the Internet, their ability to understand information out of sequence has changed too.
Consider three popular, experimental novels and the technology of the era: David Foster Wallace’s (1996) Infinite Jest was written at the dawn of the Internet Age. The Internet was in an ugly growth spurt then. Amateurs created most online content. Big chunks of the Internet blossomed and died seemingly overnight. It was common to see gaping holes where content was no longer compatible. Following hyperlinks from page to page felt jarring (particularly given how slow most connections were). Wallace wanted to compress information in Infinite Jest but he didn’t want to disrupt his timeline. So he chose endnotes to digress with—a fairly conventional device, although one not often used for fiction. He even said (to The New Yorker): “I pray they are nothing like hypertext.”
Endnotes are hypertext, however. They just happen to predate the Internet and, since they are numbered, romp alongside the text in a linear fashion (and nestle at the end of chapters, where they won’t distract readers). That’s not the case for the digressions in Dave Eggers’ A Heartbreaking Work of Staggering Genius (2000). Eggers digresses like Wallace does, but his digressions actually separate from the text, sometimes even forming self-contained documents.
It makes sense that Eggers was a magazine editor before he wrote the book. There’s almost a house style to A Heartbreaking Work. His asides could have been “front of the book” articles, accompanying and amplifying the main feature: Eggers’ story about raising his younger brother. The genius of A Heartbreaking Work is the way that Eggers bound it all together. Without that ever-so-slightly smarmy voice, his story would have been unintelligible.
Ten years later, attitudes toward the virtual had changed considerably: Facebook, which didn’t exist when Eggers wrote A Heartbreaking Work, reached half a billion users, almost double the population of the United States. Office workers could no longer plead computer illiteracy. Jennifer Egan dropped an entire PowerPoint presentation into her (2010) A Visit from the Goon Squad. Her readers understood what it was, and what it meant, and what’s more Egan got the weird, confined, timeless, disassociated feeling that a PowerPoint presentation imposes on its audience, and she tweezed it out, and used that feeling to amplify the other loosely connected stories in her novel.
This is a reductive way of looking at three important novels; but fiction has changed as technology has penetrated the lives of its readers. Of course, readers, writers and editors are not the only stakeholders in the writing business. Logistics quietly informs what we read. There is a vast industrial apparatus supporting the contemporary novel, and, like writers and readers, it too has evolved as technology has spread.
Literary historian Pascale Casanova described the global marketplace for literature as “the world republic of letters.” Writers are everywhere, but their influence is unevenly distributed. During secondary school, the entire Anglophone world is made to suffer through Shakespeare. Young wannabes flood the outer boroughs of New York City hoping to join the ranks of the “Jonathans” [Lethem, Franzen, Safran Foer…]. Through military power, proximity to printer’s presses and pure accident, cities like Paris, London and New York wield enormous literary influence relative to their size. But the contours of cultural power are changing. Silicon Valley is beginning to surpass the old capitals of literary clout.
This clout is increasingly concentrated in what futurist Bruce Sterling calls the “five stacks.” Apple, Google, Facebook, Amazon and Microsoft are gobbling up the Internet like marauding PacMen and rebuilding it in their images. Apple designs, builds and sells computers and their operating systems, for example; Microsoft does the same with office productivity software and PCs; Facebook does it with social networks; Google for navigating the Internet and advertising; Amazon for selling and shipping items. These companies are vertically integrating, in other words, they are trying to control every aspect of their category, making it as user-friendly and predictable as possible, and walling it off from potential competitors. They do this by meticulously analyzing their users’ behavior and adapting to it. This customer-first mentality is downright corrosive to literature.
Behind the scenes, the great software companies constantly tweak things. They look at what people click on, what they share, how long they spend on pages, and what they search for. The Internet is becoming more intuitive. This is great for shopping but it is killing content. There is a reason why the Daily Mail has become the most popular news website. By the numbers, all people want from the Internet is cheap kicks. The Mail provides them: see the pneumatic sexpots climbing their sidebars, the chilling crimes, zoo babies and kittens, and all those other pretty, petty treats.
Scientists at Johns Hopkins University have extrapolated that the Universe, on average, is pale beige in color (“Cosmic Latte”) and smells of burnt sugar. The Internet is a painless, more convenient reflection of the real world. If it were averaged out, rather than be the color of foam bleeding off of a nice latte, it would have the golden sheen of corn syrup: it’s tooth-rot, in other words, and most of what we read, really, most of what we experience now, for better or worse seems to reflect the sinister glow of the ultra-tweaked Internet.
Agents and publishers are reluctant to buy a novel with a narrator whose opinions or actions might revolt or frighten their readers. The sleek charm that held A Heartbreaking Work of Staggering Genius together is all but mandated now. Combine that with a tendency to skate through torrents of information and write about that, rather than trying to animate text with experience, and you get David Mitchell’s (2004) Cloud Atlas.
Cloud Atlas is the Daily Mail of great novels. Here is a novel made up of nested stories, populated by characters whose actions and personalities ripple across space and time. The book is beautifully written; its structure is beyond elegant. The research he’s done is staggering. Yet there is something so cartoonish about it: it is a literary pyrotechnic display, there is not a dram of unpleasant truth. It is as if he, David Mitchell, stopped short of surrendering himself to the evil orbiting in his themes. His reader never gets uncomfortable. It’s all surface.
Not all fiction is shot through with Silicon Valley’s neon-piped charm. Dennis Cooper’s The Marbled Swarm challenges the way words work and snaps together at the end with a jolt of recognition that condemns the reader as much as it does the story’s murderous protagonist. Helen DeWitt’s Lightning Rods conceives of a plausible device for relieving sexual tension at the office, then follows the inventor as he builds and arduously succeeds at selling the thing; exposing, damning and even celebrating the late capitalist system in a slim little story.
The best books provide an experience of virtual reality more profound than seducing the reader. When it is good, fiction is sneaky; it slithers into the mind and quietly lifts its blinders. But to deliver its payload, writing must use technology rather than surrender to its robotic sentiments.
Monday, February 04, 2013
Ecology’s Image Problem
“There are tories in science who regard imagination as a faculty to be avoided rather than employed. They observe its actions in weak vessels and are unduly impressed by its disasters” —John Tyndall, 1870
In his 1881 essay on Mental Imagery, Francis Galton noted that few Fellows of the Royal Society or members of the French Institute, when asked to do so, could imagine themselves sitting at the breakfast-table from which presumably they had only recently arisen. Members of the general public, women especially, fared much better, being able to conjure up vivid images of themselves enjoying their morning meal. From this Galton, an anthropologist, noted polymath, and eugenicist, concluded that learned men, bookish men, relying as they do on abstract thought, depend on mental images little, if at all.
In this rejection of the scientific role for the imagination, Galton was in disagreement with the Irish physicist John Tyndall, who in an 1870 address to the British Association in Liverpool entitled The Scientific Use of the Imagination claimed that in explaining sensible phenomena, scientists habitually form mental images of that which is beyond the immediately sensible. “Newton’s passage from a falling apple to a falling moon”, Tyndall wrote, “was, at the outset, a leap of the prepared imagination.” The imagination, Tyndall claimed, is both the source of poetic genius and an instrument of discovery in science.
The role of the imagination in chemistry is well enough known. In 1890 the German Chemical Society celebrated the discovery by Friedrich August Kekulé von Stradonitz of the structure of benzene, a ring-shaped aromatic hydrocarbon. At this meeting Kekulé related that the structure of benzene had come to him in a reverie of a snake seizing its own tail (the ancient symbol called the Ouroboros).
Since this is quite a celebrated case of the scientific use of the imagination, I quote Kekulé’s account of the events in full:
“During my stay in Ghent, Belgium, I occupied pleasant bachelor quarters in the main street. My study, however, was in a narrow alleyway and had during the day time no light. For a chemist who spends the hours of daylight in the laboratory this was no disadvantage. I was sitting there engaged in writing my text-book; but it wasn't going very well; my mind was on other things. I turned my chair toward the fireplace and sank into a doze. Again the atoms were flitting before my eyes. Smaller groups now kept modestly in the background. My mind's eye, sharpened by repeated visions of a similar sort, now distinguished larger structures of varying forms. Long rows frequently close together, all, in movement, winding and turning like serpents! And see! What was that? One of the serpents seized its own tail and the form whirled mockingly before my eyes. I came awake like a flash of lightning. This time also [he had had fruitful dreams before] I spent the remainder of the night working out the consequences of the hypothesis. If we learn to dream, gentlemen, then we shall perhaps find truth…” Berichte der deutschen chemischen Gesellsehaft, 1890, 1305-1307 (in Libby 1922).
In supporting his argument about the positive role of the imagination, John Tyndall quoted Sir Benjamin Brodie, the chemist, who wrote that the imagination (“that wondrous faculty”), when it is “properly controlled by experience and reflection, becomes the noblest attribute of man”. Brodie cautioned, however, that the imagination when “left to ramble uncontrolled, leads us astray into a wilderness of perplexities and errors…”
The philosopher Virgil Aldrich provided an interesting example of how imagination could be a hindrance to science. Sir Arthur Stanley Eddington, the English astrophysicist, referred frequently, according to Aldrich, to “the world outside us”. Consciousness, in contrast, can be described as being “inside of us.” Using such images Eddington was, said Aldrich, “under the spell of the telephone-exchange analogy.” Where the nerve endings leave off, the world beyond us takes over. If the telephone exchange image seems ill-chosen, the image, after all, could be worse. One might imagine inner consciousness as a submarine and from our berth within it we come to know the outside world by means of a periscope! Now, Eddington did not use this image (others did) but when we try to make sense of it we can do so only by saying that inner consciousness is like a submarine only when one supposes that it is nothing at all like a submarine. One must “tone down the analogy” to make it useful. If you do otherwise “the lively imagination begins to protest”. Aldrich speculated that theorists persist with inept picture-making because, when toned down, it often appears as if the image is illuminating even when it is not. Moreover, a flashy image is entertaining. Thus one can easily make the “pleasant mistake” of identifying the image with the “real meaning” of an assertion.
A strength of environmental disciplines is that they bring into proximity bodies of knowledge that are often set apart. Though some quibble with him on this, historian of ecology Donald Worster places both Charles Darwin, the philosophical scientist, and Henry David Thoreau, the scientific philosopher, at the ground of ecology as a natural scientific discipline. And though it is fair to say that ecology has maintained an identity largely separate from the environmentalisms it has inspired, ecology and environmentalisms have nevertheless been good conversation partners. Both have listened to an admirable degree to their poets, artists and philosophers. A good thing this may be in many ways, but my contention here is that the environmental sciences and the practices associated with them — environmentalisms like sustainability — are prone to taking their most arresting images too literally. I wonder if there is not in environmental thought a pathology of the imagination? Too readily, it seems, we transform a provocative image into a proven hypothesis; we smuggle ancient and baffling worldviews into contemporary conceptions of nature.
I sketch a few examples here to illustrate the case. Perhaps you will have ones that you can add.
Nature as an Organism
You are justified in calling Nature your Mother if you have a mother who wants you dead. A Mother who inculcated both your limitations and your accomplishments. Nature: A Mother who birthed a world equipped with tooth and nail and hungry eye; whose family tie is the ripping of flesh. Why, I wonder, are we quick to demand of God an explanation of evil but incline less to asking that question of Mother Nature?
To call Nature our mother is just one manifestation of the image of the Earth as organism. It is enduring, compelling and surely wrong-footing.
University of Wisconsin historian Frank N. Egerton traces the myth of cosmos as organism back to Plato. Timaeus asked “In the likeness of what animal did the Creator make the world?” He then speculated as follows: “For the Deity, intending to make this world like the fairest and most perfect of intelligible beings, framed one visible animal comprehending within itself all other animals of a kindred nature.” Because of Plato’s fateful influence on the history of western thought, Egerton noted that the implications of this myth have been enduring. According to Egerton the myth is the source of two related concepts “the supraorganismic balance-of-nature concept and the microcosm-macrocosm concept.” The supraorganismic concept views the cosmos as having the attributes of a living thing whereas the microcosm-macrocosm concept takes different parts of the universe to correspond with an organismal body.
Both flavors of the organismal concept get expressed in ecosystem ecology. Natural ecosystems, the influential University of Georgia ecologist Eugene Odum asserted, are integrated wholes that develop in a manner that parallels the development of individual organisms or human societies. The development of natural systems, ecological succession in other words, is orderly, predictable, and directional. It leads, in Odum’s view of things, to a stabilized ecosystem with predictable ratios of biomass, productivity, respiration and so forth. The “strategy” of ecosystem development, as Odum called it, corresponds to the “strategy” for long-term evolutionary development of the biosphere – “namely, increased control of, or homeostasis with, the physical environment in the sense of achieving maximum protection from its perturbations.” Homeostasis derives etymologically from the Greek for “standing still” and, in the sense that Odum meant to imply, indicates a dynamic and regulated stability. In other words, the stability of the organism.
Odum does not stand here accused of covertly importing the organismal image into his work; he was quite explicit about it. There is much to admire in Odum’s work and the ecology that he inspired, but the sense of design and purpose that it implied in nature (what philosophers call teleology) put Odum's ecosystem ecology at loggerheads with contemporary evolutionary theory which insists on the purposelessness of nature. It has taken quite some time to reconcile ecosystem thought with evolutionary theory.
Another example of the superorganism’s baleful influence can be found in the Gaia hypothesis. In his preface to Gaia: A New Look at Life on Earth (1979) Lovelock wrote:
“The concept of Mother Earth or, as the Greeks called her long ago, Gaia, has been widely held throughout history and has been the basis of a belief which still coexists with the great religions."
If the development of James Lovelock and Lynn Margulis’s Gaia hypothesis is anything to go by, hypotheses about the workings of nature derived from the organismal image of nature have a shelf life of a decade or so. Lovelock’s Gaia: A New Look at Life on Earth was published in 1979 and he rescinded the teleological claims of the Gaia hypothesis by 1988 in his book Ages of Gaia — or at least he became attentive to the problems that the superorganism concept created. He still maintains that the Earth’s atmosphere is homeostatically regulated, but he never quite admitted to having been led astray by the sirens of the superorganism.
It is a banality of the ecological sciences to state that everything is connected. That ebullient Scot, and eventual stalwart of the American wilderness movement, John Muir, provided the image. He wrote, "When we try to pick out anything by itself, we find it hitched to everything else in the universe."
And if such statements are employed to sponsor a notion that individual organisms cannot be regarded in isolation from those that they consume, and those that can consume them, or furthermore, that as a consequence of the deep intersections of the living and the never-alive there can be unforeseen consequences flowing from species additions to, or removals from, ecosystems, then few may argue with this. However, just as the ripples of a stone dropped in a still pond propagate only as far as its edges (though they may entrain delightful patterns in the finest of its marginal sands), not every ecological event has intolerably large costs to exact. True, if the dominoes line up and the circumstances are just so, a butterfly’s wing beat over the Pacific may hurl a typhoon against its shores, but more often than not such lepidopterous catastrophes do not come to pass.
Ecosystems, energized so that matter cycles and conjoins the living with the dead, have their lines of demarcation, borders defined by their internal interactions being more powerful than their external ones. They are therefore buffered against many potentially contagious disasters. This, of course, is the essence of resilience - the capacity of a system to absorb disturbance without disruption to habitual structure and function. Ecology is as much the science investigating the limits of connections as it is the thought that everything is connected.
The Community Concept
Is there a greater 20th Century American environmental thinker than Aldo Leopold? Certainly there are few who provided as many genuinely poetic images: in the eyes of a dying wolf he saw “a fierce green fire”, he exhorted us to “think like a mountain”, he depicted the crane as “wilderness incarnate”. For all of that, has Leopold not led us astray with images associated with the “ethical sequence”? Leopold’s influential land ethic “enlarges the boundaries of the community concept.” The ethical sequence that he proposed progresses stutteringly from free men, to women, to slaves, to animals, plants, rocks and land. It has a compelling lucidity. Leopold admitted, however, that it seems a little too simple. The ethic invites us into community with the land. A person’s self-image will change under a land ethic: “In short,” Leopold writes, “a land ethic changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it.”
Now, Leopold is a subtle thinker and knows not to confuse the image with the thing. Certainly he expected this transformation to take quite some time. The land ethic would not emerge without “an internal change in our intellectual emphases, loyalties, affections, and convictions.” Now I have little problem with the image of extending the ethical circle other than noting that it makes the task seem easier than it has proven to be. My more serious objection concerns the rather thin notion of community that seems to be implied in Leopold’s image of the plain citizen. As environmental philosopher William Jordan III has illustrated in his book The Sunflower Forest (2003), missing from Leopold’s account is any acknowledgment of the negative elements of the human experience of community: envy, selfishness, fear, hatred, and shame. As Jordan pointed out, this leads Leopold and others to “a sentimental, moralizing philosophy that…insists on the naturalness of humans…but that neglects or downplays the radical difficulty of achieving such a sense of self, and also downplays the role of culture and cultural institutions in carrying out this work.” If Leopold’s image of the community and our place within it is an impoverished one, the work of extending the circle becomes impossible.
There are other images that we might have discussed here. Ones that have had, at times at least, unfortunate implications for environmental thinking. For instance, in 1864 George Perkins Marsh wrote that mankind is disruptive, not just occasionally, mind you, but “is everywhere a disturbing agent.” One hundred years later the Wilderness Act renews the image in the definition of wilderness as an area “untrammeled by man.” We might have considered contemporary accounts of social-ecological systems, where these systems are posited as a compound substance, but in depicting them we tease the components apart again.
So, if environmental thought and ecological science have been susceptible to what my colleague and friend Professor David Wise of the University of Illinois at Chicago has called “malicious metaphors”, is there a more productive way to think about the role of the image in developing environmental thought?
The work of French philosopher Gaston Bachelard (1884 - 1962) — one of the more lovable of the French phenomenologists, certainly the hairiest — is helpful in sorting out a productive role for the imagination in science. He was renowned for his work on epistemological issues in science as well as for his phenomenological account of the poetic image, and his philosophical meditation on reverie. As much as he was a materialist in his approach to science, he was subjective and personal (as a matter of theoretical orientation) in his philosophical work on the imagination.
Bachelard’s work at first glance is so inviting. Chapters in his book The Poetics of Space (1958) have enticing titles like The House from Cellar to Garret, Nests, Shells. Perhaps this is why the book is a philosophic bestseller. My copy claims “more than 80,000 copies sold”. And though indeed opening a Bachelard book is like relaxing into a warm bath, nevertheless there is an astringent in those waters. The thought is somewhat obscure as Bachelard ransacks the lexicon of the various disciplines he brings together in his work: Kantian philosophy, Husserlian phenomenology, Jungian psychoanalysis, etc. Oftentimes his use of technical terms was novel; reinterpreting them, Bachelard pushed them into new service. Because of this density, I wonder how many of those 80,000 copies have languished on bookshelves? Mine certainly did until the past few weeks.
To enjoy the fruits of Bachelard’s insights we should do at least some of the work of appreciating how he produced them. In the hope that this will embolden you to return to your copy of The Poetics of Space, or other works by Bachelard on the imagination, or pick them up for the first time, I will give a summary, as best I understand it, of what his phenomenology of the image is all about. I am, I should tell you, strictly an amateur Bachelardian.
The poetic image is eruptive for both poet and reader. Bachelard says that for its creation “the flicker of the soul is all that is needed.” So, every great image is its own origin. Famously, Bachelard maintained that the imagination, contrary to the view of many philosophical accounts, is “the faculty of deforming images offered by perception.” The poetic image emerges into the consciousness as a direct product of “the heart, soul and being of man.” Elsewhere Bachelard claims that “the imagination [is] a major power of human nature.”
The poetic image is therefore not caught up in a network of causalities. Our first recourse should not be to ask what archetypes an image represents, or what aspects of the poet’s psycho-biography explains it away. In this assertion Bachelard remains true to phenomenology’s maxim of going “back to the things themselves.” In as much as such things are possible, one approaches the poetic image freed from all presuppositions.
So it is of secondary importance to ask where an artistic image comes from; what matters more is to explore what opportunities for freedom an image creates. Instead of cause and effect, at the center point of which we traditionally ask the image to stand, we might rather speak of the “resonances and reverberations” of the image. This is not, I think, just some fanciful softening of language; it is a necessary acknowledgment of the way in which an image does not simply reflect a memory but rather revives an absent one, and of the way in which an image explodes into images. When we read the poetic image it resonates; when we communicate it, it reverberates. The repercussions of the image, said Bachelard, “invite us to give greater depth to our own existence.” What bearing does an image have on our freedom? A great piece of art, Bachelard says, “awakens images that have been effaced, at the same time that it confirms the unforeseeable nature of speech. And if we render speech unforeseeable, is this not an apprenticeship to freedom?”
I propose that Gaston Bachelard’s phenomenological account of the poetic image, despite its somewhat unpromising obscurity, is helpful in addressing environmental thought’s special porousness to striking images. In this short sketch I cannot fully substantiate the claim. I will end, however, with an example where an approach such as Bachelard’s seems to have been fruitful.
Tim Morton is one of the most widely read and exciting environmental writers of recent years. As far as I know, he has not cited Bachelard as a methodological inspiration, although his work is phenomenological and existential. [Added: One of Morton's earlier books, on the representation of the spice trade in Romantic literature, was entitled Poetics of Spice (2006) - making him, it would seem, an explicit Bachelardian after all!]. Morton is so concerned about the potential of sedimented ideas leading us into Sir Benjamin Brodie’s “wilderness of perplexities and errors” that he elected to drop the term “Nature” altogether. In his book Ecology Without Nature (2007) he explained the problem: “…the idea of nature is getting in the way of properly ecological forms of culture, philosophy, politics, and art.”
The results of Morton’s analysis lead us to strange, perplexing, though ultimately interesting places. Out of this natureless ecology comes a suite of insights on “dark ecology”, an ecology reminding us that we are always already implicated in the ecological. There is no outside from which we get a guilt-free view of the fantastic mess. From an ecology developed without a sentimental view of nature also comes a fresh analysis of connectedness. Morton revives Muir’s hitching image, but this time its resonances are weirder than the oceanic feeling that we are all blissfully in this together. His analysis gives us the queer bestiary of “strange strangers” with which we are stickily intimate yet which we can never fully get to know. Morton develops this account in The Ecological Thought (2010), which I recommend to you. I am not supposing that this is an adequate summary of Morton’s recent books, but I think that Tim is converging on the idea of resonances and reverberations that Bachelard wrote about.
The image, and the imagination, can play a positive role in environmental thinking. Darwin’s image of the “tangled bank” is both a pretty and a useful way of thinking about how the profusion of organisms developed from a common ancestor. But a misapplied image can be a disaster. Understanding our responsibilities with respect to the image is the work of the future; it is the work that will birth the future.
Walter Libby, “The Scientific Imagination”, The Scientific Monthly, Vol. 15, No. 3 (Sep. 1922), pp. 263-270.
Monday, January 21, 2013
Writing and the World of Tomorrow
by James McGirk
Before we had any idea how dangerous it was to bolt human beings to exploding tubes and launch them into space, when inventions like the lightbulb and airplane and telephone were warping the planet at a ferocious pace and escaping the earth’s gravity well suddenly seemed possible — we imagined that exploring the Universe would be a lot like the famous expeditions we had seen before. Compare Jules Verne or sci-fi serials of the 1950s to Marco Polo’s Travels: worlds squirming with life and adventure, with bizarre wildernesses to traverse, silver cities that gleamed like sunlit crystal, galactic emperors and perfidious foes and glamorous green heartthrobs who wore togas and served slithering banquets and summoned lightning bolts from buttons on their belts.
It seemed natural that our future would come to look like this too. Rocketships and sleek shapes seized our imaginations and seeped into our culture. The centerpiece of the 1939 World’s Fair was the Trylon and Perisphere, a 600-foot-tall spire that stood beside an enormous sphere while klieg lights roamed the sky. Architects added ringed spines to radio towers; engineers built trains that looked like gleaming bullets; cars became swoopy and streamlined and eventually grew fins. Anything futuristic was swaddled with chrome and extraneous antennae. By day the movie theatres, airports, motels and diners lining the brand-new superhighways looked like docking spacecraft; by night their neon blazed until it blotted out the stars.
Literature absorbed and was mutated by this great swell of imagination. The slender prose of Hemingway and F. Scott Fitzgerald was replaced with huge books and colossal egos who tried to devour all of postwar America and regurgitate it into a single tome. This was the era of Norman Mailer, of Saul Bellow and William Burroughs and John Updike and Joseph Heller and Thomas Pynchon and Allen Ginsberg. Their work was as larded with glittering things—with extraneous information, details about objects and history and revolution—as the glorious motels and gleaming theatres had been a generation before.
Science fiction writers took even bigger mouthfuls than their highbrow cousins. Writers like Arthur C. Clarke and Isaac Asimov wrote space operas that stretched across entire galaxies and sprawled across two, three, even four books at a time.
By the 1970s, the electroplated luster of the future was flaking off. We knew our resources were finite and the glories of technology wouldn’t save us from losing wars or being scorched by an atomic bomb. Architects and industrial designers began to favor forms that were more functional than fanciful. Motel owners figured guests would feel more reassured by a national franchise than by an unidentified flying object hovering over their beds. Literary fiction became grittier and more introspective. It pared down until individual sentences were pulling stories along: Raymond Carver, Martin Amis, Barry Hannah. American techno-culture seemed tasteless and plastic. Writers like Toni Morrison, V.S. Naipaul, Salman Rushdie and Mario Vargas Llosa brought stories from other cultures to readers, and to many of us, these neglected voices were as rich and strange as Marco Polo’s Travels.
Science fiction sought out the underworld. A movement of writers called “Cyberpunk” plundered from hard-boiled detective fiction. William Gibson, who coined the word “cyberspace” (a term his 1984 novel Neuromancer would popularize), published one of the first Cyberpunk stories, “The Gernsback Continuum,” in 1981. It’s a marvelous illustration of how technology, imagination and fiction are warped by one another. Gibson called these fragments of the future that never was “semiotic ghosts.”
The “Gernsback Continuum” is told from a photographer’s point of view. The unnamed narrator is a mercenary of a sort, a little jaded, a good photographer but not the best of them, an updated version of the grizzled private investigators you might encounter in a Dashiell Hammett or a Raymond Chandler story. He takes on an assignment from a femme fatale, who asks him to photograph the crumbling vestiges of America’s “raygun Gothic” culture. Gradually, he succumbs to the illusion. Gibson’s nameless narrator begins seeing fragments of a past that never was: Flying wedges pester him in the desert. Lonely highways bloat into 80-lane super-freeways. He takes a diet pill, crashes, and wakes to find a titanic city floating above him and… Them: a couple, a male and a female, Aryan supermen both, a pair of inhabitants of the future that wasn’t. He overhears the male lecturing the female and “his words were as bright and hollow as the pitch in some Chamber of Commerce brochure, and I knew that he believed them absolutely.” The female listens politely to her male and then reminds him to take his food pill.
Gibson was thumbing his nose at classic science fiction. Seen beside modern technology, the twelve-engined flying wings and silver gyrocopters were preposterous—“it had all the sinister fruitiness of Hitler Youth propaganda,” drawls the narrator—and the perfect pair was every bit as empowered and boring as the Rocket Age heroes Gibson’s everyman photographer was replacing. But as much as Gibson may be sneering at Gernsback’s classic aesthetic, he acknowledges that it’s a continuum, a seamless shift from one thing to another; that his photographer couldn’t exist without the glorious blondes who came before him. And in the same manner, contemporary writing grew from soil rich in the residue of its clanking, exuberant, Diesel Age predecessors.
The Internet is a mirror of the Universe, albeit an imperfect one. It’s a richer, happier, more transparent reflection of the real world. And though there is a background noise of snickering and threats and occasional yuckiness, those can’t hurt you (in the U.S.A.). The Internet is all about treats: factoids, pneumatic sexpots prancing at your command, mewling kittens, pithy sayings, and other pretty, shiny, glossy things, all available at your fingertips, all delivered from a deliciously designed device through a convenient app.
If the mechanical dreams of the Diesel Age were exuberant and colossal, those of the Internet Age are effervescent and charming. I remember the feeling of logging into the Internet for the first time, of making a million weird discoveries as I traversed space and time from behind a monochrome display. It felt glowy and golden. The way swiping an iPhone does the first time you try. The chirping, friendly infrastructure of the Internet has been scorched into our brains. Our literature has been extruded through its cheerful strictures. As mundane as our glowing Apples may seem to us now, they have changed the way we think and the way we write.
Literature will slide back on the continuum. The next wave of novels will slough off the Internet. They will be dark, bitter and angry: like biting down on a hunk of coal. But a trace of the Internet’s tinsel will remain.
Monday, January 07, 2013
Quentin Tarantino - Author of the Gatsby
[Spoiler alert: I discuss in some detail the plot outcome of The Great Gatsby and, for that matter, of Django Unchained]
I do not mean to suggest here that Quentin Tarantino set out in Django Unchained to revive in any sort of deliberate way the characters and themes of F. Scott Fitzgerald’s The Great Gatsby. The differences between these two projects are more substantial than their commonalities. One, after all, is a movie and the other is a novel. More importantly, Tarantino is self-consciously a genre-reconfiguring storyteller, whereas Fitzgerald wanted in The Great Gatsby to write something new using the form of the traditional novel. The Great Gatsby is that most brazen of beasts, The Great American Novel. That being said, both, in fact, are distinctively American works. Moreover, in both works the action is driven by a hero’s bid to rescue a gal. Both play games with time, though quite different ones, as I will elaborate below. In both, injustices are addressed and resolved with varying degrees of success. To my mind the commonalities of revision, rescue, and redress, though these are perhaps the stuff of all great works, are so distinctively rendered in Django Unchained that one can say that Tarantino has re-authored Gatsby.
Many years ago Bono identified, for the edification of an Irish audience, the differences between Irish and American sensibilities. He was appearing on Gay Byrne’s The Late Late Show — as close as one could get in those times to addressing the Irish nation. He was asked to account for U2’s growing infatuation with the United States. As best as I can remember it now, Bono reported that when a man gets wealthy in the US and builds that large mansion on a hill, his neighbors look up and say: “Some day I am going to be that guy.” However, when a man builds that house on the hill in Ireland, his neighbors point up and say: “Some day I am going to get that bastard.” This was around the time that U2 were recreating themselves in anticipation of the release of The Joshua Tree. One supposes they hoped for mansions and accolades. The interview occurred several years after I first read The Great Gatsby as a Dublin teenager. Despite my infatuation with American literature at the time, Gatsby struck me as a dud. It was not so much that a self-made man was uninteresting to me; rather, I did not even recognize this sort of hero. Gatsby was Bono’s bastard on the hill.
My second reading of the novel was shortly after I got married in the late 1980s. Not only was The Great Gatsby a favorite novel of my wife’s, but she grew up in Queens, NY, where we were living at the time, and she brought me out to see those Long Island mansions. Naturally, a smitten young man rereads in such circumstances. This second, fairly attentive reading was more successful. The setting of the novel, and the way in which this geography reinforced the class distinctions among the characters, impressed me (my wife and I were living closer to Fitzgerald’s Valley of Ashes — Flushing Meadows, Queens — than to East Egg). As a nature-oriented fellow I was also pleased to notice the scattered but quite crucial references to nature throughout the novel. Grass, for instance, is developed as a minor character in the story (being mentioned in one way or another over forty times in the novel). For example, we meet Tom and Daisy Buchanan’s lawn before we meet them. “The lawn,” Nick Carraway, our narrator, observed, “started at the beach and ran toward the front door for a quarter of a mile, jumping over sundials and brick walks and burning gardens…” Yes, the language is so pretty. Though the novel appealed to me on that reading, I still thought it more a gorgeous assemblage of themes yoking together a small set of yarns about inconsequential snobs than a unified novel.
This Christmas break, on the occasion of my younger son being compelled to read The Great Gatsby for school, I took up the novel for a third time. It had been a quarter century since my last reading. That newlywed man of twenty-five years before may have been the more romantic, but the middle-aged man I now am is apparently more easily overwhelmed. It was as if I were reading another book, discovering in it depths I had gravely overlooked before. It may also have helped my recent reading of Gatsby that I have lived in the US for most of the intervening years. I share, at this point, an immigrant’s enthusiasm for the American project.
Gatsby is compelling not because he is a self-made man, a man about whom swirl rumor and innuendo, a man of gigantic wealth, a creator of fabulous entertainments, but rather because of the sympathetic reasons that prompted his self-creation in the first place. You will recall that Gatsby intended with his riches to woo back Daisy Buchanan. Daisy (again with the lawn references!) is wed to the hulking and extravagantly well-positioned Tom Buchanan. How do we know that Tom is unworthy of her? Because he prattles on about a book called The Rise of the Colored Empires, claiming it to be “a fine book, and everyone ought to read it.” He goes on: “The idea is if we don’t look out the white race will be — will be utterly submerged.” In an early scene of New York revelry Tom smacks Myrtle (yes, another plant) Wilson, his ill-fated girlfriend, and breaks her nose. It’s not the worst violence of the book, but it is the most boorish. James Gatz, Gatsby’s birth name, had courted Daisy in Louisville before the Great War but, being penniless, was an unsuccessful suitor. It was in order to be worthy of her that Gatsby recreated himself, doing so, it is hinted, by indecorous means. And it looked for a moment as if he had succeeded — when Daisy and Gatsby were reunited with Nick Carraway’s assistance, Daisy wept “stormily” over Gatsby’s fantastic array of shirts, saying “It makes me sad because I’ve never seen such — such beautiful shirts before.”
Readers have puzzled over the years about how Daisy deserved such enduring devotion from Gatsby. It is clear, though, that in some ways Daisy had little to do with it. What seems really important is the metamorphosis that occurred in Gatsby’s soul when, five years earlier, he decided to bestow his affections on Daisy on a moonlit night in Louisville. Fitzgerald describes the transfiguration of Gatsby in that earlier moment in ecstatic tones. Gatsby, he wrote, “knew that when he kissed this girl, and forever wed his unutterable visions to her perishable breath, his mind would never romp again like the mind of God.” Gatsby thus became flesh, and it is the fate of all flesh to perish and die. Five years after the God-aspiring Gatsby became mortal — this being the action of the novel — Gatsby plans the almost god-like erasure of time. He and Daisy are to be restored to that glorious moment. Daisy is to nullify her years with Tom. She is to declare that she never loved her husband. And though she does make that declaration, and perhaps even believes it for a moment, nevertheless daisies, though feral, belong on the lawn, and thus our Daisy returns to Tom and betrays Gatsby. The sheer impossibility of Gatsby’s aspiration (and Nick tells him that it is impossible) dooms Gatsby, and he is violently killed.
Now, as I was immersed in this third and most engaged reading of Gatsby, I went to see Django Unchained as a Christmas evening entertainment. The story follows the fate of Django Freeman from slave to bounty hunter to rescuer of his wife Broomhilda from the plantation owner Calvin Candie. Django gratifyingly triumphs and the denouement is explosive. Unlike F. Scott Fitzgerald’s novel, which received mixed reviews at the time it was published, Quentin Tarantino’s movie has been almost universally hailed as a great work. It currently has an 88% favorable rating on Rotten Tomatoes. It is of course a controversial film. It is extremely violent, the N-word is deployed with what some regard as an unsavory frequency, and it has sparked debate on who gets the prerogative of making a movie on the topic of vengeance for the history of slavery. The specificity of the story (slavery, race, vengeance) may be of greatest importance; nevertheless, its themes are also universal, and it is these that I remark on here.
The claim that Django and Gatsby are parallel stories may still seem fanciful. Consider this, though: Both Gatsby and Django had to recreate themselves to meet the challenges of their quests. Gatsby is mentored and transformed by the adventurer Dan Cody; Django by the dentist-cum-bounty hunter Dr King Schultz, who rescued him at the beginning of the film. Gatsby became fabulously wealthy mysteriously and almost overnight; Django acquired the expertise of a bounty hunter (including being the sharpest of shooters and possessing horse dressage skills) mysteriously and almost overnight. Gatsby wanted to rescue Daisy from the dastardly white supremacist Tom Buchanan; Django intended to rescue Broomhilda from the monstrous, and amplified racist, Calvin Candie. Gatsby’s legendary Saturday evening parties were merely a facade to get him close to the Buchanans’ East Egg mansion; Django’s ruse of being a Mandingo fighting expert gets him into Candyland, Candie’s plantation mansion. Nick Carraway, our first-person narrator, facilitated the reunion of Gatsby and Daisy; Dr King Schultz facilitated the reunion of Django and Broomhilda. Gatsby wanted to go back in time to revisit his perfect moment; Django wants to go back in time to be reunited with his wife. Both works end in the destruction of a mansion. Django flourishingly rides away with Broomhilda from the demolished Candyland, and figuratively so does Carraway, our narrator (in lieu of Gatsby). As Carraway describes it: “And as the moon rose higher the inessential houses began to melt away until gradually I became aware of the old island here that flowered once for Dutch sailors’ eyes — a fresh, green breast of the new world.”
Images of nature play a similar role in both works, though I hold off on a fuller inspection for now. Let me merely note that there is a vegetational sequence in The Great Gatsby that starts in the west (whence came Gatsby and Carraway), runs from the trimmed to the unkempt grass lawns of Long Island, and ends in a vision of the indigenous pre-settlement state. In Django Unchained the sequence likewise starts in the ecosystems of the wilder west and moves to the violent, parkland pastoral of the south. More rugged nature still plays a role here: Schultz and Django pick off the KKK posse from their perch in the wilder vegetation above the scene; the runaway slave d’Artagnan hides up a tree before descending only to be torn apart by dogs.
There is, besides, a close matching of characters in both stories: Django/Gatsby, Broomhilda/Daisy (both meagerly developed as characters), Calvin Candie/Tom Buchanan, King Schultz/Dan Cody and Nick Carraway. Perhaps one can pair the incompetently hooded KKK with Gatsby’s sodden revelers. The pairings are not perfect, of course. For instance, in the economy of Tarantino’s film-making Dr Schultz plays a dual role. And though Stephen, Calvin Candie’s house slave, has no counterpart in Gatsby, Wilson, Myrtle’s husband, plays a role which, though not precisely comparable, performs the similar task of triggering the endgame.
For all of this, Daisy stays with Tom, whereas Broomhilda rides off with Django. Gatsby dies, Django lives. Since this is the most consequential difference between the two works, why this has to be so bears a little scrutiny. Here is my thumbnail sketch:
Gatsby, in the process of materially transforming himself, destroys himself — all those shirts are not just for show. Django, however, is magnified and empowered by his transformation (assuming, that is, one approves of the havoc he created). Gatsby chooses mortality, whereas Django is bestowed a god’s capacity for vengeance. Ultimately The Great Gatsby explores the nightmare lurking behind the American dream. Django Unchained starts with that nightmare and responds with a fantasy. Death stalks nightmares; fantasies spawn invulnerability. Fitzgerald sets for himself the task of describing what happens when the goal is full restoration of time, pretending, in other words, that the past never even occurred. Tarantino’s task is the equally complex but seemingly more achievable one of responding when the past is unspeakable.
Both works deal, in a sense, with men — Gatsby, Buchanan and Candie — who build mansions on the hill. In this sense Bono’s account of the American story might be right. But no one, apparently, likes that guy. Even in the American story we like to get those basterds. The Great Gatsby is Fitzgerald’s Great American Novel; Django Unchained is Tarantino’s Great American Movie. Perhaps there is only one great American story. If this is so, then it was inevitable that Tarantino rewrote The Great Gatsby.
Many thanks to Oisín and Fiacha Heneghan and Vassia Pavlogianis for comments on earlier drafts - and even if they remain unconvinced, some of their insights have been incorporated into this version. I found Adam Kotsko's review of Django Unchained interesting and helpful, especially his analysis of Django's automatic knowledge (see that here).