Monday, April 21, 2014
A Ramachandran. Portrait of Rajkumari. 1998.
Oil on canvas.
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art
by Yohan J. John
No one knows exactly how life began, but a pivotal chapter in the story was the formation of the first single-celled organism -- the common ancestor to every living thing on the planet. I like to think of the birth of life as the creation of the first boundary -- the cell membrane. That first cell membrane enclosed a drop of the primordial soup, creating a separation between inside and outside, and between life and non-life. Through this act of individuation the cell could become a controlled environment: a chemical safe zone for the sensitive molecular machinery needed to maintain integrity and facilitate replication. The game of life consists in large part of perpetuating the difference between inside and outside for as long as possible. Death, then, is the dissolution of difference. But the paradox at the heart of life is that the inside cannot survive without the outside. The cell requires raw materials -- nutrients and energy -- to sustain itself and to reproduce, and these must be sought outside the safe zone, in the wild and unpredictable outside world.
The cell membrane has a dichotomous role. It must preserve the cell’s identity as an entity that is distinct from everything outside it, but it must not be an impenetrable wall. It must be a gateway through which the cell can absorb raw material and eject waste, but it cannot allow the inside to become inundated by the outside. It meets this challenge by being selectively permeable, carefully overseeing the traffic between the inside and the outside. The cell membrane must also be flexible, because it plays a part in locomotion and consumption. In a single-celled organism, the cell membrane is therefore a primitive sense organ, a transportation system and a digestive system, all rolled into one.
The birth of life was a moment of cleaving: when the first cell membrane enveloped its drop of primordial ooze, it cleaved the inside from the outside, but it also became the conduit through which the inside could cleave to the outside. Like Janus, the two-faced Roman god of beginnings and endings, of doors and passageways, the cell membrane is a sentry looking in two directions simultaneously. Given its role in cellular transaction, transition and transformation, the cell membrane’s function might even be described as a precursor to intelligence.
The connection between boundaries and intelligence may run quite deep. In multicellular organisms like humans, the skin is the boundary between inside and outside. Skin cells, as it turns out, are related to neurons. During embryonic development, cells in the ectoderm, which is the outermost layer of the embryo, gradually differentiate to become the cells of the skin and the nervous system. (Researchers have recently found ways of turning skin cells into neurons, suggesting that the line between these two kindred cells may be somewhat permeable.) The skin of a multicellular organism is much like the cell membrane of a single cell: it separates inside from outside, providing a physical boundary for the organism. But the inkling of intelligence in that first semipermeable membrane finds its full expression in the nervous system, which patrols a very different sort of boundary: the line between predictable and unpredictable, between known and unknown.
Life is an obstacle course full of things an organism needs or desires, like food and shelter, and things it would prefer to avoid, like predators or foul weather. Maximizing the good while minimizing the bad requires being able to use patterns in the environment to anticipate what is going to happen. Plants must be sensitive to the rhythmic pattern of the seasons. Animals in turn must predict the patterns of plants and other animals. The evolution of the central nervous system -- the brain and the spinal cord -- was a great leap forward in the pattern-recognition capabilities of living things. The ability to recognize and categorize the patterns in nature and use them to survive and thrive is central to intelligence. It allows living things to find (and create) islands of order and stability in a swirling sea of change and uncertainty.
But it’s dangerous to just stay put once you’ve found an island of order. Resources are limited and change is the only constant -- the boundary between the solid ground of reliable knowledge and the encircling sea of unpredictability is in a state of flux. Nature always seems to find a way of casting us out of the gardens of Eden we create or discover. A pattern-seeker must be vigilant, staying on the lookout for unforeseen dangers and new opportunities. This vigilance takes the form of exploration, and even very simple animals do it. Insect colonies have specialized scouts that search for fresh sources of food. Introduce a new object into the cage of a lab rat, and the first thing the rat does is investigate it thoroughly.
We tend to describe the behavior of animals in purely utilitarian terms. The exploratory behavior of rats, or birds, or bees, is just a combination of foraging for food, looking for mates, and keeping an eye out for predators. When it comes to human culture, however, utilitarianism can often seem like a bit of a stretch. Is it fear or hunger that drives people to investigate the depths of the ocean, or the far reaches of space?
We humans get bored on our islands of order, even though we need them for our survival and sanity. We also like to sail off into the unknown from time to time. What constitutes the unknown varies from person to person -- it’s not just scientists or philosophers that contend with it. Only a fraction of the world’s population has the inclination and the good fortune to experience first hand the outer limits of scientific knowledge, but a far larger number of people can contend with the boundaries of their worldviews in the domains of art and culture. The edge is where the action is -- on the beach where the chaotic sea meets the tranquil land. But what is it that drives us to the experiential edge in the first place? And does it have anything in common with the forces that drive living things out of their comfort zones in search of sustenance?
The difference between a desire and a drive is that a desire subsides when the goal is reached, whereas a drive is independent of the attainment of the goal -- the act of striving becomes pleasurable in itself. Living beings have a variety of desires that can be temporarily satiated, but the lust for life is a drive, not a desire. In the long run life appears to revel in the very attempt to perpetuate itself. Intelligent beings, meanwhile, seem to revel in the attempt to expand their islands of order, fighting back the lapping waves of the unknown.
We have a name for the drive towards the unknown -- it’s called curiosity. Jürgen Schmidhuber, an artificial intelligence researcher, has a theory of “computational aesthetics” that offers us a vivid mathematical analogy for curiosity. The theory can be summed up in one bold assertion: that interestingness is the “first derivative” of beauty. Readers who detect a whiff of scientific imperialism will hopefully bear with me as I unpack this idea, which need not be taken as anything more than playful speculation. I admit, colloquial and intuitive concepts like “beauty” or “interestingness” often get bent out of shape a bit when scientists examine them, but this is not necessarily a bad thing. Sometimes we need to distance ourselves from our intuitions to discern their outlines more clearly.
According to Schmidhuber’s computational theory of aesthetics, the subjective beauty of a thing is inversely related to the minimum number of bits required to describe it: the shorter the description, the more beautiful the thing. Since descriptions vary from person to person, beauty is in the eye of the beholder. A definition of beauty based on bits of information is not in itself particularly alluring, but it can be improved if we see it as an attempt to capture subjective simplicity or elegance. It is perhaps unsurprising that a scientist’s definition of beauty has much in common with Occam’s Razor.
However, beauty is not necessarily interesting. We also seek the shock of the new, the excitement of the unusual. So Schmidhuber goes on to define interestingness as the rate of change of beauty -- the time-derivative of subjective beauty, which grows as the observer’s description of a thing gets shorter. A derivative measures the rate of change of one thing with respect to something else. The time-derivative of distance is speed (the rate at which your distance from some point changes), and the time-derivative of speed is acceleration (the rate at which your speed changes). For something to be interesting, then, the observer’s ability to describe it must change with time. So interestingness is a dynamic quality, whereas a thing can be beautiful even if it never changes.
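In rough symbols (my own paraphrase, so the notation is an assumption rather than Schmidhuber’s exact formulation), the two definitions might be written like this:

```latex
% A paraphrase of the idea, not Schmidhuber's exact notation.
% L(x, O(t)) : bits in observer O's shortest description of x at time t
% B(x, O(t)) : subjective beauty -- highest when the description is shortest
% I(x, t)    : interestingness -- the first derivative of beauty
\[
  B(x, O(t)) \propto \frac{1}{L(x, O(t))},
  \qquad
  I(x, t) = \frac{\partial}{\partial t}\, B(x, O(t))
\]
```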
Some examples will help us understand what this means. Most people will agree that staring at a blank screen is quite a boring experience. A blank screen is extremely simple from an information-theoretic perspective, and so its description length will be very short. The description might be something like “Every pixel is black”. There is clearly a pattern, but it’s trivially simple. The information on a blank screen can be easily compressed. White noise sits at the other extreme. Somewhat counter-intuitively, information theory tells us that random noise is rich in information, so its description length is extremely long. Totally random information cannot be compressed. An accurate description of white noise on a screen would require specifying what is happening in each and every pixel. If a pattern is something that has structure and internal coherence, then randomness is the absence of pattern. Most people find random white noise boring too. What people find interesting lies somewhere in the middle -- between what is too easily compressed, like a blank screen, and what is totally incompressible, like white noise. We like patterns that are simple, but not too simple; complex, but not incomprehensibly so.
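The two extremes are easy to check for yourself. Here is a minimal sketch in Python, with zlib standing in (as a crude assumption) for the observer’s compressor, and an arbitrary “screen” of 100,000 one-byte pixels:

```python
import os
import zlib

N = 100_000  # an arbitrary "screen": 100,000 one-byte pixels

blank = bytes(N)        # blank screen: every pixel is zero (black)
noise = os.urandom(N)   # white noise: every pixel independently random

# The compressed size approximates each pattern's description length.
print(len(zlib.compress(blank)))  # on the order of 100 bytes: trivially simple
print(len(zlib.compress(noise)))  # ~100,000 bytes: essentially incompressible
```

The blank screen collapses to a few hundred bytes at most, while the noise barely shrinks at all -- the information-theoretic way of saying that one has a trivial pattern and the other has none.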
Schmidhuber’s theory is couched in the language of computer science and artificial intelligence, which is why the concept of data compression plays such a prominent role. We don’t really know if the brains of humans and animals compress experience in the same sense that a computer algorithm does. But we do know that living things use pattern-recognition to make useful predictions about their environments. We compare the patterns we’ve encountered in the past with our present experience, and try to anticipate the future. We categorize the patterns we encounter -- poisonous or edible, sweet or bitter, friend or foe -- so that if we encounter them again, we know how to react. Rather than compressibility per se, perhaps what we find interesting is the possibility of enhancing our categories so they encompass more of our experiences. Knowledge consists of having comprehensive categories for as many experiences as possible, and knowing how to respond to each category.
What might interestingness look like? Let me describe a toy system that is confronted by something unexpected, and shows a spurt of interest. Let’s say we have a system that is experiencing something beautiful. The subjective beauty “B” can change over time. In the diagram above, beauty is the blue line, and it stays boringly constant for a while, but at the halfway point it suddenly changes. Imagine a pleasant but predictable movie that suddenly becomes unpredictable in the middle. The beauty increases! The system has an expectation “E” which in our toy system is a memory of the past value of B. The red line in the diagram is the expectation. The green line represents the interest level “I”, which depends on the difference between the beauty and the expectation. When expectation and reality don’t line up, the value of E is different from B, so the system’s interest level shoots up. But eventually E gets accustomed to the new value of B, and the interest level goes back to zero. If the system had perfect expectations and could perfectly predict the change to the value of B, then there would be no increase in the interest level. A curious system is addicted to these bursts of interest, and actively seeks them out. 
As it turns out, the brain’s dopamine neurons fire in bursts of this sort when something unexpectedly good happens. Researchers call this a “reward prediction error” signal, and it is one of the reasons many people think of dopamine as the “pleasure chemical”. But this misses a subtlety -- if the pleasure is completely predictable, the dopamine cells don’t fire. This dopamine cell pattern is more of a novelty signal than a pleasure signal. (There seem to be several other things that dopamine does, so even calling it a novelty chemical is an oversimplification.) Neural network theorists often employ the dopamine burst as a “reinforcement signal” that allows a network to learn from experience and improve its ability to categorize and predict. 
As we simplify, expand and refine our categories we push forward the boundary between what we understand and what we still don’t quite have a handle on. We expand our islands of order, reclaiming land from the sea of unpredictability. Many of the categories humans obsess about have little or nothing to do with the struggle to survive. Curiosity pushes us to proliferate our aesthetic categories -- and in extreme cases it leads to the infinitesimal parcellations of genre and sub-genre that the internet so effectively reveals and encourages. (I invite the reader who does not know what I am talking about to examine the various sub-genres of heavy metal music.)
Curiosity is the drive towards interestingness, and it brings us to the boundaries of what we understand. A trip to a modern art museum should adequately establish that we don’t just find any baffling experience interesting. We seek experiences that are in the sweet spot -- not totally predictable and monotonous, but not random and formless either. During an interesting experience we don’t know exactly what is going on, but we get the feeling that meaningful resolution is but a few moments away. So a Hollywood blockbuster that is too formulaic and predictable is not very interesting, but an experimental art film with no formula at all can bore us to tears too. We like movies with a few twists -- but in order to recognize them as twists we have to have some expectation of what normally happens. A really interesting movie flirts with the boundary between what we know well enough to anticipate, and what surprises and confounds us.
So how does curiosity help us “compress” or improve our categories? Think of the concept of genre. In order to get a subjective sense of what a genre is, you need to experience many examples. Curiosity is what draws you towards this experience. Even if you go to Wikipedia or tvtropes.com and read up on the conventions of a given genre, you still need first-hand experience to understand how those conventions manifest themselves. You need to listen to several blues songs before you can be sure you know what the basic blueprint is. And the more you listen, the more musical structure you can perceive and predict. Once you understand the conventions -- once you know what to expect -- you can experience a burst of interestingness when someone subverts those conventions and confounds your expectation. A blues aficionado is well placed to appreciate the way a band like Led Zeppelin reinterprets the genre’s conventions. In the experience of such aesthetic subversion, you are once again confronted by what is strange and unpredictable, and the curiosity engine fires up once more.
What drives people to police their subjective aesthetic boundaries so zealously? What makes people so concerned with questions of authenticity or originality in art and music? I think going back to the cell membrane might give us some ways to think about such questions. The cell membrane separates inside from outside, mediating interactions between the two. In maintaining a chemical difference between the inside and the outside, it preserves the identity of the cell as an entity that is distinct from the environment. Perhaps aesthetic boundaries -- and mental boundaries more generally -- are central to our notions of identity. To carve out a distinct identity is to maintain a difference between an in-group (which could be just one person) and an out-group. Just as the cell membrane defines the contours of the cell, artistic and intellectual boundaries may define the contours of a personality, or of a community. For people whose identities are wrapped up in difference, to merge with the mainstream might seem a kind of cultural death: a dissolution of the boundary that sustains individuality and identity.
Staying on the boundaries of what is familiar in order to find sweet spots of interestingness allows us to expand our experiential horizons and reaffirm our existences as distinct individuals. But this can also be quite a tiring experience. What is true for a cell is true for an individual, and perhaps even for a culture -- maintaining a boundary takes energy! Most of us aren’t critics -- we can’t spend all our time refining our categories of experience, or sustaining idiosyncratic differences of taste and opinion. Sometimes we need to return to our comfort zones and replenish our supplies. Visiting a museum, for instance, is an experience that can be simultaneously interesting and mind-numbing. (In this age of endless online novelty, I can’t be the only one who seeks out tried and tested experiences -- comfort food, old familiar songs, trashy television -- as an antidote to too much interestingness!) Perhaps merging with the mainstream from time to time is not such a bad thing.
Individualism is taken as a self-evident virtue in modern liberal societies. But given all the effort involved in maintaining the boundary between inside and outside, between the Self and the Other, the opposite movement can be an act of liberation: dissolving the Self by forgoing, for a time, the maintenance of difference. Consider those moments during a sporting event (like a Wave) or a musical gathering (like a Rave) when everyone is moving in unison. It seems as if there is a kind of ecstasy in this voluntary surrender of individuality and difference.
Aesthetic experience, then, is a twofold process. On the one hand, it leads us to curiosity and wonder, which draw us away from our islands of certainty, transforming the contours of our selves. On the other hand, it offers us dissolution and union, which pull us back from the margins, towards community and commonality. Perhaps the dance of aesthetic experience is a microcosm of the great dance of life -- a dance that began with the undulations of that first cell membrane. We sway in the direction of the unknown, and then drift back to the comfort of the known.
Notes and References
 The Genesis story of the fall from grace tells of how man and woman were cast out from the Garden of Eden. In The Power of Myth, Joseph Campbell interprets the story as follows: “Whenever one moves out of the transcendent, one comes into a field of opposites. One has eaten of the tree of knowledge, not only of good and evil, but of male and female, of right and wrong, of this and that, and of light and dark.” Campbell’s “field of opposites” is where pattern-recognition and categorization happen -- it is the field of boundaries and differences, and also of self-consciousness. And this field is no paradise, because it is constantly threatened by the unfamiliar and the unpredictable.
 Jürgen Schmidhuber summarises his theory of aesthetics in a paper entitled “Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes”.
 The diagram shows the results of a little simulation I coded up in Python. It’s a rudimentary “differentiator” that compares the present reality (B) with the recent past (E), and constantly updates its expectations (E). The burst of interest (I) happens during the transient period when reality exceeds expectation (when B > E). Many simple models of dopamine cells use a similar principle. Similar mechanisms can also be employed for edge-detection in a visual image, a crucial stage in object recognition. The system I demonstrate is pretty rudimentary -- it just expects the present to resemble the recent past. You could say that a major goal of artificial intelligence and computational neuroscience is to create systems that have refined, flexible expectations with which to anticipate reality.
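 A minimal sketch of such a differentiator follows -- a reconstruction of the idea rather than the original script, with the smoothing constant alpha chosen arbitrarily:

```python
import numpy as np

T = 200
B = np.ones(T)     # subjective beauty: constant for the first half...
B[T // 2:] = 2.0   # ...then a sudden, unexpected jump

alpha = 0.1        # assumed rate at which the expectation tracks reality
E = np.zeros(T)    # expectation: a memory of the recent past of B
I = np.zeros(T)    # interest: the transient mismatch between B and E

E[0] = B[0]
for t in range(1, T):
    I[t] = max(B[t] - E[t - 1], 0.0)             # burst of interest when B > E
    E[t] = E[t - 1] + alpha * (B[t] - E[t - 1])  # E gradually catches up to B

# I spikes just after the jump in B and decays back to zero as E adapts;
# a system with perfect expectations would show no spike at all.
```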
 Perhaps the hype cycle represents a burst of curiosity at the societal level. And perhaps social media frenzies are the dopamine bursts of the internet’s hive mind?
Monday, April 14, 2014
Ciprian Muresan. I'm Too Sad to Tell You. 2009.
Monday, April 07, 2014
On the Academic Boycott of Israel
by Akim Reinhardt
Let me begin with some personal disclosure. I am a half-Jewish American who has never been to Israel and has no personal connection to it. In the early 1960s, before I was born, my mother, who has otherwise lived her entire life in The Bronx, spent two years on a northern Israeli kibbutz named Kfar Hanassi. Over the years she has occasionally told stories of her time there and maintained some long distance friendships. That one, small tangent is the full extent of my personal association with Israel; in other words, there is virtually none.
In addition to having never been to Israel and never having had any friends or known relatives who live there, I also have no spiritual connection to the place. Though raised Jewish, my inter-faith parents were ambivalent about religion and occasionally outright hostile to organized, institutional forms. I have also been an atheist my entire adult life. The city of Jerusalem and holy sites like the Wailing Wall have no more religious meaning to me than Catholic Cathedrals or Buddhist monasteries. I simply admire the architecture, as the old saying goes.
Yet despite all this, I'm well aware of the hold that the concept of Israel has on American Jewry in general, which is why I disclose my Jewishness. For many American Jews, regardless of their religiousness or lack thereof, Israel is a powerful symbol. As someone whose maternal Jewish grandparents fled Poland and Rumania not terribly long before WWII, and whose grandmother lost almost her entire extended family in the Holocaust, I understand that.
You can't grow up with family stories of violent, pre-war persecution, narrow escapes, the two cousins who survived unspeakable horrors, and seemingly countless dead relatives you never met, and not be affected. Refugee trauma is real and it often reverberates down through several generations.
So even though Israel is a place I have virtually no connection to whatsoever as a country or religious site, I am cognizant of the potent symbol it remains for millions of Jews who don't live there. For many Jews, the historical trauma of the Holocaust, not to mention the longer history of persecutions, violence, and ethnic cleansings in Europe and the Middle East, is real. Although most of today's Jews have never experienced a pogrom, survived a concentration camp, or been a refugee, for many of them the echoes of that past remain.
Thus, for many ethnic Jews, Israel continues to stand as the symbol of last resort, the theoretical lifesaver against the turbulent tides of history. I recognize the power that symbol has for many American Jews. It has the capacity to color people's interpretations, definitions, and understandings of Israeli affairs, particularly if they, like myself, have no real connection to Israel, thereby rendering it more abstract.
I do not believe that Israel, as a symbol to Jews, colors my own thinking of Israel the nation. Nonetheless, disclosure is important, particularly because I am going to discuss the Boycott, Divestment, and Sanctions movement (BDS) against Israel. Some people may suspect that being half-Jewish (my father's family are White Protestants from North Carolina and California) affects my understanding and interpretations. I don't think it does, but I certainly won't hide the fact or pretend it's irrelevant to everyone.
I consider myself to be among the growing ranks of American Jews who accept modern realities: Possessing a massive nuclear arsenal and highly trained and equipped armed forces, Israel has long been the dominant military power in its region and is no longer the fragile upstart in danger of being pushed into the sea; Israel's strong, developed, modern economy is marked more by its vibrant high-tech sector than by starry-eyed and occasionally racist bromides about turning the desert into a garden; like any other government, Israel's is plagued at times by the mortal foibles of human error, dogma, deceit, and greed; a growing tide of religious fundamentalism threatens, influences, and even manipulates Israel's parliamentary politics; and the modern state of Israel should not be automatically or primarily defined by, or insulated from criticism because of, past historical traumas or assumptions of righteousness. Rather, today's Israel should be defined by and critiqued on its current policies and actions, some of which are reprehensible.
Israel is a modern, developed nation state, like many others, not some morally superior paradise in the making or national song of destiny, and as such, it is quite capable of making bad decisions and taking regrettable actions, just like any other modern, developed nation state. It is absolutely wrong to believe past wars or even the Jewish genocide of the 1930s and 1940s somehow render Israel immune to criticism here in the year 2014. And it is morally repugnant to equate reasonable criticism of the Israeli government with anti-Semitism.
As with any nation, there is much to criticize about Israel. In particular, there is the issue of Palestinian rights.
For many reasoned critics of Israeli policies and actions in the occupied territories, the BDS movement has become both a rallying point and a tactic that can possibly help bring about positive change.
Since its inception, the BDS has gained momentum as a form of protest against Israeli policies and actions in the occupied territories, and understandably so. Boycott has sometimes been a very successful tactic for protest movements in the post-WWII world. The American Civil Rights movement and the South African anti-apartheid movement are just two examples of economic boycotts that had a profound impact and helped achieve positive change. In addition to economic ramifications, boycotts can also have the effect of bringing increased attention and scrutiny to an issue.
To that end, I respect the decision of those who commit to boycotting Israeli businesses that profit from the situation in the occupied territories. Personally, I have no economic connections to Israel but, for example, I would have no problem with my pension plan divesting from Israeli institutions that do business in the occupied territories (I don't actually know my pension plan's policy on the matter).
However, I want to address one specific component of the BDS with which I strongly disagree: the wholesale academic boycott of Israeli universities.
Thus, the final piece of personal disclosure I must make is that I am a tenured associate professor at an American university. My specialty is American Indian history and history of the American West, and so my research and teaching have no connection to Israel. But as a university professor, I am a member of academia at large, and it is on these grounds that I feel compelled to voice my opposition to the academic boycott.
To be clear, it is only the academic component of the boycott to which I publicly stand opposed. I do not oppose other elements of the BDS. Furthermore, my opposition to the academic boycott is not a sign of support for Israeli policies or actions in the occupied territories. To be perfectly frank, and perhaps this is selfish of me, as a member of the academy I care more about academic issues than about Middle Eastern affairs to which I have no connection.
As such, I am opposed to a wholesale academic boycott of Israeli universities. This is because I am generally opposed to the academic boycott of any school or research institution that retains its own academic freedom and does not directly participate in colonial activities. Allow me to explain.
The lifeblood of academia is the free exchange of information and ideas. That is the essential premise upon which nearly all academic activity is based. Whether conducting and presenting research, teaching students, or engaging the public, academics are devoted, first and foremost, to the gathering, development, discussion, and dissemination of information and ideas. Almost every other professional activity is secondary. Almost everything else we do as academics flows from this primary mission.
Thus, anything that impedes the flow of academic information and ideas is antithetical to academia, a strike against its raison d'être.
For this reason, the notion of one group of academics boycotting another group of academics is something I find it very difficult to countenance. It is a stance against academic freedom, and as such, it defies the cardinal rule of academia.
However, there are no absolutes in life. There can be exceptions.
One reason would be if a university has disowned its own academic freedom or has had it stripped away by external forces (typically governments). This would make it a worthy target of a boycott.
Schools or institutions that officially restrict the academic freedom of their faculty and students, or have been stocked with propagandists instead of scholars, or are substantially compromised by censorious external forces, are not worthy of our support. They are legitimate targets of academic boycott. But so long as scholars and teachers have the freedom to pursue the truth, including the right to be wrong, this justification is null and void.
The other legitimate reason for academic boycott, so far as I am concerned, would be if a school maintained academic freedom, but as an institution nevertheless actively advanced colonialism as part of its official program. Then the academic boycott of such a school would be warranted, for the school itself would be fundamentally betraying academic values.
Indeed, educational institutions often play an important role in larger political and economic affairs, and sometimes in negative ways. One well known example is the crucial role American universities have played in the rise and expansion of the U.S. war machine, including the development and proliferation of nuclear weapons and other weapons of mass destruction.
An example from my own field of Indigenous Studies is even more relevant to the issue at hand. Indigenous Studies scholars are well aware of the role universities have historically played in abetting the colonial conquest and dispossession of Native peoples around the world. Many scholars have studied and exposed academic complicity, perhaps most convincingly Maori scholar Linda Tuhiwai Smith in her 1999 book Decolonizing Methodologies. Historians, anthropologists, and other university-based academicians around the world spent decades directly and indirectly supporting, glorifying, and enabling colonial dispossession and conquest.
But it is important to differentiate between the actions of individuals and official institutional policies. Modern instances of universities officially betraying core academic values on such a level are actually quite rare. The Nazification of German universities during the 1930s is perhaps the most notorious example, and was largely the result of external as opposed to internal forces. Today, one can only imagine what goes on in North Korean universities.
When individual academicians behave badly or work in support of reprehensible policies, they must be held to account. But that does not necessarily justify boycotting an entire institution. In the event an academic institution itself behaves badly or works in support of reprehensible policies, it too must be held to account, possibly in the form of boycott. But that is a far cry from the wholesale boycott of every school in an entire nation.
Furthermore, I do not believe that guilt by association is enough to warrant a wholesale boycott of all of the universities in a nation. While universities have a relationship with their governments, they are not members of it and have virtually no control over it. Universities are responsible for their own political actions; if they are behaving in a manner consistent with academic values, it is not reasonable to inflict upon them the ultimate academic punishment because of the actions of local, provincial, or national governments within which they reside.
It is important to remember just how severe a punishment academic boycott is within the context of academic actions. Academic boycott itself is a clear violation of academic integrity. Therefore, if it is to be used at all, and used responsibly, then: the justifications must be of the highest academic order; the evidence must be incontrovertible; and the weapon of boycott must be wielded with extreme precision, more like a scalpel than a shotgun.
In that spirit, I am open to discussions of a more limited and responsible approach to academic boycott. For example, perhaps a university like Bar-Ilan, which has built a campus on Palestinian land in the occupied territories, should be boycotted. Or at the very least, perhaps that specific campus should be boycotted. The College of Judea and Samaria, which was built on occupied territory, might also be a worthy target. This seems like a reasonable conversation to me. But it is vitally important to differentiate between those universities that directly participate in colonial actions and those that happen to be in a nation that engages in colonial activities. Indeed, every American university and nearly every European university is within a nation that has spent centuries engaging in colonial activities and continues to do so.
It would also be worth discussing boycott if Israeli universities on Israeli lands were to develop policies that restrict their own academic freedom. And indeed, there is evidence of isolated incidents that are worthy of protest. However, isolated instances are not enough. The official or systemic denigration of academic freedom by a university would be the better litmus test for so drastic a step. For example, if a school refused to hire Arab scholars or punished scholars who published material critical of the Israeli occupation. A school that disavows its own academic freedom has no academic freedom to protect, but such is not currently the case in Israel so far as I know, and hopefully will never be.
Both as a scholar and a human being, I generally pride myself on being open-minded. I have heard many arguments in favor of a wholesale academic boycott of Israel. Some of them have been quite flawed, but many of them are important, well thought out, and worth examining further. Along the way, my own attitudes have evolved and grown, and I suspect they will continue to as I learn more and continue to discuss the issues, i.e., as I engage in the free flow of information and ideas.
I am more receptive to the idea than I used to be, allowing for the limited causes and approaches I've outlined in this essay. However, while I have been swayed to some degree, I have yet to hear any arguments that trump my basic opposition to academics restricting academic discourse on a wholesale basis. Should I ever encounter one that does, or find that the aggregate of many good ones do, then I reserve the right to change my mind. But until then, I stand opposed to a complete academic boycott of all Israeli universities.
Let me close by saying I think the best thing we academics can do is precisely the opposite of academic boycott. I truly believe that speaking truth to power is a fundamental concern of academia, and that sometimes the most effective place to do so is in the belly of the beast. Instead of cutting off conversations, we should nurture and build them. Instead of boycotting all Israeli universities, people who care deeply about this issue should look for ways to engage them. We should bring our information and ideas to them, and share them. Instead of saying we won't go there or collaborate with them, let's go there and collaborate with them. Perhaps a conference on comparative colonialism. Perhaps a special journal issue on comparative Indigenous Studies. Let the sparks fly, let voices be raised, but better that than silence.
As academics, we are at our best when we are engaging, sharing, writing, and discussing. When seeking ways to effect positive change, we should play to our strengths, not run from them. We should engage colleagues instead of turning our backs on them, including those we disagree with vociferously. Especially those we disagree with vociferously. And we should promote the free exchange of information and ideas instead of restricting them.
For these reasons I stand opposed to a wholesale academic boycott of Israeli universities.
Akim Reinhardt's website is ThePublicProfessor.com
Sughra Raza. Vegas From the 51st Floor Balcony. 2014.
To Kiss the Lips of John the Baptist
by Leanne Ogasawara
Salome doesn't even need to think about it--for she already knows what she wants.
King Herod, mad with love for her, asks if she wouldn't prefer jewels and half his kingdom instead. But Salome stands firm. And so the king has no choice but to deliver the head of John the Baptist on a silver tray.
Oscar Wilde's version of the story, while at first banned in England, was immediately popular in Japan in the late Meiji and early Taisho periods. One of Japan's most famous modern poets, Takamura Kotaro, even included the Wilde version of the story in one of his early poems, Awakening on Winter Mornings (冬の朝のめざめ):
On winter mornings
Even the River Jordan must be covered in a thin layer of ice
Wrapped up in my white blanket there in my bedroom
I imagine how John the Baptist felt
As he baptized Christ
I imagine how Salome felt
As she held John’s severed head
Wilde was not the first -- nor the last -- artist to be fascinated by this idea of a woman gone so mad in love with John that she would rather see him dead than live with the thought that he did not love her. Strauss' opera ends with her passionately kissing the lips of his disembodied head in what must be one of the most badass moments in opera history.
And what then became of his beautiful head?
His head's fate is entwined with the history of Jerusalem: some have claimed that it was interred in Herod's palace, in a city whose history is itself so gruesome and grisly that the story of Salome is but a mere blip.
A few years ago, I was in Shanghai to participate in a conference held on cities and identity at Jiaotong University. I was there presenting a paper on Tokyo -- a city in which I lived throughout my twenties and a place I love very much. Something strange happened to me as I was writing the paper, though. The more I read and wrote about Tokyo, the more I kept thinking and dreaming of Jerusalem -- until I began to wonder whether any two cities could be as different as these two.
Simon Sebag-Montefiore, in his fabulous biography of the city, declares that great cities have great foundation stories -- but what of Tokyo? Known as a city of villages at the edge of the world, Tokyo, it could be said, never acquired the religious gravitas of Jerusalem nor the moral compass of Rome. Neither eternal like Kyoto nor celestial like Jerusalem, Tokyo never took center stage at all. Of course, I think Tokyo is one of the great world cities. But Tokyo lacks the civic resonance of most great cities. While Jerusalem was said to be the navel of the Christian world, Mecca the center of the world, and New York City the center of the universe, Tokyo has always been at the edge of things. A self-proclaimed frontier town of samurai and salarymen, it is a city without a plan.
I love Tokyo--but it is no "holy city."
One day in Jerusalem is like a thousand days, one month like a thousand months, and one year like a thousand years. Dying there is like dying in the first sphere of heaven --Kaab al-Ahbar, Fadail
Known in Arabic as Al-Quds, Jerusalem's very name refers to its holiness. Holy to three religions, it must have, over the long stretch of time, the most tangled and bloody history of any city on earth. Considered by many in the middle ages to be the center of the world, the city inspired aspirations that launched the crusades and countless ocean voyages in search of "Christians and Spices." Sebag-Montefiore goes so far as to say the history of Jerusalem is the history of the world.
So, where does one even begin to write about such a place? Approaching the city vis-a-vis religion, Avner de-Shalit, in his chapter of The Spirit of Cities, calls it the "City of Religion." Karen Armstrong, likewise, centers her book on Jerusalem on the theme of sacred geography. This is no city of science, and so religion is perhaps the most logical place to begin any meditation on the place. Sebag-Montefiore calls the city both holy and harlot: “Jerusalem is the house of the one God, the capital of two peoples, the temple of three religions and she is the only city to exist twice — in heaven and on earth.”
Because it is so full of history and mythological significance, maps of the city traditionally were not grounded in physical reality at all -- for this city transcended physical reality. It was a spiritual location, existing both in heaven and on earth.
Karen Armstrong's book on the city revolves around this notion of spiritual geography, as she believes that the human propensity to assign special meaning to certain places is as old as humanity itself. She says:
People have developed what has been called a sacred geography that has nothing to do with a scientific map of the world but which charts their interior life. Earthly cities, groves, and mountains have become symbols of this spirituality, which is so omnipresent that it seems to answer a profound human need, whatever our beliefs about "God" or the supernatural.
Tying this to the ways in which humans give meaning to their mundane lives, she reminds us that the apprehension of the sacred was once regarded as of crucial importance. Having been raised in LA and spent my adult life in Tokyo, the more I thought of all this, the more I realized how utterly ill-equipped I am to really understand the concept of a holy city -- but at the same time, I must admit, I also grew ever more fascinated, and so I suggested to the conference organizers that we really ought to organize a second conference on holy cities.
In order to make my case, I noted some of the cities around the world which were thought to be holy. There were not that many. There were a few I could come up with which are considered to be good places in which to die, or cities of pilgrimage. But even those cities were not all that numerous. And surely there is no other city that I know of which has been thought to exist both in heaven and on earth. And yet, are there not countless Jerusalems, Zions and Cities upon a Hill?
The future of the world depends on the future of cities. And this must include holy cities, which probably have the most painful histories of all. That said, probably no one will be surprised that my holy cities conference idea was shot down. Still, after I returned to Los Angeles, my fascination and desire to see Jerusalem only deepened -- and at last, it looks like I am finally going to see this place of my dreams, for next month, my astronomer and I will be heading there. And I am so curious about how the actual city will compare to this "city of religion" of my imagination.
This is not the first obsession I have had with a place. As a child, I had a strong fascination with Kashmir and was absolutely obsessed with seeing it. It started when I was 12, and I read about it and dreamt of seeing it throughout my teenage years. At last traveling there as an adult of 20, I found Kashmir to be every bit as wondrous and romantic as I had imagined. I wonder if this time won't be the same? Though I guess I am a lot older now and maybe not so prone to falling in love with places as when I was a girl.
In his essay on the city, de-Shalit wrote evocatively about people going mad in Jerusalem. Known as Jerusalem Syndrome, it reminds me a bit of the reaction pilgrims have to relics, like believers who once tried to bite off bits of the True Cross. Considered one of the top ten relics, the head of John the Baptist was like a Holy Grail for some. The relics associated with the Saint are among the most important in all of Christendom.
Perhaps nothing gets closer to my own feelings about the history of Jerusalem than the story of Salome. My Palestinian friend G reminds me to remember that "believers lived in peace in Jerusalem for centuries." But reading history books on Jerusalem, it seems as if there was hardly anything else but grisly bloodletting and religious fanatics and nutters. De-Shalit calls it "a serious place." Serious and intense, indeed; I agree with Armstrong that there is a deep longing and pain associated with the desire for a kind of reconciliation that is at the heart of devotion to a holy place.
Like so many people, I suppose I also have a deep longing and pain for this place. And maybe, like the unendingly fascinating scene from the opera depicted above, in which the artist re-imagines what the Bible has written (for of course only a woman deeply in love with a man could wish for his head to be cut off and presented to her on a silver platter), traveling there will feel somehow like that all-too-human moment of arrival when Salome pressed her lips against the cold lips of her beloved John the Baptist's severed head.
Bundling, Dream Space, Love, and the Farmer’s Daughter
by Bill Benzon
The other day I was reading an old post an eBuddy of mine, Michael Cobb Bowen, had written about the possibility of a female Viagra-type drug. Michael ended the post by observing:
Sex is dirty, complicated and embarrassing. You have to get naked and vulnerable. In fully formed human beings, that takes some doing and some mutual obligation. More than we think we know, and more than most are willing to say.
In thinking about it – how, say, vulnerability "takes some doing" in "fully formed human beings" – my mind wandered to bundling, an old courtship practice I'd learned about in my teens and, in the worldly wisdom of youth, thought rather prudish and quaint.
Of bundling the Wikipedia tells us:
Traditionally, participants were adolescents, with a boy staying at the residence of a girl. They were given separate blankets by the girl's parents and expected to talk to one another through the night. The practice was limited to the winter and sometimes the use of a bundling board, placed between the boy and girl, ensured that no sexual conduct would take place.
I am no longer an adolescent. I have learned that sexuality is not, in reality, so simple as it was in my pristine adolescent fantasy.
Perhaps there is wisdom in bundling.
The fact that precautions were taken against sexual activity indicates both that people were fully aware of sexuality and that they wanted to prevent the practice thereof. That I can understand, but then why incur the risk of having the courting couple sleep together in the first place? If the object is to have them talk, why not let them talk in the swing on the front porch, or sitting in the front parlor? Why have them talk at night, and in bed?
There is a possible answer. When we are sleeping we are, in the crudest possible way, most vulnerable. We are open to surprise physical attack. Thus we take great precautions to ensure that our sleeping places are safe. Moreover, no longer tethered to the here and now, the mind is free to wander.
The vulnerability Bowen had in mind is not physical; it's psychological, the vulnerability of dream space. It is as though the courting couple was to enter dreamland together and, through talking, share their dreams. I am reminded of a passage from John Milton's Doctrine and Discipline of Divorce, where he asserts: "God in the first ordaining of marriage taught us to what end he did it, in words expressly implying the apt and cheerful conversation of man with woman, to comfort and refresh him against the evil of solitary life, not mentioning the purpose of generation till afterwards, as being but a secondary end in dignity."
Conversation before sex.
It's not at all clear to me that the conversation of bundling couples would be "cheerful," but I rather doubt that they talked of the weather or the stock market. I can imagine, in fact, that they might well have found that talking difficult and awkward at first, that they had to figure out just how to have intimate conversation, as such does not come naturally. I would like to think that they were in fact learning how to become vulnerable.
That is, part of being a fully formed human being is the capacity for deep intimacy with another. Bundling thus served as a training ground for such intimacy. Perhaps the idea was that if and when the courting couple married, they would then be sexually comfortable with one another.
This is all speculation. Let's flesh it out. Upon learning of my interest in such matters a colleague, Charles Cameron, gathered some passages about religious practices in which male clerics would enjoy chaste sleep with women as a mode of spiritual practice. While I recommend the whole set to you, I'm going to look at only one of them, from a 1906 book by William Graham Sumner: Folkways: A Study of the Sociological Importance of Usages, Manners, Customs, Mores, and Morals (the complete text is available through Project Gutenberg).
576. Bundling. One of the most extraordinary instances of what the mores can do to legitimize a custom which, when rationally judged, seems inconsistent with the most elementary requirements of the sex taboo, is bundling. ... Christians, in the third and fourth centuries, practiced it, even without the limiting conditions which were set in the Middle Ages. Having determined to renounce sex, as an evil, they sought to test themselves by extreme temptation. It was a test or proof of the power of moral rule over natural impulse. "It was a widely spread custom in both the east and the west of the Roman empire to live with virgins. Distinguished persons, including one of the greatest bishops of the empire, who was also one of the greatest theologians, joined in the custom. Public opinion in the church judged them lightly, although unfavorably." ...
577. Two forms of bundling. Two cases are to be distinguished: (1) night visits as a mode of wooing; (2) extreme intimacy between two persons who are under the sex taboo (one or both being married, or one or both vowed to celibacy), and who nevertheless observe the taboo.
578. Mediæval bundling. The custom in the second form became common in the woman cult of the twelfth century and it spread all over Europe. As the vassal attended his lord to his bedchamber, so the knight his lady. The woman cult was an aggregation of poses and pretenses to enact a comedy of love, but not to satisfy erotic passion.
Here Sumner is talking about courtly love, the subject of C. S. Lewis's classic, The Allegory of Love. The courtier regards his beloved as an object of almost religious veneration; she inspires him and her love ennobles him. Courtly love is often regarded as a precursor to romantic love, at least in the West, though the matter is complex and debated (see my comments in this post on biology, love, and culture). We'll have to set all that aside, though. Let's return to Sumner:
The custom spread to the peasant classes in later centuries, and it extended to the Netherlands, Scandinavia, Switzerland, England, Scotland, and Wales, but it took rather the first form in the lower classes and in the process of time. In building houses in Holland the windows were built conveniently for this custom. "In 1666-1667 every house on the island of Texel had an opening under the window where the lover could enter so as to sit on the bed and spend the night making love to the daughter of the house." The custom was called queesten. Parents encouraged it. A girl who had no queester was not esteemed. Rarely did any harm occur. If so, the man was mobbed and wounded or killed.... This was the customary mode of wooing in the low countries and Scandinavia. In spite of the disapproval of both civil and ecclesiastical authorities, the custom continued just as round dances continue now, in spite of the disapproval of many parents, because a girl who should refuse to conform to current usage would be left out of the social movement.... The custom is reported from the Schwarzwald as late as 1780. It was there the regular method of wooing for classes who had to work all day. The lover was required to enter by the dormer window.
In short, bundling and its kin have been around for a long time and are widespread.
Let's take a slightly more detailed look by consulting "Little Known Facts about Bundling in the New World," which was privately printed in 1938 by A. Monroe Aurand, Jr. Here we find a statement from "The Mentor" in 1929:
There were districts in New England where the bundling light was a beacon to the farm lad who, of a Saturday night, went trudging afoot or on horse up the roads invoking and even daring fate. The Yankee with daughters to wed advertised the fact in this poetic manner. He had merely to put a candle in his window (more often it was the mother who lighted it or the marriageable girl herself) and bide the family's time.
That fate might not find her unreceptive, the daughter thus offered for mating enjoyed the distinction of a room of her own and a bed of feathers. To this she was wont to retire early ...
Presently the knight-errant, seeing the light, halted in his quest and tapped briskly on the pane ...
Notice the reference to knight-errant – a faint echo of 12th Century courtly society?
Aurand also informs us that, in a world where beds were sometimes scarce, bundling had its more mundane uses:
Bundling Was a Legitimate Custom, to all intents and purposes - with all its dangers - among most of the American colonists, in one way or another in those early days...
The custom, happily for all concerned, was not confined alone to the courting couples, but was extended to army officers traveling from place to place, the good old peddler, and the traveling salesman; the minister and the doctor had the privilege, if they cared to exercise it; candidates for office could expect to be "invited" to join the family, or the daughter "in bed," if they had no fear as to some of the constituency raising objections as to "morals."
And that brings us to a well-known family of jokes.
The Farmer's Daughter
For the sake of argument, let us assume that the primeval farmer's daughter joke went something like this:
A traveling salesman's car breaks down on a country road one evening. He is miles from town. He walks to a nearby farmhouse, and the farmer doesn't have a phone, but says he'll take the salesman into town in the morning. Since the salesman isn't going anywhere, the farmer offers to put him up for the night. The condition is that he'll have to sleep with his daughter because there aren't any other beds. He is warned to behave himself. The farmer's daughter, who is drop dead gorgeous, is almost 20 years old and has a shape that would easily qualify her as a centerfold.
At bedtime, the farmer's daughter puts a pillow between herself and the salesman. She explains that her father told her to put the pillow there to separate the two of them. Nothing happens that night.
In the morning, the salesman is stowing his bag in the back of the farmer's pickup when he sees the farmer's daughter feeding the chickens on the other side of the fence. He walks up to the fence and offers the farmer's daughter a thank you for sharing her room and her bed. The farmer's daughter walks up to the fence and tells the salesman that he is welcome, and then flashes a bright smile at him and winks. The salesman smiles and says that he has half a mind to climb over the fence and kiss her. She says, "If you can't climb over a pillow, how you gonna climb over this here fence?"
While one doesn't need to know about bundling in order to get the joke, that pillow does seem to derive from the practice. Somewhere back in the pool of conversations that produced this joke, someone knew about bundling.
The joke itself seems both obvious and subtle. It is the farmer himself who suggests the arrangement, and he warns the man to behave. The daughter who puts the precautionary pillow in place, on daddy's instructions, is also the daughter who comes on to the salesman as he leaves. The suggestion, of course, is that he could have had her if he'd removed the pillow.
That, of course, is only one version of the joke. This web page tells an elaborate variation involving two daughters, in which the salesman has sex with both of them. But there's no hint of bundling: the salesman doesn't actually sleep with either daughter; the sex takes place in his car.
This page gives several such jokes, none of which hints at any relationship to bundling. I like this one because it crosses the farmer's daughter joke with the Polish joke (though, of course, the teller can alter the ethnicities to suit local prejudice):
Three guys were driving in a car when it broke down. One was Irish, one Italian, and one Polish.
When their car broke down, they walked to the nearest house. It was raining, so they asked if they could stay the night.
The farmer said yes as long as they didn't touch his daughter.
So that night, the farmer's hot daughter invited the Irish guy to her room, but to get to her room they had to walk past the farmer's room, where his cat slept in the doorway.
The Irish guy goes over and the floor squeaks. The farmer wakes up and says, "What was that?"
The Irish guy quickly went "meeeeoowww." The farmer went back to sleep, and the Irish guy went to the girl's room and they had sex.
Next she wanted the Italian guy, so he went over and the same thing happened: the floor squeaked, the farmer woke up, "meeeowww," the farmer went back to sleep.
Finally the Polish guy goes over, and the floor squeaks. The farmer asks again, "What was that?"
The Polish guy responds, "It's me, the cat!"
Notice, however, that while any indication of bundling has disappeared from the joke, the story is still about a farmer's daughter. Why? There are two obvious considerations. The rural locale motivates the basic situation: a man needs a place to spend the night. And then there are the connotations of rural, at least to city slickers and suburbanites: backwoods, primitive, earthy, animal.
On the whole I am inclined to believe that intimate conversation between husband and wife is a cultural invention; marriage based on it certainly is. It likely owes a debt to bundling as a form of religious practice. Bundling itself seems, at various times and places, to be more casual, owing nothing to religious aspiration and discipline. Bundling in turn has given us the farmer's daughter, who promptly forgot about it.
Such are the loose and profligate ways of culture.
* * * * *
Monday, March 31, 2014
Are women too emotional to be effective leaders?
by Quinn O'Neill
It is a widely held view that women are more emotional than men, and some argue that this makes them unsuitable for positions that demand important, cool-headed decision making. The argument often rears its head in discussions about women in politics - particularly as prospective presidents - and I've heard it asserted by both males and females.
The claim that women are more emotional should immediately raise the question of what we mean by emotional. Perhaps we're referring to the intensity at which one experiences an emotion. It's quite possible that women do feel emotion more intensely, but this would be difficult to establish with certainty. Emotions are subjective in nature, as are individuals' ratings of their intensity. Would two people experiencing the same emotion at the same intensity necessarily rate it similarly? It's hard to say.
Alternatively, we might equate emotionality with emotional demonstrativeness. In this sense, a person crying at a sad movie would be deemed more emotional than his or her dry-eyed companion, even if both are feeling equally sad. In this context, one might guess that women are indeed more emotional than men. It seems to me, at least, that they are more likely to cry when watching a sad movie, and more likely to cry in public for other reasons as well. It's important to consider, however, that social norms and expectations differ for men and women when it comes to crying, with it generally being more acceptable for females. If crying were equally acceptable for both sexes, would women still cry more often? Maybe. Maybe not.
It may also be the case that media portrayals of men and women distort our views on gender and crying. In the political domain, Hillary Clinton's tears seemed to garner a lot more media attention - particularly of the negative variety - than those of George Bush junior or senior, Barack Obama, or Joe Biden. Jessica Wakeman, writing for FAIR, detailed the sexist media portrayal of Clinton's emotional display.
Whether we equate emotionality with the intensity of the experience or with demonstrativeness, there's a wide array of emotions to consider aside from sadness. What about anger? When angry, which sex is more likely to punch walls or other people? The vast majority of violent crime is committed by men, and while not all incidents may result from emotions getting the upper hand, I'd guess that a large proportion does. Violent crime certainly isn't the result of the kind of rational, level-headed decision-making we expect of good leaders.
And what about other emotions, like happiness, jealousy, fear, sadness, disgust, and shame? If we're going to make a blanket statement like "women are more emotional than men" or "women are too emotional to lead", should we not consider these too? By "emotional", are we referring to all emotions or just to some? Are women too happy to lead? Too prone to disgust? Too fearful?
It isn't really clear how the intensity of emotional experience or emotional demonstrativeness might impair leadership. What might be problematic, however, is a tendency to let one's emotions influence decision-making. This doesn't mean, of course, that there's no place for empathy and consideration of others' feelings when making important decisions, but that the decisions should be made carefully, in a well-reasoned manner, and with consideration of all of the facts at hand. We might ask then, which gender is more likely to allow emotion to cloud judgement and to influence behavior? Gender differences in this respect are similarly difficult to assess.
Violent and destructive responses to anger would seem more suggestive of unsuitability for leadership than crying at a sad movie, but I wouldn't argue on this basis for exclusive leadership by one sex. There are too many emotions to consider, too many different contexts with different gender norms and expectations, and too much variability among individuals of a given gender.
Even if we define emotionality as a tendency to allow emotion to influence judgment, it's only one of many factors we might consider in suitability for leadership, and arguably not one of the more important ones. Leadership itself is complex, and comes in a variety of styles. Forbes outlined 10 qualities that make a great leader, among them honesty, the ability to delegate, communication, a sense of humor, confidence, commitment, and the ability to inspire. A CNN piece offered 23, including focus, respect, passion, persuasion abilities, compassion, and integrity. None of these qualities requires that one refrain from feeling or showing emotion. In fact, some might benefit from greater emotional savvy. Communication, passion, persuasiveness, and compassion, for example, would be enhanced by an ability to understand and engage others emotionally. So, depending on how we define emotionality, it could be an asset in leadership.
Many of these attributes are relatively rare, and rarer still in combination. Take communication, for example. Most of us can communicate basic ideas and information in everyday settings, but the ability to speak effectively in front of large audiences in high pressure circumstances is much less common. To speak confidently and passionately and persuasively and inspirationally in front of thousands of people is a rarer skill set still, and it doesn't come as a package deal with a Y chromosome and a penis. An array of attributes suitable for effective leadership occurs in a minority of members of any gender.
Emotionality aside, evidence abounds that women can be effective leaders. In their ranking of the world's 50 greatest leaders, Fortune included quite a few females, like Angela Merkel, Aung San Suu Kyi, Christine Lagarde, Maria Klawe, Mary Robinson, Ellen Kullman, Susan Wojcicki, Arati Prabhakar, Juliana Rotich, and Gail Kelly. Some women, at least, can be effective leaders.
A study recently published in the Journal of International Affairs suggested that female leadership may be advantageous in some conditions. The authors found that, in ethnically diverse countries, female leaders outperform their male counterparts in growing the gross domestic product, a measure of national economic progress. On average, having a female leader was associated with a 6% higher GDP growth rate than having a male leader.
A separate study described at the Harvard Business Review and MIT News found that teams perform better when they include more women. Author Thomas Malone commented: "The standard argument is that diversity is good and you should have both men and women in a group. But so far, the data show, the more women, the better." Coauthor Anita Woolley added, "We have early evidence that performance may flatten out at the extreme end—that there should be a little gender diversity rather than all women."
Not only is being female compatible with effective leadership; female representation may actually be advantageous. It should concern us, then, that Western countries have relatively low female representation in government. Out of 189 countries, the US ranks 83rd, with women comprising less than 20% of government. The UK ranks 64th and Canada 54th. Myths and stereotypes about women in leadership may contribute to this imbalanced representation and prevent government from functioning optimally.
The claim that women are too emotional to be effective leaders is beyond absurd when we consider the vast array of human emotions and gender-specific social norms that dictate how we express them, the multitude of qualities of effective leaders, the variation among individuals within a single gender, and the abundance of evidence for the ability of women to lead effectively. It is a worse-than-baseless generalization that may ultimately handicap our leadership. Such claims should make us angry - and not because we're "emotional" or because we belong to a particular gender, but because we're informed, thinking people who care about our country's future.
Sharing Our Sorrow Via Facebook
by Jalees Rehman
Geteiltes Leid ist halbes Leid ("Shared sorrow is half the sorrow") is a popular German proverb which refers to the importance of sharing bad news and troubling experiences with others. The therapeutic process of sharing takes on many different forms: we may take comfort in the fact that others have experienced similar forms of sorrow, we are often reassured by the empathy and encouragement we receive from friends, and even the mere process of narrating the details of what is troubling us can be beneficial. Finding an attentive audience that is willing to listen to our troubles is not always easy. In a highly mobile, globalized world, some of our best friends may be located thousands of kilometers away, unable to meet face-to-face. The omnipresence of social media networks may provide a solution. We are now able to stay in touch with hundreds of friends and family members, and commiserate with them. But are people as receptive to sorrow shared via Facebook as they are in face-to-face contacts?
A team of researchers headed by Dr. Andrew High at the University of Iowa recently investigated this question and published their findings in the article "Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook". The researchers created three distinct Facebook profiles of a fictitious person named Sara Thomas who had just experienced a break-up. The three profiles were identical in all respects except for how much information was conveyed about the recent (fictitious) break-up. In their article, High and colleagues use the expression "emotional bandwidth" to describe the extent of emotions conveyed in the Facebook profile.
In the low bandwidth scenario, the profile contained the following status update:
"sad and depressed:("
The medium bandwidth profile included a change in relationship status to "single" in the timeline, in addition to the low bandwidth profile update "sad and depressed:(".
Finally, the high emotional bandwidth profile not only contained the updates of the low and medium bandwidth profiles, but also included a picture of a crying woman (the other two profiles had no photo, just the standard Facebook shadow image).
The researchers then surveyed 84 undergraduate students (enrolled in communications courses, average age 20, 53% female) and presented them with screenshots of one of the three profiles.
They asked the students to imagine that the person in the profile was a member of their Facebook network. After reviewing the assigned profile, each student completed a questionnaire asking about their willingness to provide support for Sara Thomas using a 9-point scale (1 = strongly disagree; 9 = strongly agree). The survey contained questions that evaluated the willingness to provide emotional support (e.g. "Express sorrow or regret for her situation") and network support (e.g. "Connect her with people whom she may turn to for help''). In addition to being queried about their willingness to provide distinct forms of support, the students were also asked about their sense of community engendered by Facebook (e.g., "Facebook makes me feel I am a part of a community'') and their preference for online interactions over face-to-face interactions (e.g., "I prefer communicating with other people online rather than face-to-face'').
High and colleagues hypothesized that the high emotional bandwidth profile would elicit greater support from the students. In face-to-face interactions, it is quite common for us to provide greater support to a person – friend or stranger – if we see them overtly crying, so the researchers' hypothesis was quite reasonable. To their surprise, they found the opposite: the willingness to provide emotional or network support was significantly lower among students who viewed the high emotional bandwidth profile. For example, average emotional support scores were 7.8 among students who saw only Sara's "sad and depressed:(" update (low bandwidth), but 6.5 among students who also saw the image of Sara crying and her relationship status changed to single (high bandwidth). Interestingly, students who preferred online interactions over face-to-face interactions, or who felt that Facebook created a strong sense of community, responded positively to the high bandwidth profile.
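To make the reported comparison concrete, here is a minimal Python sketch of how mean ratings on a 9-point scale might be compared across two conditions. The ratings below are simulated around the reported means, and the group size of 28 (84 students split evenly across three profiles) is my assumption; this illustrates the kind of test involved, not the study's actual analysis.

```python
# Illustrative only: simulated ratings, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 28  # assumed: 84 students split evenly across three profiles

# Hypothetical 9-point-scale ratings centered near the reported means
# (7.8 for low bandwidth, 6.5 for high bandwidth), clipped to 1-9:
low_bw = np.clip(rng.normal(7.8, 1.0, n_per_group), 1, 9)
high_bw = np.clip(rng.normal(6.5, 1.0, n_per_group), 1, 9)

# Independent-samples t-test comparing the two conditions:
t, p = stats.ttest_ind(low_bw, high_bw)
print(f"low: {low_bw.mean():.1f}, high: {high_bw.mean():.1f}, p = {p:.4f}")
```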
There are some important limitations to the study. The students were asked whether they would provide support to a fictitious person by imagining that she was part of their Facebook friends network. This is a rather artificial situation, because actual supportive Facebook interactions occur among people who know each other, and it is not easy to envision supporting a fictitious person whose profile one sees for the first time. Furthermore, "emotional bandwidth" is a broad concept, and it is difficult to draw general conclusions about it from the limited differences between the three profiles. An in-depth analysis of "emotional bandwidth" would require a larger sample, a broader continuum of bandwidth differences (e.g. profiles with pictures of a fictitious Sara Thomas who is not crying, or with other status updates), and scenarios beyond break-ups (e.g. profiles of a fictitious grieving person who has lost a loved one).
The study by High and colleagues is an intriguing and important foray into the cyberpsychology of emotional self-disclosure and supportive communication on Facebook. This study raises important questions about how cyberbehavior differs from real world face-to-face behavior, and the even more interesting question of why these behaviors are different. Online interactions omit the dynamic gestures, nuanced intonations and other cues which play a critical role in determining our face-to-face behavior. When we share emotions via Facebook, our communication partners are often spatially and temporally displaced. This allows us to carefully "edit" what we disclose about ourselves, but it also allows our audience to edit their responses, unlike the comparatively spontaneous responses of a person sitting next to us. Facebook invites us to use the "Share" button, but we need to remember that online "sharing" is a sharing between heavily edited and crafted selves that is very different from traditional forms of "sharing".
Acknowledgments: The images from the study profiles were provided by Dr. Andrew High, copyright of the images - Dr. Andrew High.
Reference: High, A.C., Oeldorf-Hirsch, A., & Bellur, S. (2014). Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook. Computers in Human Behavior, 34, 79-88.
The Rationalist and the Romantic
By Namit Arora
A few weeks ago, the Indian publisher Navayana released an annotated, "critical edition" of Dr. BR Ambedkar’s classic, Annihilation of Caste (AoC). Written in 1936, AoC was meant to be the keynote address at a conference but was never delivered. Unsettled by the text of the speech, the caste Hindu organizers of the conference had withdrawn their invitation to speak. Ambedkar, an "untouchable", self-published AoC and two expanded editions, including MK Gandhi’s response to it and his own rejoinder.
AoC, as S. Anand points out in his editor’s note, happens to be "one of the most obscure as well as one of the most widely read books in India." The Navayana edition of AoC carries a 164-page introduction by Arundhati Roy, The Doctor and the Saint (read an excerpt). The publisher’s apparent strategy was to harness Roy to raise AoC’s readership among savarna (or caste Hindu) elites to whom it was in fact addressed, but who have largely ignored it for over seven decades, even as countless editions of it in many languages have deeply inspired and empowered generations of Dalits.
Meanwhile, this new edition has drawn a mixed response. Expressions of praise coexist alongside howls of disapproval and allegations of an ugly politics of power and privilege, co-option and misrepresentation. To many Dalit and a few savarna writers and activists, this Roy-Navayana project—Navayana is a small indie publisher run by Anand, a Brahmin—is a bitter reminder that no Dalit-led edition of AoC can get such attention in the national media, that gimmicks are still needed in this benighted land to "introduce" AoC and Ambedkar to the savarnas, that once again, caste elites like Roy, with little history of scholarly or other serious engagement with caste (as Anand himself suggested about Roy three years ago), are appropriating AoC and admitting the beloved leader of Dalits into their pantheon on their own terms—all while promoting themselves en route: socially, professionally, and financially (see this open letter to Roy and her reply).
Such responses may seem provincial, hypersensitive, or even paranoid to some, but they shouldn’t be brushed aside as such. They point to a universally toxic dynamic of power and knowledge to which savarna elites are so alert and sensitive in colonial, orientalist contexts, yet so blind to its parallels within India, propagated by their own class. Is this because it’s easier to see prejudice directed from above at one’s own class, versus the prejudice it doles out below? Especially on a fraught topic like caste, one’s social location shapes how one frames and conducts a debate on annihilating caste, its current state, and the heroes and villains in this fight. The folks at Navayana—a leading English language publisher of anti-caste books, including many by Dalit authors—would surely nod in agreement.
What’s notable in this case is the intensity of disapproval—and how it blindsided Navayana—even before many of the protesting Dalits, men as well as women, had read Roy’s full introduction. It was clear that in their estimation, Roy simply hadn’t earned the stripes to be the sole introducer of a "critical edition" of AoC. Or perhaps, having read the excerpt and her interview, many Ambedkarites didn’t like what they saw as Roy’s facile and unjustified account of Ambedkar’s weaknesses, as in his views on modernity, urbanization, and Adivasis. Wouldn’t it have been more prudent and honorable for Navayana to have also included in this book other "introductions" by Dalits who have engaged the longest with AoC and relate to it differently? Or to publish Roy’s essay as a standalone book? Only time will tell how this project impacts anti-caste struggles and academia’s output in India and abroad. Meanwhile to Anand, a self-described "Ambedkar zealot" who sees himself as a radical champion of the Dalit cause and who I believe published this edition in that spirit, this turn of events—with many Dalit friends and activists questioning his agenda and lumping him with caste Hindus he has ridiculed before—must feel like a sad and painful desertion.
Politics and prudence of this project aside, it’s worth remembering that Roy’s introduction is also a subjective response of a writer to a text that clearly moved her. Like all living classics, AoC too requires new readings in every age, including of celebrity writers relatively new to Ambedkar, as Roy evidently is. Savarna writers may be late but they too are entitled to make him their own as they see fit. Others, in turn, are entitled to critique such efforts, as many Dalits and non-Dalits have done. They can try to show how a writer’s analysis and assessments are shaped by her identity, ideology, and privilege. In what follows, I offer my own response to Roy’s introduction and reflect on the portrait of Ambedkar that I see in it—an exercise shaped no doubt by my own identity, ideology, and privilege.
Roy’s strategy in her introduction is to first lower Gandhi from the high perch of reverence he still commands among caste Hindus (e.g., the Anna Hazare movement, Bollywood "Gandhigiri", etc.). This, she reckons, is necessary to make room for Ambedkar. Here Roy differs from most mainstream historians who, even when they elevate Ambedkar, don’t do so at the expense of Gandhi. "They should both be heroes," said Ramchandra Guha in 2012. "Why must we diminish one figure to praise another? India today needs Gandhi and Ambedkar both." In a recent essay, Caste Iron, I argued that Guha’s is "a specious position given how much the two sides differed on matters of great significance to a liberal democracy, such as advancing equal opportunity, safeguarding minorities, and fighting systemic discrimination." Add to this their approaches to caste, religion, politics, and economics. As the scholar Gail Omvedt noted, the two men represented "not simply a confrontation of two idiosyncratic leaders but of two deeply divergent conceptions of the Indian nation itself." To compare them is to compare more than just two individuals. Roy too finds their major differences irreconcilable, where praising Ambedkar can imply diminishing Gandhi—and vice versa.
Roy revisits Gandhi’s South African past to furnish a persuasive account of his life and mind that’s nothing like the staple of history textbooks. She admits that her account is purposefully selective, since "Gandhi actually said everything and its opposite". Roy points out that in South Africa, Gandhi harbored a host of racial prejudices, identifying more with the whites and upper-class Indians and looking down disdainfully on black Africans and indentured Indians. Roy's portrait of Gandhi—with his views on race, caste, women, labor, religion, and more—helps establish continuity with his later attitudes in India, especially his faith in the varna system, his doctrine of "trusteeship", and his empathy deficit for "untouchables", evident in his patronizing stance and opposition to legislative reservations for them. Roy’s focus on Gandhi seems excessive at times—the main body of AoC mentions Gandhi only once—but it helps illuminate many attitudes that Ambedkar was up against and the context of their exchange that Ambedkar later appended to the AoC.
Roy’s essay, studded with soaring prose and rhetorical flourishes, also covers a lot more ground: how caste manifests itself in the modern economy and persists in so many professions and institutions of democracy, how the savarnas wield "merit" as their "weapon of choice" to protect their privileges, and the discrimination and violence Dalits still face today. She describes Ambedkar’s family background, his early "encounters with humiliation and injustice", his satyagrahas and other civil rights campaigns for "untouchables" and women, his call for a separate electorate and the events that led to the Poona Pact, the causes of the historic rift between Ambedkar and the Left, and more.
Why has caste survived for so long? Roy cites Ambedkar, who blamed it on a system of "graded inequality" in which, he wrote, "there is no such class as a completely unprivileged class except the one which is at the base of the social pyramid. The privileges of the rest are graded ... each class being privileged, every class is interested in maintaining the system." Thus, she concludes, "there is a quotient of Brahminism in everybody, regardless of which caste they belong to [and this] makes it impossible to draw a clear line between victims and oppressors." While true, Roy might have added that those near the top of this pyramid of privilege and resources nevertheless deserve the greatest censure, for they have the fewest excuses for not reforming the system and the institutions they control. Eventually, she writes, such Brahminism "precludes the possibility of social or political solidarity across caste lines", and that’s why caste has survived for so long.
Roy faults Ambedkar for his views on the Adivasis, claiming that he didn’t understand them. He saw them as backward, in a "savage state", and in need of civilizing. "Ambedkar speaks about Adivasis in the same patronising way that Gandhi speaks about untouchables", Roy said in an interview. He displayed against them "his own touch of Brahminism", she writes in the introduction. Quoting Ambedkar from AoC, she asks: "How different are Ambedkar’s words on Adivasis from Gandhi’s words on Untouchables?" Some of these judgments feel gratuitous; I think more sympathetic readings are possible, but the case she makes, given Ambedkar’s high standards, is at least a head-scratcher. She goes further, however, and claims that Ambedkar’s "views on Adivasis had serious consequences. In 1950, the Indian Constitution made the state the custodian of Adivasi homelands", making them "squatters on their own land." Whether Ambedkar or anyone else—given the dominant mood of territorial consolidation in the new nation state—ever had any room to manoeuvre on this front, she does not say.
Roy has, with great vigor and courage, championed a host of social justice issues in India and abroad. Not surprisingly, she extols Ambedkar’s radical egalitarianism across caste, class, and gender, and his language of dignity and rights. She enters more contentious terrain when she evaluates Ambedkar’s approach to modernity. This is the Roy who, in her non-fiction, has argued from positions that could be called anti-modern, anti-industrialization, anti-urbanization, anti-globalization, and even anti-statist. We could see these as pillars of her own utopia, reminiscent more of Gandhi than Ambedkar. Gandhi, she says, "believed (quite rightly) that the state represented violence in a concentrated and organized form". He was "prescient enough to recognize the seed of cataclysm that was implanted in the project of Western modernity." Ambedkar on the other hand, writes Roy, recoiling from the iniquities of the past, "failed to recognize the catastrophic dangers of Western modernity." The very existence of Adivasis, fighting "the pitiless march of modern capitalism", she claims, "poses the most radical questions about modernity and ‘progress’—the ideas that Ambedkar embraced". She adds,
"The impetus towards justice turned Ambedkar’s gaze away from the village towards the city, towards urbanism, modernism, and industrialization—big cities, big dams, big irrigation projects. Ironically, this is the very model of ‘development’ that hundreds of thousands of people today associate with injustice, a model that lays the environment to waste and involves the forcible displacement of millions of people from their villages and homes by mines, dams and other major infrastructural projects."
Many will recognize this recurrent feature in Roy’s writing: daring but simplistic, earnest but overstated, a purveyor of partial truths. She might as well rail against modern medicine because of its side-effects, grossly unequal access, and rampant malpractices. Roy concludes that "The rival utopias of Gandhi and Ambedkar represented the classic battle between tradition and modernity". But Gandhi’s fond fantasy of an idyllic village was very much a byproduct of modernity, so a sharper framing of their differences might be Romanticism vs. Enlightenment Rationalism. While Gandhi raged against machines, railways, hospitals, modern education, and explained floods and earthquakes as divine punishment, Ambedkar eulogized "reason, the purpose of which is to enable man to observe, meditate, cogitate, study and discover the beauties of the Universe and enrich his life." He valued "sufficient leisure" that allowed humans to cultivate their minds, adding that "Machinery and modern civilization are thus indispensable for emancipating man from leading the life of a brute". Gandhism "is merely repeating the views of Rousseau, Ruskin, Tolstoy and their school." Gandhism harks "back to squalor, back to poverty and back to ignorance for the vast mass of the people." Ambedkar continued,
"The economics of Gandhism are hopelessly fallacious. The fact that machinery and modern civilisation have produced many evils may be admitted. But these evils are no argument against them. For the evils are not due to machinery and modern civilisation. They are due to wrong social organisation which has made private property and pursuit of personal gain matters of absolute sanctity. If machinery and civilisation have not benefited everybody the remedy is not to condemn machinery and civilisation but to alter the organisation of society so that the benefits will not be usurped by the few but will accrue to all."
Whether emerging nations like India ever had the option of rejecting modernity is not a question that Roy seems to have considered. Did other viable models exist in a world where power and prosperity accrued to those who embraced modernism, industrialization, urbanism, a constitutional state, science, public health, social security, and liberal education? Couldn’t an alternative model have turned out to be far worse? It’s true that modernity has also spawned huge new problems but, as always, the picture of gains and losses is decidedly mixed and very intertwined. What do we make of the fact that there is also a genuine mass appetite for modernity, which has spread not by diktat but by diffusion? If this has set us on a collision course with nature, we might as well blame it on the tragic human "weakness" that has come to seek greater dignity, pleasure, and freedom in the short run of human lives. How voluptuously romantic and ultimately counter-productive for highly modern citizens of a liberal state, such as Roy, to stand opposed to something as manifold and irrepressible as "modernity" itself, rather than focusing on the only path that’s been open to us: to influence its unfolding, use its tools to reduce its harms, make it more equitable. Isn’t that precisely what Ambedkar would have done?
This is not to say that Ambedkar’s approach to modernity is beyond criticism. Dalit intellectual DR Nagaraj has offered some in The Flaming Feet and Other Essays. Whether one is persuaded by it or not, it is at least a lot more nuanced than Roy’s animus for modernity itself. "The modern city and its development ethos", wrote Nagaraj, "are bound to annihilate the memories of Dalits and leave them in almost a state of culturelessness. [But] this argument is not usually viewed with sympathy by the majority of Ambedkarites, for they believe there is nothing positive or precious in the memories of Dalits, there is only humiliation and pain." Nagaraj argued that "the disappearance of indigenous technology represents a big civilizational blow to the subaltern castes" but Ambedkarites, lured by modernization and urbanization, didn’t fully realize that concentrated "capital and high-tech-based models of development would in the Indian context inevitably lead to the hegemony of the upper castes over the lower." Keen to escape "certain professions and humiliation in traditional society", Ambedkar didn’t take a critical attitude towards "the practices of erasure within modern development" and didn’t factor into his analysis "the nature of new technology and the social basis of its ownership." He had however realized "the tragedy of a memoryless community". Through his founding of, and mass conversions to, Navayana Buddhism—which Nagaraj calls "one of the most moving chapters of Indian history"—Ambedkar tried "to build a new memory" for Dalits, marking "a decisive break with a certain kind of modernization".
"I did not have to read Ambedkar to understand caste," Roy said at a launch event for this book. "I just had to grow up in an Indian village." This struck me as unusual. I wish she had written about her own journey of awakening to caste iniquities. When did she start thinking about it deeply and seeing things afresh? Personal encounters and discoveries are an effective device in good storytelling. Nonetheless, Roy’s essay has already proven useful for the debates it has provoked. It shows that there are indeed irreconcilable differences between Ambedkar and Gandhi. The same can also be said about Ambedkar and Roy.
Uncle Warren Thanks You For Playing
by Misha Lepetic
"Is it the media that induce fascination in the masses,
or is it the masses who direct the media into the spectacle?"
I usually buy my cigarettes at a corner store on Manhattan's Upper West Side that, not unusually for such establishments, also does a brisk trade in lottery tickets. Now, buyers of both cigarettes and lottery tickets are placing bets on outcomes whose dismal odds are well known. My fellow consumers are betting that they will win something, and I am betting that I won't (I also console myself with the sentiment that I am having more fun in the process). But in both cases, the terms of exchange are clear – we give our cash to the vendor, and buy the option on the pleasure of suspense, waiting to see if we have won. Beyond the potential payout, there really isn't that much more to discuss: the transactions are discrete and anonymous. And in the end, someone always wins the lottery, and someone always lives to a hundred.
I was reminded of the perceived satisfactions of participating in games of chance with hopeless odds after hearing a recent piece on NPR discussing quite the prize: a cool $1 billion for anyone who nailed a 'perfect bracket' – in other words, the accurate identification of the outcomes of all 63 games of the NCAA men's basketball playoffs. Sponsored by a seemingly oddball trinity of Warren Buffett, Quicken Loans and Yahoo!, the prize is, on the face of it, an exercise in absurdity. But its construction is superb, and worth examining further, for reasons that have little to do with basketball or probability, and much to do with the questions it provokes around the value of information.
Now, bracket competitions have been going on at least as long as the tournament itself, which kicked off in 1939. Although brackets are common for other sports, there are unlikely subjects, too: saints and philosophers both have been thrown into pitched, single-elimination battle. But the NCAA bracket holds pride of place, not least because the number of participating teams is much greater than in most other playoffs. This leads to absolutely astonishing odds: if each game is treated as an independent coin toss, the odds of a perfect bracket are 1 in 9.2 quintillion, a number that even Neil deGrasse Tyson might have difficulty contextualizing for us. Of course, the distribution of the initial round favors higher-seeded teams, so barring any first-round upsets, our chances may improve to a balmy 1 in 128 billion.
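The naive arithmetic is easy to check. Here is a minimal Python sketch; note that the 1-in-128-billion figure is the published estimate quoted above, not something derived here:

```python
# Treat each of the 63 games as an independent fair coin toss.
games = 63
naive_outcomes = 2 ** games
print(f"1 in {naive_outcomes:,}")  # 1 in 9,223,372,036,854,775,808 (~9.2 quintillion)

# The ~1 in 128 billion figure credits bettors with predicting the
# lopsided early matchups; relative to the coin-toss model that is
# an improvement of roughly:
informed_estimate = 128_000_000_000
print(f"~{naive_outcomes // informed_estimate:,}x")  # ~72,057,594x
```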
So we have at least an answer to the initial question of "What odds would make you feel comfortable enough to put up $1 billion?" Of course, if someone had won, Warren Buffett, whose net worth clocks in at about $60 billion these days, would have been on the hook, or rather his firm Berkshire Hathaway, whose market cap is five times the size of Buffett's wealth. (I mention both Buffett and his company because Buffett has thrown in a classic game theory move: he is willing to buy out anyone with a perfect bracket going into the Final Four for, say, $100 million.) In any event, it certainly would have been worth seeing the avuncular Oracle of Omaha show up at the door of the lucky winner with a giant cardboard check, just like Ed McMahon used to do with the Publishers Clearing House Sweepstakes. But if the chances of winning are nearly impossible, and there is no cost to enter the contest, we are left with a head-scratcher: who benefits?
There is an obvious pleasure to filling out brackets, of competing for the sake of competition, of measuring ourselves against not just one another but against the unknown. And certainly casual observers of what has become known as the "Buffett bracket" would not be wrong to point out that, on the face of it, Buffett et al. have come up with a great publicity stunt. But a publicity stunt, for all its Barnumesque splashiness, is intrinsically ephemeral. Its principal value lies in the fact that it grabs our attention and confers some brief benefit upon its initiators before sinking beneath the ebb and flow of the 24-hour news cycle. In this age of big data, where the world's most successful technology corporations thrive on dressing up "free" services with ever more finely targeted advertising, we ought to hope that there is a subtler angle.
And there is. Recall the three sponsors of our prize: Berkshire Hathaway, Yahoo! and Quicken Loans. In order to enter the competition, prospective bracketologists (that's a real word) had to visit a Yahoo! page, where they first had to open a Yahoo! account and then fill out a detailed Quicken questionnaire which elicited not just their name, home address, email and phone number but, much more importantly, whether they owned their home or planned to purchase one in the future, and, if they owned one, the current interest rate on the mortgage. For its part, Berkshire Hathaway receives a fee from Quicken and Yahoo! for insuring the competition, ie, in case the payout actually happens, which it never will. Everyone's a winner, baby.
The benefit to these entities – particularly to Quicken, which specializes in mortgage lending – becomes apparent when one combines the quality of the information with the scale of participation. Concerning information, Slate, in one of the few clear-eyed articles on the matter, quotes a mortgage investment banker as saying that "it's not uncommon for companies like Quicken to pay between $50 and $300 for a single high-quality mortgage lead." While Quicken's spokespeople have been at pains to point out that only people who ask will be contacted, the fact is that all of the information on the entry form is required, which allows Quicken to create a massive database from which it can model all sorts of trends and behaviors.
How massive? At first, the organizers limited the number of entrants to 10 million but, based on the response, sensibly increased it to 15 million. At this moment it's unclear how many people actually registered, and I doubt that this number will ever be disclosed. But if we take the low range of what Quicken pays for lead generation and assume that 1 million people opt to be contacted (ie, 10% of the low end of the entrant population), Quicken has acquired $50 million of lead-generation value, and this does not include any revenue from leads that it manages to close. Even if we knock down the 10% by an order of magnitude, Quicken is still enjoying a $5 million freebie (of course, I am assuming honesty on the part of the respondents).
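For concreteness, here is the same back-of-envelope estimate as a short Python sketch; every input is one of the assumptions stated above, not a disclosed figure:

```python
# All inputs are assumptions from the text, not disclosed figures.
entrants = 10_000_000     # low end of the announced entry cap
opt_in_rate = 0.10        # assumed share of entrants who ask to be contacted
price_per_lead = 50       # low end of the quoted $50-$300 per lead

leads = int(entrants * opt_in_rate)
print(f"{leads:,} leads -> ${leads * price_per_lead:,}")  # 1,000,000 leads -> $50,000,000

# Knock the opt-in rate down by an order of magnitude and the
# freebie is still substantial:
print(f"${int(leads * 0.1) * price_per_lead:,}")  # $5,000,000
```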
For its part, Yahoo! gains an equivalent number of users. Obviously, some will already be Yahoo! accountholders, but even if we assume that only half are new users, that is still 5 million fresh fish to subject to new ads, at least for a time. Berkshire Hathaway's benefit, aside from the insurance fee, is less clear, but the language in the contest rules leaves wide open the opportunity for sharing information between Quicken and the conglomerate (and if you have any doubts about the spurious protections afforded by these agreements, have a look at this 60 Minutes report).
So what? People are always giving away something in the hopes that they will gain something that is, in their perception, of even greater value. In the case of the Buffett bracket, even if what they finally get is nothing, I suspect there is still a pleasure in the act of playing – in other words, a bribe. But before discussing bribery, what interests me is the change in what's considered a fair trade. Any economist will maintain that a trade made without coercion is a fair trade, with the libertarian corollary being that people should not be protected from the consequences of their greed and/or stupidity.
But Western law has tended to draw the line at varying points. Nigerian letter scams and boiler-room pump-and-dump schemes are illegal precisely because society has decided that there is a point beyond which people need to be protected from their cupidity. And the terms of engagement and success for the Buffett bracket are rather clear: in this sense, the contest is neither a fraud nor a scam. You pay to play, in a way that may not seem obvious or even harmful. What is not transparent, though, is the purpose to which that data is put, beyond the immediate generation of consent, or how long that data persists. Would people change the way they thought about giving up this information if they knew of the enormous subterranean infrastructure that traffics in their personal details? Would they value it more? But if there are no mechanisms of valuation (ok, fine: free markets) that make the worth of this information apparent, how do we approach this?
Consider what happens when these mechanisms of valuation are not available to us as individuals. The master-stroke of the Buffett bracket is to force an extraordinary, cognitively unresolvable trade: it somehow makes perfect sense to divulge to some corporation the interest rate on your mortgage in order to gain the right to guess the outcome of a bunch of basketball games (a right which you had anyway, minus the impossible prize). And as proof, millions have chosen to do exactly this. The contest's creators rightly discerned that this information is of trivial value to each individual, and yet the networked value of the aggregated information is, to those same creators, enormous. Recall a much-abused quote by Stewart Brand: "Information wants to be free." The anthropomorphism implied here is some awful hippie nonsense, but fortunately that is only a fragment. Here is the full quote (with a full exegesis here):
On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.
In the Buffett bracket we have the resolution of this paradox – of how what is free (as in costless) is transmuted into value (something that is otherwise expensive to obtain). It is quite clear to whom the information is valuable, and the generation of this value is only possible through the vast systems that aggregate millions of bits of data into models that determine and predict behavior, ultimately driving profit. It is also quite clear how lowering the cost of getting information into the system makes it free (again, as in costless). What the internet and the accompanying utter lack of regulation enable is the hyperefficient siphoning off of that information from any willing individual who hasn't the means to determine what his information might actually be worth - which is pretty much no one. As a further consideration, note that most people will forget they entered the contest within weeks of the tournament's end, but that there are no provisions for their information's expiration. We may be done playing the bracket, but the traces of data that we leave behind are never forgotten.
The problem with this analysis (aside from its melodramatic nature) is that it is incomplete. There is no resolution at this moment. Regulation that would give private citizens the right to use their information as an object of the commodity economy (ie, for lease as well as for sale), versus the current state, where it has by default fallen into the realm of the gift economy, is about as likely as a perfect bracket. The best that thinkers such as Jaron Lanier – who has written extensively on the subject – can seem to come up with is a system of micropayments, but the problem with technologists is that they tend to have a dismal grasp of the dismal science. In the meantime, what continues to take place is not so much a fraud or a scam, but really a sort of bribery. As automation continues to replace middle class jobs, we are being bribed for what little we have left that is uniquely our own, and, it being of such little worth to us, we find ourselves willingly trading it for the privilege of, as Žižek says, having "an experience" – in this case, the non-chance to win a billion dollars. This is the heart of ideology, in that it does not need to hide itself. After all, Slate and NPR both published insightful articles on the Buffett bracket and what it meant for participants. There is no need to obfuscate the truth, as it is much more useful for large network actors to be (sufficiently) open about their motives and desires. One doesn't have to look very hard to see that the old Wall Street adage – "They take your money and their experience, and turn it into their money and your experience" – has never been more true, or more subtle, since you are brought to believe that you never had the money in the first place.
So what about the state of the Buffett bracket? Sadly enough, no one made it past the first two days of competition. As fate would have it, the first round saw 14th seed Mercer upsetting 3rd seed Duke, which wiped out a large swathe of punters. Better luck next year, kids. In the meantime, the folks at Quicken have a lot of phone calls to make, and I need to go to the corner store to pick up a fresh pack of smokes. I sometimes think about picking up a lottery ticket while I'm at the counter, too, but somehow never seem to get around to it.
What Is Good Taste?
by Dwight Furrow
I suspect most people would say "good taste" is an ability to discern what other people in your social group (or the social group you aspire to) find attractive. Since most people cannot say much about why they like something, it seems as though good taste is just the ability to identify a shared preference, nothing more.
But looked at from the perspective of artists, musicians, designers, architects, chefs and winemakers, etc. this answer is inadequate. It doesn't explain why creative people, even when they achieve some success, strive to do better. If people find pleasure in what you do and good taste is nothing more than an ability to identify what other people in your social group enjoy, then there is little point in artists trying to get better, since the idea of "better" doesn't refer to any standard aside from "what people like". So it seems like there must be more to good taste than that.
Furthermore, good taste cannot merely be a matter of having a sense of prevailing social conventions because artists and critics often produce unconventional judgments about what is good. Instead, having good taste involves knowing what is truly excellent or of genuine value, which may have little to do with social conventions.
But philosophers have struggled to say more about what good taste is. David Hume, the 18th Century British philosopher, argued that good taste involves "delicacy of sentiment", by which he meant the ability to detect what makes something pleasing or not. In his famous example of the two wine critics, one judged a wine to be good but for a taste of leather he detected; the other judged it good but for a slight taste of metal. Both were vindicated when the cask was emptied and a key with a leather thong attached was found at the bottom.
Thus, Hume seemed to think that good taste was roughly what excellent blind tasters have—the ability, acquired through practice and comparison, to taste subtle components of a wine that most non-experts would miss, and to pass summary judgment on them. The same could be said of the ability to detect subtle, good-making features of a painting or piece of music. The virtue of such analytic tasting of wines is that the detection of discrete components can, at least in theory, be verified by science and thus aspires to a degree of objectivity. Flavor notes such as "apricot" or "vanilla" are explained by detectable chemical compounds in the wine. The causal theory lends itself to this kind of test of acuity, since causal properties can often be independently verified.
Hume's model of taste contains some insight. Skill at discerning elements that ordinary perceivers would miss is one mark of good taste. But I don't think this model is quite right.
Good taste involves evaluating quality, and the quality of a painting, piece of music, or wine is seldom a function of the components of the work taken individually. A wine taster can identify a whole bowl of various fruit aromas wafting from a wine, pronounce the acidity to be bracing and the tannins fine-grained but firm and still have said little about wine quality. Wine quality is a function of structure, balance, complexity, and intensity supplemented by even less concrete features such as deliciousness, power, elegance, gracefulness, or refreshment. None of these features can be detected by analytically breaking down a wine because they are inherently relational, just as describing a painted surface as garish or a piece of music as lyrical would involve relations. No single component can account for them; it is a matter of how the components are related. In wine, even a prominent feature like acidity is not merely a function of pH; perceived acidity differs substantially from objective measures of acidity and is influenced by the prominence of other components such as sugar and tannin levels. None of these relational properties seem amenable to scientific analysis. I doubt that gas chromatography can identify elegance; a wine's balance cannot be appreciated by measuring pH and sugar levels.
Identifying these aesthetic features involves a holistic judgment, not an analytic one. The wine as a whole must be evaluated, just as evaluating a painting or piece of music involves judgments about the work as a whole. But although these holistic features in a wine are a product of fruit, acidity, and tannic structure, no list of wine components will add up to a wine being balanced, elegant, or delicious. Frank Sibley, a British philosopher writing in the 20th Century, argued that this is a general feature of aesthetic judgments: there are no rules that get us from facts about the object, regardless of how subtle, to these holistic aesthetic judgments.
Hence, the problem of good taste. What do you discern when you identify elegance, grace, or deliciousness in a wine? It's not like picking out oak flavors. It's a judgment about how everything comes together—a set of relations that emerge from facts about the wine but are not identical to any particular collection of facts. If it is not an analytic ability, what sort of ability is it?
I think Kant, another 18th Century philosopher, gets us closer to an answer. When I judge something to be beautiful, I do so because I like it. But what about it do I like? For Kant, the pleasure I get from a genuinely beautiful object does not lie in the fact that I find it agreeable or pretty. Rather, I enjoy how it makes me think. It stimulates contemplation of a particular kind. Kant called this the free play of understanding and imagination.
Interpreting Kant is a rather perilous journey but I think he has in mind something like this.
A beautiful object exhibits an order or unity that cannot be fully described. Neither words nor aesthetic principles are sufficient. There are no rules, he argues, that govern our use of the term "beauty" and, in any case, feelings of pleasure will be an unreliable guide to when we are in the presence of beauty. He apparently thinks that each object exhibits beauty in a different way, so we can't simply point to a set of features that generally cause us to judge something beautiful. We can't understand a beautiful object the way we understand tables or chairs that have determinate, repeatable properties. Yet in great works of art there is something that we want to learn more about, patterns that we want to learn to follow, a unity we must strive to grasp. A beautiful object can't mean anything we want it to mean. With beautiful objects we have to search for what they mean, and that requires imagination. We have to imaginatively search for a principle that helps us to better understand the object, although we are doomed to fail because, given the indeterminacy of beauty, there is always more to be said. It is this searching activity that we find enjoyable—an intellectual fascination with trying to discover all the dimensions that a work has to give. Thus, an aesthetic judgment is based not so much on the object as on our reaction to our reflection on the object.
Of course, some objects won't repay that much attention. We explore them for a while, get bored because we've come to identify and articulate everything important about them, and move on. But according to Kant, an object is genuinely beautiful if it sustains our interest in reflecting on it indefinitely, because all attempts to fully understand it fail. The object has an order that constantly opens new ways of understanding it, because no particular principle is ever adequate. Beautiful objects are intriguing, mysterious, not fully understood, yet at the same time balanced, harmonious, and well put together.
Thus taste, on Kant's view, must refer to our ability to determine whether an object is worth reflecting on, whether it will repay our attention and produce endless fascination. A person of good taste discovers new patterns to explore, finds unexpected avenues of meaning, and responds with feelings and insights that generate new ways of describing something.
Kant, of course, would never have assented to using his theory to understand the enjoyment of wine or food. "Mouth taste", he argued, is a matter of immediately liking or not liking something, and does not provoke contemplation as the appreciation of fine art does. But on this point, I think Kant was wrong.
For example, this kind of indeterminate play between our concept of what something is and an intriguing, sensual experience that we cannot quite place in any traditional category is precisely what Modernist cuisine (aka molecular gastronomy) aims for. The moments of uncertainty, surprise, and the deconstructive gestures of its dishes provoke the kind of intellectual playfulness that Kant thought was the essence of aesthetic experience. When the flavors are genuinely delicious and we experience the harmony and unity of the flavor profile along with the intellectual pleasures of searching for indeterminate meaning, a judgment that the object is beautiful seems appropriate.
Caviar made from sodium alginate and calcium, burning sherbets, and spaghetti made from vegetables produce precisely this kind of response. They challenge the intellect and force our imagination to restructure our conceptual framework, just as Kant suggested.
Kant was right to point to this kind of experience as genuinely aesthetic but wrong in his judgment that food could not be the object of such an experience. One wonders what the old professor, who never ventured more than 10 miles from his home in Königsberg, had on his plate for dinner.
But what about wine? Wine too is mysterious and a provocation to further exploration, but it fascinates differently from the mysteries of Modernist cuisine. Its capacity for evolution in the bottle and in the glass and the volatile esters that leap from its surface mean that each bottle promises new and different perceptions, and each sip can reveal hidden layers of flavors and fleeting aromas. Great wines have the ability to arrest our habitual heedlessness and distracted preoccupation and rivet our attention on something awe-inspiring yet utterly inconsequential, without aim or purpose, lacking in survival value, monetary reward, or salutary advance in our assets. These experiences are almost always the result of paradox—power combined with finesse, elegance with carnality, surface sheen and depth.
When we are so transfixed by the sensory surface of the world, we stand outside that nexus of practical concerns and settling of accounts that makes up the everyday. Shorn of that identity, we drink in the flavors, seduced by the thought that there is goodness in the world—whole, unadulterated, without measure. This is part of the attraction of great art and music as well—a moment of ecstasy.
It is not at all clear that Kant's free play of the understanding and imagination quite captures the sheer sensuality of these experiences, whether the object be wine, music, or a work of visual art. It is more like receptively opening up to sensation rather than an intellectual search for a principle. In the end, Kant's view seems too intellectual, too bound up with understanding to account for our fascination with the sensuous surface of things, the pure enjoyment of appearances.
So I fear we are not quite there in our pursuit of good taste.
Maybe if I open another bottle the answer will become clear.
For more ruminations on the philosophy of food and wine visit Edible Arts.
Monday, March 24, 2014
Walid Siti. Endless Encounters. 2013.
Killing Shias...and Pakistan
by Omar Ali
I have written before about the historical background of the Shia-Sunni conflict, and in particular about its manifestations in Pakistan. Since then, unfortunately but predictably, the phenomenon of Shia-killing in Pakistan has moved a little closer to my personal circle. First it was the universally loved Dr Ali Haider, famous retina surgeon, son of the great Professor Zafar Haider and Professor Tahira Bokhari, killed in broad daylight in Lahore along with his young son.
This week it was Dr Babar Ali, our friend and senior from King Edward Medical College. The assistant DHO (district health officer) and head of the anti-polio campaign in Hasanabdal, he was shot dead by "unknown assailants" as he drove out of his hospital at night. Shia-killing portals reported his death, but it is worth noting that no TV channel or major news outlet covered the murder. Such deaths are now so utterly routine that they do not even make the news.
This should scare everyone.
In 2012 I had predicted that:
“The state will make a genuine effort to stop this madness. Shias are still not seen as outsiders by most educated Pakistani Sunnis. When middle class Pakistanis say “this cannot be the work of a Muslim” they are being sincere, even if they are not being accurate.
But as the state makes a greater effort to rein in the most hardcore Sunni militants, it will be forced to confront the “good jihadis” who are frequently linked to the same networks. This confrontation will eventually happen, but between now and “eventually” lies much confusion and bloodshed.
The Jihadist community will feel the pressure and the division between those who are willing to suspend domestic operations and those who no longer feel ISI has the cause of Jihadist Islam at heart will sharpen. The second group will be targeted by the state and will respond with more indiscriminate anti-Shia attacks. Just as in Iraq, jihadist gangs will blow up random innocent Shias whenever they want to make a point of any kind. Things (purely in terms of numbers killed) will get much worse before they get better. As the state opts out of Jihad (a difficult process in itself, but one that is almost inevitable, the alternatives being extremely unpleasant) the killings will greatly accelerate and will continue for many years before order is re-established. The worst is definitely yet to come. This will naturally mean an accelerating Shia brain drain, but given the numbers that are there, total emigration is not an option. Many will remain and some will undoubtedly become very prominent in the anti-terrorist effort (and some will, unfortunately, become special targets for that reason).
IF the state is unable to opt out of Jihadist policies (no more “good jihadis” in Kashmir and Afghanistan and “bad jihadis” within Pakistan) then what? I don’t think even the strategists who want this outcome have thought it through. The economic and political consequences will be horrendous and as conditions deteriorate the weak, corrupt, semi-democratic state will have to give way to a Sunni “purity coup”. Though this may briefly stabilize matters it will eventually end with terrible regional war and the likely breakup of Pakistan. Since that is a choice that almost no one wants (not India, not the US, not China, though perhaps Afghanistan wouldn’t mind) there will surely be a great deal of multinational effort to prevent such an eventuality.”
Unfortunately, it seems that the state, far from nipping this evil in the bud, remains unable to make up its mind about it.
The need to have a powerful proxy in Afghanistan after the American drawdown seems to take priority over the need to maintain sectarian harmony in Pakistan, as do the financial ties that bind Pakistan to Saudi Arabia. Many (though not all) on the left also remain convinced that pitting Sunnis against Shias is mainly (or even entirely) a project of the CIA, promoted as a way to keep the Middle East in turmoil. But even if this is true (and I personally doubt that the purveyors of this theory have the evidence, or have even worked out the implications of their worldview, but that is a separate story), it does not absolve the ruling elite in Pakistan of their responsibility in this matter. The strangest and most irrational meta-narratives can be sustained while acting rationally and shrewdly in the world of actions and short term consequences (where most politics is necessarily conducted), but the reverse is not always true; there are some blindingly obvious mistakes that should not be tolerated no matter what meta-narrative you wish to subscribe to. The Ahle Sunnat Wal Jamaat (ASWJ)’s campaign against the Shia sect is one of those. Whether people have a Marxist or Islamist or Capitalist worldview hardly matters; the ruling elite cannot possibly sustain itself if this affair progresses much further. I would argue that:
- The ASWJ and its fellow travelers (whatever their historic background and philosophical roots may be) are an existential threat to the modern state of Pakistan. The modern Pakistani state can tolerate (and has tolerated) many amazing contortions and disasters, but open season on the Shia population is not one of them. Unlike Ahmedis or Sindhi Hindus, the Shias of Pakistan are not a small fringe community. They are an integral part of Pakistani society, deeply woven into the Pakistani state, capable of armed retaliation, and able to obtain support from at least one (probably two or even three) well-resourced neighbors. Their elimination or suppression is not a realistic option for Pakistan even as a practical matter (quite apart from the blindingly obvious moral issues involved). The ASWJ is very clear about its intentions and makes no secret of them. Those intentions cannot be dismissed as mere words after all that has happened in the last 30 years. They are deadly serious. They will not tolerate Shias as equal partners in the Pakistan project. They have repeatedly insisted that Shias should be removed from “important positions” in the state and their religion must be demarcated as something distinct from “real islam”. With a wink and a nod, they may say that they are willing to accept the existence of Shias “if they do not cross the line”. But that line will be defined as needed by the ASWJ, and will eventually be drawn so tightly across Shia necks that they will not be able to breathe. The parallel with the Nazi view of the Jews is entirely valid. This project has no peaceful resolution. It must be condemned, its leaders ostracized and its violent executioners terminated with maximum prejudice. Otherwise you can say goodbye to Pakistan.
- The “strategic priorities” of the state (one of the cruelest jokes perpetrated on our unready institutions by think tanks and teachers from “advanced” countries) have led it to encourage the spread of extremely intolerant and violent ideologies and organizations across the length and breadth of Pakistan. Here I would like to add that I do not disagree with those who say that there are deeper economic and social reasons for the phenomenon of religious fundamentalism and the spread of organized violence (whether Islamist or Maoist) among the “weaker sections of society”. My point is much shallower and more urgent. The social and economic challenges and changes that have driven the rise of Hindu and Sikh militants, Maoists and even South American drug gangs are also operative in Pakistan, but the self-destructiveness and confusion of the Pakistani ruling elite goes well beyond the norm. For 13 years the international community (not just the United States) has poured money and weapons into the Pakistani state to assist it in destroying the network of Jihadist terrorist organizations created (with American help at the beginning) in our region. Even if one believes the most insane conspiracy theories about the CIA acting at the same time to prop up these very organizations as part of some diabolical plan of the trilateral commission or the elders of Zion, the fact remains that the Pakistani ruling elite did not have to actively work for any such diabolical plan. It is not in their interest to sustain and support any of these terrorist organizations or provide them cover. To continue to do so for the sake of “obtaining leverage in Afghanistan post 2014” is insane, and it remains insane no matter what meta-narrative you wish to apply to the situation.
- There are also those who believe that the connection between the various “good Taliban/anti-imperialist resistance” in the tribal areas and the Shia-killers in the rest of the country is exaggerated by people who are being paid in dollars to make this case. Why the dollar-slaves (Imran Khan’s loving term for those who oppose his pro-Taliban leanings) would make such a connection when the CIA desperately wants to spread sectarian conflict within Pakistan (as Imran Khan and many others also believe) is not clear, but could this claim be true? Could it be that use can be made of the “good Taliban” and their network of Madrassahs and political supporters in Pakistan, while launching a clearly demarcated operation against the Shia-killers of the LEJ? I think not. The ideology of Sunni purity and Shia-hatred that drives the LEJ is also the ideology of the good Taliban. Economic and social pressures may create the target killers, but ideology is the proximate cause for their alignment with this particular form of “protest against real suffering”. Since the socio-economic conditions of Pakistan will not change at any speed rapid enough to defang this beast before it kills Pakistan (simply because they have never changed that fast in any country at any time, all fantasies of overnight successful and productive people’s revolution notwithstanding), it is the proximate causes (the ideology and its armed enforcers) that will have to be dealt with. Any policy that permits the Taliban and their support networks to operate unhindered will also permit the ASWJ and its network of killers to operate unhindered. To imagine that the good Taliban will be pushed into the coming Afghan civil war fast enough to permit the ruling elite to recover ground in Pakistan while remaining allied with them (the dream scenario of the strategic depth community) is to carry self-delusion to incredible heights. The links between the good and the bad Taliban are too numerous, their cause too closely interlinked, for this to be possible. Whether driven by fantasies of strategic depth or by other (equally “modern”) fantasies of anti-imperialist struggle, this calculation is not tenable.
It is time to change course.
A few snippets and videos worth a look:
This is a section from a report about the arrest of Shia-killer Tariq Shafi alias doctor, a friend of Waseem Baroodi (a policeman who killed many Shias, spent time in prison, was freed, and went back to both the police and his job as Shia-killer) (whole thing here):
“ During the JIT Interrogation , he told his where about as he was born in 1968 , and was the resident of P.I.B Colony , And got his elementary education from Govt . High School, Sindhi Hotel , Liaqatabad, and during the same Period he also did a Refrigeration Course , and passed his Matriculation Privately in 1989 . And In 1990 he Joined the Garden area Police as a Mechanic . But at the Untimely death of his Brother in 1995 , he left the Job and shifted to Bhawalpur , where he Married his maternal Cousin, and got involved in the Fabric Business , but as the Business could not florish , so he came back to Karachi in 1998 , and his Job also got re Instated in the Police Department .
And During his Job in the Police , he got in contact with a Young Man named Waseem Baroodi , who use to come to one of his students , who was a Prayer leader of Mosque in Orangi Town 11 ½ , who convinced him for the sectarianism & Blood shed of Opponents , So finally one fine day he told that he has a 30 bore Pistol with him , and Waseem Baroodi took him along to kill a Innocent Boy , Both walked toward the Boy , and on Pointation of Waseem Baroodi of that Boy , I fired on him , resulting his death
From 2000 to 2001 before he got arrested he Killed about 9 or 10 Shia men. One day He and Waseem Baroodi were walking on the road as they came across some Street criminal Men , who were trying to snatch cash from Waseem Baroodi , but on his resistance he got injured due to their firing , in the mean time I took out my Pistol , and fired on them , and due to the firing One of the Dacoits got Killed , and as Waseem was also injured , and I was trying to take Waseem to Hospital for treatment , but at the same time we were arrested by the A.S.I Ali Raza of Orangi Ext. P.S , we were arrested on 11 different cases , for which I was in Jail for about Seven and a Half years , till finally I was released on Bail in 2008 – 2009 , and by that time Waseem was already released on Bail , about 7 to 8 months , earlier , and during the Imprisonment period , he was the Group Leader of Sipah e Sahaba Pakistan.”
Also, do not miss this event. It is a gathering of ASWJ leaders in Quetta, under the protection of security forces; awards are being handed out to local ASWJ leaders who have played a prominent role in anti-Shia activities in their region. Since this local branch has the “distinction” of having killed hundreds of Shias at a time (instead of picking them off one by one), one of the speakers recites a poem that commends them as “those who make centuries instead of playing for ones and twos” and the crowd laughs and cheers. Everyone knows what he means. It is an absolute must-see.
The following videos shed light on the aims of the ASWJ/SSP/LEJ:
Somewhere in Europe
by Lisa Lieberman
As Russia annexes Crimea, bringing us back to the bad old days of the Cold War, it's hard to remember the allure that Communism once held, particularly among bourgeois intellectuals. All the old Marxist apologists have died, a good many of them having publicly renounced their faith. The bloom is off the rose. But amidst the devastation of World War II, Europeans dreamed of abolishing the injustice that economic inequality brought, abandoning the nationalism that had caused the war, and remaking their societies from the bottom up.
Playwright Gyula Háy was nineteen when he was forced to flee his native Hungary. Like other supporters of Béla Kun's short-lived Council Republic (an effort to establish a Soviet-style dictatorship of the proletariat in Hungary after its defeat in World War I), he was targeted in the subsequent White Terror instituted by Admiral Horthy's nationalist and authoritarian regime. Háy found his way to Berlin along with other Communists and fellow travelers. After the Nazis came to power, most of these radicals wound up in the Soviet Union, where they led a precarious existence, always at risk of being eliminated in one of Stalin's purges. Yet those who managed to survive emerged from the war with their idealism intact. Here's how Háy described his return to Hungary in a Soviet airplane in April 1945 after twenty-five years in exile, ten of them in the USSR:
All the way from Moscow to Budapest in a bomber over the Carpathians, a solemn feeling had been gathering in my breast. I had been able for ten years to watch one realization of the great idea, full of mistakes and loose ends. Now was my chance to realize the same idea in my own country.
The Song of Freedom
A famous 1947 Hungarian film captures the hope of the immediate postwar period quite well. Somewhere in Europe was written by Béla Balázs, a comrade of Háy's, who taught at Moscow's State Film Institute from 1933-1945. There he came into contact with the great Soviet directors of the revolutionary era: Sergei Eisenstein, Dziga Vertov, Vsevolod Pudovkin, and Alexander Dovzhenko. All were evacuated to the city of Alma-Ata in Kazakhstan during the war—Háy and Balázs included—where they set up a makeshift studio to produce propaganda films urging resistance to the German invaders. Somewhere in Europe demonstrates a good deal of Soviet cinematic technique, from the opening montage of marching German soldiers intercut with scenes of wartime destruction to the angled images throughout the film and the documentary feel of the first half of the picture, with its long shots and sparing use of dialogue.
In the chaotic final months of the war, a group of orphans band together for protection. The traumatized children have turned feral; all they do is fight with one another and steal food, inciting the anger of some villagers, who are still under the thumb of the fascist Arrow Cross. The orphans find refuge with a gentle old man who lives in a ruined castle in the steep hills above the village. He teaches them civility, offers a glimpse of a world without poverty, and trains them to whistle "La Marseillaise," the anthem of the French Revolution. Armed with little more than the song and a few handfuls of rocks, they withstand the townspeople's assault on their safe haven and take possession of the future.
My favorite scene is when the old man, who turns out to be an internationally acclaimed orchestra conductor, Piotr Simon, is noodling on his piano. The melody resolves into "Für Elise," but Beethoven is soon supplanted by the booming chords of Rachmaninoff's "Prelude in C-sharp minor." Kuksi, the smallest and cutest of the orphans, has climbed up onto the piano. He asks Simon why he's playing his music all alone in the castle (side-stepping the question of how the old man got a piano up there, with the war raging all around).
"Down below in the world there's too much noise going on. They wouldn't hear the music," replies the old man.
"What's music for?" Kuksi persists.
"What's music for? If something hurts very much, or if something is too beautiful to put into words, this is the way you tell it."
Now things get serious. Simon launches into a rousing rendition of "La Marseillaise." A young man wrote this song, he explains, and it quickly caught on.
"And when a sea of people were singing it, their song was answered by guns. Canons, tanks, and machine guns. But the song was always stronger. It went around the world because people understood what that young man wanted to say. It's about freedom."
The oldest boy, a reform school escapee, scoffs at this. "Freedom. We played that game on the highway and almost starved."
"You weren't free. Freedom means that you're not forced to suffer, do evil things or hurt others. The worst captivity is poverty," Simon explains patiently to big and small boy alike, with all the other orphans listening raptly.
The Red Fairy Book
Paternalism is the reigning motif of Somewhere in Europe. Under the old man's tutelage, the orphans discover the virtues of solidarity and work. Together they patch up the castle, parceling out the chores according to age, gender, and ability, and making sure that each member of the group has enough to eat and a dry place to sleep. "The world is already yours. You just don't know it," Simon assures them. Once the fascists are gone, he promises, "new people will write new laws in the name of all who need help." He is so fatherly, so benign, that you want to believe him. Who could fail to be enchanted by this fairy tale figure, complete with castle, who has preserved the culture of European humanism within its walls?
Balázs had a thing for fairy tales. In 1912 he wrote the libretto for "Bluebeard's Castle," the famous opera composed by his friend, Béla Bartók. The two men traveled together in the Hungarian countryside collecting folk music and fables, and during his time in Kazakhstan, Balázs continued to collect folk poetry in much the spirit of the brothers Grimm, or Andrew Lang, whose turn-of-the-century Fairy Books of Many Colors preserved the old, magical stories for posterity. In this he was a typical product of his time and place. Educated Hungarians who came of age before the First World War were steeped in western European culture, measuring themselves against their counterparts in France, England, Germany, Italy and particularly Austria, since the two countries were closely allied in the Dual Monarchy. Fin-de-siècle Budapest was a cosmopolitan city of cafés rivaling those of Paris and Vienna, home to a renowned orchestra and opera, its metro system second only to London's. Higher education, the arts, architecture, engineering, and finance all thrived in the Hungarian capital, whose population more than doubled in the final decades of the nineteenth century, making it the fastest-growing city of Europe, the sixth largest by 1900. Budapest was scarcely representative of Hungary as a whole, however. Most of the country remained agricultural, comprised of large estates in the hands of aristocratic landowners with peasant tenants living in dire poverty. Beneath the glittering surface of the Austro-Hungarian empire were vast economic disparities and deep national divisions, as was true in the Russian empire as well.
The way to reach the peasants and bring them into the modern age was by using a language that they understood. The Soviets knew this; when Balázs was out collecting Kazakh folk poetry, he was part of a broader endeavor to preserve the traditions of Asiatic Russia not simply for their own sake, but in order to harness those traditions to the cause:
The Soviet government not only had these glorious old epics written down but saw to it that the last generation of the akins (as they were called in the Kazakh language) turned their attention to the present-day life of the Soviet Union and sang not only of the old heroes but of the new exploits of the Red Army, while still preserving the old folk style and language.
Film, he believed, was the art best suited to imparting truth to the masses. He understood the techniques pioneered by the best directors, the importance of editing and camera angles, for example, the long, sustained shots of the documentary, the form most apt for conveying Socialist Realism. In his famous book, Theory of the Film (1948), he talked about music and gestures as well, and what it was about a great actor's face that made the films they starred in so unforgettable. "Greta Garbo's beauty is a beauty of suffering; she suffers life and all the surrounding world." This suffering beauty affects us more deeply than some bright and sparkling pin-up girl, he continued. "Millions see in her face a protest against this world, millions who may perhaps not even be conscious as yet of their own suffering protest; but they admire Garbo for it and find her beauty the most beautiful of all." Still, at the end of the day, the star's beauty and the director's technique were there to serve the story, and it is here that fairy tales came into their own.
Balázs worked in the spirit of an anthropologist who lays bare universal human experiences by finding their most primitive form of expression. He knew his Freud, too. "Our earliest experiences are the ones that are most deeply imbedded and stay with us longest. And childhood travels are surely some of the greatest, most important experiences a person can have in his whole life," he wrote in 1925, on the heels of a trip to Vienna. Speaking to the child who still resides in all of us, he went on to describe his train journey in terms that evoke the experience of watching a film in a dark theater:
When you fall asleep on the train at night, no matter how hard and uncomfortable your bed, you have for a moment the marvelous, blissful feeling of completely surrendering yourself to some caring, benevolent power that is watching over you . . . And when morning comes you see wet, misty fields, and you are someplace else. You haven't traveled. You have simply gone to sleep and awakened someplace else. Just as in a fairy tale.
Lisa Lieberman is the author of Stalin's Boots: In the Footsteps of the Failed 1956 Revolution.
Monday, March 17, 2014
Was St Patrick a Biocidal Lunatic? Some Sober Reflections on Ireland's Patron Saint and Snakes
Like a Noah in reverse, St Patrick kicked snakes off the rain-drenched ark of Ireland. So complete was his mystical sterilization of the land that seven hundred years later, in his Topographia Hibernica (1187), Gerald of Wales could write: “There are neither snakes nor adders, toads nor scorpions nor dragons… It does appear wonderful that, when anything venomous is brought there from foreign lands, it never could exist in Ireland.” Indeed, even as late as the 1950s the Irish naturalist Robert Lloyd Praeger wrote, “The belief that “venomous” animals – which term included toad, frogs, lizards, slow worms and harmless as well as poisonous snakes – did not and could not flourish in Ireland, owing to St Patrick’s ban, long held sway, and possibly is not yet extinct.” (Natural History of Ireland (1950))
Snakes, however, are not the only species that can be found in Britain or continental Europe while being entirely absent from Ireland. Moles, several species of bats, many bird species (including the Tawny Owl, several titmouse species, and woodpeckers), innumerable insect species, many plants, and so on, might be added to the roster of St Patrick's bio-vandalism. Of course, biogeographers have long known that the impoverished nature of the Irish biota is attributable to a number of factors unrelated to St Patrick.
Firstly, Ireland is a relatively small island with an area of 84,421 km², compared to Great Britain, which is almost three times the size (229,848 km²). The European land area is considerably larger still, being over one hundred times that of Ireland (at 10.18 million km²). Now, one of ecology’s more robust laws posits a relationship between area and species diversity: the more land, the more species. A consideration of the relatively restricted latitudinal range of Ireland in comparison to Europe intuitively suggests why Ireland must have fewer species. For example, since Ireland does not have a considerable southern stretch it has no Mediterranean zone, though it does have an enigmatic “Lusitanian flora” found disjunctly in Ireland and in North Spain and Portugal. This includes a saxifrage commonly known as St Patrick's Cabbage, but this component of the Irish vegetation is rare indeed. Nor does Ireland have tundra habitat, though, of course, it can get chilly there at times.
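(A quick gloss, since the argument leans on this law: ecologists usually write the species–area relationship as a power curve. The exponent below, z ≈ 0.25, is a commonly cited ballpark for islands, and is my illustrative assumption rather than a figure from the essay.)

\[ S = cA^{z}, \qquad z \approx 0.25 \]

\[ \frac{S_{\mathrm{Britain}}}{S_{\mathrm{Ireland}}} \approx \left(\frac{229{,}848}{84{,}421}\right)^{0.25} \approx 2.72^{0.25} \approx 1.28 \]

On area alone, then, Britain would be expected to carry only about a quarter more species than Ireland; the conspicuous absences of whole groups point to the post-glacial history discussed next.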
Secondly, the present-day biota of Ireland was assembled largely after the glaciers of the Last Ice Age retreated. Although there may be some relicts of those formerly icy times, for example the Irish Arctic char, an apparently delicious trout-like fish found in some Irish upland lakes, most Irish wildlife migrated there over the past several thousand years.
Important to understanding these post-glacial migratory patterns is knowledge of the timing of the closing of putative land-bridges connecting Ireland and Great Britain, and Great Britain and the European mainland. Ireland was separated from a source of biotic colonists early in its post-glacial history, whereas Britain retained these connections until some time later. Naturally, species that flap, float or swim could make their way over to Ireland in their own sweet time. But snakes and other creeping things were quite simply out of luck.
St. Patrick must surely be absolved of the high crime of banishing snakes from Ireland, since by the time of his mission in the 5th century there were no snakes to banish. One might wonder, therefore, how he earned his reputation as snake-killer. There are at least two interesting theories about this.
Snakes can be seen as potent symbols of the ancient faiths of Ireland. From this point of view, stories of St Patrick grappling with snakes commemorate his mighty struggles to overcome Irish paganism.
Ireland in the fifth century, of course, was a Celtic society. The Celts were relative latecomers to Ireland, having only arrived sometime before 300 BC. The manner of their arrival is a matter of dispute: was it an intrusion or was it some combination of migration and cultural diffusion? Authorities disagree, disagreeably. The religion of the Celts in Ireland is similarly contested. It was no simple affair, composed not only of its own endogenous elements but also absorbing parts of the older traditions of the island. These traditions stretched back thousands of years to the Mesolithic monument builders and before. For example, the degree to which the human sacrifice practiced by the Druids drew on the “cannibalistic feasts” engaged in by the Irish is a matter for grisly speculation. Be that as it may, there are, according to T W Rolleston’s classic Celtic Myths and Legends (1911, republished 1990), a number of distinctive features of Celtic spirituality and intellectual culture. These include adherence to popular superstition and magical observance (including human sacrifice), which largely focused on local topographic features. Underlying such observances was a philosophical creed based upon the sun as a central object of veneration. Additionally, individual personified deities, Lugh for example, oversaw the social order. [Lugh has endured a horrifying fate over the years, atrophying from sun god and patron of art and craft to Lug-chorpan (little-bodied Lugh), or the Leprechaun.] There was, for all of this, a reputation for learning, especially related to natural phenomena, among the Celts. Finally, the administration of religion, learning and literature was invested in the priestly caste of Druids (an order that seems to have been open to both men and women).
The role of snakes and serpents as religious symbols in Ireland is discussed in fascinating detail in Mary Condren’s The Serpent and the Goddess (1989) [a book that is shamefully out of print for its 25th anniversary]. Crudely put, the snake is a representation of the Triple Goddess in matrifocally inclined pre-Celtic culture. With the advent of the patriarchal and warrior-like Celtic people, the symbol of the serpent was crushed and society was transformed. Christianity merely extended the subversion of these earliest symbols and the further ossifying of patriarchal norms. The degree to which snakes were absorbed into the religious symbolism of the druids, which would be a requirement for an argument that St Patrick’s banishment of snakes commemorates his victory over druidism, is debated. It is nevertheless pretty clear that the image of the snake was important to them. W G Moorehead, in an essay from 1885 entitled Universality of Serpent-Worship, wrote: “That the Druids associated the serpent and the sun with their most solemn ceremonies can hardly be doubted. The creation and the universe they represented by a serpent in a circle, sometimes by an egg (the cosmic egg) coming out of the mouth of the serpent, precisely as was done by Phoenicians and Egyptians… Their temples were circles of stones with a huge boulder in the center, thus embodying the idea of the Deity, and eternity, as the serpent in a circle, and the egg.” (The Old Testament Student, Vol. 4, No. 5 (1885))
It must certainly have been the case that St Patrick’s evangelism was a threat to the established order of Celtic religious life. Christianity, with its seeming demotion of the here and now and its emphasis on an afterlife, clashed with the more mundane religion that prevailed at the time, one that moreover promised a rebirth to this world rather than eternal life in another. Of course, the prestige of the Druids was also put at stake in this clash between the old and the new. In his autobiographical “Confessions” St Patrick writes of threats on his life, waylayings and other shenanigans that befell him during his ministry. Perhaps these attacks originated with local druids. But Christianity ultimately prevailed and the old order, though it persisted for a while, waned. As the writer Philip Freeman put it in his biography of St Patrick, after the Christianizing of Ireland, “[m]any [druids] seem to have barely eked out a living by concocting love potions in huts hidden away in the forest.” (St. Patrick of Ireland: A Biography (2005))
Setting aside the detailed circumstances of St Patrick’s case, the role of hero as snake-serpent-dragon crusher is a universal one. The folklorist Alexander Haggerty Krappe connected St Patrick’s vanquishing of snakes to this tradition in which a hero rids a region of noxious vermin. For example, Herakles, the Greek hero, is associated with snakes. As a youth Herakles survived an attack by Hera, wife of Zeus, who dispatched snakes into a bedroom where our hero and his brother slept. Herakles crushed a snake with each hand and played with their corpses. Furthermore, Krappe noted that “the Erythraean established a cult of Heracles Ipoktonos ('the worm-killer'), because that god was supposed to have destroyed, on one occasion, a sort of phylloxera threatening ruin to the vines.” Since phylloxera are inconspicuous insects, the term “worm-killer” refers, I suppose, to generally pestiferous creatures. From this perspective, St Patrick banished the snakes from Ireland because this is just what bad-ass heroes do.
Thus, St Patrick driving out the snakes of Ireland may indeed symbolize his donnybrook with the druids, or the story may simply be the sort of universal hyperbole that hagiographers felt compelled to append to his biography. Either way, St Patrick cannot be accused of having especially meddled in the affairs of the Irish biota. That being said, St Patrick was not just an evangelist of the message of Jesus; he brought with him the culture of Romanized Britain. What happened in fifth-century Ireland was in some senses as profound an encounter between two cultures, at least in terms of ecological consequences, as were the initial clashes between settlers and the indigenous peoples of the New World more than a millennium later. The introduction of Christianity to Ireland ushered in a set of changes, many of them technical ones concerning agriculture, that radically transformed the Irish landscape. Of this clash F.H.A. Aalen and his colleagues wrote: “Ireland underwent radical change from the fifth century. The pollen record testifies to a huge upsurge in grasses and weeds associated with pasture and arable farming…. A combination of factors led to a revolution in the landscape.” (Atlas of the Irish Rural Landscape. Cork University Press, Cork, Ireland, 1997)
Within a few centuries after St Patrick, agriculture had greatly expanded, both in the almost innumerable monastic settlements and on secular lands throughout Ireland. The population of the island is thought to have increased as well. One way or another, Christianity resulted in a reversal of what some archaeologists refer to as an Iron Age lull, and the start of a major assault on the wilder lands of Ireland. St Patrick may be innocent of serpenticide, but the introduction of Christianity to Ireland undeniably had far-ranging ecological implications for the island.
Why Amazon Reminds Me of the British Empire
by Emrys Westacott
"Life—that is: being cruel and inexorable against everything about us that is growing old and weak….being without reverence for those who are dying, who are wretched, who are ancient." (Friedrich Nietzsche, The Gay Science)
A recent article by George Packer in The New Yorker about Amazon is both eye-opening and thought-provoking. In "Cheap Words" Packer describes Amazon's business practices, the impact of these on writers, publishers, and booksellers, and the seemingly limitless ambitions of Amazon's founder and CEO Jeff Bezos whose "stroke of business genius," he says, was "to have seen in a bookstore a means to world domination."
Amazon began as an online bookstore, but US book sales now account for only about seven percent of the seventy-five billion dollars it takes in each year. Through selling books, however, Amazon developed, perhaps better than any other business, two strategies that have been key to its success: it makes full use of sophisticated computerized collection and analysis of data about its customers, and it makes the interaction between buyer and seller maximally simple and convenient. It also, of course, typically offers lower prices than its competitors. Bezos' plan to one day have drones provide same-day delivery of items that have been stocked in warehouses near you in anticipation of your order is the logical next step in this drive toward creating a frictionless customer experience.
Amazon's impact on the world of books has been massive. Over the past twenty years the number of independent bookstores in the US has been cut in half from four thousand to two thousand, and this number continues to dwindle. Because Amazon is by far the biggest bookseller, no publisher can afford to not use its services, and Amazon exploits this situation to the hilt. Publishers are required to pay Amazon millions of dollars in "marketing discount" fees. Those that balked at paying the amount demanded had the ‘Buy' button removed from their titles on Amazon's web site. Amazon used the same tactic to try to force Macmillan to agree to its terms regarding digital books. And of course Amazon's Kindle dominates the world of e-books, another major threat to traditional publishers and booksellers.
The argument for viewing Amazon in a positive light is not difficult to make.
They offer the customer a bigger selection of books than anyone else, usually at lower prices. Buying online as a returning customer with a registered credit card is laughably easy. Any wannabe writer can self-publish with Amazon, and those whose books sell receive a much higher percentage in royalties. In opening up this opportunity to all, and in basing its advertising and promotional decisions on computer analysis of customer behavior rather than on some self-styled expert's opinion, Amazon eliminates the unnecessary middlemen, professional tastemakers, and elitist gatekeepers that have controlled—and constrained—publishing for so long, replacing them with the dynamic democracy of the digital marketplace.
For all that, more than one person I know reacted to Packer's article by pledging to avoid buying stuff from Amazon in future, at least as far as and for as long as this is possible (which judging from the way things are going may not be too far or very long). Why this reaction? Well, when I told my daughter about Packer's article her immediate response was to say that Amazon sounded a bit like the British Empire. Which set me thinking.
What parallels can be found between the premier online retailer and the largest empire in history? I see similarities in three areas: beliefs and attitudes; practices; and impact on affected populations. Let's consider these in turn.
According to Packer's account, the prevailing attitude among those in charge at Amazon is arrogance. Here is where I think the echoes of imperialism are most apparent. British imperialists typically viewed themselves as superior to those they displaced or ruled on various counts: birth, race, heritage, education, culture, morals, religion, ability, and character, all resulting in and backed up by superior political and military power. The proof of this superiority could be seen on any map of the world that showed the extent of Britannia's rule. The Amazon execs are indifferent, of course, to such things as birth or pedigree; what matters to them is being smart. But thinking of themselves as smart is the basis for a particular kind of arrogance which they seem to share with other successful types in places like Silicon Valley and Wall Street. The way one top exec is described to Packer by a colleague is revealing: he's said to be "the smartest guy in the room at a company where everyone believes himself to be just that."
This fetishism of smartness is certainly not confined to techies, but it assumes a specific and perhaps especially intense form among them. Obviously, there are many different ways of being intelligent. One can excel at abstract reasoning, creative problem-solving, learning languages, understanding people, remembering information, noticing patterns and connections, interpreting works of art, manipulating people and events, mastering a practical skill, recognizing opportunities, artistic creativity, witty repartee—the list is virtually endless. So there are many people out there who are smart in various ways. But at any particular time and place, certain kinds of intelligence will be especially valued. It might be the ability to track an animal, or plan a battle, or discourse fluently in Latin, or demonstrate erudition, or make accurate and discriminating observations, or solve technical problems using mathematics and logic. These are all forms of smartness that at different times have been applauded and rewarded. And of course one kind of smartness is to recognize just what kind of smarts the present or immediate future will reward.
Today we live in an age when science enjoys cultural hegemony and most educated people earn a living by processing information. Naturally enough, therefore, certain kinds of smartness are now much in demand and are rewarded accordingly. Prominent among these is fluency in computer science and technology. The market value of knowledge and skills in this area has been greatly enhanced by the growth of the internet since this has expanded to an unprecedented degree the potential customer base or audience for any online enterprise.
The fetishism of smartness at places like Amazon is thus, naturally enough, oriented towards technological fluency and business acumen. But it seems to be accompanied by a moral subtext. Our success is not due to chance or luck; it's due to our intelligence; therefore it's deserved. On the face of it, this might seem dissimilar to the attitude of a British imperialist who, after all, could hardly claim credit for being born British (Cecil Rhodes supposedly said that "to be born English is to win first prize in the lottery of life"). But it is similar insofar as the British attributed their success in conquering and ruling much of the world to their possession of certain qualities—intelligence, industry, organization, moral and cultural superiority. The similarity extends also to the contemptuous attitude felt and sometimes expressed toward those who suffer as a result of this success. One former Amazon employee cited by Packer says that execs at Amazon view the older publishers as "antediluvian losers" and describe whole sections of the print world as the "Rust Belt media." Imperialists like Winston Churchill regularly referred to the native populations whose settlements, property, and whole way of life he cheerfully helped to destroy when serving as a military officer in Africa as "primitive," "backward," "barbarous," "ignorant," "savage," and "improvident."
In the eyes of both, what legitimizes this contempt—and reinforces the arrogance—is the conviction that they are on the side of history. As Jeff Bezos said to Charlie Rose: "Amazon is not happening to bookselling. The future is happening to bookselling." The attitude is a form of Social Darwinism. Countries with superior military power and political organization will naturally dominate people who are lacking in these. ("Whatever happens, we have got / The Maxim gun, and they have not.") Businesses that know how to use the latest technology effectively will inevitably send to the wall those that still rely on dated methods that are less efficient: that's the way capitalism functions. The ultimate and unarguable proof of superiority is real world success: the subjugation of native populations; the growth of market share. Might is right.
Seeing themselves as being aligned with the forces of inevitable historical change is accompanied, naturally enough, by the belief that they are agents of progress, that the changes they help bring about are desirable. Obviously, this self-perception can be self-serving; but that doesn't make it foolish. There is an idealistic strain in enterprises like Amazon, Google, or Facebook that is not simply a piece of self-deception or a marketing strategy. Amazon really does make books available to people who lack a local bookstore (although in some cases, of course, this lack may be largely due to the local bookstore being put out of business by Amazon). Their constantly expanding inventory–Bezos' eventual goal is to warehouse copies of every book ever written–means that it is now much easier than ever before to buy obscure and out of print titles. Electronic self-publishing makes it easier and cheaper for all writers to put their work out in the public domain. British imperialists also saw themselves as benefiting the world. Churchill, reflecting on what the British had achieved in Africa, thought that future historians would judge them to be "a people, of whom at least it may be said, that they have added to the happiness, the learning and the liberties of mankind." Cecil Rhodes was bracingly blunt: "I contend that we are the first race in the world, and the more of the world we inhabit the better it is for the human race."
Moving from attitudes to actions, we should first of all be fair to Amazon. They don't massacre by the thousand those who resist their growing power; they don't torch villages in acts of punitive reprisal; they don't use gunboats to force the Chinese to keep buying opium from British drug traffickers. But within the parameters of legal business operations, they do seem to be pretty ruthless. Some of their success is undoubtedly due to their clever use of up-to-date methods, from automated, individual-oriented advertising to warehouses staffed by non-unionized workers who are already being replaced by robots. But according to Packer their success in bookselling is also largely due to a strategy whereby they "created dependency and harshly exploited its leverage." Refusing to sell books by publishers who won't cough up a sufficiently large "marketing discount" fee is a case in point. This is, in effect, a legal extortion racket. To be sure, it isn't as crude as the way the British persuaded the Chinese to sign the Treaty of Nanking, which required China to hand over twenty-one million dollars, grant all sorts of trading concessions, and cede control of Hong Kong (the British method was to threaten Nanking with gunboats). But the underlying mentality isn't so different. Where one isn't constrained by moral considerations, all that remains is a power struggle; and all that ultimately matters in that struggle is who wins. As Quirrell says in Harry Potter and the Philosopher's Stone, echoing Machiavelli, Hobbes, and Nietzsche: "There is no good and evil, there is only power and those too weak to seek it."
Of course, Jeff Bezos is hardly the first capitalist to play hardball, so it wouldn't make much sense to single out his company as singularly ruthless in its business strategies. The ethics of Amazon are pretty much the ethics of any big business striving toward monopoly status. What is troubling, though, about the mindset described by Packer is the seeming indifference to, or even satisfaction over, the negative impact of the company's actions on significant numbers of people. Packer reports that among "people who care about reading, Amazon's unparalleled power generates endless discussion, along with paranoia, resentment, confusion, and yearning." This could equally stand as a description of those who found themselves powerless to resist British rule. But in both cases, the view from the seat of power is that those who aren't with the program either don't recognize what's in their best interests or deserve to disappear.
"Innovate or die." "Move fast and break things" Such mantras are associated with the technological revolution, but there is nothing essentially new here. They express the essential spirit–and reality– of capitalism that Marx describes in The Communist Manifesto. Those who find themselves surfing the waves of innovation naturally enough sing the praises of the new. So much is understandable. It feels good to be a winner, doubly good if you sense the wind of history at your back, and triply good if you believe you're making the world a better place. British imperialists felt good on all three counts, yet we are now critical of their attitude in large part because of their indifference to the individuals, communities and cultures they affected and in many cases destroyed. They could have done with more humility and more humanity. The same goes for the Amazon execs described by Packer. What is unbecoming, even ugly, in both groups is the callousness drifting into contempt toward those who, also understandably, lament the destruction of something they cherish, whether it be a secure job (like working in a bookstore), a respected occupation (like print publishing), a skill that is no longer marketable (like editing), a pleasure that may soon no longer be available (like browsing in used bookstores) or, indeed, an entire form of life.
Sughra Raza. Looking out over Texas. November 2013.
Monday, March 10, 2014
by Akim Reinhardt
In an early episode of Mad Men, a character named Ken Cosgrove publishes a short story in the Atlantic Monthly. It's entitled:
"Tapping a Maple on a Cold Vermont Morning."
That's just about pitch perfect for the American literary scene circa 1960. The coating of influential New England literati is so thick on the young author, you can practically see it glisten.
But the reason I recently remembered "Tapping a Maple on a Cold Vermont Morning" had nothing to do with Mad Men or literature. Rather, it's because of late I've been remembering winter.
For much of the United States, including here in Maryland, it has been a particularly fierce winter. Not the snowiest necessarily, though there has certainly been snow. But long and cold.
This is my 13th consecutive winter in Maryland, and it's the first one that harkens back to my experience of onerous winters in harsher climes.
From the mid-1980s to the late 1990s, I toughed it out, spending the better part of seven winters in southeastern Michigan and another five in eastern Nebraska. These are serious winter places. They're not Siberia or Winnipeg, but they will punch you in the face, and you need to come to terms with that if you live there.
Southern Michigan winters, first and foremost, are just plain long. Snow usually begins falling in November and never quite goes away. Just when you think it might all melt off, boom! Another half foot covers everything. None of this March goes out like a lamb stuff. Every bit of March is winter. So is a chunk of April.
When will it end? you find yourself pleading aloud to no one in particular. It just goes and goes and goes. It grinds you down and forces you to get back up again. Every year you know what you're in for. Body blow after body blow. And you wonder to yourself how the people from northern Michigan and the Upper Peninsula, the ones who mock you for your soft, southern winters, how do they do it?
You have to find a way to adapt or you'll be downright miserable. I still remember the moment it happened for me. Sometimes I'm a slow learner. It wasn't until my fifth winter in Michigan. I had driven over to my friend Rae's house one night. And then the car died. That tan 1979 Dodge Dart with the cream interior. Thing never ran.
I no longer know why, as the details are long forgotten, but there was some reason why I had to hoof it back and forth. I think we decided we needed something from my house, which was about a mile away. A bottle of booze, a record, something. So I geared up. Boots, coat, etc. I went out into the quiet night, flakes fluttering about, and muscled through the black of sky and white of snow. I got to my house, grabbed whatever it was we wanted, then turned around and headed back. I was jogging and clopping through the snow to make the trip go faster. And then my imagination took over.
I was a Viking. The Scandinavian wind and snow whipping through my beard. Furry boots laced up to my knees, carrying me to war, horns sounding, a broad sword waving in my hand. My slight frame and average height much bigger in my mind's eye, I was eager for battle, my iron ready to slice torsos and sever heads.
Then all of a sudden, there I was, back at Rae's. Wow, I thought to myself, that went a lot faster than I expected.
I don't think I ever pretended to be a Viking after that, but I had learned the mental trick: embrace the winter. It's not going anywhere, so have fun with it. Own it. That's how you get through so thick a tome.
If Michigan's predictable, Nebraska's downright irrational. In every place I've ever lived (except for Arizona) people love to say: If you don't like the weather, just wait a half-hour, it'll change. But Nebraska's the only place I've ever been where that's actually true on a consistent basis.
Weather systems sweep across the Great Plains unimpeded. Like many a driver passing through on I-80, the jet stream caroms over the rolling landscape, racing eastward towards Chicago. And eastern Nebraska is also right about where the jet stream starts to bend, meaning you can be on either side of it in one of Mother Nature's heartbeats. If it dips below you, cold Arctic air. If it dances above you, warm breezes from the Gulf of Mexico. As a result, the Nebraska weather changes constantly, and the only reliable feature is the ceaseless wind.
One day the wind stopped blowing. All the chickens fell down.
That wind. It just ain't got no quit in it. And during the winter, that's not a good thing.
The only thing between you and the North Pole is a barbed wire fence.
The net effect of a jittery, fast-moving jet stream is micro-weather systems that often last about three days apiece. So the early part of the week could be sub-freezing while the back part of the week could be downright balmy.
I don't know how many times I've seen the thermometer rise or fall more than 50 degrees Fahrenheit in a 24-hour period. It's a good idea to keep a change of clothes in the trunk of your car.
I remember one year there was a massive blizzard in October. Not only hadn't the leaves fallen yet, they hadn't even turned. Big, broad green leaves caught a couple feet of snow. All night long the gunshot pop of snapping branches ricocheted through the air. The following April there was another blizzard, about a foot. But the nearly half-year between those two storms? Nary a flake.
I'd say that was atypical, but saying that would be redundant. Aside from the wind, there just isn't much that's typical about Nebraska weather.
When the cold weather does hit Nebraska, it's damn cold. Not cold like Minot, North Dakota or International Falls, Minnesota. But it's cold. When it comes, it comes. Sub-freezing goes without saying. Teens are common. But that single digit frigid is what really gets you. Once the mercury drops below 10F (that's about -12C), you really notice it. The quicksilver ain't so quick anymore. Add to it the incessant wind, and outside is not a pleasant place to be. Best get your ass around the corner of a building to find a windbreak, maybe take a nip from a bottle. And let's not even think about the occasional sub-zero temps (0F = -17.8C).
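[For the conversion-minded, the arithmetic behind those parentheticals is C = (F - 32) x 5/9. Run the numbers and 10F comes out near -12C, and 0F near -17.8C.]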
In places with moderate winters, like New York City or Philadelphia, winter feels like a metaphor for Death. In places like Michigan and Nebraska, you feel like you need to stay on your toes or you might actually die.
I remember this one winter night in Nebraska. For whatever reason, I suddenly had an overwhelming sensation of not wanting to die like the stereotypical, mid-20th century urbanite, your neighbors starting to wonder because the milk bottles are lining up outside your apartment door.
So I got in my little Ford Escort wagon and drove around aimlessly. I eventually came to a big empty field and parked in the ruts of frozen mud. After sitting there a while, I got out and walked to the middle of the field. The starry sky was enormous and the wind and snow swirled about ferociously, creating a sense of overwhelming desolation that Hollywood tries to replicate sometimes but can never get right. I lay down on the brittle, frosty grass.
This is a good way to die, I thought to myself. A real way to die. No milk bottles. And if I just closed my eyes and continued to lie here, I realized, I probably would die.
I lay there for about ten or fifteen minutes. Then I opened my eyes, got up, walked to my little red wagon with the roll-down windows, and drove back to my apartment.
New York City is the most seasonal place I've ever lived. Not only does it have all four, but each of them is right about three months in length. You get a true sense of the earth's quarterly cycle in New York.
Growing up in Gotham, I knew winter and I didn't like it. Three months of mediocre winter is right in the sweet spot for complaining. Just enough to feel entitled. But five months of a Michigan winter? Or Lord knows how long on the circus wheel of a windy Nebraska winter? You don't really complain anymore. Not if you wanna be a happy person. There'd just be too much goddamn complaining. I bucked up.
After New York, the Midwest, and even a year of pure contrast in Phoenix, I was quick to assess the situation when I moved to Baltimore in 2001. Only 200 miles north, New York is the most obvious comparison. The winters in NYC are indeed a bit worse than Charm City's, but not by that much.
A Baltimore winter, I've often said, is a real winter, but it's a short one. There's snow every year. Sometimes quite a bit. We set a local record with more than 70 inches a few years ago, which is lurching towards southern Michigan totals. Then again, some years there's hardly enough snow to notice. But there's always at least some. And it does get cold. You're bound to have some sub-freezing temperatures, most often at night. Maybe just a few nights, but there can be a good spell of it, depending on the year. Maryland's the South, but just barely.
The actual duration of a Baltimore winter is pretty consistent. It usually doesn't begin until New Year. December is late autumn, drizzly instead of snowy, chilly instead of downright cold. For the most part, down here White Christmas is just an old Bing Crosby song. And by the second week of March, winter's done. Come the 8th or 9th, whatever icy grip the season had on you is broken, and there's no going back. Planting your hydrangeas might be a gamble at that point, but not a bad one.
[For the record, I don't know nothin' about gardening. Following my advice will probably get your plants dead fast, so don't do it.]
By my reckoning, a Maryland winter is about ten weeks in all. Not even a full season in the conventional sense, and patches of it here and there honestly feel more like fall.
I'm happy with that. When I moved here, having already been hardened by Michigan and Nebraska, I found the typical Maryland winter to be some weak-ass shit. And that was perfectly fine by me. I had built up a sturdy winter psychology during my years in the heartland, and was happy to ease into a shorter, softer version of long nights and low sun.
I do enjoy the change of seasons, and there are things I like about winter in particular. It's quiet. It's pretty. It adds an especial dimension to social interactions around a hearth or in a cozy bar. But a little goes a long way. I feel like I've done my time, gracefully, and ten weeks of half-assed winter is A-OK by me. So over the years I remained content and grateful, always remembering how much longer, colder, and snowier it could be.
Or so I'd thought.
Memory's a funny thing. Sometimes you think you remember. But then you realize you hadn't, really. Not until something visceral actually reminds you.
This winter made me really remember.
I remembered what a real winter is like. It started early. December was not merely autumnal; it was winter cold. The usual 10 weeks grew to more than 3 months. And too often this year, cold was cold. Not 30s and 40s, but lots of 20s and even 10s. Way too many nights in the 10s for my taste. And more than enough snow.
It made me remember.
I remembered how to get through it. I remembered what I don't like about it, and also its pleasures. I remembered pretending to be a Viking, and enjoying jokes about chickens and barbed wire. I remembered being goddamned ready for it to be over already. And I'm beginning to remember a sincere yearning for the deep, melting joy of spring. The satisfaction that comes from shedding boots and coats, from sauntering about freely, and from finally feeling muscles loosened and shoulders relaxed by the sun's warm embrace.
I sat down to write this essay on March 5th, after yet another sub-freezing day with lows in the teens. After that, the days got a little better, although the lows continued to reside in the land of popsicles.
And then it broke. On Saturday the 8th, the first day of the second week of March, like clockwork, Old Man Winter entered his death throes in Maryland as the thermometer cracked the 60-degree mark (about 15C).
I remember winter. I remember making peace with it. I remember loving it in ways both peculiar and joyous, engaging and resigned. And I remember having had enough of it.
Welcome, Spring. It's your turn to shine.
Seo Young Deok. Nirvana 2, 2010.
Snow on Hawaii (a medieval cosmology)
by Leanne Ogasawara
It is my second favorite essay of all time: C.S. Lewis's Imagination and Thought in the Middle Ages. First delivered as a lecture in 1956, the piece was published posthumously in a collection of his essays in 1966. Unlike in my #1 favorite essay, William Golding's magnificent Hot Gates, Lewis does not seek to form arguments or to persuade. What he does instead is transport the reader back in time, illuminating the medieval world-view using words alone.
He begins his essay by urging the reader to perform an experiment. He says,
Go out on any starry night and walk alone for half an hour, resolutely assuming that pre-Copernican astronomy is true.
Look up at the sky with that assumption in mind. The real difference between living in that universe and living in ours will, I predict, begin to dawn on you.
Intrigued, I decided to take him up on his suggestion. It so happened that my beloved and I had found ourselves up on the summit of Mauna Kea, on the Big Island. Home to the world's greatest collection of large telescopes, the mountain has skies that are dark and famously clear.
As a girl, I had wanted to become a cosmologist. It was my first great passion. And, in addition to reading astronomy books voraciously, I spent many nights using my amateur telescope to look up at the stars from my parents' house in Los Angeles. As I grew up, I drifted away from cosmology, turning naturally toward philosophy. Still, I always loved the stars -- for as Van Gogh said, they make me dream. Returning home to Los Angeles about twenty-five years after leaving it, I have been dismayed by their disappearance. What happened to all those myriad stars of my childhood? Indeed, I cannot recall the last time I saw the Milky Way -- I never saw it in Japan, and was sad to find it simply invisible from LA now. It is disheartening, really, since the splendid vision of the stars at night is something we used to take for granted.
Fast forward to last week in Hawaii. Surrounded by snow, the summit of Mauna Kea sits above the clouds. As from the summit of Mount Fuji, you can stand and watch the clouds roiling beneath you. As with all mountaintops, the summit of Mauna Kea is magnetic and the views exhilarating. We were there to visit the KECK Observatory, and were fortunate to be there as they opened the dome to the night sky at sunset. The galaxies they observe are so far away that the work is now as much about subtracting or reducing the turbulence and distortions in the way as it is about capturing the distant images. So, as exciting as it was to watch the dome open onto the night sky and see the telescope begin to rotate into position, even more interesting was watching them fire up the laser for the adaptive optics system, to get the clearest possible images of galaxies that are very, very far away. It struck me that, while in some ways not much has changed theoretically in astronomy since I was a girl (maybe dark energy and exoplanets?), it is this area of instrumentation and optics that has revolutionized astronomy during my lifetime.
After the laser shot up into the dark skies to create an artificial guide star for imaging the scientific target, we stood there in the freezing cold, allowing our eyes to get used to the darkness. It took several minutes, but finally they began to appear -- stars upon stars upon stars.
First was Jupiter, more beautiful than I had ever seen her; followed by several very familiar constellations -- old friends I had not seen in decades. And then finally, at long last, the vision of the Milky Way appeared before our eyes in all its majesty. 幽玄 (yūgen: profound, mysterious beauty).
Staring up, I tried to do as C.S. Lewis suggested and imagine that pre-Copernican astronomy is true. The first thing that dawns on one is that for the medievals, no matter how impossibly large the universe was, it was ultimately something finite -- and this, perhaps, is what generates a feeling of being embraced, because it forms a kind of edge, or frontier, as Lewis says.
You will be looking at a world unimaginably large but quite definitely finite. At no speed possible to man, in no lifetime possible to man, could you ever reach its frontier, but the frontier is there; hard, clear, sudden as a national frontier.
And, secondly, because the earth is at the absolute center, it is not just distance that is felt, but height. So, some stars are not simply a long distance from us, but they are far, far "above" us too.
In this way, beyond the gates of the moon, everything was in a timeless and heavenly realm. The stars and galaxies were therefore changeless, necessary, and not open to Fortune. The moon was the gateway between our world of decay and change (and Fortune) and the heavens, which were perfectly finite and regular. Things in the heavenly realm were not evolving; indeed, there were no ultimate causes and effects, not in the way astronomers look for them today. On this topic, Aristotle posited an Unmoved Mover, and as Lewis explains:
The infinite, according to Aristotle, is not actual. No infinite object exists; no infinite processes occur. Hence we cannot explain the movement of one body by another and so on forever. No such infinite series, he thought, could exist. All the movements of the universe must therefore, in the last resort, result from a compulsive force exercised by something immovable...Accordingly we find (not now by analogy but in strictest fact) that in every sphere there is a rational creature called an Intelligence which is compelled to move, and therefore to keep his sphere moving, by his incessant desire for God.
This comes deliciously close to C.S. Lewis's famous Argument from Desire, but it also illuminates the meaning of Dante's theology of love. For the medievals, says Lewis, an unmoved mover does not push other things around like balls on a billiard table; rather, things move themselves out of their own desire, as food moves a hungry man or a mistress moves her lover.
A modern might ask why a love for God should lead to a perpetual rotation. I think, because this love or appetite for God is a desire to participate as much as possible in his nature; i.e. to imitate it. And the nearest approach to His eternal immobility, the second best, is eternal regular movement in the most perfect figure, which, for any Greek, is the circle.
When the medievals looked out at the night sky, they did not see dark skies as we do now; rather, they saw a universe jam-packed with stars and planets and angels and music (Lewis writes beautifully in the essay about how the heavens were filled with heavenly music). And all this activity, they believed, was put in motion not by causes and effects but out of love. Lewis cautions us, though, not to misunderstand Dante's famous line about the love that moves the heavens and stars; this is less about modern conceptions of love, with their ethical connotations, than about appetite or desire. So, as Lewis describes it, the medieval universe was rotating in its desire, or appetite, for God. It was a musical, ordered and festive universe; Lewis says the angels and seraphim spend their time engaged in festivals of great pageantry:
The motions of the universe are to be conceived not as those of a machine or even an army, but rather as a dance, a festival, a symphony, a ritual, a carnival, or all these in one. They are the unimpeded movement of the most perfect impulse towards the most perfect object.
One has to admit that there is something incredibly aesthetically pleasing about understanding the universe in these terms.
That night in Hawaii, seeing once again the great splendor of the night sky remembered from my childhood, I realized how much we had lost. Our gracious and wonderful host at the observatory said that he really understood the Dark Sky Movement, since the vision of the night sky is such a crucial part of our human heritage -- and indeed we have lost so much. Before getting back in the car to go down the mountain, I took one last look at the myriad stars twinkling so beautifully in the sky. Sadly, I recalled Emerson's famous lines about the stars, since the envoys of beauty no longer come out to light the universe with their smiles.
"If the stars should appear one night in a thousand years, how would men believe and adore; and preserve for many generations the remembrance of the city of God which had been shown! But every night come out these envoys of beauty, and light the universe with their admonishing smile.”
The essay is perhaps a dying art in English, but if you have a favorite, I'm all ears.
Fellow time travelers will love Lewis' essay.
I wrote about Hot Gates here: Fighting in the Shade of 10,000 Arrows.
Part One of my Medieval Triptych is here: WINGS OF DESIRE (A MEDIEVAL PHYSIOLOGY)
And an added bonus for your mucho reading pleasure: Richard Dawkins and the Ascent of Madness
KECK in Motion here! And my favorite KECK video of all below--enjoy!
Mental Illness, the Identity Thief
by Grace Boey
I felt a Funeral, in my Brain,
And Mourners to and fro
Kept treading - treading - till it seemed
That Sense was breaking through -
And when they all were seated,
A Service, like a Drum -
Kept beating - beating - till I thought
My mind was going numb -
And then I heard them lift a Box
And creak across my Soul
With those same Boots of Lead, again,
Then Space - began to toll,
As all the Heavens were a Bell,
And Being, but an Ear,
And I, and Silence, some strange Race,
Wrecked, solitary, here -
And then a Plank in Reason broke,
And I dropped down, and down -
And hit a World, at every plunge,
And Finished knowing - then -
* * *
In the poem I felt a Funeral, in my Brain, Emily Dickinson watches a part of herself die as she sinks into insanity. The fragmentation and loss of the Self that Dickinson describes is a common theme amongst victims of mental illness. By their very nature, conditions like schizophrenia, depression and bipolar disorder have a profound impact on one's personality, behaviour and beliefs. Mental illness can rear its head and usurp one's identity at any time; what happens next can be confusing and frightening, for victims as well as their loved ones.
Transformation of the self
A good place to start examining the loss of identity in mental illness is depression. This is something that many of us will experience in some form, if only briefly, at least once in our lives. In addition to simply feeling low, those who are depressed lose interest in pursuing activities they usually enjoy, and struggle with feelings of negative self-worth. I remember watching one of my own close friends slip into depression as a young adult. She was usually a cheerful, kind and bubbly girl. But as her first semester in college progressed, she became increasingly reclusive, pessimistic and irritable. She stopped playing sports as she 'no longer felt like running around', and gained weight as she slept all day.
After witnessing and worrying about her continuous decline for a few months, I suggested she see a psychiatrist. She did. After a few months of being treated with therapy and antidepressants, my friend made a good recovery. She has since bounced back to being more or less her old self. The last time we met, she thanked me for pushing her to get treatment. "I felt so crappy - I had never felt so bad about myself in my life. I didn't know what was happening," she said. "I thought that maybe I was just changing, my personality was changing, and it was normal. But it wasn't. I feel like myself again."
Fortunately, my friend made a clean recovery, and hasn't had a relapse since her brush with depression five years ago. But for many others, the effects of mental illness are much more sticky - the 'old self' that used to exist is not fully recoverable. In a New York Times article, Linda Logan describes her long-drawn battle with mental illness. She writes, "During the 20-odd years since my hospitalizations, many parts of my old self have been straggling home. But not everything made the return trip. While I no longer jump from moving cars on the way to parties, I still find social events uncomfortable. And, although I don't have to battle to stay awake during the day, I still don't have full days - I'm only functional mornings to midafternoons. I haven't been able to return to teaching. How many employers would welcome a request for a cot, a soft pillow and half the day off?"
In another personal account, psychiatrist Karen Hochman recalls how paranoid schizophrenia completely transformed her brother Mark, as he descended into delusion and irrationality. "It was following my mother's death that I believe Mark experienced his first psychotic break. I coped with my grief in my way, which was always with the support of and connection to others. Mark, in his characteristic way, bore his grief on his own. His choices for himself had always differed from mine for myself, but during the years immediately following my mother's death, his ideas, choices, and actions became increasingly incomprehensible to me. His discourse became vague. At the same time, his poorly articulated ideas became of increasing importance in his definition of himself." Six years after his mother's death, Mark hanged himself from a maple tree.
Discourse of 'old' and 'real' selves, of course, presupposes the existence of a healthy or authentic self that is separable from the sick one. For victims whose symptoms manifest from a young age, the notion of such a self in opposition to the sick self might not make sense at all. In the two-part documentary The Secret Life of the Manic Depressive, Stephen Fry describes how he experienced symptoms of bipolar disorder from an early age. Fry began acting out as a young boy, and was expelled from school for bad behaviour at the age of 15. At 17, he was arrested and served jail time for credit card theft.
What might Stephen Fry be like today if not for his manic depression? Lacking any reference point, it's impossible to say. His troubled adolescence is typical of those who are later diagnosed with bipolar disorder - for such people, mental illness swoops in and takes over one's identity before it even gets the chance to develop. Before they seek help, if they do, these people know no other way of being than the one that mental illness has given them.
Losing social identities
In life, we often find ourselves in the social roles we play. The multiplicity of these roles gives mental illness yet more avenues to wreak havoc on our identity. When struggling to keep up with the demands of everyday life, many victims of mental illness come to identify with labels like 'bad friend', 'bad lover', or 'bad employee'.
For a long time, I myself struggled with being a 'bad student'. Like Fry, I began to experience symptoms of bipolar disorder from a young age, which affected my behaviour and performance in school. I was a bright, hardworking child when I entered primary school at 7, and landed myself in a special programme for gifted students. But my performance in school started slipping at around the age of 9 or 10 as my mental life became increasingly troubled. I lost all interest in my academics, and with increasing frequency I was sent out of class or reprimanded for failing to do my homework. By the time I was thirteen, I was a bona fide bad student - I squandered my opportunity in the top girls' school in my country, probably spent more hours in detention than in class, and was effectively kicked out at the age of 14. I continued to underperform and face disciplinary problems right up to my GCE A-Levels.
Luckily for me, the hardworking young girl who once took pride in her studies somehow emerged again in college, after disappearing for a decade. I remember how confused I was when one of my professors praised me for being a 'good student' in freshman year - I couldn't believe I was getting positive feedback from an authority figure in school. I thought to myself, he's got to be sarcastic. Me? Good student? It wasn't until then that I realized how much of my poor self-esteem stemmed from identifying as a bad one. Despite working hard and eventually getting straight As, I constantly expected to fail after each test. To this day, I still suffer from a fear of academic authority figures. I still struggle not to engage in self-sabotaging academic behaviour. I still don't really think of myself as a good girl.
Since seeking help for manic depression, in addition to learning what it means to be a good student, I'm also learning what it means to be a better friend, sister and daughter. But perhaps the most heartbreaking way in which mental illness can harm one's identity is through the role of parenthood. Logan describes the transformation of her identity as a loving parent into one who slept all day, and didn't see much of her children. Before the onset of her depression, she was “twirling [her] baby girl under the gloaming sky on a Florida beach and flopping on the bed with [her] husband.” After the onset of her depression, she struggled with taking care of her kids, slept for long stretches at a time, and was in and out of psychiatric units; all of this affected her sense of competency as a mother. When she was out of the hospital, she hired a full-time housekeeper - while Logan was “appreciative of her help, [she] felt as if [her] role had been usurped.”
Sadly, mental illness can steal one's parenthood before it has even begun. Many with mental illness choose to remain childless - either because they are doubtful of their ability to be a good caregiver, or as a recognition of the reality that mental illness is genetically transmitted. Curtis Hartmann, a lawyer and writer with bipolar disorder, writes that "this, unquestionably, has been the cruelty hardest to bear: no children to love for a man who loves to love."
In search of a sustainable sense of identity
Shifts in identity and personality mean it is a constant struggle for many of the mentally troubled to reconcile the actions of their 'sick' self with their healthy self. Mental illness causes people to do things that are reckless or irrational - things that they later regret, or that are harmful to those around them. Who's this talking - me or my illness? If someone with psychosis experienced the desire to hop off a building, under the delusion that he could fly, he'd likely distance himself from this desire later, saying it was his illness talking and not him. Yet not all behaviours and mental states are so clearly divorced from physical reality. It's often difficult for victims - and those around them - to recognize whether behaviour is attributable to the diminished capacity that illness brings.
There are many technical discussions to be had about mental illness, diminished capacity, blameworthiness, and identity. Such questions are undoubtedly important, particularly for matters of legal culpability. But what good is the diagnosis of clinical depression when one has damaged a relationship beyond repair?
To cope with everyday life, many of those with mental illness simply come to accept their disorder and its effects as "part of the rich tapestry of life", as so eloquently put by one of the interviewees in Stephen Fry's documentary. Some even come to identify with their illnesses in positive ways: when asked, all of the subjects that Fry interviewed said they would not opt out of their bipolarity if given the chance. Manic depression is a way of life that comes with its own richness and perspective.
One may endlessly ponder the philosophical implications of sickness on the authentic self, and wonder what 'would have been' if not for this and that. But the brute reality is, mental illness saddles one with a set of limitations from which one's identity is forced to develop. The mentally ill cannot choose their condition, and although there are steps they can take to seek help and shape their own identities, they must accept that there are many things that will continue to remain beyond control.
Then again, isn't that so for all of us?
Image by Ana Kova.
Some Varieties of Musical Experience
by Bill Benzon
My earliest memory is of a song about a fly that married a bumblebee. I've been told–I don't really remember this–that early one morning I played that record so often that it drove a visiting uncle to distraction.
I don't know how many people count music as their earliest memory, but I surely can't be unique in that. For music is a basic and compelling form of human experience. Martin Luther believed that "next to the Word of God, the noble art of music is the greatest treasure in the world. It controls our thoughts, minds, hearts, and spirits." And so it does.
Which perhaps is why we are so ambivalent about it. If it can control us, then it is dangerous. Why else would repressive regimes have worked so hard to suppress jazz and rock and roll? Why would the Taliban attempt to suppress all music?
But let us set the danger aside. It is the power that interests me.
Some years ago Roy Eldridge, the great jazz trumpeter, told Whitney Balliett (American Musicians: 56 Portraits in Jazz) about playing with Gene Krupa:
When ... we started to play, I'd fall to pieces. The first three or four bars of my first solo, I'd shake like a leaf, and you could hear it. Then this light would surround me, and it would seem as if there wasn't any band there, and I'd go right through and be all right. It was something I never understood.
What's going on? I suppose we could say it had something to do with the brain and nervous system, but what?
In a similar vein Vladimir Horowitz, the classical pianist, told Helen Epstein (Music Talks: Conversations with Musicians): "The moment that I feel that cutaway–the moment I am in uniform–it's like a horse before the races. You start to perspire. You feel already in you some electricity to do something." Again, the nervous system, getting him primed, for what?
"When I'm right and the band is right and the music is right," [Sonny] Rollins said, "I feel myself getting closer to the place where the sound is less polished and more aboriginal. That's what I'm striving for. The trumpeter Roy Eldridge once told a guy he could only reach a divine state in performance four or five times a year. That sounds about right for me."
A divine state? What's that – perhaps it's another one of those things that the nervous system rigs up, no? Perhaps. We might also wonder whether or not it's the same thing that Martin Luther had in mind when he talked of music as "the greatest treasure in the world." And yet they lived in such different worlds, after all: Martin Luther, Sonny Rollins, Roy Eldridge, and Vladimir Horowitz.
The singer Bullard described the feeling to the interviewer Boyd this way:
It's like you leave your body. It's like you're dizzy and lightheaded and yet right there. My hands just seem to throb, like a pulse almost. It's the best feeling in the world, bar none. It took me a lot of singing lessons before I finally connected with that feeling. The first time it clicked and I connected, I nearly fell down, and I started crying.
Is her throbbing like Roy Eldridge's shaking? When he was surrounded by light, was he also dizzy and lightheaded?
Boyd also interviewed Eric Clapton, the rock guitarist:
It's a massive rush of adrenaline which comes at a certain point. Usually it's a sharing experience; it's not something I could experience on my own ... other musicians ... an audience ... Everyone in that building or place seems to unify at one point. It's not necessarily me that's doing it, it may be another musician. But it's when you get that completely harmonic experience, where everyone is hearing exactly the same thing without any interpretation whatsoever or any kind of angle. They're all transported toward the same place. That's not very common, but it always seems to happen at least once a show.
Bullard talked of leaving her body. Clapton spoke of everyone being transported. There's a word for that, ecstasy, from the Greek ekstasis 'standing outside oneself,' based on ek- 'out' + histanai 'to place.' Clapton also notes that this – whatever THIS is – is something that happens, perhaps CAN ONLY happen, with others.
If this is something the nervous system does, then it must be something that happens between nervous systems as well. And, wouldn't you know it? Neuroscientists are now investigating brain-to-brain coupling. What happens if you have two people interacting in some way and you examine what's happening in both brains? You discover that activity in the two brains is similar. What if that activity were exactly – or if not that, very, very closely – similar? What happens to the (remaining) difference between the two?
Some years ago, in March of 2003, I participated in a large anti-war demonstration in Manhattan, where I met Charlie Keil in midtown and followed the demonstration to Washington Square in the West Village. I had my trumpet and Charlie had his cornet, and a bell or two as I remember. As we walked with and through the demo we encountered other musicians too, drummers, bell players, and horn players. Some had come together as Charlie and I had, and had a few routines worked out. But we all were looking to join up with others and see what happened.
There must have been two dozen or so musicians in the stretch where Charlie and I settled. Sometimes we were closer, within a 5 or 6-yard radius, and sometimes we sprawled over 50 yards. The music was like that too, sometimes close, sometimes sprawled.
Sometimes the music made magic. The drummers would lock on a rhythm, then a horn player–we took turns doing this–would set a riff, with the four or five others joining in on harmony parts or unison with the lead. At the same time the crowd would chant "peace now" between the riffs while raising their hands in the air, in synch.
All of a sudden–it only took two or three seconds for this to happen–a thirty-yard swath of people became one. Horn players traded off on solos, the others kept the riffs flowing, percussionists were locked in, people chanted "peace," and the crowd embraced us all. But no one was directing this activity. It just happened.
What was going on in our brains? Did the crowd become, in some way, one mind? That's a real question, real in the sense that one day investigators are going to be able to "instrument" a crowd, collect a boatload of data, and figure out what's going on.
Let's push the issue a bit further. Some years ago the late Wayne Booth, a distinguished professor of English, wrote about his experiences as an amateur cellist, an avocation he shared with his wife Phyllis: For the Love of It: Amateuring and Its Rivals. In November of 1969 Booth was grieving the recent death of his son. In the process of "trying, sometimes successfully, to regain his lost affirmation of life" Booth began drafting a book about life, death, and music. Concerning a performance of Beethoven's string quartet in C-sharp minor, he said:
Leaving the rest of the audience aside for a moment, there were three of us there: Beethoven ... the quartet members counting as one ... Phyllis and me, also counting only as one whenever we really listened ... Now then: there that "one" was, but where was "there"? The C-sharp minor part of each of us was fusing in a mysterious way ...[contrasting] so sharply with what many people think of as "reality." A part of each of the "three" ... becomes identical.
There is Beethoven, one hundred and forty-three years ago ... writing away at the marvelous theme and variations in the fourth movement. ... Here is the four-players doing the best it can to make the revolutionary welding possible. And here we are, doing the best we can to turn our "self" totally into it: all of us impersonally slogging away (these tears about my son's death? ignore them, irrelevant) to turn ourselves into that deathless quartet.
We've seen some of this before; Clapton spoke to the merging of selves, and Eldridge and Horowitz spoke to separation from everyday time and space. Booth's case adds another factor to the mix. If distinctions between one self and another are lost in the music, then what difference does it make that it was Beethoven then and Phyllis and Wayne Booth now?
And let's grant that it's all a matter of something happening in the nervous system – Beethoven's, Wayne Booth's, Phyllis Booth's, the members of the quartet, the rest of the audience, you, me, everyone. So what? On the one hand, until we actually know what's going on in these many nervous systems, referring such–strange, interesting, compelling–phenomena to the nervous system doesn't actually explain anything. It just shoves them under the intellectual carpet.
But one day we are going to understand these things in a way we do not now, perhaps even in a way we cannot now imagine. What then? What if our best current approximation to that advanced understanding is that, yes, in that performance of Beethoven's string quartet in C-sharp minor the boundaries of space, time, and person collapsed and Wayne Booth, Phyllis Booth, the performers, audience, and Beethoven became one? What would Martin Luther say to that?
* * * * *
Back in the 1980s Leonard Bernstein directed a recording of West Side Story using opera singers. That recording session has been documented on DVD: The Making of West Side Story, Leonard Bernstein, Tatiana Troyanos, José Carreras, Kiri Te Kanawa, BBC Television London, UNITEL 1985. And clips from that DVD are on the web. The performance of "One hand, one heart" is devastatingly beautiful:
If you doubt your own experience of that performance, read through some of the comments.
* * * * *
Monday, March 03, 2014
Transcendental Arguments and Their Discontents
by Scott F. Aikin and Robert B. Talisse
Consider the nihilist who provides us with an argument with the conclusion that nothing exists, or that there are no norms for reason. Take the relativist who contends that all facts are relative to some perspective. Note the skeptic who consistently criticizes not only our claims to knowledge, but our very standards. Call such views Transcendental Pessimism. An appealing and longstanding reply to Transcendental Pessimism is that it is self-defeating in some way. The nihilist nevertheless avows a fact and relies on norms of rationality to run the argument for his own conclusion. The relativist isn't just saying that it's all relative to her perspective, but that it's all relative full stop. The skeptic's conclusion that we have no knowledge or have no reliable means to assess knowledge purports to be a knowledge-like commitment held on purportedly good epistemic grounds. The critical line is this: Transcendental Pessimist views cannot be consistently thought. Such views, to make sense at all, must presuppose precisely what they deny.
So far, this self-defeat maneuver against nihilists, relativists, and skeptics is but an inarticulate hunch. Transcendental arguments are attempts at making that hunch explicit: not only how the negative views are self-defeating, but also which positive views are worth preserving. That is, we deploy transcendental argumentation not only as a critical line against Transcendental Pessimism, but also (and perhaps thereby) to establish some positive conclusion. Call this objective Transcendental Optimism.
Immanuel Kant is widely acknowledged to be the first to overtly use the argument type. The primary example of Kantian transcendental argument comes in the Second Analogy of Kant's Critique of Pure Reason. The rough form of the argument runs as follows: One can judge that a series of representations is evidence of a series of events only if one holds that the series is asymmetric (it must happen in that order, not in reverse or some other order). One can believe that the representations are asymmetric only if one holds that the events represented are similarly asymmetric. If a series of states is asymmetric, the earlier states are causes of the later states. Therefore: One can take a series of representations as evidence only if one takes them as evidence of a causal order. Experience can be a source of information only if there is a causal order.
In the 20th Century, Donald Davidson employed a transcendental argument in defense of his thesis of radical interpretation. The criterion for identifying anyone as speaking a language is that of taking their utterances as semantically contentful. The condition for identifying semantically contentful utterances is that of interpreting the things people say to be responsive to events in the world around them. In his essay "Radical Interpretation," Davidson explains the constraint thus: "A theory of interpretation must be supportable by evidence available to interpreters." And so, we must have our defaults set on interpreting others as saying mostly true things. In his influential essay "On the Very Idea of a Conceptual Scheme," Davidson writes, "We make maximum sense of the words and thoughts of others when we interpret them in a way that optimizes agreement." Consequently, we have no intelligible reason to hold that others have different conceptual schemes from us. Radical interpretation is transcendentally dependent on the Principle of Charity.
Now, there are two problems with transcendental arguments: one dialectical, one formal. The dialectical challenge is that transcendental arguments seem either to beg the question or to be otiose. Consequently, they do not play the rebutting or undercutting role in the critical exchange with the Transcendental Pessimist that the Optimist needs them to play. Call this the dialectical dilemma for transcendental arguments.
Consider Davidson's argument. It begins from the requirement that any theory of interpretation must be supportable by evidence of a connection between utterances and the world. Such a requirement is widely held to be a form of verificationism – the view that the meaning of a statement is delineated by the conditions for its confirmation. This view of meaning does all the heavy lifting in Davidson's argument. But no skeptic or relativist or nihilist (no Pessimist) would accept verificationism. So the argument begs the question. Alternately, note that if the verificationism does all the work, the transcendental argument was, in the end, unnecessary. It is otiose. So if you can't convince the relativist of verificationism, you can't run Davidson's transcendental argument, and if you can sell verificationism to the relativist, you don't need the transcendental argument. As a consequence, either way, the transcendental argument is worthless. That's the dilemma.
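Put schematically, with V standing for the verificationist premise, the dilemma runs:
P1: Either the Pessimist grants V or she does not.
P2: If she does not grant V, the argument assumes what is in dispute (it begs the question).
P3: If she grants V, then V alone already secures the conclusion (the transcendental machinery is otiose).
C: Either way, the transcendental argument does no independent work.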
The formal problem for transcendental arguments is that their optimistic conclusions are hopelessly equivocal. Consider a shortened version of Kant's argument:
P1: It is necessary that: Contentful experience is possible for a subject only if that subject deploys the concepts of cause and effect.
P2: Subjects have contentful experience.
C: There must be cause and effect.
Yet the ambitious transcendentally optimistic conclusion C in fact does not follow. The premises rather support a much more modest result:
C*: Subjects must use the concepts of cause and effect.
As Kant puts it, "Experience itself . . . is thus possible only in so far as we subject the succession of appearances . . . to the law of causality; and as likewise follows, the appearances . . . are themselves possible only in conformity with the law." Here we can see the difference between the two kinds of conclusion. The same thing happens in many other forms of transcendental argumentation. In order to ask a real question, one must think there are possible answers; in order to interpret others, one must take them to be in broad agreement with you; the condition for expecting an unsupported stone to drop is believing that gravity is real, and so on. What does not follow from any of these holdings, judgings, and believings are the facts of their assertional contents. That, by the way, was what the Pessimist was affirming all along.
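The slide is from the necessity of the consequence to the necessity of the consequent. Writing the "must" as a box, with E for contentful experience, U for the deployment of the concepts of cause and effect, and C for a causal order in the world, the premises give us:
□(E → U), E ⊢ U
That is, so long as experience is in play, the use of the concepts is mandated; this is C*. What the premises do not give us is □U (to detach that necessity, one would need □E, which no one has), and still less C or □C, which are claims about the world rather than about our conceptual equipment.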
We might call transcendental arguments that show only that something substantive must be used, presupposed, or assumed in order to say anything positive at all Modest Transcendental Arguments. The trouble with modest transcendental arguments, when posed as arguments showing that the use of certain concepts is not optional, or is invincible, as Barry Stroud puts it, is that they sound less like justifications for these commitments and more like exculpations. Just because the concepts or commitments are not optional in having our first-order commitments about the world, minds, and morals does not mean they are justified or good.
The question is whether we can do better than exculpation for our Transcendental Optimism without committing the fallacy of equivocation. We, the authors, think there is a chance of doing better. It looks like this.
If Transcendental Pessimism is self-defeating (you can't consistently believe it), then we have justification for rejecting the view. That justification doesn't guarantee that Pessimism is false, but it does mean we are rational in recognizing that we cannot ever hold the view with positive justification. Notice, now, that Optimism and Pessimism are the only options – if you suspend judgment between the two, you've slipped into Pessimism. It is, to use a term from William James, a forced option. Since we are justified in rejecting Pessimism, we are then justified in accepting Transcendental Optimism. The consequence, of course, is nothing earth-shaking. In fact, the Optimistic thesis was that we were all reasonable in believing that there is a world of causally efficacious things, other minds, and truths all along. The objective of the argument was to make explicit why.
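In skeletal form:
P1: We are justified in rejecting Transcendental Pessimism (it cannot be consistently believed).
P2: Pessimism and Optimism exhaust the options (suspension of judgment collapses into Pessimism).
C: We are justified in accepting Transcendental Optimism.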