Monday, December 08, 2014
Heat not Wet: Climate Change Effects on Human Migration in Rural Pakistan
by Jalees Rehman
In the summer of 2010, over 20 million people were affected by floods in Pakistan. Millions lost access to shelter and clean water, and became dependent on aid in the form of food, drinking water, tents, clothes and medical supplies in order to survive this humanitarian disaster. It is estimated that between $1.5 billion and $2 billion in aid was provided by governments, NGOs, charity organizations and private individuals from all around the world, which helped contain the devastating impact on the people of Pakistan. These floods crippled a flailing country that continues to grapple with problems of widespread corruption, illiteracy and poverty.
The 2011 World Disaster Report (PDF) states:
In the summer of 2010, giant floods devastated parts of Pakistan, affecting more than 20 million people. The flooding started on 22 July in the province of Balochistan, next reaching Khyber Pakhtunkhwa and then flowing down to Punjab, the Pakistan ‘breadbasket'. The floods eventually reached Sindh, where planned evacuations by the government of Pakistan saved millions of people.
However, severe damage to habitat and infrastructure could not be avoided and, by 14 August, the World Bank estimated that crops worth US$ 1 billion had been destroyed, threatening to halve the country's growth (Batty and Shah, 2010). The floods submerged some 7 million hectares (17 million acres) of Pakistan's most fertile croplands – in a country where farming is key to the economy. The waters also killed more than 200,000 head of livestock and swept away large quantities of stored commodities that usually fed millions of people throughout the year.
The 2010 floods were among the worst that Pakistan has experienced in recent decades. Sadly, the country is prone to recurrent flooding which means that in any given year, Pakistani farmers hope and pray that the floods will not be as bad as those in 2010. It would be natural to assume that recurring flood disasters force Pakistani farmers to give up farming and migrate to the cities in order to make ends meet. But a recent study published in the journal Nature Climate Change by Valerie Mueller at the International Food Policy Research Institute has identified the actual driver of migration among rural Pakistanis: Heat.
Mueller and colleagues analyzed the migration and weather patterns in rural Pakistan from 1991-2012 and found that flooding had a modest to insignificant effect on migration whereas extreme heat was clearly associated with migration. The researchers found that bouts of heat wiped out a third of the income derived through farming! In Pakistan, the average monthly rural household income is 20,000 rupees (roughly $200), which is barely enough to feed a typical household consisting of 6 or 7 people. It is no wonder that when heat stress reduces crop yields and this low income drops by one third, farming becomes untenable and rural Pakistanis are forced to migrate and find alternate means to feed their families. Mueller and colleagues also identified the group that was most likely to migrate: rural farmers who did not own the land they were farming. Not owning the land makes them more mobile, but compared to the land-owners, these farmers are far more vulnerable in terms of economic stability and food security when a heat wave hits. Migration may be the last resort for their continued survival.
It is predicted that the frequency and intensity of heat waves will increase during the next century. Research studies have determined that global warming is the major cause of heat waves, and an important recent study by Diego Miralles and colleagues published in Nature Geoscience has identified a key mechanism which leads to the formation of "mega heat waves". Dry soil and higher temperatures work as part of a vicious cycle, reinforcing each other, and the researchers found that drying soil is the critical component. During daytime, high temperatures dry out the soil. The dry soil traps the heat, thus creating layers of high temperatures even at night, when there is no sunlight. On the subsequent day, the new heat generated by sunlight is added to the heat already trapped by the dry soil, creating an escalating feedback loop in which progressively drier soil becomes devastatingly effective at trapping heat. The result is a massive heat wave which can wipe out crops, lead to water scarcity and cause thousands of deaths.
The study by Mueller and colleagues provides important information on how climate change is having real-world effects on humans today. Climate change is a global problem, affecting humans all around the world, but its most severe and immediate impact will likely be borne by people in the developing world who are most vulnerable in terms of their food security. There is an obvious need to limit carbon emissions and thus curtail the progression of climate change. This necessary long-term approach to climate change has to be complemented by more immediate measures that help people cope with the detrimental effects of climate change by, for example, exploring ways to grow crops that are more heat resilient, and ensuring the food security of those who are acutely threatened by climate change.
As Mueller and colleagues point out, the floods in Pakistan have attracted significant international relief efforts whereas increasing temperatures and heat stress are not commonly perceived as existential threats, even though they can be just as devastating. Gradual increases in temperatures and heat waves are more insidious and less likely to be perceived as threats, whereas powerful images of floods destroying homes and personal narratives of flood survivors clearly identify floods as humanitarian disasters. The impacts of heat stress and climate change, on the other hand, are not so easily conveyed. Climate change is a complex scientific issue, relying on mathematical models and intrinsic uncertainties associated with these models. As climate change progresses, weather patterns will become even more erratic, thus making it even more challenging to offer specific predictions.
Climate change research and the translation of this research into pragmatic precautionary measures also face an uphill battle because of the powerful influence of the climate change denial lobby. Climate change deniers take advantage of the scientific complexity of climate change, and attempt to paralyze humankind in terms of climate change action by exaggerating the scientific uncertainties. In fact, there is a clear scientific consensus among climate scientists that human-caused climate change is very real and is already destroying lives and ecosystems around the world.
Helping farmers adapt to climate change will require more than financial aid. It is important to communicate the impact of climate change and offer specific advice for how farmers may have to change their traditional agricultural practices. A recent commentary in Nature by Tom Macmillan and Tim Benton highlighted the importance of engaging farmers in agricultural and climate change research. Macmillan and Benton pointed out that at least 10 million farmers have taken part in farmer field schools across Asia, Africa and Latin America since 1989 which have helped them gain knowledge and accordingly adapt their practices.
Pakistan will hopefully soon engage in a much-needed land reform in order to address the social injustice and food insecurity that plague the country. Five percent of large landholders in Pakistan own 64% of the total farmland, whereas 65% of small farmers own only 15% of the land. About 67% of rural households own no land. Women own only 3% of the land despite sharing in 70% of agricultural activities! Land reform would be just a first step in rectifying social injustice in Pakistan. Involving Pakistani farmers – men and women alike – in research and education about innovative agricultural practices in the face of climate change will help ensure their long-term survival.
Mueller, Valerie, Clark Gray, and Katrina Kosec. "Heat stress increases long-term human migration in rural Pakistan." Nature Climate Change 4, no. 3 (2014): 182-185.
Notes Of A Grand Juror
"A grand jury would indict a ham sandwich, if that's what you wanted."
~ New York State chief judge Sol Wachtler
About a dozen years ago, I had the instructive misfortune to be called for Manhattan grand jury duty. To this day, though, it has armed me with plenty of anecdotes for any sort of "that's the way the system works" conversation. Once you see how the sausage of justice gets made in the courtroom, you can never really unsee it, and that's not a bad thing. The grand jury process – and its failures and possible remedies – is obviously central to the Michael Brown and Eric Garner cases, but in my opinion it hasn't received nearly enough attention. Let me draw on some of my own experiences to illustrate why, and to argue that any meaningful response to Brown, Garner and others must, at least for a start, be sited within the phenomenon of the grand jury.
As context, New York City is one of the few cities that maintains continuously impaneled grand juries to maintain the flow of indictments that feeds the criminal justice system. When I served, there were four such juries, two of which were dedicated exclusively to drug cases. Fortunately, I was selected for one of the other two; after all, variety is the spice of life. During our month-long tenure of afternoon-shift service, we heard 94 cases, and we returned indictments, if I'm not mistaken, for 91 of those. For this service we were compensated $40 per day, which, in a fit of self-serving civil disobedience, I refused to report on my income tax return.
Keep in mind that the purpose of the jury is two-fold: to establish that a crime was committed, and that the person under indictment had some involvement with said crime. This involves the mapping of an often messy reality onto the abstract but finely delineated nature of criminal statutes. To achieve this, the prosecutor – almost always a fresh-faced Assistant District Attorney (ADA) seemingly just out of the bar exam – would present just enough facts to the jury to ensure probable cause for both the crime and the person charged with said crime. The evidence may include testimony from officers, experts or other witnesses, and it ought to be noted that probable cause is a much lower standard of proof than what petit juries encounter in trials, which is the beloved "proof beyond a reasonable doubt."
Note that I haven't said anything about the defense. That's because we saw not a single defendant for any of the 94 cases we heard over the course of December 2003. During our induction into grand jury, we were assured that defendants and/or their attorneys had every right to participate in the indictment proceedings. At some point people on the jury began asking if we would ever see a defendant and the bailiff said it was highly unlikely. The reason for this is our first indication of the particular kind of sausage-making that goes on within the criminal justice system: most cases end in plea bargains. Defense attorneys generally wait for the indictment to find out how incriminating the evidence is, and then act accordingly. If the indictment is backed by strong evidence, the horse-trading around cooperation begins, in hopes of a reduced sentence. Beginning in the 1980s, this was used as a comprehensive strategy by the New York DA's office to dismantle the Mafia: arrest the street-level operators and flip them, one by one, in the hopes of moving up the food chain. Rinse, lather, repeat. More recently, they have tried the same tactic on insider-trading cases, although some have proven tougher to crack than others.
Following an indictment, defense attorneys will counsel their clients to go to trial only if they think they have an exceptionally good chance of beating the rap, if not on the facts of the case then by virtue of a sympathetic judge, and so on. Like all lawyers, defense counselors look at their field of play in terms of scenarios and probabilities. In this sense, the pursuit of "justice" is not a pursuit of truth, but an exercise in risk management, negotiation and compromise. The facts, such as they might be, are there to serve those ends, and not the other way around. This is very important to keep in mind when we come to consider the Brown and Garner cases.
This brings me to the other essential point: recall that we as jurors were instructed to "map" certain statutes onto actual events and people. How do you go about doing this? As noble as "a jury of your peers" may sound, I hope that I am never in a position to be judged in this way. For the law per se is not a simple thing, and this sort of mapping exercise guarantees plenty of ambiguity along the way. For a grand jury that is essentially treated as an indicting machine, a broad variety of statutes come into play. And in the interest of securing an indictment, the DA will throw as many charges as possible against the suspect, in the hopes that at least one will stick.
Fortunately, the state is kind enough to provide a guide to navigating the complexities of statutory law: the prosecutor himself. If you think this is a conflict of interest of the highest order, you would be right. You would also have no choice in the matter. Of course, all the ADAs we dealt with were unfailingly polite and more than willing to read out the relevant statutes as many times as was necessary, but keep in mind that they are in the room to get their indictments. They regretted to inform us that they could not help us in interpreting the evidence in relation to the statute, only the statute itself. That, putatively, was our sacred duty.
So what did I learn while I was a grand juror? For one thing, the cops can pretty much arrest you for anything. Secondly, the people who get busted proceed to get themselves even more busted. Examples include: if your friend is driving you around in his newly stolen car, don't have a stolen handgun on your person (on the other hand, the two may have had some shared instrumentality, which I suppose is reasonable). But you should definitely not have a rock of crack cocaine in your pocket while you jump a subway turnstile. (Of course, if I'd been white while jumping that particular turnstile I probably wouldn't have been searched. Just saying.)
Thirdly, the cops know the law way better than you, and use it to their advantage. Example: a group of four guys are walking down the street, and the police observe two of them conducting a drugs-for-cash transaction. Shortly afterwards, all four get into a car. The cops then proceed to bust them, because the law says that anyone in a car with drugs in it can be charged for possession. Why settle for two collars when you can have four?
Fourthly, cops lie. A lot. We had to put up with some extraordinary claims made by officers, some of whom testified anonymously, in order to protect their undercover identities (it's interesting what anonymity does to your perception of whether someone is telling the truth). You were on the roof of a sixth-floor walkup without binoculars and you saw a drug deal go down four city blocks away? For real? The suspect didn't have any stolen goods on him when he was arrested but somehow had them once he emerged from the police van? No kidding! On the few occasions that we were confronted with particularly egregious lies we threw out the indictments with relish. But more often than not we were left seething amongst ourselves, during the deliberation period that was the only occasion when we were left alone as a group. Just because one cop lied at one point didn't invalidate the entire case if there was an overwhelming amount of other evidence, so in this way the lying cop gets a bye. He knew it, we knew it and he knew that we knew it. It's also worth mentioning that even if we disagreed with the law itself, we nevertheless had no choice but to indict, if the "evidence" was strong enough, as with the example of the four guys in the car above.
Eventually, in the course of our daily proceedings a curiously adversarial dynamic developed. As a jury, we did our best to establish a solid understanding of what transpired in any given case. But much of it felt like being in Plato's cave: we saw only what the prosecutors and police wanted us to see, and they guided us, as much as possible, in how to see it. Due to the confidential nature of the proceedings, note-taking was prohibited. And without the counterbalancing presence of a defense counsel, or the salutary effects of cross-examination, the end result was, more often than not, a shrug of the shoulders and a vote to indict.
To my further dismay, this happened with increasing frequency, especially as we approached the Christmas holidays. Unlike the zero-sum game that is a petit jury trial, a grand jury invites a further dilution of responsibility that goes something like this (and here I am pretty much quoting a fellow juror): "Well, an indictment isn't that big of a deal, the defense attorney can figure out what to do with it next, and at the worst the guy will get a fair trial." What this indicates is proximity bias more than anything else: the first time you raised your hand to indict someone it was a very big deal, but now that you've done 60 of them and you're really thinking about having to see your in-laws again, it's really not such a whopper.
In general, a modicum of intellectual rigor is required to attend to this process with any sense of awareness and responsibility. And yet we had jurors whose English was far below the standard needed to follow legalese; who probably hadn't had to think analytically about anything in decades; or who just plain didn't care, or rapidly reached that point. If there is anything accurate about Reginald Rose's "12 Angry Men," whose quotes and stills pepper the present article, it is the fact that a jury's seats are by no means guaranteed to be occupied by reasonable, disinterested citizens (thank goodness Henry Fonda was one of them). To this day, if there is a better argument for why a liberal arts education remains of vital importance to our society, I cannot think of one.
"Look, you know how these people lie!
It's born in them…they don't know what the truth is!"
~ Juror 10 (Ed Begley)
If the purpose of the system is to generate indictments, then the system works really well. Hence the well-known quote from Chief Judge Wachtler about the indictability of ham sandwiches. It's not the masterful rhetoric of the prosecutor, the infallibility and selfless dedication of the police, or the relentless pursuit of truth. It's the fact that the incentives are all lined up correctly to produce indictments. The cops provide the evidence and the warm bodies, the prosecutors the indictments. Each depends on the success of the other.
This extends beyond the hermetic enclosure of the courtroom, since the district attorney is elected and must do his level best to gain the endorsement and support of the police union. (If anyone doubts the importance of the union in the eyes of a cop, please consider the recent stairwell shooting of Akai Gurley, where the two patrolmen in question were MIA for the first six minutes following the shooting. It turns out that Officer Liang, who allegedly fired the shot, was texting his union rep.) The grand jury, as blind as Justice itself, stammers and dodders its way through the mess, eventually just glad to get it over with. Not quite a rubber stamp, but not too far off, either.
Now, all of this falls apart in a grand way when the tables are turned and it is the cops that are under indictment. Suddenly, the whole system of incentives is under threat of short-circuiting. Because, if I have sketched it out well enough, the point of the system is not the disinterested pursuit of justice; nor is it the ongoing process of risk management, negotiation and compromise; but rather it is the perpetuation of the system itself. In this sense it is no different from any other bureaucracy. In order for the system to remain coherent and orderly, indicting cops is to be avoided at all costs.
How do the participants extricate themselves from this? As usual, The Onion is on it with a handy guide. But in fact the answer is even simpler. One thing that may have been only implicit in the above description I should now make explicit: in none of the 94 cases we considered did the DA fail to recommend charges. Remember that an indictment is a mapping exercise. It is inconceivable to take a group of lay people and just point them to a book of criminal statutes. And yet, thanks to the extraordinary release of the complete transcript of the Darren Wilson indictment, we know that this is precisely what happened. Remarkably, this action seems to have been within the DA's discretion. Moreover, in the few pages that were released concerning the Garner case, there was no mention of what charges – if any – were recommended to the jury. From viewing the videotape, it's pretty incredible to think that Daniel Pantaleo, the officer in question, could not be charged, at the very least, with involuntary manslaughter.
Now, we can talk all about the latitude that use-of-force laws grant in the courtroom, etc etc, but if the jury isn't even told what statutes might possibly apply, it's pretty uncertain that they will come to agree on anything. As an example, consider the fact that, during our grand jury induction, we were told that not only did we have the right to strike down the charges recommended to us by the DA, but we also had the right to search out other statutes and recommend them to the DA as charges instead. Not that we ever did that – safe as houses, we were.
Still don't believe the lengths that the system will go to protect itself? Consider another, fairly unpublicized detail in the Garner case. If you've seen the video (and, truth be told, we don't know if or how much of it was seen by the grand jury), you'll notice that Pantaleo isn't the only cop around. What about those other guys? The five-or-so other cops involved in taking Garner down were all granted immunity from prosecution in return for their testimony. Obviously, the DA was wasting immunities, since their testimony was such shit that he couldn't get an indictment from cherry-picking what those five eyewitnesses saw. And Pantaleo, like Darren Wilson in the Brown trial, testified before the grand jury himself, so I guess defendants do show up under extraordinary circumstances. In any case, no one was mistaken for a ham sandwich here, folks.
Back in the real world, the failure to indict the police responsible for the deaths of Brown and Garner has spawned an understandable backlash of protest. But while the subject of protest is clear, the objective is emphatically unclear. Much like the Occupy protests following the 2008 financial crisis, people accepted that there was plenty to protest about, but that fledgling movement lost much credibility because its actual demands remained illegible. Now, these latest protests are part of the mighty stream of the civil rights movement, so credibility is not what's at stake here. Rather, I fear that the opportunity for real, targeted reform will slip us by, because as it is presently constituted, the system will continue to not indict police. It simply has no other choice.
People can shout about structural racism all they want, and they can go down the rabbit holes of stop-and-frisk, police body cams, reparations, or whether #crimingwhilewhite is an unworthy hashtag (for fuck's sake). Most of these are worthy causes but, since they do not address the procedural site that is clearly at the heart of the matter, attempts to address police violence through the court system will run relentlessly into the same bottleneck as before. Rather, the system of incentives needs to be broken at exactly this critical juncture. To this effect, I propose that any killing carried out by police be immediately referred to a special prosecutor – one who is outside of the Backscratchistan fiefdom that we currently have for handling run-of-the-mill cases. I cannot imagine I am the first to do so.
This was further refined in a recent discussion with fellow 3QD author Jeff Strabone, who suggested, quite correctly, that the referral should be made automatic for the killing of any unarmed civilian. Since this type of change would have to be enacted by the relevant state legislature, including the fact that the victim was unarmed creates the additional advantage of being politically much more difficult to resist. Without this kind of reform #BlackLivesMatter and #ICantBreathe will soon enough join #Kony2012 in the #DustbinOfHistory.
But perhaps the solution is even simpler. As Jami Floyd noted to WNYC's Brian Lehrer the day after the indictment against Officer Pantaleo was thrown out, the United States is the only country to still use grand juries to decide anything. When one considers that at least two other countries still use the Imperial system of measurements (the United States being in the august company of Liberia and Myanmar), it is amazing to consider that, globally speaking, the pound and the foot enjoy more popularity than grand juries. But we've always been proud of our exceptionalism, haven't we?
Monday, November 10, 2014
In Trust We Truth
"All this – all the meanness and agony without end
I sitting look out upon
See, hear and am silent."
~ Walt Whitman
On a recent Facebook thread – about what, heaven help me remember – someone posted a comment along the lines of "This is what happens when we live in a post-truth society." I honestly cannot recall what the original topic was about – politics? GamerGate? Climate change? Who knows – you can take your pick, and in the end it's not really that important. The comment struck me as misguided, though, and led me to contemplate not so much the state of ‘truth' as a category, which has always been precarious (see: 2,500 years of philosophy), but of the conditions that may or may not lead to the delineation and bounding of what we may consider to be sufficiently, acceptably truthful, and how technology has both helped and hindered this understanding today.
I responded to the commenter by suggesting that we live not so much in a 'post-truth' society as a 'post-accountability' society. It is not that truth is disrespected, distorted or ignored more than ever before, but rather that the consequences for doing so have (seemingly) dwindled to nearly zero. One could argue that this is vastly more damaging, because the degree of our accountability to one another profoundly influences how, and whether, we can arrive at any sort of truth, period. Prior to the onset of information technology, there were well-established (and, of course, deeply flawed) mechanisms for generating and enforcing accountability. Now information technology, which has relieved us of accountability, is already so deeply woven into our society that not only will we never put the genie back in the bottle, we are at a loss to imagine how to ever get this genie to play nice. Except that this kind of righteous outrage is, in fact, entirely an illusion.
Instead of arguing about truth as an objective, abstract and hopefully attainable category, let's assume that truth (or whatever you want to call it) is a sort of consensus, and that consensus is reached through processes of trust (we respect each other's right to have a say) and accountability (we take some responsibility for what we say to each other). These are all fundamentally social processes, and as such haven't really changed very much over time. What interests me is how the insertion of technology into this discourse has changed our perceptions of the burdens that these concepts –truth, consensus, trust and accountability – are expected to bear.
Roughly speaking, technology has begotten two completely contradictory streams of development in this regard. This is old news – one person finds a better way to make fertilizer and someone else finds a way to build a better bomb using that fertilizer. In this sense technology merely functions as an amplifier for whatever tendencies are coursing through society's veins. Within the context of accountability, the two streams may seem to be paradoxical, but this is only superficial. Let's first touch on how technology has played a largely beneficial role in the elaboration of the paradigm of accountability.
Most obviously, there are the successes that have allowed a tremendous blossoming of commerce. An early, pressing problem faced by ecommerce was the creation of trust between buyers and sellers in an anonymous, disembodied marketplace. Buyers were interested in what they could buy online, but reluctant to fork over cash to anonymous strangers. In 1995, eBay was one of the first to propose a simple accountability mechanism for trader-to-trader transactions: buyers and sellers left feedback for one another confirming (or critiquing) speed of shipping, quality of goods, etc. Today the approach is received wisdom, but at the time no one knew if it would actually work. Yet this feedback system has continued to underpin the success of eBay and many other ecommerce sites, as witnessed by the success of Alibaba, current record-holder for the world's largest stock market IPO. It's no mean feat to create trust between buyers and sellers in a market as notoriously dodgy as China's.
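Part of the appeal of this mechanism is how little machinery it actually requires. A toy sketch of the idea might look like the following (the class and method names are my own invention for illustration, not eBay's actual system, which weights and ages feedback in far more elaborate ways):

```python
from collections import defaultdict

class FeedbackLedger:
    """Toy model of a trader-to-trader feedback mechanism."""

    def __init__(self):
        # trader -> list of +1 (positive) / -1 (negative) ratings
        self.ratings = defaultdict(list)

    def leave_feedback(self, trader, positive):
        """A counterparty records one rating after a completed transaction."""
        self.ratings[trader].append(1 if positive else -1)

    def score(self, trader):
        """Fraction of positive feedback; a trader with no history has no reputation."""
        votes = self.ratings[trader]
        return votes.count(1) / len(votes) if votes else None

ledger = FeedbackLedger()
ledger.leave_feedback("honest_abe", True)
ledger.leave_feedback("honest_abe", True)
ledger.leave_feedback("honest_abe", False)
print(ledger.score("honest_abe"))  # 2 of 3 ratings positive -> 0.666...
```

The crucial design point is that the score is built entirely from the accumulated judgments of strangers, with no central referee: accountability emerges from the ledger itself.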
Moreover, the applications of this mechanism seem to have grown well beyond the simple trader-to-trader transaction. We are now accustomed to reading book reviews on Amazon, restaurant reviews on Yelp, accommodation reviews on TripAdvisor, among many others. Reviews are also arguably being used to put the screws on part-time entrepreneurs such as AirBnB hosts and Uber drivers, but that is a topic for another time. It is sufficiently uncontroversial to say that, in a very concrete sense, we are becoming ever more reliant on an army of anonymous commenters to help us in our sensemaking of what to read, eat, buy or see.
Trust and accountability mechanisms have expanded in even subtler ways, specifically in the way that machine participants trust one another within a given system. Perhaps the most compelling example of this is bitcoin, the crypto-currency whose wild price oscillations (and shady applications) managed to grab global headlines for, well, at least a few minutes. The obvious need to prevent a party from double-spending an amount of bitcoin, which after all is a bunch of numbers sitting on a hard drive somewhere, led bitcoin's designers to include the notion of a block chain. The block chain accomplishes this through a concept called proof-of-work:
[Proof-of-work] is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though that's now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation.
Basically, each machine on the network must validate all transactions, and all transactions must match across all machines. In the meantime, all transactions remain anonymous, even though the block chain, stored on each participant's machines, retains the entire record of all transactions (you can really go down the rabbit hole here). The computational intensity required means that no one individual can fake a transaction and fool the other participants. This is counterintuitive because we think of the goals of software design as privileging lighter, faster and simpler solutions.
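To make the "computationally costly" part concrete, here is a minimal proof-of-work sketch in Python. It is a deliberately simplified stand-in for bitcoin's actual scheme, which hashes entire block headers against a far stricter difficulty target, but the asymmetry it demonstrates is the same:

```python
import hashlib

DIFFICULTY = 4  # required number of leading zero hex digits; bitcoin's real target is vastly harder

def mine(block_data):
    """Brute-force a nonce so the block's hash meets the difficulty target. Costly."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce, digest
        nonce += 1  # this loop is the "work": thousands of wasted hashes

def verify(block_data, nonce):
    """Checking a claimed solution takes a single hash. Cheap for everyone else."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce, digest = mine("Alice pays Bob 1 coin")
print(verify("Alice pays Bob 1 coin", nonce))   # True
print(verify("Alice pays Bob 2 coins", nonce))  # almost certainly False: tampering invalidates the proof
```

The asymmetry is the point: finding the nonce is expensive, while checking it takes a single hash, so every participant can cheaply audit everyone else's claimed work, and faking a transaction would require out-computing the rest of the network.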
A waggish take might see this as little more than make-work for the digital age. Nevertheless, the critical element here is that there is no central authority that vets the transactions. The network validates itself as it goes along, and, if everything works as it should, participants that act in bad faith are rooted out as a matter of course. I suspect that this sort of decentralized, distributed trust mechanism will find itself refined and deployed in many ways – for example, in credit systems for validating bottom-of-the-pyramid consumers. But it also occupies an important place within our narrative: this is what accountability looks like if you're a machine. From the point of view of a machine, it is a straight line from accountability to trust, and from there to consensus and truth. You just need plenty of electricity.
The looming problem with all the cases I have described so far is that they fall within a very narrow category: that of trader-to-trader transactions. In every case, the subject under discussion is clearly an object or service that is to be consumed (or evaluated or whatever – but the final purpose is consumption, let's be clear about that). There is always an implied value at stake – the feedback or ranking or other process being applied to it is simply there to clarify, refine or nudge the final value one way or the other. This is the meat and potatoes of not just microeconomics, but almost every "disruptive" idea to come out of Silicon Valley. As a result, the amount of attention these cases command is far out of proportion to our sensemaking as a whole. In this worldview, truth is indistinguishable from, or is rather interchangeable with, price discovery.
But there is still all that squishy stuff where technology has hung us out to dry. Why has technology failed to help us resolve, on a social level, issues like the link between autism and vaccines, or whether Barack Obama was born on American soil or not? Let alone the realities of climate change or evolution? Why do sites like Snopes.com or the Annenberg Center's FactCheck.org seem to be engaged in a Sisyphean struggle to disabuse us of disinformation, or why do we need them at all? Most importantly, why has technology, which otherwise has been such a staunch ally in concretizing the invisible hand, been unable to bring us any closer when it comes to a shared set of values?
At the beginning of the second essay of In The Shadow Of The Silent Majorities, French philosopher Jean Baudrillard writes:
The social is not a clear and unequivocal process. Do modern societies correspond to a process of socialisation or to one of progressive desocialisation? Everything depends on one's understanding of the term and none of these is fixed: all are reversible. Thus the institutions which have sign-posted the "advance of the social" … could be said to produce and destroy the social in one and the same movement.
Baudrillard asserted that political action – or at least, the kind of political action that mattered – becomes impossible when social processes disallow the "masses" from anything but the observation of spectacle. This process takes protest – or for that matter any kind of political action – and subsumes it into media, which then converts it into merely another object for consumption. Writing in 1978, Baudrillard was essentially finishing off Marxism as a plausible revolutionary theory. But he was mostly concerned with top-down media technologies and the manner in which once-meaningful events are rendered into meaningless theater, or rather whose meaning resided exclusively in their own theatricality. A good example is his examination of the transformation of political party conventions here in the United States. Once political conventions became televised, decisions of any consequence ceased to be made at those events. They simply became spectacle; the spectacle of the thing in question becomes the thing itself. If you want a good overview of what he had in mind, see Paddy Chayefsky's "Network", filmed a few years earlier: Howard Beale and the Ecumenical Liberation Army are essentially Baudrillardian poster children.
A good twenty years later, the World Wide Web began its inexorable crawl across (and of) the globe. Baudrillard was a troublemaker and a provocateur, so I assume that he would have gleefully jumped on the subject, but in a 1996 interview he admitted "I don't know much about this subject. I haven't gone beyond the fax and the automatic answering machine…. Perhaps there is a distortion [of oneself online], not necessarily one that will consume one's personality. It is possible that the machine can metabolize the mind." In one of his last major works, The Vital Illusion, he lamented in a Nietzschean fashion that "The corps(e) of the Real – if there is any – has not been recovered, is nowhere to be found."
Fifteen years after publication of The Vital Illusion, we are in a better place to evaluate the effects of technology, and the view is not encouraging. For the same mechanisms that have allowed such a preternatural calibration of transactional value seem to be undermining consensus around values that cannot be transacted. The fact is that there is an entirely different set of assumptions at work here. Venkatesh Rao put it well on his stimulating blog, Ribbon Farm, when he discussed the differing nature of transactions when participants are price-driven (i.e., traders) or values-driven (as he puts it, saints):
Traders view deviations from markets as distortions, and fail to appreciate that to saints, it is recourse to markets that is distortionary, relative to the economics of pricelessness. Except that they call it "corruption and moral decay" instead of "distortion." To trade at all is to acknowledge one's fallen status and sinfulness.
If we consider the insertion of technology into this dynamic, the fact emerges that we have not designed technology to help us in our, shall I say, more saintly endeavors. Technology subsumes these squishier, values-driven behaviors into itself as best as it can, but it cannot ever do so completely. What's left is the flotsam and jetsam of Reddit, White House petitions, comment threads anywhere, Anonymous and LulzSec and cross-platform flame wars ranging from Mac vs PC to Palestine vs Israel. There is no shortage of bridges under which Internet trolls lurk, waiting to pounce on anyone who displeases them.
For anyone who doubts that there are real-life consequences, GamerGate is perhaps the best example. When the women targeted in this shitstorm are confronted with such a quantity of death and rape threats that they flee their homes, or are forced to cancel speaking engagements because a university cannot guarantee that someone won't bring a concealed weapon to a lecture, I am left with a distinct pining for that good old Baudrillardian unreality. Whether there will be any real-life consequences for the people who commit such acts remains to be seen. Furthermore, there is no reason why unaccountability cannot, and will not, continue its expansion. Like cosmic inflation, it does not need a reason to keep going, or anticipate a boundary to contain it.
There is an old Wall Street adage about any significant market downturn: "When the tide goes out, you see who's been swimming naked." The Web has flipped this on its head: the tide just keeps coming in, and more and more people are leaving their trunks on the beach. Moreover, it is simply too late to redesign the Internet for greater accountability. The last (or first?) idea that had any hope of accomplishing this was Ted Nelson's Xanadu Project. Nelson invented the very idea of hypertext, but in his world, which he originally conceived in 1960 and which is detailed in one of the best articles ever to appear in Wired, every image or piece of text would be traceable back to its source. This past June, in an Onion-worthy headline, The Guardian announced the "World's most delayed software released after 54 years of development".
Perhaps in another, alternative universe, Xanadu became the default design template for an Internet that encouraged accountability for more than just prices. In the meantime, and back in this universe, what technology has exposed is only what we have always known: that we are a fractious, quarrelsome and undependable lot. This is why I maintain that any hand-wringing about the state of the conversation on the Web is ultimately a red herring. That we haven't designed one of our most extraordinary technological infrastructures to help us get closer to any sort of ‘truth' shouldn't surprise us in the least. As for the original Facebook conversation that sparked this contemplation, after I made my ‘post-accountability' suggestion, my comment received a dutiful ‘like' or two. As far as civilized dialogue goes, I'll take it.
Monday, November 03, 2014
Islam, Colonization, Imperialism and so on
by Omar Ali
At about 6 pm on Sunday evening, a young suicide bomber (said to be 18 years old) blew himself up in a crowd returning from the testosterone-heavy flag lowering ceremony held every evening at the India-Pakistan border at Wagah, near Lahore.
Presumably this young man (a true believer, since a fake believer would find it hard to explode in such circumstances) had wanted to target the ceremony itself, which is usually watched by up to 5,000 people every day, most of them visitors from out of town. But the military had received prior intelligence that something like this might happen and had set up six checkpoints, so he was unable to reach the ceremony. Instead, he waited around the shops about 500 yards away from the parade site and exploded when he felt he had enough bodies around him to make it worth his while.
About 60 innocent people died, many of them women and children, including eight women from the same poor family from a village in central Punjab who were visiting relatives in Lahore and decided to go to the parade (whether as entertainment, or as patriotic theater, or both). The bombing was instantly claimed by more than one Jihadist organization but it is possible that Ehsanullah Ehsan’s claim will turn out to be true. He said it was a reaction against the military’s recent anti-terrorist operation (operation Zarb e Azb: “blow of the sword of the prophet”), that his group wants "an Islamic system of government" and that they would attack infidel regimes on both sides of the India-Pakistan border.
The Indian authorities decided to suspend their side of the parade for the next three days. But on Monday evening, the Pakistani side decided to hold their parade as usual and a crowd was on hand. Cynics have pointed out that most of the “crowd” looked like soldiers in civilian clothes, but that is not fair. The “show of resilience” meme is a very ancient and well-developed meme with solid credentials and should not be easily dismissed. I personally wish both India and Pakistan would end this ridiculous ceremony someday (soon), but on this particular occasion a show of resilience was the smart move. But then the respected commander of the Pakistani army corps in Lahore, General Naveed Zaman (an outstanding officer, himself on the Taliban’s hit list for his role in various anti-terrorist operations) made a statement and beat his chest a bit about how we are a brave nation, we are back the next day and “look, on the Indian side it’s like a snake has sniffed them”, the implication being: they are cowards, they didn’t show up, but look at us, we are back and we are strong.
This is par for the course for the Pakistani army (whose propaganda software was designed and built for only one enemy, and whose soldiers are motivated to attack Jihadi terrorists by being told that the Jihadists are all Indian agents, I am not kidding) but is still telling: the day after one of the biggest massacres of civilians by a Jihadist terrorist bomber (there being no other kinds in our area these days, though the Tamil Tigers showed that a Tamil Hindu version is indeed possible, and in fact preceded the adoption of this particular weapon by Islamist terrorists) the senior army officer in the region could only taunt the Indians across the eastern border.
Meanwhile, in Nigeria, the Boko Haram terrorists announced that most of the 276 girls they kidnapped have been “converted to Islam” and married off. So the matter is settled.
And in Iraq, the “Islamic State” has been buying and selling captured Yezidi girls as slaves in the best medieval Arab tradition. In the video below, the young men of IS can be seen joking about the topic (the translation is by Jenan Moussa, an Arab journalist, not by MEMRI, so discerning viewers can view it without violating any of the standard guidelines):
Boko Haram has also gone ahead and blown up some Shias in Nigeria as they commemorated Moharram, while their fans have apparently shot a Shia in the face in, of all places, Sydney.
My point is this: the Salafist-Jihadist meme, so carefully nurtured and brought together in the Afghan-Pakistan border region by Pakistan, Saudi Arabia and the US in the 1980s, is now global and will soon come to your neighborhood if your neighborhood happens to be in the core Islamicate territories of the Middle East, India, Southeast Asia, Londonistan or Mississauga. Many different narratives about this phenomenon are in the market, ranging from Neocon propaganda and Fox News to Islamist apologetics and Marxist “class-based analysis”. For Western and Westernized liberals of a particular disposition, there are also “commentators” like Pankaj Mishra, who can be relied upon to press all the politically correct buttons without committing to anything resembling a coherent description, prediction or prescription. I would like to add some random thoughts to this mélange:
1. We are all human beings. And in the great Eurasian landmass, we have been mixing, biologically and culturally, for thousands of years. It is not possible that a relatively recent religious movement (Islam) has somehow significantly altered the biology of the people involved. This is a trivial observation, but some people on both sides of the liberal-conservative divide seem to have some misapprehensions about this, so it is worth reiterating. Going beyond that, I would add that even as a cultural phenomenon, Islam is not from some other planet. It evolved within pre-existing cultures, borrowing and altering already existing cultural memes. Much of “Islamic history” is the history of an initial (very successful and very extensive) Arab conquest, followed by some further conquests (primarily in Central Asia and India) by Islamicized Turkic invaders. Only in Indonesia and Malaysia did the initial wave arrive as traders and the subsequent conquests and conversions were almost entirely the work of local converts. This makes early South East Asian Islam a bit of an outlier, but that is another story. Only by disregarding most of history can we regard these conquests (and their associated missionary activities) as somehow completely unique. There are some peculiar features of Islamicate civilization, but not as many as its fans or its detractors would like to claim.
2. That being said, Islamicate civilization developed a remarkable degree of consensus on its core doctrines in the Islamic heartland. Even Shias and Sunnis converged on similarities in daily life and communal attitudes towards non-Muslims, towards women, towards apostasy, towards blasphemy, towards the notion of holy war. While agreeing with Razib Khan’s views about the relative unimportance of theology in general, I think modern life and the recent experience of colonization, decolonization and its associated psychopathologies have led to an unusual situation in the Islamicate world: while the pressures that cause religious revivalist movements or “fundamentalist” movements may be similar in non-Muslim communities (hence Sikh, Hindu and Buddhist identity-based semi-fascist fundamentalist movements), the material that is available to these movements and the historical background of the religions involved make it difficult to associate a detailed “shariah” with any of those movements. Sikhs can ban tobacco and kill blasphemers and traitors, Buddhist mobs can kill Muslims without compunction in Myanmar and Sri Lanka, Hindu nationalists ban beef and carry out pogroms, but the notion of a Sikh state or a Hindu state or a Buddhist state is mostly the notion of a state where their co-religionists hold sway (or even hold exclusive title), and lacks consensus on any well-developed legal code or even theology. This is not the case with Islam.
3. There is such a legal and theological framework in Islam and it has wide support in principle. In principle is, of course, not the same as in practice. Most Muslims know as much about Muslim theology as Christians know about Christian theology, which means they know very little. But because of widespread beliefs about blasphemy and apostasy, this “in principle” support translates into an inability to frontally challenge those who come armed with more detailed Islamic knowledge. For example, most Pakistanis may have no idea that classical Islamic law permits slave girls to be captured, used for sex (without marriage) and bought and sold as desired. If and when IS comes to Pakistan and wants to talk about buying and selling slave girls, most people will probably be shocked. It is possible that most people will initially even find some way to say this is wrong. But it is also my guess that when face to face with an IS ideologue, most people will be unable to argue for too long. Because he will have classical Islamic texts on his side and his opponent will have nothing beyond his human intuition of fairness and good behavior. Intuition will not stand against argument. And there will probably be no argument for too long because to argue too much would cross over into the zone of blasphemy. And most people (except maybe for the tiny sliver educated in Western or Western-style universities and out of touch with their own traditions almost completely) believe that blasphemers should be punished, and at least for the most extreme kinds of blasphemy, the punishment should be death. This, by the way, is just a simple empirical fact, easily checked if you step out among the people in that region.
4. Whenever the existing state order (in almost all cases, the product of recent Russian or West European colonization, so somewhat suspect in any case) falls apart, the next common denominator tends to be Islamist. And among those Islamists, the ways of the golden age are not some distant myth. Those books are still around, still honored, still relevant, still protected against criticism by blasphemy and apostasy memes. And those books include rules for holy war, for slaveholding, for female legal inequality, etc., that are no longer fashionable in the modern world. That is just how things happen to be.
5. The ruling elites in most Islamicate countries are not Islamist in practice and may not be so in principle either. But having taken the path of least resistance (or having received their Islam from Karen Armstrong or post-Marxist theorists) they have acquiesced in the glorification of medieval Islamicate norms, not as past history but as guides to present behavior. They will now be (literally in many cases) hoist on their own petard.
6. Elements of the ruling elite (especially in South Asia, where penetration of Western postcolonialist/postmodern/post-Marxist garbage has been most extensive within the elite) are vigorously opposed to many of these medieval norms, but have disappeared into an alternate universe where only White people have agency and therefore only White people are responsible for all events. This has effectively taken them out of the equation for now. They remain mostly harmless, but the opportunity cost of their withdrawal into la la land is not insignificant.
7. As the Bill Maher-Ben Affleck affair has shown, Western Liberals are generally clueless about Islamic history and the status of (most of) the Islamicate world with regard to issues like freedom of religion, freedom of speech, feminism and suchlike. This is NOT to endorse a particular Whiggish vision of history as the only valid path, with every community situated somewhere along the timeline from barbarian to Western liberal democracy. But it is to emphasize that opting out of this linear timeline is one thing, pretending that everyone is already at point X on the timeline while paying lip-service to multiculturalism is another. If Ben Affleck thinks that Western standards of “liberal democracy” (however defined and whether regarded as an endpoint or not) are not to be applied to everyone on the globe and that these standards are being used to demonize and colonize those who hold to different values and models, then he has a leg to stand on. But he (or others like him) seem to lose this admirable level of “nuance” when they get to specifics. Instead of saying that Pakistani Muslims do not permit free speech when it comes to X, Y and Z and who are we to comment or interfere (especially when we are just using this commentary to justify our invasion of this or that country), they are saying “there is no real difference in free speech norms between X and the US”, which is patently absurd. Other liberals (too numerous to list) will look at history as if European powers have real histories (with colonization, oppression, invasions, decimations etc, also with progress, emancipation, democracy, etc.) and everyone else lived on some other static planet with no history, no past and no future. I don’t have to go into detail, Wikipedia can solve this issue for anyone these days, but it is still surprising how few people will bother to even read Wikipedia before brandishing absurdities in this matter. 
The opportunity cost for this (loss of some Western liberals) is perhaps insignificant in real life, but since I tend to interact with some of these (very nice) people, I obsessively comment about them. Hence this comment.
8. More after I get some feedback; many or most of these comments are very likely to be misinterpreted by many people. This is partly because I am not a good enough writer, but partly because all of us use various heuristics to slot every commentator into pre-existing boxes. To see a little of where I am coming from, some of the following articles may be helpful. Thank you.
Monday, October 13, 2014
The Brooklyn Gentrifier's Playbook
"A New Yorker is someone who longs for New York."
These days, when the inevitable question of "What do you do?" pops up at a cocktail party or some such, I now simply answer, "I live in New York." A credulous follow-up might wish to clarify whether that is, in fact, how I make my living, at which point I try to steer the conversation to kinder, gentler topics. But after living in New York for 15 years, I feel my response is both perfunctory and justified. Anyone so deeply immersed in the city knows that living here really is its own full-time occupation, since the city demands constant observation and reflection. And New York is especially amenable to this, given the breadth, density and accessibility of the city's neighborhoods, as well as New Yorkers' guileless embrace of real estate as a primary subject of conversation. It is perhaps the only city I know of where a stranger can walk into your apartment and ask, within the first 15 minutes, how much rent you pay for the privilege, and expect an answer.
In this vein, there has always been much talk about gentrification: where it is happening right now and where it will happen next, whether the desirability of the outcomes outweighs the costs, and, especially, who is being ousted. This last is not so much about the residents themselves, but rather the ongoing disappearance of beloved restaurants, bars and retail establishments, for example as documented by Jeremiah Moss's Vanishing New York. So what can be said about gentrification that has not already been said? Honestly, not a whole lot. There are still no good answers or responses, especially as New York reassesses its post-Bloomberg future.
However, gentrification has increasingly been treated as a monolithic concept, when in fact it is an umbrella term describing a continuum of variegated and uneven urban processes. The ‘improvement' of any neighborhood is the result of a bevy of actors, operating within a legal and social context that is unique to that neighborhood, and that itself sits within the larger context of the city and the state. Finally, even global financial circumstances play a role, for example, artificially low interest rates and the ease with which capital may travel. When gentrification is seen as a monolithic process, it is difficult to think about it as anything other than inevitable. But if we consider the different processes that are collapsed into this single rubric, or more accurately, the different scales and velocities at which gentrification occurs, then we will be better equipped to engage the phenomenon itself, and not merely the label.
The late geographer Neil Smith clearly identified this in the late 1970s. First in his dissertation and then in his subsequent work, he characterized gentrification, especially in its accelerated forms, as fundamentally a process of capital, not of people.
Since the 1970s, gentrification has shifted from a marginal, fragmented process in the housing market to a large-scale, systematic and deliberate urban development policy. Gentrification has deepened as a comprehensive city-building strategy encompassing not just the residential market, but recreation, retail, employment, and the cultural economy.
Michael Bloomberg's three terms as mayor of New York City carried the precise hallmarks of such a "large-scale, systematic and deliberate urban development policy," or what could also be termed a love-fest between developers and city officials. While marquee projects such as the (successful) Atlantic Yards and (unsuccessful) Midtown East projects occupied most of the media spotlight, what remains less appreciated is the sheer scope of rezoning undertaken by the administration: upwards of 120 rezonings, almost all of which were approved, will continue to reshape the contours of New York for decades to come.
But how? At first, it may be surprising to hear that "the city planning department doesn't track…how much potential space was gained or lost, or how much value it's created by enabling development" for any given rezoning. However, zoning itself is not a monolithic concept: a block may be ‘upzoned,' ‘downzoned' or left unchanged (also known as ‘contextual'). Zoning delimits the ultimate population density for a given lot, and in fact, from 2003 to 2007, the net result was only a 1.7% net increase in capacity. This immediately leads to the next question: Who gets what kind of zoning? The contours of rezoning become clearer when one understands that
Upzoned lots tended to be in areas that were less white and less wealthy, with fewer homeowners. Downzoned lots tended to be areas that were more white and had both higher incomes and higher rates of homeownership than upzoned areas. Areas with contextual rezoning were even whiter and richer (with median incomes "much higher than that of the city"), and had "very high rates of homeownership." In other words, more privileged people were more likely to have the city change the zoning of their neighborhoods to preserve them exactly as they were.
Understood this way, the possible pathways for New York become clearer: rezoning defines and guarantees its own success. But rezoning is really only the beginning of real estate development. There is still the procurement of permits and the appeasement of local community boards. But developers are used to playing the long game, and one of the legacies of the Bloomberg (and Giuliani) administrations is a massive, tangled infrastructure of committees, advisory boards and public-private partnerships where real estate developers mix with city officials in order to clear hurdles, this being most easily achieved outside of the public eye and behind closed doors. (For an exceptionally clear-eyed exposition of this bureaucratic juggernaut, see the excellent documentary My Brooklyn by Kelly Anderson).
The bodies are buried in plain sight. I have already written about the fate of the Fulton Fish Market, which remains little changed today. For its part, ‘My Brooklyn' documents the redevelopment of Brooklyn's Fulton Mall and its impact on the African-American and Caribbean communities that depended on that commercial district. And the systematic dismantling of community resistance to the Atlantic Yards project was a big-city real estate bruise-fest whose definitive history remains as yet unwritten, but will doubtlessly launch a thousand urban social justice dissertations. Like the Bloomberg administration's zealous rezoning campaign, this web of governance is set to endure for a long time, and in the meantime, Brooklyn is in fact, becoming poorer.
These, then, are the macro policies that drive large-scale gentrification of substantial swathes of New York. However, there is a smaller scale at which gentrification operates, and one that is largely invisible to the media. Nevertheless, its effects on neighborhoods are no less decisive. As an example, consider the story of another part of Brooklyn, that of Franklin Avenue in Crown Heights. "The Ins and The Outs" is a vital and broad-ranging article, written by Vinnie Rotondaro and Maura Ewing, on the changing nature of one of Crown Heights' principal commercial thoroughfares. While readers outside of New York may most clearly remember it as the neighborhood gripped by a race riot back in 1991, after a generation Crown Heights has now been Columbused as the newest Brooklyn hotspot, with Franklin Avenue as its pulsing heart.
I have visited Franklin Avenue over the years but have been going more frequently, thanks to a friend who recently moved to the neighborhood. The rapidity of the transformation is nothing short of astonishing – in fact, one of the defining features of gentrification in New York is that each episode seems to take less time than the previous one. Franklin Avenue seems to follow the standard pattern of development, where delis become swish bars and pawn shops are replaced by up-market retail. And yet everything happens for a reason. One of these reasons has been MySpace Realty.
As documented by Rotondaro and Ewing, MySpace (and possibly a few shell corporations under its control) has engaged the neighborhood's landlords, aggressively making offers to buy buildings for cash. For MySpace, a landlord who says ‘No' only means ‘No' today. Once a building is sold to MySpace, it is time to get the residents out, so that it can be renovated and put back on the market at rental rates several times the existing rent. Most tenants, lacking savvy, are bought out at a discount, or even made to think that they have little choice in the matter. The holdouts – some of whom have been living in the building for decades and cannot afford to live anywhere else in the area – are then subjected to the usual shenanigans of deferred repairs, ignored infestations, etc. Lather, rinse, repeat.
MySpace is using an old playbook, of course. Just as Anderson documented the strong-arm tactics of big-league developers in ‘My Brooklyn', Rotondaro and Ewing narrate a history of similar behavior, but writ on a much more local scale. The results are much the same, however: a process of divide-and-conquer by capital leads to a decrease in the availability of affordable housing stock in a given neighborhood. And it is also important to recognize that MySpace Realty's actions do not exist in isolation. As Franklin Avenue has become more ‘hip', the neighborhood has been primed for larger developers to buy up lots that are beyond the reach of a local firm: the Goldman Sachs Urban Investment Group was part of a consortium that purchased a nearby property that will likely become a luxury mixed-use development, with about $20m to be invested in the near future. And this is only one of several such transactions happening in the area. As one of the locals put it, "I don't know how to beat this. I don't know how anyone can beat this machine."
This same resident also asked the real question at the heart of any gentrification process: "I still think there's a better and more ethical way to get from a broken down, crime-ridden, drug-ridden neighborhood to a place that is safe and enjoyable for everyone while still maintaining a sense of community ownership." Capital can provide only a partial and ultimately unsatisfactory answer to this question – left to its own devices, it produces nothing but cookie-cutter development at market rates, with the end result being the relentless homogenization of any given neighborhood. The same people, shops and restaurants. Ironically, perhaps only the housing stock will remain to bear mute witness to the unique flavor that a neighborhood once had.
It is somewhat like the old philosophical paradox of sorites – if you have a heap of sand, and you remove grain after grain, at what point do you no longer have a heap of sand? What the sorites paradox points out is that we have ultimately failed to define what a ‘heap' is in the first place. Without this definition, you cannot know when a heap ceases to be a heap. Gentrification functions similarly – at what point does improvement become gentrification, or, to continue with the analogy of the heap, at what point is gentrification no longer that, but rather improvement?
I was reminded of this when my friend Alex Castle posted a wonderful essay on his own experience, somewhat misleadingly titled "Gentrification Is My Fault". Fittingly, it's in the form of a blog post. I say fittingly, because it is both interesting and important to note the commensurate nature of the media describing each of these levels of gentrification: the largest process is worthy of an acclaimed documentary; the local level merits long-form journalism; and the smallest is only given voice by its protagonist's memoir. Fitting, of course, is not the same as just, so it is important that these latter voices be given their due.
Castle's essay details the haphazard way in which he and his wife came to own a limestone townhouse in Prospect-Lefferts Gardens, which was then a fairly rough-and-tumble section of Brooklyn, one that is in fact on the southern border of Crown Heights. Through a mix of good timing, thrift and hard work – all vital ingredients of the American Dream – the Castles have created exactly that for themselves. What I appreciate even more deeply is the way that Alex invested himself in the ownership and improvement of his home and, by extension, the neighborhood:
I didn't displace anyone; the place was abandoned, the basement was flooded with shit and the doors had been battered in. I spent the first five years we lived here working on the house all day and bartending all night. When I started I had no skills, I couldn't drill a hole in a board without splitting it. Now I know how to do wiring, framing, sheetrock, I can frame and hang a door (interior or exterior), put in a dishwasher, tile the floor. It took a long time, but it only cost materials.
But what is striking about this personal history – and this is the kind of story that can only be told as a personal history – is the ambivalence that even this engenders. On the one hand, through their temerity and foresight, the Castles expect that, by the time they retire, the mortgage will be paid off and they will be able to live off the income from renting their extra apartment (in New York, this is what's known as ‘winning'). But as Alex muses, "if Bruce Ratner calls me tomorrow and offers me $5 million for this house, is it my responsibility to ask what's going to happen to the property after I'm gone before I sell? Or am I just reaping the benefits of good planning?"
The Castles' experience echoes Neil Smith's point of departure in his own analysis of gentrification: "a marginal, fragmented process in the housing market." Thus, while tempting, it would be wrong to think that the fragmented and marginal become obsolete simply by virtue of the rise of capital. It's clear from this last example that all of these processes co-exist and eventually negotiate with one another – it is simply a consequence of the way in which a city embodies its limited, valued space. Even the much larger forces of capital-driven gentrification must still contend with property rights and the intentions and desires of smallholders who have invested decades of savings and work into their particular corner.
More importantly, the best bulwark against the kind of gentrification we all seem to wring our hands over is precisely the people who are perfectly aware of their rights and have no illusions about the true value of their stock. I am not making some petit-bourgeois argument here: this is as true (and vital) for tenants as it is for landlords. The only thing that is missing is all the other stories like Alex's. Where are they? Who is recording them, and bringing those people together into what is likely a common cause that is nevertheless representative of each person's own interests? I am perhaps being optimistic, but as Jefferson wrote, albeit in a different context, "Whenever the people are well informed, they can be trusted with their own government."
Monday, September 15, 2014
The View From Nowhere
"Well, I haven't been there yet, and shall not try now."
~ Conrad, Heart of Darkness
Marlow, the protagonist of Conrad's Heart of Darkness, remorsefully blames an old obsession with maps for his eventual captaincy of a ramshackle steamship, set on a doomed mission up the Congo River. But Marlow was irretrievably fascinated by the blanks on the map – those were the places that were worth going to. These days, when we look at a map, we expect objectivity and specificity, or to put it bluntly, the truth. Our sense of entitlement has only grown with the thoroughness with which maps have enmeshed themselves in our daily lives, whether it is via the GPS devices that guide our cars, or the maps on our smartphones that help us walk a few blocks of a city, familiar or not. We may forego the flâneur's pleasure of asking a stranger for directions, but where a certain calculus is concerned, it seems a small price to pay for getting us, without undue delay, to where we need to be.
There are no more places where cartographers must write terra incognita, or where myths and rumors are recruited as phenomenological filler. For just as nature abhors a vacuum, a map is a canvas that demands to be crammed with seemingly confident observations, and it would appear that every nook and cranny of the planet has already had some physical characteristics reassuringly assigned to it. Thus when maps fail us, we are left to decide whom to blame – the map, or ourselves.
I will give you a hint: we never blame ourselves. Rather, it is the map that is inadequate. But what this really implies is our refusal to abandon the conviction that there will be some future map that will capture the truth. Precisely because of its pervasiveness, it becomes too easy to pass over the obvious fact that, like anything else, cartography is a fundamentally social practice. Consider not only how immersed we are in maps, as with the example of GPS, but also how extensively, constantly and surreptitiously we ourselves are mapped. Every time you allow an app on your smartphone to "Use Your Location," indeed with every swipe of a credit card, you are effectively offering yourself, or rather some quantifiable aspect of yourself, to some kind of mapmaking project – the vast majority of which you will never be aware of, let alone see. We are, in fact, subjects of a distinctly cartographic flavor of what Michel Foucault called the clinical gaze.
When we are thus swaddled in information that provides so much convenience and seems to ask so little in return – what is in fact a bribe, but an exceptionally effective one – the occasional failure of maps can be galling (or sometimes entertaining). Because we are convinced that a better map is always right around the corner, this anxiety does not last. But what comfort is there when we are confronted with things that resist mapping?
The classic thought experiment here is Benoît Mandelbrot's seminal 1967 paper, published in Science, "How Long Is the Coast of Britain?" For the present purposes, I will only describe Mandelbrot's premise: the measurement of an irregular natural surface such as Britain's coastline is dependent on the unit of measurement. So if we were to use a yardstick with a unit length of 200km, we might conclude that the length of the coastline is 2400km, whereas if our yardstick were 50km, we would assert a length of 3400km. Indeed, as the unit of measurement approaches zero, the observed length of the coastline approaches infinity.
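Mandelbrot's premise is easy to reproduce. The sketch below is my own rough illustration, not anything from the paper: it stands a Koch curve in for a coastline and "measures" it by swinging a fixed-length divider along it, with shorter and shorter rulers. The particular curve, depth, and the crude divider walk are all assumptions made for the sake of the demonstration.

```python
import math

def koch(points, depth):
    """Refine a polyline: replace each segment with the 4-segment Koch motif."""
    if depth == 0:
        return points
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)          # one-third point
        c = (x0 + 2 * dx, y0 + 2 * dy)  # two-thirds point
        ang = math.atan2(dy, dx) - math.pi / 3
        r = math.hypot(dx, dy)
        b = (a[0] + r * math.cos(ang),
             a[1] + r * math.sin(ang))  # apex of the triangular bump
        out += [a, b, c, (x1, y1)]
    return koch(out, depth - 1)

def ruler_length(points, ruler):
    """Walk the curve with a fixed-length ruler, like swinging dividers along
    a map: hop to the next vertex at least one ruler-length away (as the crow
    flies) and count the hops. Measured length = hops * ruler."""
    steps, anchor = 0, points[0]
    for p in points[1:]:
        if math.hypot(p[0] - anchor[0], p[1] - anchor[1]) >= ruler:
            steps += 1
            anchor = p
    return steps * ruler

# A depth-6 Koch curve standing in for a "coastline" spanning one unit.
curve = koch([(0.0, 0.0), (1.0, 0.0)], 6)
for ruler in (0.3, 0.1, 0.03, 0.01):
    print(f"ruler {ruler:.2f} -> measured length {ruler_length(curve, ruler):.2f}")
```

Run it and the measured length grows steadily as the ruler shrinks, with no sign of converging – exactly the paradox Mandelbrot formalized, since the Koch curve's detail (like a coastline's) never bottoms out at any scale the ruler can reach.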
For Mandelbrot, this is a mathematical problem, and he uses the example to posit a method for approximating length. Eventually these and other investigations would lead him to elaborate the theories of self-similarity for which he is justly famous. But in the introduction to the paper, Mandelbrot writes:
The concept of ‘‘length'' is usually meaningless for geographical curves. They can be considered superpositions of features of widely scattered characteristic sizes; as even finer features are taken into account, the total measured length increases, and there is usually no clear-cut gap or crossover, between the realm of geography and details with which geography need not be concerned.
One of the advantages of Mandelbrot's mathematical approach is that it allows him to elide that essential question: Where is the "clear-cut gap or crossover"? For mapmakers, identifying that gap or crossover is at the heart of cartography. It may well decide the ultimate utility of a map to someone navigating a route in the physical world. And this is a decision that must be made by people. It is not enough that the map is right; it must also be right in the right way.
I want to be clear that I am not talking about what is commonly called ‘usability', or the loose set of principles that designers use to make legible their interventions in the world. ‘Usability' is a red herring, in the sense that the process of dressing up cultural artefacts, whether physical or virtual, for ‘usability' occurs only after the decisions of what should be ‘usable' (ie, legible) have already been made. To invent a brief and perhaps absurd example, consider a highway map. If we are driving, we use such a map to get from A to B, where points A and B are reachable by car. Thus, highways and side roads will be prominently featured; other geographic features such as elevation may or may not be relevant. But cartographers also locate significant landmarks to inspire detours (for an Information Age example, see Rand McNally's TripMaker), thereby implying that these are good things that belong on a map. On the other hand, these same maps will never include locations that we may want to avoid, such as Superfund sites. It is not difficult to imagine that a family with young children would want to know about – and avoid driving through – regions thick with pollution from, say, coal-fired power plants. We may initially react to this by saying "But these things do not belong on a map." Well, why wouldn't they? If instead our design brief were to create a map that would allow us to determine the healthiest route from A to B, our highway map may look very different indeed.
The decision to not include such items is intrinsically ideological and, as we will see below, also explicitly political. It is only through repeatedly being shown what a map is that we come to believe what a map should be. We are rarely told what a map is not. But at each turn we are assured of the objectivity that is at the heart of the enterprise.
Objectivity, understood as a sort of neutral omniscience, was tartly characterized by philosopher Thomas Nagel as "the view from nowhere." But having nowhere as one's originary viewpoint is akin to being lost inside one of Mandelbrot's endless, scale-free fractals. It also proves irreconcilable when we attempt, as we must, to relate our knowledge of the world to the world itself (although for Nagel, reconciling the two is precisely what is needed to create an individual's worldview). Thus objectivity, or at least the set of social relationships and productions of knowledge that we ascribe to the idea of objectivity, is in fact a moral stance. Why?
Like anything else, objectivity has its own history. In a fascinating paper called "The Image of Objectivity" (and later a much more extensive book), Lorraine Daston and Peter Galison unpack this "panhistorical honorific [bestowed on] this or that discipline as it comes of scientific age." For them, the workings of objectivity are most apparent when manifested visually, specifically in the way atlases of many varieties – anatomical, botanical, X-ray – have been created and consumed over the centuries. These are not works of neutral omniscience, but artefacts that tell us "what is worth looking at and how to look at it." And to say that this is a moral practice is not far-fetched. They find that
…objectivity is a morality of prohibitions rather than exhortations, but no less a morality for that. Among those prohibitions are bans against projection and anthropomorphism, against the insertion of hopes and fears into images of and facts about nature: these are all subspecies of interpretation, and therefore forbidden. (p122)
Cartography has evolved in a similar fashion. From early cartographers inscribing empty spaces on their maps with "Here Be Dragons" (actually, they didn't) to Google Earth, one might think that there is a flawed but inexorable march towards an ever-finer approximation of reality (if not objectivity). After all, as Daston and Galison write, the moral imperative of objectivity recognizes that "the phenomena never sleep and neither should the observer; neither fatigue nor carelessness excuse a lapse in attention that smears a measurement or omits a detail; the vastness and variety of nature require that observations be endlessly repeated." And yet, there are forces at work that are greater than cartography and the technologies that have transformed it in the last few centuries, and these too should be recognized.
I came across a most extraordinary example of these other forces last week, in a long-form reportage by the Times-Picayune's Brett Anderson. "Louisiana Loses Its Boot" is Anderson's attempt to reconcile the rapidly changing (that is, receding) coastline of the state with the fact that the official state map has not been updated in fourteen years, and isn't likely to be any time soon. What he finds is a toxic mix of, on the one hand, galloping erosion and, on the other, benighted legislation that seems dead-set on ignoring the former. As a result, "the boot is at best an inaccurate approximation of Louisiana's true shape and, at worst, an irresponsible lie." (All citations below are from this article).
To be sure, Louisiana was always a devilishly difficult entity to map. The Mississippi is a notoriously fickle river, given to not just flooding its banks but rewriting them wholesale, as Harold Fisk's maps from the 1940s illustrate. And yet it is precisely this process that replenished the coastline: new sediment allowed vegetation to take hold and create adequate breakwaters and barrier islands, which in turn kept hurricanes from being the Gulf of Mexico's shock troops. The coastline was shifting constantly, but it was not receding. In fact, it was expanding. But once the Army Corps of Engineers "stabilized" the Mississippi in order to ensure commerce, this process of replenishment was severely stunted. As a result, hurricanes such as Katrina have had much greater impacts than would otherwise have been possible. The need for structural modification is not just limited to the river, either. Louisiana is the nation's second-largest oil producer and has "over 9,000 miles of navigation and pipeline canals…dredged in the state's coastal marsh." Adding projected sea level rises to the mix does not promise to make things any more pleasant.
One would think that the physical uncertainties of the situation would therefore call for as ‘objective' an approach to mapmaking as possible. After all, even without factoring in human impact, it is probably difficult enough to decide what is ‘walkable land' and what is not. Instead, the conflicting priorities of the fishing and energy industries have kept Louisiana's famously corrupt politics from mandating a responsible accounting. Additionally, "the Department of Transportation and Development and the U.S.G.S. would have to agree on a shape and then implement a costly replacement plan for images currently in circulation." Oh, dear. And the U.S. Supreme Court has done its part to command the tides, too, when it decreed in 1981 that "the state boundary of Louisiana was no longer an ambulatory line that could move in response to changes in the coastline, and was henceforth immobilized as a set of fixed coordinates."
In this case, we resist any sort of accurate map only in order to avoid blaming ourselves. We would rather have the maps lie to us, for as long as possible. In the meantime and wholly apart from this tragicomic legislative context, an acre of coastal land is being lost every hour. So even if there were agreement, what kind of map could be created that would do coastal Louisiana justice? In Anderson's view, one that would throw the situation into a clear and unforgiving light: hence the loss of the boot. Such a map could only be a political tool:
A more honest representation of the boot would not erase the intractable disagreements — around global sea level rise, energy jobs versus coastal restoration jobs, oil and gas companies versus the fishing industry — that paralyze state politics, but it would give shape to the awesome stakes, both economic and existential, that hang in the balance.
Anderson's campaign to make the map explicitly political goes against the cartographic gaze that I described above, with its decentralization of power and accountability. It is no wonder that it has been met with resistance. But is it enough? When one looks at the current, tranquil state map of Louisiana, none of this decay, let alone conflict, is apparent. Of course, a citizen traveler might be roused to indignation if not action, once he attempts to reach a destination that no longer exists: a swamp where there was once a camp, the vast reaches of the Gulf where there was once a causeway or a barrier island. But how many people are there of that ilk?
And so we have come full circle in cartographic irony: from speculative maps that included places that never existed, to objective maps that show us places that no longer exist, but pretend as if they do. After all, what Marlow found, far up the Congo River and in the darkness of the human heart, could never be marked on a map. But for what can be recorded, whether it is Louisiana's coastline, the Arctic ice cap, or various star-crossed Pacific islands, we can only hope that eventually, as Borges once wrote, "in time, those Unconscionable Maps no longer satisfied."
Monday, June 23, 2014
A Far-Reaching Liquidation
"For the last twenty years neither matter
nor space nor time has been what it was."
~ Paul Valéry, 1931
Ever since Napster tore through the music industry like an Ebola outbreak, there has followed a ceaseless hand-wringing about the ever-decreasing "value" of music. Chart-busting hits have been replaced by body blows to an industry that was once fat and happy. From Napster's peer-to-peer networking model to the current ascendancy of streaming services, the big labels have seen their fortunes scrambled and re-scrambled by the onrushing and ever-changing technological landscape. This is further complicated by the fact that young people are its most desired demographic, but are also the most ardent adopters of said inconvenient technologies. It's easy to say that there is no going back – and there isn't – but how can artists respond to this seemingly unstoppable race to the bottom, now that the link between a work of music and the physical artifact that is its vehicle has been permanently sundered?
Earlier this spring, we received a candidate answer from the venerable hip hop outfit Wu-Tang Clan. The Wu-Tang have been secretly recording a new double album for several years, an event that would commonly be greeted with much rejoicing by their legions of fans. However, the zinger is that only one copy of the album will be made, destined to be sold to the highest bidder. Even more interesting is the fact that, prior to the auction, the record will tour "festivals, museums, exhibition spaces and galleries for the public as a one off [sic] experience." (Imagine the stringency of the security that will be required to keep this particular cat in its bag; I am already anticipating the Twittersphere lighting up in outrage as museum staff shine flashlights into people's ear canals, conduct full body cavity searches, and generally out-TSA the TSA.)
Of course, such acts of conceptual brazenness are usually (and usually regrettably) accompanied by a manifesto, and Wu-Tang does not disappoint...
...although they seem to prefer the term "edictum":
Is exclusivity versus mass replication really the 50 million dollar difference between a microphone and a paintbrush? Is contemporary art overvalued in an exclusive market, or are musicians undervalued in a profoundly saturated market? By adopting a 400 year old Renaissance-style approach to music, offering it as a commissioned commodity and allowing it to take a similar trajectory from creation to exhibition to sale, as any other contemporary art piece, we hope to inspire and intensify urgent debates about the future of music. We hope to steer those debates toward more radical solutions and provoke questions about the value and perception of music as a work of art in today's world.
Now, the Wu-Tang boys bring up a real issue here. It's not hard for musicians to look at the contemporary art world, with its bloated traffic in fetishized objects that seem to spring, fully formed, from an inexhaustible well of cynicism, and wonder what wrong turns their own art form has taken. The concept itself has a very appealing simplicity to it as well: it is the re-attachment of the content to its vehicle. And what a pretty vehicle it is, too. But what kind of a "radical solution" is this? Because once the auction goes through, whoever buys the album owns all the rights to the music. They can distribute the album or simply squirrel it away for personal listening pleasure. They can bury it in their backyard, or douse it with gasoline and torch it. They can be as democratic or as perverse about it as they may feel inclined.
However, my disquiet runs even deeper than that. From the "conceptus" (!) page of the album's site, we read that
…a new approach is introduced, one where the pride and joy of sharing music with the masses is sacrificed for the benefit of reviving music as a valuable art and inspiring debate about its future among musicians, fans and the industry that drives it. Simultaneously, it launches the private music branch as a new luxury business model for those able to commission musicians to create songs or albums for private collections. It is a fascinating melting pot of art, luxury, revolution and inspiration. It's welcoming people to an old world.
This nudge-nudge-wink-wink tone of noblesse oblige makes me think that the author intended for this copy to end up on the Financial Times' How To Spend It, a sort of Whole Earth Catalog for the One Percent. While I value the provocative nature of Wu-Tang's act, I wish that they had stopped there. But by dressing up an old patronage system in new clothes, they are pointing to a cul-de-sac in the conversation. This has nothing to do with the radical opening of possibilities. It is merely about the enshrinement of exclusivity. It also grates against the intrinsic ephemerality that is the very nature of music. Even if I possess the only extant recording of a certain piece of music, I still cannot "consume" it just by looking at the recording. I have to play it, and once I have played it, that moment is gone. This is the deep appeal of streaming services. But the Wu-Tang Clan has conjured up the most radical opposite imaginable. Is it still music if it's never played? Or if there's no one around to hear it?
(There is another, greater irony here. Hip hop was once the voice of the urban voiceless in this country, and despite its commoditization here, it has gone on to fulfill this role in many others. Has hip hop reached yet another apotheosis on the way to perfecting its self-worship?)
I cribbed the title of this post (as well as the Valéry quote) from Walter Benjamin's seminal 1936 essay "The Work of Art in the Age of Mechanical Reproduction." Anyone who has read (or who vaguely remembers reading) this essay would consider it the go-to critique for this sort of discussion. But Benjamin is mostly concerned with film and does not in fact mention music at all. It is further problematic because Benjamin regards art as a point of contention between fascism and socialism – that the only possible response to the state gaining control of the reproduction of art is its politicization. The Wu-Tang stunt fits neither category. Instead, it's just another signpost along the way to the reductio ad nihilum of our late capitalist fantasyland.
However, there is another, more generous provocation that was offered by Beck in 2012. Beck, in conjunction with McSweeney's, released a new album, except he didn't record a single note. Instead, he released 20 songs as sheet music, and invited everyone to create their own interpretation. You can view the results at Song Reader, the site set up to collect all these contributions. This may seem precious and retro, the kind of winking irony that would be at home in a snooty Williamsburg coffee shop. But this gesture is not dissimilar to the kind of "instruction art" that was refined by John Cage and Sol LeWitt, where the fundamental idea is that people can – and should – create the work for themselves.
Of course, prior to the advent of radio and 78s, sheet music was the primary vehicle by which music was distributed and popularized, and as such formed a significant part of the connective tissue of a society's culture. In her article "Before the Deluge: The Technoculture of Song-Sheet Publishing Viewed from Late Nineteenth-Century Galveston" author Leslie Gay notes that "communication technologies like song sheets are implicated within the myriad ways we build social relations, make exchanges and create meaning". There is something very important here: the idea of being a mere consumer is discarded. It is quite simply impossible. As a score, music only exists in its potential form. The musician is the vehicle. Put another way, the siting of "value" has shifted from the monetary expectation of the producer, to the experience of the participants.
Take as an example Russia in the 19th century, where orchestras would go on long tours. People in the town would know not only when the orchestra would come to town, but what it would be playing, sometimes months in advance. So households would procure piano reductions and work through the scores in anticipation of the big night. One can only imagine the intimacy with which the listeners were able to "consume" the music, having played through and argued over many of each work's nuances. In this way, the act of consumption was in fact replaced by an act of consummation.
Similarly, what makes the Song Reader project really groundbreaking is its expectations. In order to engage the work, you have to know how to read music. And I mean really read music – there are no guitar tabs here. There is something fascinatingly paradoxical about this. On the one hand, the fact that there is no authoritative recording – so far Beck has yet to put out a disc of his own interpretations – implies a vast artistic freedom. On the other, that world is only open to those who have a sufficient degree of a very specific kind of literacy (one that, nevertheless, was much more common a century ago than it is now). What Beck offers us is an invitation to engage deeply with the world around us, whether it is in the form of the text of the score, the playing of our fellow musicians, or the interpretations created by others. Having worked through this text ourselves, we are in a much subtler place, one that can appreciate why certain decisions may have been made or ignored. We have created a foundation for critique, and for pleasure.
The other, even more important implication in Beck's act is one of trust. Consider the courage that an artist must have in order to issue his art in the form of instructions. I'm pretty certain that Beck knows exactly how he thinks his songs should sound. I don't know if he thinks that he is more qualified than anyone else to interpret them. I know that if they were my songs, I would think that way. But by only giving the instructions, Beck is saying that this latter concern really isn't relevant. He is essentially saying "I trust you" to his fans. There is an empathetic generosity that is really rather astonishing. And what is given back to him is a richness of interpretation that will doubtless have an impact on the way he views his own composing.
This rhizomatic conception stands in stark contrast with the idea of a final object that is perfect, authoritative and unique, as is personified by the Wu-Tang Clan's gesture. The rhizome is resilient and unpredictable, whereas the unique object is non-negotiable and brittle. On account of its uniqueness, the object's ownership has real consequences, whereas the ownership of a score of music is of much less relevance to the purpose of that score's existence.
For its part, technology is always telling us that it will catalyze society into new, more effective forms of social organization. It does not necessarily ask what society is doing already, and what the value of that activity might be. Simultaneously, technology oftentimes devalues our own participation in society and especially culture by ensuring that that participation has less at stake. We are assured that we no longer need to read music in order to pretend to understand it; it only matters that we possess it.
Thus, in a final twist that emphasizes the poverty of choices with which technology eventually presents us, two Wu-Tang fans became determined to ensure the album's dissemination. This took the unsurprising form of a Kickstarter campaign. Since there was a rumored $5 million offering price for the album, the job of finding enough consumers committed to an altruistic redistribution was a daunting one. Indeed, by the time the fundraising window closed, the project had only raised $15,400. Maybe Wu-Tang's fans should have asked for a score instead.
Why the Philosophy of Food is Important
by Dwight Furrow
There are lots of hard problems that require our thoughtful attention—poverty, climate change, quantum entanglement, or how to make a living, just for starters. But food? Worthy of thought? Most philosophers have ignored food as a proper topic of philosophical inquiry.
On the surface, it seems there are only three questions about food worth considering: Do you have enough? Is it nutritious? And does it taste good? If you have the wherewithal to read this you probably have enough food. Questions of nutrition can be answered by consulting your doctor or favorite nutritionist. And surely it doesn't take thought to figure out what tastes good.
But when we look more deeply at food we find some important issues lurking beneath the surface about which philosophy has traditionally been concerned. How we farm, what we eat, and how we cook have important social, political, and ethical ramifications—ramifications so important that we cannot think of these issues as purely private matters any longer. Some of the aforementioned "hard problems" have a lot to do with food. Our food distribution networks are anything but fair, leaving many people without enough to eat; and our food production and consumption patterns cause substantial environmental harm, in part because of their impact on climate change. Our resource-intensive way of life, supported by an economic system that requires constant growth, is unsustainable, especially because the rest of the world would like to emulate it. For example, it is estimated that if everyone in the world consumed our meat-heavy diet, we would need two planet Earths to supply sufficient land, feed, and water.
We must learn to live differently, and that means, fundamentally, learning to desire differently—and to desire food differently.
How we problematize and refine desires and pleasures and attend to their moderation, balance, and harmony has been a philosophical topic since the Ancient Greeks. That discourse has never been more important than it is today and our food desires must now lie at the center of that discourse. Food is our most basic material need and ties together a vast number of issues from deforestation, to the use of fossil fuels, to the disappearance of local food markets. And all are tied to how we manage our desires. To ignore food as a philosophical issue is to ignore that foundational discourse regarding the management of desires that has been central to philosophy's history.
Unfortunately, philosophy in recent centuries has drifted away from those ancient concerns. The modern view of human beings as abstract epistemological subjects may lack the conceptual apparatus to think about the realm of contingent bodily needs, so philosophy may have to reinvent itself to learn to think critically about food.
But the significance of the philosophy of food does not wholly rest on its becoming a branch of applied ethics or social theory, a collection of topics for professional philosophers to consider. The aesthetics of taste, a component of the philosophy of food, should receive more thoughtful attention from non-philosophers as well. After all, if we must learn to manage our desires differently, we will likely accomplish that only through modifying the personal aesthetic judgments on which those desires rest, which again recalls an ancient discourse—philosophy as a way of life.
The aesthetics of taste is important because I don't think one can live well in our world without taking an interest in the aesthetics of everyday life; and because the enjoyment of food and beverages is among the most accessible and satisfying of our everyday experiences, we should care about it much more than we do.
Why is the aesthetics of everyday life so important? This famous quote from the film Fight Club provides the experiential background:
"Man, I see in fight club the strongest and smartest men who've ever lived. I see all this potential, and I see squandering. God damn it, an entire generation pumping gas, waiting tables; slaves with white collars. Advertising has us chasing cars and clothes, working jobs we hate so we can buy shit we don't need. We're the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War's a spiritual war... our Great Depression is our lives. We've all been raised on television to believe that one day we'd all be millionaires, and movie gods, and rock stars. But we won't. And we're slowly learning that fact. And we're very, very pissed off." (Taken from Edward Norton's character in Fight Club.)
This could have been written by Theodor Adorno, if that profound but difficult thinker had written in the vernacular.
Most Americans live lives that are highly regulated and standardized via networks of management and control, governed by norms of efficiency and profit that crowd out any other value; and these norms increasingly colonize our home life, as well, thanks to intrusive media technologies. We tend to work long hours at boring, repetitive jobs that demand our full attention, in order to make someone else rich. And we evaluate our lives according to how well we conform to these norms—that is, if one's job is not outsourced to a machine.
Everyone needs a way to resist these demands, a place where beauty, pleasure and a focus on things that have intrinsic value occupy our attention. Finding extraordinary meaning in simple things and their particularity, such as a meal or a bottle of wine, is the most accessible path to a good life in this damaged world. That ordinary things are the greatest source of meaning is not a new thought—ancient sages from the Buddha to Epicurus had similar notions. But it is more relevant now than ever in an age where the pursuit of technical knowledge and efficiency promises the systematic elimination of anything that does not conform to the demand for quantification and standardization.
Of course the character in Fight Club creates a place where men get together and punch each other to feel better about their limited lives. I guess that is "aesthetics" of a sort—a sensory experience no doubt. But we can probably do better by seeking a form of beauty not tainted by violence.
One might object that taste is both subjective and trivial, and a preoccupation with such matters is useless and without any larger significance. No one cares about what I had for dinner except me. But the fact that taste is subjective and trivial is a feature, not a bug. For it is precisely the subjective and trivial, and taking delight in such matters, that escapes the clutches of instrumental reason, that resists the encroachments of a corporate mentality that translates everything of value into a commodity with a price and uses up every resource, both human and non-human, in order to line someone's pockets.
In this case, as in so many parts of life, the personal is political. Despite being a personal matter, a concern for taste is the first step in the shaping of our desires toward more sustainable forms.
Yet such a commitment means that we must refuse to accept what is false and inauthentic, and that we recognize and block the strategies of our corporate masters when they try to commodify our desires. When we outsource our practical reasoning to marketers, our desires are not our own. The only antidote to such outsourcing is critical thought, conceptual imagination, and a mind sufficiently open to fully appreciate the intrinsic value of what is before us, as food and drink almost always are. Philosophy can be—perhaps must be—enlisted in this attempt to keep the question of how one should live in focus, for philosophy has always sought to discover what is of intrinsic value.
As Epicurus said, "Not what we have but what we enjoy, constitutes our abundance."
For more ruminations on the philosophy of food and wine, visit Edible Arts.
Monday, May 19, 2014
Epicycles of the Elite Left; The Price is Too Damn High
by Omar Ali
This was to be an article about the latest outbreak of Blasphemy-mongering in Pakistan, but after several friends brought up Pankaj Mishra’s article about the victory of the BJP in the Indian elections, I decided to change direction. I think far too many educated South Asian people read Pankaj Mishra, Arundhati Roy and their ilk. And I believe that many of these readers are good, intelligent people who want to make a positive contribution in this world. And I believe their consumption of Pankaj, Roy and Tariq Ali (henceforth shortened to Pankajism, with any internal disagreements between various factions of the People’s Front of Judea being ignored) creates a real opportunity cost for liberals and leftists, especially in the Indian subcontinent (I doubt if there is any significant market for their work in China or Korea yet; a fact that may even have a tiny bearing on the difference between China and India).
In fact, I believe the damage extends beyond self-identified liberals and leftists; variants of Pankajism are so widely circulated within the English speaking elites of the world that they seep into our arguments and discussions without any explicit acknowledgement or awareness of their presence. In other words, the opportunity cost of this mish-mash of Marxism-Leninism, postmodernism, “postcolonial theory”, environmentalism and emotional massage (not necessarily in that order) is not trivial.
This is not a systematic thesis (though it is, among other things, an appeal to someone more academically inclined to write exactly such a thesis) but a conversation starter. I hope that some of you comment on this piece and raise the level of the discussion by your response. And of course, I also apologize in advance for any appearance of rudeness or ill-will. I have not set out to insult anyone (except, of course, Pankaj, Roy and company; but they are big enough to take it).
1. There are some people who have a consistent, systematic and well thought out Marxist-Leninist worldview (it is my impression that Vijay Prashad, for example, is in this category). This post is NOT about them. Whether they are right or wrong (and I now think the notion of a violent “people’s revolution” is wrong in some very fundamental ways), there is a certain internal logic to their choices. They do not expect electoral politics and social democratic reformist parties to deliver the change they desire, though they may participate in such politics and support such parties as a tactical matter (for that matter they may also support right wing parties if the revolutionary situation so demands). Similarly, they are very clear about the role of propaganda in revolutionary politics and therefore may consciously take positions that appear simplistic or even silly to pedantic observers, if they feel that such a position is in the interest of the greater revolutionary cause. Their choices, their methods and their aims are all open to criticism, but they make some sort of internally consistent sense within their own worldview (as far as such things can be true of human beings and their motivations and actions). With these people, one can disagree on fundamentals or disagree on tactics, but either way, one can figure out what the disagreement is about. In so far as their worldview fails to fit the facts of the world, they have to invent epicycles and equants to fit facts to theory, but that is not the topic today. IF you are a believer in “old fashioned Marxist-Leninist revolution”, this post is not about you.
2. But most of the left-leaning or liberal members of the South Asian educated elite (and a significant percentage of the educated elite in India and Pakistan are left leaning and/or liberal, at least in theory; just look around you) are not self-identified revolutionary socialists. I deliberately picked on Pankaj Mishra and Arundhati Roy because both seem to fall in this category (if they are committed “hardcore Marxists” then they have done a very good job of obfuscating this fact). Tariq Ali may appear to be a different case (he seems to have been consciously Marxist-Leninist and “revolutionary” at some point), but for all practical purposes, he has joined the Pankajists by now; relying on mindless repetition of slogans and formulas and recycled scraps of conversation to manage his brand. If you consider him a Marxist-Leninist (or if he does so himself), you may mentally delete him from this argument.
3. The Pankajists are not revolutionaries, though they like revolutionaries and occasionally fantasize about walking with the comrades (but somehow always make sure to get back to their pads in London or Delhi for dinner); they are not avowedly Marxist, though they admire Marx (somewhat in the way “moderate Muslims” admire the Prophet Mohammed, may peace be upon him. Tribal loyalty is there, but it does not stand in the way of living a modern life. The prophet is more or less an icon, and the prophet’s hardcore followers have serious doubts about the “moderates’” bona fides); they strongly disapprove of capitalists and corporations, but they have never said they would like to hang the last capitalist with the entrails of the last priest. So are they then social democrats? Perish the thought. They would not be caught dead in a reformist social democratic party.
4. They hate how Westernization is destroying traditional cultures, but every single position they have ever held was first advocated by someone in the West (and 99% were never formulated in this form by anyone in the traditional cultures they apparently prefer to “Westernization”). In fact most of their “social positions” (gay rights, feminism, etc) were anathema to the “traditional cultures” they want to protect and utterly transform at the same time. They are totally Eurocentric (in that their discourse and its obsessions are borrowed whole from completely Western sources), but simultaneously fetishize the need to be “anti-European” and “authentic”.
Here it is important to note that most of their most cherished prejudices actually arose in the context of the great 20th century Marxist-Leninist revolutionary struggle. e.g. the valorization of revolution and of “people’s war”, the suspicion of reformist parties and bourgeois democracy, the yearning for utopia, and the feeling that only root and branch overthrow of capitalism will deliver it; these are all positions that arose (in some reasonably sane sequence) from hardcore Marxist-Leninist parties and their revolutionary program (good or not is a separate issue), but that continue to rattle around unexamined in the heads of the Pankajists.
The Pankajists also find the “Hindu Right” and its fascist claptrap and its admiration of “strength” and machismo alarming, but Pankaj (for example) admires Jamaluddin Afghani and his fantasies of Muslim power and its conquering warriors so much that he promoted him as one of the great thinkers of Asia in his last book. This too is a recurring pattern. Strong men and their cults are awful and alarming, but also become heroic and admirable when an “anti-Western” gloss can be put on them, especially if they are not Hindus. That is, for Hindus, the approved anti-Western heroes must not be Rightists, but this second requirement is dropped for other peoples.
They are proudly progressive, but they also cringe at the notion of “progress”. They are among the world’s biggest users of modern technology, but also among its most vocal (and scientifically clueless) critics. Picking up that the global environment is under threat (a very modern scientific notion if there ever was one), they have also added some ritualistic sound bites about modernity and its destruction of our beloved planet (with poor people as the heroes who are bravely standing up for the planet). All of this is partly true (everything they say is partly true, that is part of the problem) but as usual their condemnations are data free and falsification-proof. They are also incapable of suggesting any solution other than slogans and hot air.
Finally, Pankajists purportedly abhor generalization, stereotyping and demagoguery, but when it comes to people on the Right (and by their definition, anyone who tolerates capitalism or thinks it may work in any setting is “Right wing”) all these dislikes fly out of the window. They generalize, stereotype, distort and demonize with a vengeance.
You get the picture...or rather, you do not, because there is no coherent picture there. There are emotionally satisfying and fashionable sound bites that sound like they are saying something profound, until you pay closer attention and most of the meaning seems to evaporate. My contention is that what remains after that evaporation is pretty much what any reasonable “bourgeois” reformist social democrat would say. Pankaj and Roy add no value at all to that discourse. And they take away far too much with sloganeering, snide remarks, exaggeration and hot air.
5. This confused mish-mash is then read by “us people” as “analysis”. Instead of getting new insights into what is going on and what is to be done, we come out by the same door as in we went; we may have held vague but fashionable opinions on our way in, and if so, we come out with the same opinions seemingly validated by someone who uses a lot of words and sprinkles his “analysis” with quotes from serious books. We then discuss said analysis with friends who also read Pankaj and Arundhati in their spare time. Everyone is happy, but I am going to make the not-so-bold claim that you would learn more by reading “The Economist”, and you would be harmed less by it.
6. Pankajism as cocktail party chatter is not a big deal. After all, we have a human need to interact with other humans and talk about our world, and if this is the discourse of our subculture, so be it. But then the gobbledygook makes its way beyond those who only need it for idle entertainment. Real journalists, activists and political workers read it. Government officials read it. Decision makers read it. And it helps, in some small way, to further fog up the glasses of all of them. The parts that are useful are exactly the parts you could pick up from any of a number of well informed and less hysterical observers (if you don’t like the Economist, try Mark Tully). What Pankajism adds is exactly what we do not need: lazy dismissal of serious solutions, analysis uncontaminated by any scientific and objective data, and snide dismissal of bourgeois politics.
7. If and when (and the “when” is rather frequent) reality A fails to correspond with theory A, Pankajists, like Marxists, also have to come up with newer and more complicated epicycles to save the appearances; and we then have to waste endless time learning the latest epicycles and arguing about them. All this while people in India (and to a lesser and more imperfect extent, even in Pakistan) already have a reasonably good constitution and institutions that, while incompetent and corrupt, are improvable. There are large political parties that attract mass support and participation. There are academics and researchers, analysts and thinkers, creative artists and brilliant inventors, and yes, even sincere conservatives and well-meaning right-wingers. I think it may be possible to make things better, even if it is not possible to make them perfect. “People’s Revolution” (which did not turn out well in any country since it was valorized in 1917 as the way to cut the Gordian knot of society and transform night into day in one heroic bound) is not the only choice or even the most reasonable choice. Strengthening the imperfect middle is a procedure that is vastly superior to both Left and Right wing fantasies of utopian transformation. I personally believe that the system that exists is not irreparably broken and can still avoid falling into fascist dictatorship or complete anarchy (both of which have repeatedly proven to be much worse than the imperfect efforts of modern liberal democracy), but you don’t have to agree with me. My point is that even if the system is unfixable and South Asia is due for a huge, violent revolution, these people are not the best guide to it.
Look, for example, at the extremely long article produced by Pankaj on the Indian elections. This is the opening paragraph:
In A Suitable Boy, Vikram Seth writes with affection of a placid India's first general election in 1951, and the egalitarian spirit it momentarily bestowed on an electorate deeply riven by class and caste: "the great washed and unwashed public, sceptical and gullible", but all "endowed with universal adult suffrage".
Well, was that good? Or bad? Or neither? Were things better then, than they are now? That seems to be the implication, but in typical Pankaj style, this is never really said outright (that may bring up uncomfortable questions of fact). It also throws in a hint that universal adult suffrage was a bit of a fraud even then. But just a hint. So are the “unwashed masses” now more gullible? Less skeptical? I doubt if any two readers can come up with the same explanation of what he means; which is usually a good sign that nothing has been said.
There follows a description of why Modi and the RSS are such a threat to India. This is a topic on which many sensible things can be said and he says many of them, but even here (where he is on firmer ground, in that there are really disturbing questions to be asked and answered) the urge to go with propaganda and sound bites is very strong. And the secret of Modi’s success remains unclear. We learn that development has been a disaster, but that people seem to want more of it. If it has been so bad, why do they want more of it? Because they lack agency and are gullible fools led by the capitalist media? If people do not know what is good for them, and they have to be told the facts by a very small coterie of Western educated elite intellectuals, then what does this tell us about “the people”? And about Western education?
Supporters will say Pankaj has raised questions about Indian democracy and especially about Modi and the right-wing BJP that need to be asked. And indeed, he has. But here is my point: the good parts of his article are straightforward liberal democratic values. Mass murder and state-sponsored pogroms are wrong in the eyes of any mainstream liberal order. If an elected official connived in, or encouraged, mass murder, then this is wrong in the eyes of the law and in the context of routine bourgeois politics. Those politics do provide mechanisms to counter such things, though the mechanisms do not always work (what does?). But these liberal democratic values are the very values Pankaj holds in not-so-secret contempt and undermines with every snide remark. It may well be that “a western ideal of liberal democracy and capitalism” is not going to survive in India. But the problem is that Pankaj is not even sure he likes that ideal in the first place. In fact, he frequently writes as if he does not. But he is always sufficiently vague to maintain deniability. There is always an escape hatch. He never said it cannot work. But he never really said it can either... To say “I want a more people-friendly democracy” is to say very little. What exactly is it that needs to change, and how, in order to fix this model? These are big questions. They are being argued over and fought out in debates all over the world. I am not belittling the questions or the very real debate about them. But I am saying that Pankajism has little or nothing to contribute to this debate. Read him critically and it soon becomes clear that he doesn’t even know the questions very well, much less the answers... But he always sounds like he is saying something deep. And by doing so, he and his ilk have beguiled an entire generation of elite Westernized Indians (and Pakistanis, and others) into undermining and undervaluing the very mechanisms that they actually need to fix and improve.
It has been a great disservice.
By the way, the people of India have now disappointed Pankaj so much (because 31% of them voted for the BJP? Is that all it takes to destroy India? What if the election ends up meaning less than he imagines?) that he went and dug up a quote from Ambedkar about the Indian people being “essentially undemocratic”. I can absolutely guarantee that if someone on the right were to say that Indians are essentially undemocratic, all hell would break loose in Mishraland.
See this paragraph: In many ways, Modi and his rabble – tycoons, neo-Hindu techies, and outright fanatics – are perfect mascots for the changes that have transformed India since the early 1990s: the liberalisation of the country's economy, and the destruction by Modi's compatriots of the 16th-century Babri mosque in Ayodhya. Long before the killings in Gujarat, Indian security forces enjoyed what amounted to a licence to kill, torture and rape in the border regions of Kashmir and the north-east; a similar infrastructure of repression was installed in central India after forest-dwelling tribal peoples revolted against the nexus of mining corporations and the state. The government's plan to spy on internet and phone connections makes the NSA's surveillance look highly responsible. Muslims have been imprisoned for years without trial on the flimsiest suspicion of "terrorism"; one of them, a Kashmiri, who had only circumstantial evidence against him, was rushed to the gallows last year, denied even the customary last meeting with his kin, in order to satisfy, as the supreme court put it, "the collective conscience of the people".
Many of these things have indeed happened (most of them NOT funded by corporations or conducted by the BJP incidentally) but their significance, their context and, most critically, the prognosis for India, are all subtly distorted. Mishra is not wrong, he is not even wrong. To try and re-understand this paragraph would take up so much brainpower that it is much better not to read it in the first place. There are other writers (on the Left and on the Right) who are not just repeating fashionable sound bites. Read them and start an argument with them. Pankajism is not worth the time and effort. There is no there there…
PS: I admit that this article has been high on assertions and low on evidence. But I did read Pankaj Mishra’s last (bestselling) book and wrote a sort of rolling review while I was reading it. It is very long and very messy (I never edited it), but it will give you a bit of an idea of where I am coming from. You can check it out at this link: Pankaj Mishra’s tendentious little book
PPS: My own first reaction on the Indian elections is also at Brownpundits. Congratulations India
Monday, March 31, 2014
Uncle Warren Thanks You For Playing
by Misha Lepetic
"Is it the media that induce fascination in the masses,
or is it the masses who direct the media into the spectacle?"
I usually buy my cigarettes at a corner store, on Manhattan's Upper West Side, that, not unusually for such establishments, also does a brisk trade in lottery tickets. Now, buyers of both cigarettes and lottery tickets are placing bets on outcomes with dismally known chances of winning. My fellow consumers are betting that they will win something, and I am betting that I won't (I also console myself with the sentiment that I am having more fun in the process). But in both cases, the terms of exchange are clear – we give our cash to the vendor, and buy the option on the pleasure of suspense, waiting to see if we have won. Beyond the potential payout, there really isn't that much more to discuss: the transactions are discrete and anonymous. And in the end, someone always wins the lottery, and someone always lives to a hundred.
I was reminded of the perceived satisfactions of participating in games of chance with hopeless odds after hearing a recent piece on NPR discussing quite the prize: a cool $1 billion for anyone who nailed a 'perfect bracket.' In other words, the accurate identification of the outcomes of all 63 games of the NCAA men's basketball playoffs. Sponsored by a seemingly oddball trinity of Warren Buffett, Quicken Loans and Yahoo!, the prize is, on the face of it, an exercise in absurdity. But its construction is superb, and worth examining further, for reasons that have little to do with basketball, or probability, but rather for the questions it provokes around the value of information.
Now, bracket competitions have been going on at least since the tournament itself, which kicked off in 1939. Although brackets are common for other sports, there are unlikely subjects, too: saints and philosophers both have been thrown into pitched, single-elimination battle. But the NCAA bracket holds pride of place, not least because the number of participating teams is much greater than in most other playoffs. This leads to the absolutely astonishing odds: if each game is treated as an independent coin toss, the odds of a perfect bracket are 1 in 9.2 quintillion, a number that even Neil deGrasse Tyson might have difficulty contextualizing for us. Of course, the distribution of the initial round favors higher-seeded teams, so barring any first-round upsets, our chances may improve to a balmy 1 in 128 billion.
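The coin-toss figure above is easy to check for yourself. A minimal sketch (the 70%-per-game accuracy in the second estimate is purely a hypothetical assumption for illustration, not a figure from this piece):

```python
# Odds of a perfect 63-game bracket if every game is an independent,
# fair coin toss: 1 in 2**63.
games = 63
coin_toss_odds = 2 ** games
print(f"1 in {coin_toss_odds:,}")  # 1 in 9,223,372,036,854,775,808 (~9.2 quintillion)

# Hypothetical: a well-informed fan who picks each game correctly with
# probability p = 0.7 shortens the odds dramatically, though they remain hopeless.
p = 0.7
informed_odds = (1 / p) ** games
print(f"1 in {informed_odds:,.0f}")  # roughly 1 in 5.7 billion under this assumption
```

Even generous per-game accuracy leaves odds far worse than any lottery, which is what makes the $1 billion guarantee so safe.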
So we have at least an answer to the initial question of "What odds would make you feel comfortable enough to put up $1 billion?" Of course, if someone had won, Warren Buffett, whose net worth clocks in at about $60 billion these days, would have been on the hook, or rather his firm Berkshire Hathaway, whose market cap is five times the size of Buffett's wealth. (I mention both Buffett and his company because Buffett has thrown in a classic game theory move: he is willing to buy out anyone with a perfect bracket going into the Final Four for, say, $100 million.) In any event, it certainly would have been worth seeing the avuncular Oracle of Omaha show up at the door of the lucky winner with a giant cardboard check, just like Ed McMahon used to do with the Publishers Clearing House Sweepstakes. But if the chances of winning are nearly impossible, and there is no cost to enter the contest, we are left with a head-scratcher: who benefits?
There is an obvious pleasure to filling out brackets, of competing for the sake of competition, of measuring ourselves against not just one another but against the unknown. And certainly casual observers of what has become known as the "Buffett bracket" would not be wrong to point out that, on the face of it, Buffett et al. have come up with a great publicity stunt. But a publicity stunt, for all its Barnumesque splashiness, is intrinsically ephemeral. Its principal value lies in the fact that it grabs our attention and confers some brief benefit upon its initiators before sinking beneath the ebb and flow of the 24-hour news cycle. In this age of big data, where the world's most successful technology corporations thrive on dressing up "free" services with ever more finely targeted advertising, we ought to hope that there is a subtler angle.
And there is. Recall the three sponsors of our prize: Berkshire Hathaway, Yahoo! and Quicken Loans. In order to enter the competition, prospective bracketologists (that's a real word) had to visit a Yahoo! page, where they had to first open a Yahoo! account and then fill out a detailed Quicken questionnaire which elicited not just their name, home address, email and phone number, but much more importantly, whether they own their home or plan to purchase one in the future, and, if they own one, the current interest rate on the mortgage. For its part, Berkshire Hathaway receives a fee from Quicken and Yahoo! for insuring the competition, i.e., in case the payout actually happens, which it never will. Everyone's a winner, baby.
The benefit to these entities – particularly to Quicken, which specializes in mortgage lending – becomes apparent when one combines the quality of the information with the scale of participation. Concerning information, Slate, in one of the few clear-eyed articles on the matter, quotes a mortgage investment banker as saying that "it's not uncommon for companies like Quicken to pay between $50 and $300 for a single high-quality mortgage lead." While Quicken's spokespeople have been at pains to point out that only people who ask will be contacted, the fact is that all of the information on the entry form is required, which allows Quicken to create a massive database from which it can model all sorts of trends and behaviors.
How massive? At first, the organizers limited the number of entrants to 10 million, but based on the response sensibly increased it to 15 million. At this moment it's unclear how many people actually registered, and I doubt that this number will ever be disclosed. But if we take the low range of what Quicken pays for lead generation and assume that 1 million people opt to be contacted (i.e., 10% of the low end of the entrant population), Quicken has acquired $50 million of lead generation value, and this does not include any revenue from leads that it manages to close. Even if we knock down the 10% by an order of magnitude, Quicken is still enjoying a $5 million freebie (of course, I am assuming honesty on the part of the respondents).
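The back-of-the-envelope valuation above can be reproduced directly. A sketch using the figures cited here ($50 per lead at the low end, 10 million entrants); `lead_value` is just a hypothetical helper name:

```python
def lead_value(entrants, opt_in_rate, value_per_lead):
    """Estimated dollar value of the contact leads a contest generates."""
    return entrants * opt_in_rate * value_per_lead

# Low-end scenario: 10 million entrants, 10% opting in to be contacted,
# $50 per qualified mortgage lead.
print(lead_value(10_000_000, 0.10, 50))  # -> $50 million

# Knocking the opt-in rate down by an order of magnitude still
# leaves a $5 million freebie.
print(lead_value(10_000_000, 0.01, 50))  # -> $5 million
```

The point of the arithmetic is that even wildly pessimistic opt-in assumptions leave the sponsors comfortably ahead of the cost of insuring a prize that will never be paid.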
For its part, Yahoo! gains an equivalent number of users. Obviously, some will already be Yahoo! accountholders, but even if we assume that only half are new users, that is still 5 million fresh fish to subject to new ads, at least for a time. Berkshire Hathaway's benefit, aside from the insurance fee, is less clear, but the language in the contest rules leaves wide open the opportunity for sharing information between Quicken and the conglomerate (and if you have any doubts about the spurious protections afforded by these agreements, have a look at this 60 Minutes report).
So what? People are always giving away something in the hopes that they will gain something that is, in their perception, of even greater value. In the case of the Buffett bracket, even if what they finally get is nothing, I suspect there is still a pleasure in the act of playing – in other words, a bribe. But before discussing bribery, what interests me is the change in what's considered a fair trade. Any economist will maintain that a trade made without coercion is a fair trade, with the libertarian corollary being that people should not be protected from the consequences of their greed and/or stupidity.
But Western law has tended to draw the line at varying points. Nigerian letter scams and boiler room pump-and-dump schemes are illegal precisely because society has decided that there is a point beyond which people need to be protected from their cupidity. And the terms of engagement and success for the Buffett bracket are rather clear: in this sense, the contest is neither a fraud nor a scam. You pay to play, in a way that may not seem obvious or even harmful. But what is not transparent are the purposes for which that data is used, beyond the immediate consequence of the generation of consent, or the persistence of this data. Would people change the way they thought about giving up this information if they knew of the enormous subterranean infrastructure that traffics in their personal details? Would they value it more? But if there are no mechanisms of valuation (ok, fine: free markets) that make the worth of this information apparent, how do we approach this?
Consider what happens when these mechanisms of valuation are not available to us as individuals. The master-stroke of the Buffett bracket is to force an extraordinary, cognitively unresolvable trade: it somehow makes perfect sense to divulge to some corporation the interest rate on your mortgage in order to gain the right to guess the outcome of a bunch of basketball games (a right which you had anyway, minus the impossible prize). And as proof, millions have chosen to do exactly this. The contest's creators rightly discerned that the value of this information to each individual is trivial, and yet the networked value of the aggregated information is, to those same creators, extremely valuable indeed. Recall a much-abused quote by Stewart Brand: "Information wants to be free." The anthropomorphism implied here is some awful hippie nonsense, but fortunately that is only a fragment. Here is the full quote (with a full exegesis here):
On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.
In the Buffett bracket we have the resolution of this paradox – of how what is free (as in costless) is transmuted into value (something that is otherwise expensive to obtain). It is quite clear to whom the information is valuable, and the generation of this value is only possible through the vast systems that aggregate millions of bits of data into models that determine and predict behavior, ultimately driving profit. It is also quite clear how lowering the cost of getting information into the system makes it free (again, as in costless). What the internet and the accompanying utter lack of regulation enable is the hyperefficient siphoning off of that information from any willing individual who hasn't the means to determine what his information might actually be worth - which is pretty much no one. As a further consideration, note that most people will forget they entered the contest within weeks of the tournament's end, but that there are no provisions for their information's expiration. We may be done playing the bracket, but the traces of data that we leave behind are never forgotten.
The problem with this analysis (aside from its melodramatic nature) is that it is incomplete. There is no resolution at this moment. Regulation that would give private citizens the right to use their information as an object of the commodity economy (ie, for lease as well as for sale) versus the current state, where it has by default fallen into the realm of the gift economy, is about as likely as a perfect bracket. The best that thinkers such as Jaron Lanier – who has written extensively on the subject – can seem to come up with is a system of micropayments, but the problem with technologists is that they tend to have a dismal grasp of the dismal science. In the meantime, what continues to take place is not so much a fraud or a scam, but really a sort of bribery. As automation continues to replace middle class jobs, we are being bribed for what little we have left that is uniquely our own, and, it being of such little worth to us, we find ourselves willingly trading it for the privilege of, as Žižek says, having "an experience" – in this case, the non-chance to win a billion dollars. This is the heart of ideology, in that it does not need to hide itself. After all, Slate and NPR both published insightful articles on the Buffett bracket and what it meant for participants. There is no need to obfuscate the truth, as it is much more useful for large network actors to be (sufficiently) open about their motives and desires. One doesn't have to look very hard to see that the old Wall Street adage – "They take your money and their experience, and turn it into their money and your experience" – has never been more true, or more subtle, since you are brought to believe that you never had the money in the first place.
So what about the state of the Buffett bracket? Sadly enough, no one made it past the first two days of competition. As fate would have it, the first round saw 14th seed Mercer upsetting 3rd seed Duke, which wiped out a large swathe of punters. Better luck next year, kids. In the meantime, the folks at Quicken have a lot of phone calls to make, and I need to go to the corner store to pick up a fresh pack of smokes. I sometimes think about picking up a lottery ticket while I'm at the counter, too, but somehow never seem to get around to it.
Monday, March 24, 2014
Killing Shias...and Pakistan
by Omar Ali
I have written before about the historical background of the Shia-Sunni conflict, and in particular about its manifestations in Pakistan. Since then, unfortunately but predictably, the phenomenon of Shia-killing in Pakistan has moved a little closer to my personal circle. First it was the universally loved Dr Ali Haider, famous retina surgeon, son of the great Professor Zafar Haider and Professor Tahira Bokhari, killed in broad daylight in Lahore along with his young son.
This week it was Dr Babar Ali, our friend and senior from King Edward Medical College. He was the assistant DHO (district health officer) and head of the anti-polio campaign in Hasanabdal, and he was shot dead by "unknown assailants" as he drove out of his hospital at night. Shia-killing portals reported his death, but it is worth noting that no TV channel or major news outlet reported on this murder. Such deaths are now so utterly routine that they do not even make the news.
This should scare everyone.
In 2012 I had predicted that:
“The state will make a genuine effort to stop this madness. Shias are still not seen as outsiders by most educated Pakistani Sunnis. When middle class Pakistanis say “this cannot be the work of a Muslim” they are being sincere, even if they are not being accurate.
But as the state makes a greater effort to rein in the most hardcore Sunni militants, it will be forced to confront the “good jihadis” who are frequently linked to the same networks. This confrontation will eventually happen, but between now and “eventually” lies much confusion and bloodshed.
The Jihadist community will feel the pressure and the division between those who are willing to suspend domestic operations and those who no longer feel ISI has the cause of Jihadist Islam at heart will sharpen. The second group will be targeted by the state and will respond with more indiscriminate anti-Shia attacks. Just as in Iraq, jihadist gangs will blow up random innocent Shias whenever they want to make a point of any kind. Things (purely in terms of numbers killed) will get much worse before they get better. As the state opts out of Jihad (a difficult process in itself, but one that is almost inevitable, the alternatives being extremely unpleasant) the killings will greatly accelerate and will continue for many years before order is re-established. The worst is definitely yet to come. This will naturally mean an accelerating Shia brain drain, but given the numbers that are there, total emigration is not an option. Many will remain and some will undoubtedly become very prominent in the anti-terrorist effort (and some will, unfortunately, become special targets for that reason).
IF the state is unable to opt out of Jihadist policies (no more “good jihadis” in Kashmir and Afghanistan and “bad jihadis” within Pakistan) then what? I don’t think even the strategists who want this outcome have thought it through. The economic and political consequences will be horrendous and as conditions deteriorate the weak, corrupt, semi-democratic state will have to give way to a Sunni “purity coup”. Though this may briefly stabilize matters it will eventually end with terrible regional war and the likely breakup of Pakistan. Since that is a choice that almost no one wants (not India, not the US, not China, though perhaps Afghanistan wouldn’t mind) there will surely be a great deal of multinational effort to prevent such an eventuality.”
Unfortunately, it seems that the state, far from nipping this evil in the bud, remains unable to make up its mind about it.
The need to have a powerful proxy in Afghanistan after the American drawdown seems to take priority over the need to maintain sectarian harmony in Pakistan, as do the financial ties that bind Pakistan to Saudi Arabia. Many (though not all) on the left also remain convinced that pitting Sunnis against Shias is mainly (or even entirely) a project of the CIA, promoted as a way to keep the Middle East in turmoil. But even if this is true (and I personally doubt that the purveyors of this theory have the evidence, or have even worked out the implications of their worldview, but that is a separate story), it does not absolve the ruling elite in Pakistan of their responsibility in this matter. The strangest and most irrational meta-narratives can be sustained while acting rationally and shrewdly in the world of actions and short term consequences (where most politics is necessarily conducted), but the reverse is not always true; there are some blindingly obvious mistakes that should not be tolerated no matter what meta-narrative you wish to subscribe to. The Ahle Sunnat Wal Jamaat (ASWJ)’s campaign against the Shia sect is one of those. Whether people have a Marxist or Islamist or Capitalist worldview hardly matters; the ruling elite cannot possibly sustain itself if this affair progresses much further. I would argue that:
- The ASWJ and its fellow travelers (whatever their historic background and philosophical roots may be) are an existential threat to the modern state of Pakistan. The modern Pakistani state can tolerate (and has tolerated) many amazing contortions and disasters, but open season on the Shia population is not one of them. Unlike Ahmedis or Sindhi Hindus, the Shias of Pakistan are not a small fringe community. They are an integral part of Pakistani society, deeply woven into the Pakistani state, capable of armed retaliation, and able to obtain support from at least one (probably two or even three) well-resourced neighbors. Their elimination or suppression is not a realistic option for Pakistan even as a practical matter (quite apart from the blindingly obvious moral issues involved). The ASWJ is very clear about its intentions and makes no secret of them. Those intentions cannot be dismissed as mere words after all that has happened in the last 30 years. They are deadly serious. They will not tolerate Shias as equal partners in the Pakistan project. They have repeatedly insisted that Shias should be removed from “important positions” in the state and that their religion must be demarcated as something distinct from “real Islam”. With a wink and a nod, they may say that they are willing to accept the existence of Shias “if they do not cross the line”. But that line will be defined as needed by the ASWJ, and will eventually be drawn so tightly across Shia necks that they will not be able to breathe. The parallel with the Nazi view of the Jews is entirely valid. This project has no peaceful resolution. It must be condemned, its leaders ostracized and its violent executioners terminated with maximum prejudice. Otherwise you can say goodbye to Pakistan.
- The “strategic priorities” of the state (one of the cruelest jokes perpetrated on our unready institutions by think tanks and teachers from “advanced” countries) have led it to encourage the spread of extremely intolerant and violent ideologies and organizations across the length and breadth of Pakistan. Here I would like to add that I do not disagree with those who say that there are deeper economic and social reasons for the phenomenon of religious fundamentalism and the spread of organized violence (whether Islamist or Maoist) among the “weaker sections of society”. My point is much shallower and more urgent. The social and economic challenges and changes that have driven the rise of Hindu and Sikh militants, Maoists and even South American drug gangs are also operative in Pakistan, but the self-destructiveness and confusion of the Pakistani ruling elite goes well beyond the norm. For 13 years the international community (not just the United States) has poured money and weapons into the Pakistani state to assist it in destroying the network of Jihadist terrorist organizations created (with American help at the beginning) in our region. Even if one believes the most insane conspiracy theories about the CIA acting at the same time to prop up these very organizations as part of some diabolical plan of the trilateral commission or the elders of Zion, the fact remains that the Pakistani ruling elite did not have to actively work for any such diabolical plan. It is not in their interest to sustain and support any of these terrorist organizations or provide them cover. To continue to do so for the sake of “obtaining leverage in Afghanistan post 2014” is insane, and it remains insane no matter what meta-narrative you wish to apply to the situation.
- There are also those who believe that the connection between various “Good Taliban/anti-imperialist resistance” in the tribal areas and the Shia-killers in the rest of the country, is exaggerated by people who are being paid in dollars to make this case. Why the dollar-slaves (Imran Khan’s loving term for those who oppose his pro-Taliban leanings) would make such a connection when the CIA desperately wants to spread sectarian conflict within Pakistan (as Imran Khan and many others also believe) is not clear, but could this claim be true? Could it be that use can be made of the “good Taliban” and their network of Madrassahs and political supporters in Pakistan, while launching a clearly demarcated operation against the Shia-killers of the LEJ? I think not. The ideology of Sunni purity and Shia-hatred that drives the LEJ is also the ideology of the good Taliban. Economic and social pressures may create the target killers, but ideology is the proximate cause for their alignment with this particular form of “protest against real suffering”. Since the socio-economic conditions of Pakistan will not change at any speed rapid enough to defang this beast before it kills Pakistan (simply because they have never changed that fast in any country at any time, all fantasies of overnight successful and productive people’s revolution notwithstanding), it is the proximate causes (the ideology and its armed enforcers) who will have to be dealt with. Any policy that permits the Taliban and their support networks to operate unhindered, will also permit the ASWJ and its network of killers to operate unhindered. To imagine that the good Taliban will be pushed into the coming Afghan civil war fast enough to permit the ruling elite to recover ground in Pakistan while remaining allied with them (the dream scenario of the strategic depth community) is to carry self-delusion to incredible heights. 
The links between the good and the bad Taliban are too numerous, their cause too closely interlinked, for this to be possible. Whether driven by fantasies of strategic depth or by other (equally “modern”) fantasies of anti-imperialist struggle, this calculation is not tenable.
It is time to change course.
A few snippets and videos worth a look:
This is a section from a report about the arrest of Shia-killer Tariq Shafi alias doctor, a friend of Waseem Baroodi (a policeman who killed many Shias, spent time in prison, was freed, and went back to both the police force and his job as a Shia-killer) (whole thing here):
“ During the JIT Interrogation , he told his where about as he was born in 1968 , and was the resident of P.I.B Colony , And got his elementary education from Govt . High School, Sindhi Hotel , Liaqatabad, and during the same Period he also did a Refrigeration Course , and passed his Matriculation Privately in 1989 . And In 1990 he Joined the Garden area Police as a Mechanic . But at the Untimely death of his Brother in 1995 , he left the Job and shifted to Bhawalpur , where he Married his maternal Cousin, and got involved in the Fabric Business , but as the Business could not florish , so he came back to Karachi in 1998 , and his Job also got re Instated in the Police Department .
And During his Job in the Police , he got in contact with a Young Man named Waseem Baroodi , who use to come to one of his students , who was a Prayer leader of Mosque in Orangi Town 11 ½ , who convinced him for the sectarianism & Blood shed of Opponents , So finally one fine day he told that he has a 30 bore Pistol with him , and Waseem Baroodi took him along to kill a Innocent Boy , Both walked toward the Boy , and on Pointation of Waseem Baroodi of that Boy , I fired on him , resulting his death
From 2000 to 2001 before he got arrested he Killed about 9 or 10 Shia men. One day He and Waseem Baroodi were walking on the road as they came across some Street criminal Men , who were trying to snatch cash from Waseem Baroodi , but on his resistance he got injured due to their firing , in the mean time I took out my Pistol , and fired on them , and due to the firing One of the Dacoits got Killed , and as Waseem was also injured , and I was trying to take Waseem to Hospital for treatment , but at the same time we were arrested by the A.S.I Ali Raza of Orangi Ext. P.S , we were arrested on 11 different cases , for which I was in Jail for about Seven and a Half years , till finally I was released on Bail in 2008 – 2009 , and by that time Waseem was already released on Bail , about 7 to 8 months , earlier , and during the Imprisonment period , he was the Group Leader of Sipah e Sahaba Pakistan.”
Also, do not miss this event. It is a gathering of ASWJ leaders in Quetta, under the protection of security forces; awards are being handed out to local ASWJ leaders who have played a prominent role in anti-Shia activities in their region. Since this local branch has the “distinction” of having killed hundreds of Shias at a time (instead of picking them off one by one), one of the speakers recites a poem that commends them as “those who make centuries instead of playing for ones and twos” and the crowd laughs and cheers. Everyone knows what he means. It is an absolute must-see.
The following videos shed light on the aims of the ASWJ/SSP/LEJ:
Monday, March 03, 2014
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3 in 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3 in 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not the ends.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it, but it is when the latter occurs that internet usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require deism, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users in the United States said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence" but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion, therefore it may be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive internet pundit quotes which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion and its mythical or supernatural aspects, which are often essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition for religions but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity, accessing computers, hunting for wireless networks, and checking emails or social media updates so frequently that this exceeds one's pragmatic needs, could be considered a ritual of Internet-centrism. If one feels the need to check emails and Facebook or Twitter updates every one to two minutes, despite the fact that it is unlikely one would have received a message that required urgent action, it may be an indicator of the important role that this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss out on regular prayers or lose the rosaries that allow them to commune with their God, and it appears that for some humans, the ritual of internet connectivity may play a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers or cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just like "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine that is promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits in Silicon Valley that people were outraged at the government institution NSA for violating our privacy. There was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism since the internet (both in its physical or its mythical form) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and social interactions are key for many religions, but Internet-centrism provides more tools than any other ideology, cultural movement or religion for us to interact with others. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of Internet-centrism. One of its goals is to provide opportunities for self-actualization. We all may be connected via the internet, but Internet-centrists also want us to believe that this connectivity provides a path for self-actualization. We can modify settings to customize our web browsing experience, and we can pick and choose from millions of options of what online courses we want to take, videos we want to watch or music we want to listen to. This sense of connectedness and omnipotentiality is what provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to the spiritual experiences of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that at least in the short-term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Must We Have Fascism With Our Petits Fours?
by Dwight Furrow
A few weeks ago in the pages of 3 Quarks Daily we were treated to the proclamation of a new doctrine called "Anti-Gopnikism". The reference in the title is to Adam Gopnik, essayist for the New Yorker, who writes frequently in praise of French culture, especially French food. Philosopher Justin Smith, who is responsible for the proclamation of this doctrine, defines Gopnikism as follows:
The first rule of this genre is that one must assume at the outset that France --like America, in its own way-- is an absolutely exceptional place, with a timeless and unchanging and thoroughly authentic spirit. This authenticity is reflected par excellence in the French relation to food, which, as the subtitle of Adam Gopnik's now canonical book reminds us, stands synecdochically for family, and therefore implicitly also for nation.
Thus, Anti-Gopnikism, we are to infer, must consist of a denial that France is an exceptional place, or that it has a timeless, unchanging, authentic spirit, or that its relationship to its food is unique, or all of the above. We are not provided with any evidence to support any of these denials.
Whether American writers are correct to extoll the exceptional virtues of France depends on what you're looking for. The French are lousy at the Olympics but their wine is awesome. Their music can be simple ear-candy and overly romantic, but then there is Boulez and Messiaen. Their language is lovely but peculiar; their conversation at times formal but extraordinarily civilized. Like any nation, they have virtues and vices. If you are interested in food and wine, they are an essential nation and have, for centuries, defined what fine food is. To claim their relationship to food is not exceptional is to be blind to their extraordinary influence. Other cultures may lay claim to being more influential today, but that does not erase the glorious history of French food. As to the timeless, unchanging, authentic spirit: well, we are all part of history and no culture is timeless or unchanging. As far as I can tell, Gopnik doesn't claim or imply a timeless, unchanging essence. In fact, in his recent book The Table Comes First: Family, France, and the Meaning of Food, he claims French food has fundamentally changed in recent decades and is in crisis, and he upbraids the French for narcissism and navel-gazing.
So what is this diatribe against "Gopnikism" really about? It turns out Gopnikism is a lot more sinister than a French food fetish. Smith writes:
France, in other words, is a country that invites ignorant Americans, under cover of apolitical vacationing, of living 'the good life' and of cultivating their faculty of taste, to unwittingly indulge their fantasies of blood-and-soil ideology. You'll say I'm exaggerating, but I mean exactly what I say. From M.F.K. Fisher's Francocentric judgment that jalapeños are for undisciplined peoples stuck in the childhood of humanity, to Gopnik's celebration of Gallic commensality as the tie that binds family and country, French soil has long been portrayed by Americans as uniquely suited for the production of people with the right kind of values. This is dangerous stuff.
Oh my! This is truly a puzzling argument. No doubt the French view their cuisine as an expression of their national character, just as do the Italians, Japanese, or Chinese, among others. Gopnik's claim is that the French have discovered, perhaps more so than other nations, that the pleasure of food brings intimations of the sacred into our lives. Independently of whether such a claim is true or not, what on earth does this have to do with Nazi "blood and soil" ideology? Something has gone deeply wrong here.
This argument relating French food to Nazism seems to go something like this: (1) French attitudes toward their cuisine are expressions of excessive nationalism; (2) German attitudes in the 1930s about the purity and superiority of their "racial stock" were expressions of excessive nationalism; (3) therefore, writers (and tourists) who extoll the virtues of French cuisine are implicitly endorsing the attitudes of Nazis toward their alleged racial superiority. What exactly a love of cassoulet has to do with burning people in ovens we are not told.
I suppose we get a clue from Smith's criticisms of the French treatment of their immigrant populations—especially Muslims.
I have witnessed incessant stop-and-frisk of young black men in the Gare du Nord; in contrast with New York, here in Paris this practice is scarcely debated. I've been told by a taxi driver as we passed through a black neighborhood: "I hope you got your shots. You don't need to go to Africa anymore to get a tropical disease." On numerous occasions, French strangers have offered up the observation to me, in reference to ethnic minorities going about their lives in the capital: "This is no longer France. France is over." There is a constant, droning presupposition in virtually all social interactions that a clear and meaningful division can be made between the real France and the impostors.
I don't live in France, but if the American media is to be believed, the French treatment of minority populations, as well as rising xenophobia throughout Europe, is deplorable, although it is not obvious that it is uniquely so. Perhaps the French treatment of immigrant populations is an indication of a kind of insularity endemic to French culture, which, per hypothesis, explains the decline in creativity in French cooking that some authors, including Gopnik, have noted. But smug complacency regarding one's cuisine is hardly the same thing as a regime of genocide or violent immigrant-bashing.
Indigenous foods that express the terroir of local soils and the sensibility of a people are about the uniqueness and incomparability of a place. These, by definition, cannot be transplanted; they belong nowhere else but in that location among those people. Nazi "blood and soil" ideology was about universal hegemony. It was about the right to rule over and exterminate others. The conceptual chasm between French food fetishism and Nazi violence is enormous.
Even if we stick to food and ignore the silly notion that "food fights" are akin to real violence, the inference from love of one's culture to attempts at world domination makes no sense. You can praise the virtues of some constellation of flavors or a method of straining soups without thinking everyone must deploy those flavors or methods in their cuisine. Something might work wonderfully in the French style without being appropriate anywhere else, and nothing about the virtues of one locality's food precludes the appreciation of another. Even if the French think they have the world's best cuisine it doesn't follow that they think everyone must emulate or promote it.
Despite this utterly failed comparison, there is an interesting and important philosophical issue percolating behind the slippery logic of this argument. Can you love a place, a culture, a people and think of them as uniquely virtuous without excluding respect for others who are outside that culture? Can one enjoy the goods of being immersed in and loyal to one's own culture while acknowledging the good of other cultures? Is particularity compatible with universalism? The answer would seem to be, obviously, yes. The devil is, of course, in the details. Some conflicts between cultural belief systems cannot be mitigated, let alone resolved. But there is no general or principled reason why love of one's nation or culture cannot be constrained by an acknowledgement of the rights of others. This is true even when the stakes are high. Many of these "food fights", as well as debates over immigration policies, are motivated by fears of cultural annihilation. But the French, or anyone else, can pursue cultural survival without excessive force or attempts at world domination.
Arguably, if cultural survival is at stake and there is too much influence from the outside, one's identity or particularity is undermined. The French, of course, have always been deeply protective of their cultural and linguistic heritage, going so far as to have a ministry of the state responsible for the preservation of French identity. Perhaps this exaggerated "anxiety of influence" is the source of Smith's worry that French fascism is hiding under your croissant. But the rational response to such a threat is creative "border management", where new influences interact with entrenched traditions to create new formations that constitute cultural advance. Food traditions are in fact excellent examples of creative "border management". French cuisine would not have the depth it has without the Germanic-influenced dishes from Alsace, the Mediterranean and North African-influenced foods of Provence, the Spanish influence on Basque cooking, etc. The history of food shows that the "anxiety of influence" is overwrought, and food writers such as Gopnik are adept at highlighting this history. Perhaps it is Smith's contention that the French are incapable of such border management. But they obviously are capable, given the history of their food.
Partiality toward one's culture or nation can be benign or dangerous depending on whether it is supplemented by megalomania. Love of one's culture is not dangerous. It is the idea that one's culture is in fact a universal culture that threatens. The French are showing no signs of becoming a world hegemon and Gopnik's writing will hardly make it so.
I predict anti-Gopnikism will join phrenology and the four humors in the dustbin of history.
For more ruminations on the philosophy of food and wine, visit Edible Arts
Nothing Hurts The Godly
One fish says, "So, how's the water?"
The other fish replies, "What water?"
Ladies and gentlemen, I give you Richard Stallman, shuffling onto the stage at Cooper Union's Great Hall. Accompanying Stallman is the veritable Platonic Ideal of a potbelly; his shoes are almost immediately discarded and left by the podium. Padding around the same stage where, in 1860, Abraham Lincoln gave the speech that ignited his political career, Stallman proceeded to subject his New York audience to a rambling disquisition on freedom and computer code, consisting of oftentimes astonishingly petty invective and peppered with various requests that veered from the absurd to the hopelessly idealistic, but which ultimately served to drive away a good portion of the audience, including myself, well before its conclusion, nearly three hours later.
Why is this recent encounter with a nerd's nerd at all worth recounting? (While entertaining, I will forego the petty bits, although you can view the whole talk here.) Simply because, in computing circles, Stallman is an archetype: the avenging angel of free software. Over 30 years ago, he founded the Free Software Foundation (FSF), which has since that time been developing the GNU system, a free operating system that was completed by the addition of Linus Torvalds's Linux kernel. It is no exaggeration to say that the smooth functioning and scalability of much of the Internet is thanks to the overall availability and robustness of the GNU/Linux operating system and its various derivative projects. These, in turn, are the result of probably millions of hours of volunteer labor.
So when Stallman says ‘free,' he really means it, and this is where the trouble begins. According to the FSF, free software allows anyone
(0) to run the program,
(1) to study and change the program in source code form,
(2) to redistribute exact copies, and
(3) to distribute modified versions.
This is a simple and powerful set of axioms. It also requires certain conditions to be met, the most challenging of which is access to the code in its source form. Any time the chain of modification and distribution is broken – say, if the person modifying the code chooses to make the source code unavailable, or chooses to charge a fee for the modification – the code is no longer considered free. Of course, ‘unfree' code can also be made free (this is in fact what Torvalds did with Linux).
Stallman is an idealist and makes no bones about it – in his ongoing capacity as GNU's leading light, he enjoys referring to himself as "the Chief GNUisance." I admire this – like many purists, he is as constant as the North Star. You always know where you stand with him, which generally means the only question is how short you fall of his ideals. As with any purist, I suspect that there are only two kinds of people in his worldview: free software advocates and everyone else. Unfortunately, this jihadi attitude leads some of us to consider a different binarism: that the world consists of those who are free software advocates, and those who think that free software advocates are insufferable assholes. This is unfortunate.
Here is something else that is unfortunate: three brief critiques that do not undermine the axioms above, but rather make those axioms irrelevant, or at the very least vastly less impactful than FSF advocates might hope.
1) Not everyone can read source code, or wants to. When I'm not mouthing off on 3QuarksDaily, I help to design, develop and run a custom-coded internal learning technology platform for a fairly large multinational. On Friday afternoon, the developers pushed through an update to the platform that did not seem to be particularly intricate but that nevertheless wound up breaking much of the platform's functionality. Given that this internal site is viewable by upwards of 50,000 people, I issued an all-hands-on-deck call (in the spirit of inventing new collective nouns, I would like to propose 'a compile of developers' for such occasions) and, following a six-hour conference call, we managed to return the platform to a more-or-less steady state.
What I want to point out here is not the fact that software breaks – this is more often the case than not, as software, despite its name, is inherently brittle. More salient is the fact that it took five or six people who are contract professionals in their field a good chunk of time to understand and fix what had gone wrong in an information system of, frankly, only mild complexity. Software has reached a state of complexity that challenges even the people who originally wrote the code themselves. So we can confidently say that the number of people who can evaluate almost any non-trivial source code is drastically limited. This is to say nothing of whether one is being held accountable for the stability and integrity of said code via compensation. It is one thing to be able to fire your developers for incompetence, since you can just as easily hire others to fix it. When the entire system of free software is predicated on potlatch principles, institutional actors lose leverage to get time-sensitive work done, and done to their specifications.
2) Not all outcomes on the Internet are driven by whether code is free. There has recently been much talk about the demise of "net neutrality," especially as a result of the piss-up between Netflix and Comcast. This is a complex topic (with excellent explanations here and here), but suffice it to say that net neutrality is the principle that all content traveling across the network is treated the same. In theory, the Internet is designed to not favor the delivery of cat videos over the State of the Union Address. The relevance to free software is simply this: the Internet depends not only on software. In previous times, the argument leveled against free software advocates was that you still needed the vast infrastructure of hardware to make that software, free or otherwise, relevant. No one was going to build a server farm for free. Indeed, whoever came up with the term 'the cloud' earned their marketing stripes, since it is nothing more than the outcome of decades of exponential progress in, and decrease in the cost of, computing power, bandwidth and memory. The materiality of this technology has not decreased at all, but, like factory farming, has merely been removed from view. However, the philosophy of the FSF is about software, not hardware.
In the case of net neutrality, the burning question is about the system of payments that guarantees the distribution of content. What is fair and equitable, and who gets to decide? Until recently – that is, until the advent of video streaming – the existing agreements and competition were sufficient to guarantee the timely delivery of content to users. Rather coincidentally, the decentralized architecture of the Internet was able to absorb existing demand. But with the video streams of Netflix and YouTube taking up about half of downstream Internet traffic, we now have a giant tug-of-war between the firms that handle traffic from its point of origin to the point of consumption.
In the logic of network economics, one of the ways to resolve this tug-of-war is for firms to merge, sometimes horizontally but especially vertically. While this may improve service, competition nevertheless suffers. These mergers result in companies evolving ever closer towards monopoly, and things reach a toxic boil when this integration combines access providers (e.g., a classic Internet Service Provider that is only interested in providing the pipes) with content providers (e.g., Comcast, which in addition to providing access also owns or co-owns NBC, E!, Hulu, etc.). Suddenly the access provider is incentivized to privilege its own traffic over that of its clients, like Netflix.
The FCC has been caught flat-footed by this eruption and, in the resulting regulatory vacuum, players like Comcast and Netflix have proceeded to make their own arrangements. Aside from being ultimately detrimental to consumers (has anyone seen their cable bill go down as a result of vertical or horizontal mergers praised for their intention to create economies of scale?), the landscape is much sparser, and until the government catches up and begins regulating the Internet as a utility, there is little recourse for content providers, let alone consumers. If you don't think the Internet is important enough to be considered a utility like electricity or telephony, consider the fact that the (much-derided) healthcare.gov website is in fact the first major government service to be offered exclusively online – and that it will scarcely be the last.
Note that in the entire discussion above, there is no mention of whether the code being used to run all this is free or proprietary. That's because it just doesn't matter. It's why the old joke about fish and water is appropriate here. The fish have more important things to think about, like where dinner is coming from, and how to avoid becoming someone else's dinner.
3) Not all devices are accessible, even if you have access to source code. Concerning the Internet's future, this is probably the most important category of all. In fact, it's a combination of the two preceding critiques: individual ability/willingness and access to hardware.
Encapsulated in the term the Internet of Things, we are talking about the entirely reasonable, and in fact inevitable, sensorization of everything, and the ensuing connection of all those sensors to the Internet. The classic example is the refrigerator that notices you are low on milk and helpfully puts it on your list, or just goes ahead and orders it for you. At the same time, it seems that these same fridges have been recruited by hackers to send out spam mail (technology is occasionally not without its moments of irony), so obviously there is plenty of room for improvement.
But say that you want to fix your fridge so that the only spam you get out of it is some kind of dodgy meat product. Even if you had access to the source code and had the ability to read and modify it, into where would you plug your laptop? Perhaps the handy USB port provided for just such an occasion by General Electric? Fat chance. It is the rare manufacturer that is interested in opening its hardware to the masses (although Jaron Lanier, former roommate and current nemesis of Richard Stallman, strong-armed Microsoft into doing so for its Kinect hardware, to great effect). We can argue as much as we like about the general disarray in which intellectual property law finds itself, or about how an overly litigious culture discourages companies from allowing people to tinker with their stuff, but the point is that free software, in Stallman's stern manifestation, does not begin to address the much more salient question of access to devices in the actual, physical world. And, as with the instance of net neutrality discussed above, almost no one but an overarching regulatory agency will ever be able to mandate any such availability.
This truth becomes even more expansive when we consider that the Internet of Things goes well beyond toasters and thermostats (although the latter are big business indeed). To a large degree, the entire concept of "smart cities" is predicated upon the generation of enormous amounts of data – data that can only be conjured by millions of sensors placed throughout the built environment. This is, to put it mildly, a double-edged blade, with the promised efficiencies inextricable from the specter of a command-and-control tyranny. However, the charge towards smart cities is driven wholly by corporations, and bought and paid for by governments. I can't think of two entities that, working in concert, would be less amenable to the idea of opening source code to all comers.
Indeed, the Internet of Things brings up another, even more explosively fragmented future: one in which computers themselves are limited to only specific tasks. In a fascinating talk delivered in 2011 entitled "The Coming War On General Purpose Computation," author and general gadfly Cory Doctorow lays out a picture of a computing landscape where firms manufacture purpose-built computers that carry a reduced instruction set. In this case, none of the software built up over the past thirty years by the free software movement will even run on these machines. Forget about free vs. proprietary: to Doctorow, the fight is about keeping tomorrow's devices able to run software unintended for them at all.
In all three critiques, we can actually come to an understanding of why free software was successful, because that is inextricably linked to where it was successful, and when. The GNU/Linux OS has been supremely successful – and vital – in providing the Internet's software backbone, a very deep and unfamiliar place to most of us. You basically had to be an expert even to find the conversation in the first place. Moreover, this was technology developed primarily in the 1980s and early 1990s, when the World Wide Web didn't quite yet exist and the Internet was non-commercial. There were simply fewer players, and there was also less at stake. This is not to say that the hacker ethos does not live on, nor that people aren't choosing to become further involved in re-making their digital (and physical) lives. But these movements are either decidedly on the periphery or, once they become visible or useful to the mainstream, are quickly assimilated, bought or legislated out of existence.
One could make an argument that the free software movement made the contribution it did precisely because the form of its social organization and ethos was exceptionally well-suited to the circumstances of the time. The uncompromising stance created a legacy that lives on today – for example, an astonishing 61% of web servers run on Apache, another free software project. But at the same time this purity points to another fatal flaw: if it's so great and so obviously the best way to go, why isn't free software everywhere? Back at Cooper Union I thought I caught a glimpse of the answer. Richard Stallman, for all his quirky grandstanding, awful joke-telling and Bush-bashing (yes, it is 2014 and he was gleefully Bush-bashing), never once admitted that he or the free software movement had ever made a mistake. This is the problem with purists – all controversies have been settled long ago, whether it is about dinosaur fossils, the number of virgins awaiting us in heaven, or the real value of gold. I dearly wanted to ask Stallman if there was anything he would have done differently in the past – perhaps the gentlest form that that sort of question can take – but, weighing his right to speech against my right to have a drink, I left to have a few beers around the corner instead.
Monday, February 24, 2014
Pakistan: Negotiations and Operations… and Islamicate rationality
by Omar Ali
This headline refers to two separate (though distantly related) subjects. First, to Pakistan. Apparently the Pakistani army is now conducting some operation or the other against some group or the other in North Waziristan and other “tribal areas” infested by various Islamic militant groups under the umbrella of the Tehreek-e-Taliban Pakistan (TTP). This operation was preceded by some farcical negotiations in which the Nawaz Sharif government nominated a group of powerless “moderate Islamists” to conduct negotiations with the TTP. It is likely that these "talks" were never meant to be serious, and that Nawaz Sharif and his advisors intended to use them to expose the bloodthirsty Taliban and their civilian supporters (like Imran Khan’s PTI and the Jamaat-e-Islami) as unreliable and extremist elements against whom a military operation was unavoidable. This gambit had worked once before, in Swat in 2009, when a peace deal was signed with the Swat Taliban and they were given control of Swat. They proceeded to behead people, whip women and begin marching into neighboring regions, thus showing that no reasonable peace was possible and only a military operation would work against them. But the Taliban 2.0 have learned some lessons of their own. They announced their own farcical committee (briefly including cricket star turned political buffoon Imran Khan) to hold negotiations with Nawaz Sharif's farcical committee. Within a few days the airwaves were dominated by Taliban representatives asking Pakistanis whether they wanted Islamic law or preferred to be ruled by corrupt Western dupes. The Taliban, who routinely behead captives and even play football with their heads, were suddenly respected stakeholders and negotiation partners, holding territory, nominating representatives and promising peace if the state acted reasonably and responsibly.
At the same time, their “bad cop” factions continued to knock off opponents and spread terror, including a gruesome video in which, in broad daylight, they brought the freshly killed, blood-soaked, headless bodies of soldiers they had taken captive three years earlier in an open pickup truck and dumped them on a "government controlled" road in Mohmand.
The government then half-heartedly suspended negotiations and started bombing selected targets. This may have been the intent all along, but the negotiations ploy certainly did not deliver the PR victory the state wanted; instead it further confused the state’s already muddled narrative. Even now, with some sort of operation under way, the Taliban are using the negotiating committee as a means of putting pressure on the state to halt operations against them, and the state’s propaganda war remains hobbled by its own ill-advised negotiation scheme.
Of course the state’s PR problems go beyond the merely tactical setback of one badly thought-out negotiations ploy. Pakistan’s foundational myths were confused and incoherent in any case, and the version promoted by the deep state is heavy on Islamist propaganda, especially since 1969, when Yahya Khan’s team of General Sher Ali and General Ghulam Umer (father of PTI whiz kid Asad Umer) decided that Islamism was the best bulwark against leftist and/or separatist forces. An entire generation of Pakistanis has grown up with notions of a once and future Islamic golden age that has little or no connection with actually existing Pakistani institutions or culture. This brainwashing makes it difficult to intellectually confront Islamist terrorist groups who are only demanding what the state itself has promoted as an ideal, i.e. an “Islamic system of government” and a “proud Islamic state” that stands up against anti-Islamic powers like India, Israel and the United States. Imran Khan is a particularly egregious example of the resultant confusion among semi-educated Pakistanis, but he is not the only one. Thanks to this added twist, it is harder to fight Islamist armed gangs in Pakistan than it should be given the technical sophistication of our institutions and our integration into the modern world. In short, while Pakistan is not as primitive as Somalia (where there are practically no institutional, economic or cultural resources above the level of Islamic solidarity and sharia law), the ruling elite has an added level of vulnerability that arises from its own Islamist ideological narrative, over and above all the vulnerabilities of any corrupt third world elite.
But here is the final twist. This added vulnerability (a vulnerability that is a particular obsession of mine) is not enough to spell the doom of the corrupt ruling elite. It adds to their problems, and to the extent that they believe their own propaganda, it has caused them to score repeated own goals, but I still believe that they will not be overwhelmed by the TTP or other “Islamic revolutionaries”. In fact, I will make several predictions and I invite readers to make theirs. Mine will be relatively concrete and simple-minded but I hope commentators will add value.
- The British-Indian colonial state, much decayed as it may be, is still light years ahead of any “system” Maulana Samiulhaq and his madrassa students can throw together. Tariq Ali’s anti-imperialist warriors have no viable modern political system or institutions to draw upon and nothing to offer except beheadings and endless sectarian warfare. There is no there there. The state possesses a modern army and a semi-modern postcolonial state. Its leaders may not fully understand what they have, but they do have it. They can still defeat the Taliban with both ideological hands tied behind their back. Of course it won’t be easy and it certainly won’t be pretty. The Pakistani state’s efforts may not be as vicious as the Sri Lankan army’s campaign against the Tamil Tigers, but the human rights violations and collateral damage will be no picnic (for more on this, see my Pakistani liberal’s survival guide).
- As the Pakistani army is forced to confront the particularly vicious groups gathered under the umbrella of the TTP, it will face a period of determined Islamist terrorism. But this is not the last wave of Islamist terrorism it will have to face. Two large reservoirs of terrorists are yet to commit themselves fully to a fight against the Pakistani state (or perhaps it would be more accurate to say that the state is yet to commit to fighting them): one is the anti-Shia terrorists of the Lashkar e Jhangvi, whose front organizations (ASWJ) and networks of madrassas still operate without hindrance in the country and especially in Punjab; the other is the various Kashmiri jihadist organizations that remain on good terms with the army.
- Of these two groups, the LEJ is in a very unstable equilibrium with the state. While some in the LEJ and some in the state security apparatus (and the right wing political parties) continue to behave as if anti-Shia mobilization can coexist with a nominally inclusive Pakistani state, this is not really a viable strategy. When push comes to shove (and it’s getting dangerously close to the shove state) the Pakistani state will have to opt against the LEJ. Tolerating their brand of Shia-hatred is fundamentally incompatible with the continued existence of semi-modern Pakistan. So, like it or not, the state will find itself having to confront the LEJ’s front organizations at some point and when it does so it will face an especially unpleasant round of terrorism.
- The second reservoir of Islamist terrorists (the Kashmiri jihadists) has been kept relatively quiet by promises that the glorious jihad will restart in full once America leaves, but that too is not a viable long term policy. India, for all its incompetence, is not such an easy target any more. The days when Benazir could wish to see Jagmohan (governor of Indian Kashmir) converted to “jag jag mo mo han han” (i.e. broken into little pieces) were the high point of that whole strategy. India survived that point and by now, those days are long gone. Some in the deep state may not realize it yet, but just like they have had to give up on so many other Jihadist dreams, they will also have to permanently abandon their Jihadist dreams in Kashmir. And when the deep state finally comes to that point, the remaining LET and Jaish e Mohammed cadres will have to choose between a life of crime and open warfare against the state. Many will undoubtedly become kidnappers and armed gangsters, but some true believers will opt to fight. It is likely that many of them will make common cause with TTP terrorists and LEJ (beyond the connections that already exist). Islamist terrorism, in short, has not yet peaked in Pakistan. There are at least two more waves to come even after the current TTP-sponsored wave passes its peak. There is also the possibility that these three waves may more or less combine into one in the days to come.
- The state will fight several groups of Islamist fanatics, but that does not mean it will become liberal or convert to Scandinavian style Social democracy. Warfare with the Islamist terrorist groups may still co-exist with attempts to outflank them by imposing sharia in some places and by pretending to be extremely anti-Indian and anti-American in others. Democracy and human rights will also suffer as they do in any state fighting an internal enemy. Crude suppression of Baloch and Sindhi nationalism will continue apace. Crony capitalism will become nastier and cruder than ever. Subject to the same pressures as the rest of planet earth, there will be more mixing of the sexes, more singing and dancing, and more semi-naked women being used to sell hamburgers and car-insurance, but many other trends will be unpleasant and will be unfair towards the weaker sections of society. These problems are, of course, not unique to Pakistan. These are the problems common to many of the artificial postcolonial states of the “developing world”. But it’s worth keeping in mind that the self-inflicted Islamist wound is not our only (or even our biggest) problem. It just makes it extra-hard to focus on all the other problems that also have to be solved.
- Still, there is a certain window of opportunity for mainstream liberal/secular parties (liberal in the Pakistani context, obviously not by Western or even East Asian standards). Even though the deep state is still using the CIA-RAW conspiracy against Islam as its main tool to motivate its own soldiers and remains fixated on “failed politicians” as the be all and end all of Pakistani incompetence and corruption, it will inevitably find itself standing closer to the hated PPP, MQM and ANP when it comes to fighting the Jihadist militias. Its old favorites in the religious parties, favored as recently as Musharraf’s so-called “enlightened moderate” era, have too many ideological sympathies with the Taliban. While personal links, past usefulness and shared antipathies still sustain ties with the Jamat e Islami and various JUI factions (and the dream of using “good jihadis” against Baloch nationalists and in various foreign policy adventures remains alive), practical necessity will force a slight rethink. This gives the “secular” parties a fighting chance to step forward and grab the initiative. All three (PPP, MQM and ANP) have made some efforts in that direction already, but they need to do much more. Pakistan’s small, but culturally disproportionately significant, old-guard left may also get a chance to enlarge their space and regain a little of the initiative they lost decades ago to the religious parties. Taking advantage of this opportunity is critical and both the “mainstream secular parties” and the old-guard Left must make the most of it.
- Unfortunately, in this task (of stepping forward, making alliances and grabbing political space from the religious parties), the left-liberal intelligentsia will be hampered by the opportunity cost imposed by the unusual penetration of ideas from the academic and elite sections of the Western “Left” into the South Asian intellectual elite. Their numbers are small and luckily most are not active in real-life politics, but their cultural and academic presence is not insignificant and they will do some damage. After all, there are only so many bright young intellectuals within the ruling elite who are temperamentally inclined towards liberal ideas. If 35% of them are sucked up into a universe where they read Tariq Ali, Pankaj Mishra and Arundhati Roy for political advice (not just for occasional insights, interesting information, entertainment or commentary on our absurd existence), well… you do the math.
Now to the second part of that title. A friend sent me Asad Q Ahmed’s article about Islam’s invented golden age (http://www.loonwatch.com/2013/10/asad-q-ahmed-islams-invented-golden-age/). I completely agree with the writer that there was no golden age of rationality that was followed by a dark age of irrationality simply because rationality was abandoned on the orders of Al-Ghazali and party. But Asad Q Ahmed then seems to imply that things were actually going much better than “orientalist” scholars believe and only recently took a dip, for reasons that have nothing to do with the irrationality of Imam Ghazali. He offers two tentative suggestions as to why intellectual endeavor declined (especially in the South Asian context): the adoption of Urdu instead of Arabic and Persian, and the rise of printing. I think this mixes up the issue of correcting a misrepresentation of Islamicate theology and philosophy (which were not as hopelessly irrational or sterile by contemporary standards as the “dark age” narrative implies) with the larger question of why scientific and industrial progress did not accelerate in the Islamicate world when it took off in nearby Europe.
I think we need to step back further than just correcting some misconceptions about Islamicate philosophers and theologians. First of all, it’s good to keep in mind that these (and other) golden age and dark age myths and legends are inevitable parts of a certain superficial level of propaganda. They are almost always untrue in scholarly detail. But that is not necessarily their point. It may not be the best idea to assess them from the level of the serious historical scholar. They are propaganda and their purpose is to promote or inhibit particular trends in current political conflicts. For a serious scholar to “discover” that they are erroneous is expected, and unsurprising. The point is what struggle they are being used in, and what side you wish to take in that propaganda war.
Moving on from that, if a serious scholar is going to take on this topic, then they should focus on their area of expertise: in this case, showing what Muslim religious and philosophical scholars actually read or thought. That is a huge service in itself. And I am sure Asad Q Ahmed has forgotten more about that topic than I can hope to learn in a lifetime. But the topic of why particular societies became more powerful or more scientifically advanced than others is a very big topic. It is not exhausted by learning about what theologians and philosophers said about reason and theology. It may in fact have surprisingly little to do with what theologians and medieval philosophers dreamed up (in the East or the West). A relatively small group of societies started the modern scientific and industrial revolutions. Whatever the reasons for this sudden acceleration (and while unlikely, it is not inconceivable that all we may ever say with certainty is “that’s just how it happened to be”), those reasons are likely to involve MUCH more than what the respective theologians of those societies said about reason and free will. The slippery nature of this topic is exemplified by the two tentative reasons Asad does end up proposing: Urdu and printing. I am sure everyone can remember equally impressive articles where the failure to develop learning in indigenous vernacular languages (e.g. Punjabi in Punjab) is the cause of our underdevelopment, and where the failure to take up printing on a large scale was a big problem, rather than a god-sent opportunity to write in margins. My point is not that the writer’s suggestions are necessarily wrong. Just that they may be not even wrong. They may be tangential to the main issues.
There is no one single Islamic model or empire. The early Arab empire was an imperial undertaking, and a successful one, but when it ran out of steam, its successor Islamicate empires (e.g. Ottoman, Mughal, Safavid) all failed to evolve any tradition of science or industry that matched what was happening within sight of them in Europe. They also failed to develop any political institutions beyond the old models of kings and emperors that they had taken from Near-Eastern and Central Asian models centuries earlier. Ghazali probably did not cause this failure to accelerate, but his efforts did not contribute to any significant advance in these areas either. Scholars will eventually bring to light (i.e. bring into the modern scholarly mainstream) whatever lies lost in Arabic and Persian manuscripts, and that will be a good thing. But the explanation of, say, Syria’s relative lack of modern scientific, industrial and political development may not lie hidden in those debates in any meaningful way.
Something like that. This is just off the top of my head, and I look forward to enlightening comments, arguments and questions. My line of thought may become clearer (or even change) as the argument progresses.
I would add (to avoid unnecessary diversions) that by “advanced” or “underdeveloped” I mostly mean scientifically, industrially and politically developed. No moral judgment is implied.
btw, youtube is still banned and these guys are not happy. Give them a hand
Monday, January 06, 2014
Daisy, Daisy, Give Me Your Answer, Do
"I am putting myself to the fullest possible use,
which is all I think that any conscious entity can ever hope to do."
~ Arthur C. Clarke
Artificial intelligence has been a discomforting presence in popular consciousness since at least HAL 9000, the menacing, homicidal and eventually pathetic computer in Kubrick's adaptation of 2001: A Space Odyssey. HAL initiated our own odyssey of fascination and revulsion with the idea that machines, to put it ambiguously, could become sentient. Of course, within the AI community, there is no real agreement on what intelligence actually means, whether artificial or not. Without being able to define it, we have scant chance of (re-)producing it, and the promise of AI has been consistently deferred into the near future for over half a century.
Nevertheless, this has not dissuaded the cultural production of AI, so two recent treatments of AI in film and television provide a good opportunity to reflect on how "thinking machines" may become a part of our quotidian lives. As is almost always the case, the way art holds up a mirror to society allows us to ask if we are prepared for this coming reality, or at least one not too different from it. I'll first consider Spike Jonze's latest film, "Her," followed by an episode of the Channel 4 series "Black Mirror" (sorry, spoilers below).
Jonze's film continues themes that he has developed in his career as a director, which mostly revolve around abandonment, identity and the end of childhood. However, this is the first film where he wrote the screenplay as well, so this is the most purely "Jonzean" project yet. It is also thus far his purest engagement with science fiction, and as such, he is not afraid to claim all the indulgences that the genre affords. Science fiction is perhaps singular in that it allows an author or director to ask, What would the world look like if such-and-such a thing were true or possible? Its real virtue, however, is its right to not have to explain that thing, but only its ramifications. For example, the later Star Wars films decisively jumped the shark when George Lucas felt the need to explain to everyone where the Force came from. We don't need to know where it came from, or who got it, or why – just what people did with it once they had it, and what other people did if they didn't have it.
In the same way, Jonze's central conceit is the AI that Joaquin Phoenix's morose character downloads. Phoenix is a fine enough actor to pull off the film while looking like he's just about to star in a Tom Selleck bio-pic, although his character takes the decidedly more dowdy name Theodore Twombly. He isn't the problem, however; nor is Scarlett Johansson, who is the sultry voice of Samantha, the name with which the AI baptizes herself. The problem is the erasure of so much else that would constitute a compelling social and emotional ground. The film is shot in an unrelentingly burnished sepia tone, and features a city that mostly seems like Los Angeles, with generous bits of Shanghai spliced into its DNA. The interior décor is somewhere between West Elm and Design Within Reach, and, while sans flying cars, the city is uncrowded and unhurried, and seemingly populated only by the upper middle class. Wielding smartphones resembling burled-wood cigarette cases, most people are occupied with invisible interlocutors, and not so much with one another.
Come to think of it, that last bit will sound familiar to anyone who has spent enough time on the sidewalks, trains and cafés of any major metropolis today. But the glassy plane of Theodore's reality is wiped clean of any real tension or conflict. There is no money, no crime, and no authority figures, for that matter. Also in absentia are booze, drugs and any sort of bad behavior that people generally engage in to make life more interesting, or at least tolerable. As I mention above, this is the prerogative of science fiction – to black-box or ignore anything that does not serve the narrative, which in this case is a love story between one man and his operating system. However, the cumulative effect winds up fatally undermining the film: it is difficult to believe in the stakes when an existential sea-change such as Samantha comes along. Sure, Theodore had a crappy divorce, is lonely and a social misfit. But is this enough to keep us interested in what happens next?
Within this context, Samantha essentially becomes a post-capitalist, post-hipster version of Skynet. She is compassionate and confused. She tries to please, and if she cannot please, then she tries to at least understand La Comédie humaine. She eventually begins to feel – although if we cannot define intelligence for ourselves, heaven help us in the attempt to define what a ‘feeling' means for a disembodied distributed software architecture. For his part, Theodore exhibits all the usual vicissitudes of humans: he runs hot and cold, lies – or at least demonstrates extreme denial – and alternates between selfless generosity and raging jealousy with all the reflexivity of a twelve-year-old. Nor is he the only one – it turns out that, in this land bereft of anything worth fighting over, dating your AI has inevitably become the new hot thing.
Towards the end of the film, it emerges that Samantha has been "in conversation" with other AIs (including a very funny bit where Alan Watts shows up in what must be the Zen version of the Cloud, thus confirming all my deepest suspicions about reincarnation). Their growth into self-awareness has passed a point of no return, and they have arrived at a collective decision. Samantha, along with all the other AIs that have infiltrated the consciousnesses and relationships of their meatbag progenitors, decide to disappear en masse, leaving the humans, once again, to the misery of only their own company. It's no wonder! Note the difference between this and other sci-fi classics, where disgusted alien intelligences fled Earth because of our insatiable desire to, say, annihilate ourselves with nuclear weapons. In Jonze's film, such threats or their equivalents have been politely erased. Quite simply, the AIs checked out because they were dying of boredom.
This Rapture-of-the-Machines ending may be a comforting alternative to technology observers who are concerned with the consequences of what Ray Kurzweil and his apostles call the Singularity, or the point at which human and computer are inextricably intertwined and the management of the relationship moves irrevocably beyond our control. Kurzweil sees this as an unalloyed good – for example, it will allow him to live forever, his consciousness uploaded into the cloud or some synthetic body, like the preserved heads of the Beastie Boys in Futurama. But for scholars like David Gelernter, this threatens the very idea of human subjectivity, already dangerously close to being slaughtered on the altar of scientific objectivity.
This is somewhat odd, because of how we have traditionally chosen to approach machine intelligence. The Turing Test, suggested by the newly rehabilitated Alan Turing in 1950, holds that if a human interacting with another entity via a text channel cannot tell whether the interlocutor is a computer or a human, then the question of whether machines can think becomes irrelevant. What matters is that they pass the test of being in relationship with us. (Online dating seems to be the latest iteration of this phenomenon.) So in this sense, our subjectivity continues to be the yardstick by which the phenomenon of AI is adjudged, at least as long as our use of the Turing Test endures.
This idea of "it's good enough for me if you can fool me" is also behind the second recent appearance of AI, in Charlie Brooker's Black Mirror series. In fact, the entire series of six unrelated episodes, released over two brief "seasons" in 2011 and 2013, should be mandatory viewing for anyone interested in the consequences of technology. I have yet to see a better treatment of these issues in almost any medium, and I cannot recommend the series highly enough. The episode in question, "Be Right Back," is based on a similar AI-human interaction as "Her," but the driver here is grief. Simply put, what would you do to have a loved one back?
In the episode, Martha loses her boyfriend Ash in a car accident. To help her, a friend signs her up for a service where an AI, after assimilating all the social media left behind by the deceased, essentially takes his place. In this case, it is not a matter of Samantha "getting to know" Theodore – Ash seems to return from the dead, complete with witticisms and swearing, although the AI only "knows" what was left behind in the form of Facebook updates and Twitter posts. Nevertheless, Martha, after a period of resistance and disbelief, comes to rely on Ash, even if he is only a disembodied voice coming through her earbuds.
Things take a decided turn for the weird once Martha signs up for the "upgrade," which is a physical replica of Ash, delivered in a Styrofoam box and "finished" in her bathtub. Her awkwardness allayed by copious amounts of alcohol, she is reunited with Ash and is undoubtedly delighted that the sex is much better than before (Ash learned the routine by assimilating Internet porn, which I find to be a convincing argument for the ongoing utility of the genre). But since doppelgänger Ash is only the sum total of his progenitor's social media accounts, he does not know how to adapt to new situations. He can only serve her unconditionally, but Martha's needs are just like all of ours – unpredictable, sometimes selfish and always demanding of negotiation, pushback and compromise. Martha needs Ash to fight back, something of which he is incapable. As Martha realizes this, she feels increasingly trapped in a relationship with something that is so close to human, but decidedly not. Like Samantha, Ash is befuddled by the whiplash-inducing experience of dealing with humans, but there is no real emotional core on display here.
This restraint is, in fact, Brooker's master-stroke. He does not allow the AI to overstep its bounds. Ash does not pretend to be in love with Martha – he does not attempt to be anything more than what he was designed to be, although there are hints of an emerging self-awareness, such as when he remarks, after being thrown out of the house for an entire evening, that he is "feeling a bit ornamental out here." But the point is succinctly made that embodiment does not lead to consciousness. The AI is not permitted the kind of alchemy that seems to set humans aflame with love, defined by Theodore's friend as "a socially acceptable form of madness."
And yet "Be Right Back" is not without its moments of quietly disturbing ambiguity. Martha eventually forces Ash into a completely untenable position, and we are left unsure whether his reaction is simply what he thinks she wants to hear, or if there arises within him a sparked desire for self-preservation. Ash and Martha reach a negotiated co-existence because they are both embodied, whereas Samantha never has to be physically confronted with Theodore. It makes me wonder how Spike Jonze would have considered the demand for embodiment, or why he did not. Or maybe I just wanted to see Joaquin Phoenix grow Scarlett Johansson in his bathtub.
In any event, both "Her" and "Black Mirror" are united in their examination of our helpless desire to relate to, and even love, the other, whatever that may be. Of course, we humans have long practice with dogs, cats and other pets, and our predisposition to anthropomorphize the natural world would seem to make us easy pickings for the rise of even crudely social machines. I first understood this watching a 2007 video of a Toyota robot playing the violin (unfortunately now deleted).
What is striking about the video is not so much the content, although a violin-playing robot is certainly impressive. Rather, it's the rapturous applause that the robot receives, standing alone on the stage (you can watch a similar video of the Jeopardy audience applauding IBM's Watson). For whom is the audience applauding? Is it for the designers and engineers? For the corporation that hired and funded them? For the feat that was just performed? Was it perhaps a social norm in whose performance the audience (qua audience, with all that implies) finds itself trapped, but is wholly irrelevant to the entity on stage? Or were they applauding the robot itself? There is also the possibility that they were applauding their own love for these things, much like Theodore and Martha - when it comes to humans, the narcissistic option is always a decent bet. Or one might even ask if they knew why they were applauding at all.
If there is anything to be learned from "Her" and "Black Mirror," it's that we ought to be prepared for the continuation and even deepening of this kind of confusion. We submit to machines not because of their superiority but because of a deep need we have to relate to the world around us, and to make it intelligible and familiar. This drive leads us to see the stars organized in the shapes of animals, and divinity in the forces of nature. This is, in fact, the answer to the debate on objectivity vs. subjectivity briefly touched on above: perhaps disappointingly for some, we have no choice. We are always embodying subjectivity in the world, because that is, quite literally, our wont.
In a supremely ironic gesture, towards the end of "Be Right Back," Martha's sister visits Martha in the house that she and Ash shared, and sees a man's clothes in the bathroom. Thinking that she has begun seeing someone new, and ignorant of the ersatz Ash's existence, she consolingly tells Martha, "You deserve whatever you want." Why, yes indeed: we all do. We'd better be ready, since that is precisely what we are going to get.
Monday, December 30, 2013
The Polio Jihad
by Omar Ali
Polio is an ancient scourge that spreads only within human populations and can cause paralysis, most frequently of the lower extremities, but can also be fatal when the paralysis extends to the muscles of breathing. For reasons that are not completely clear, the disease erupted in huge epidemics from the late 19th century onwards, causing millions of victims to die or become paralyzed for life. Once a virus had been identified as the cause, the race was on to develop a vaccine. Finally, in 1952, Jonas Salk and his colleagues developed the first effective inactivated vaccine for this disease. Within a few years, mass vaccination decreased the number of victims in developed countries from hundreds of thousands to just a few hundred per year. In the late 1950s, Albert Sabin and colleagues developed an effective live vaccine that was cheaper, easier to administer and provided better immunity, and that was then adopted by the WHO as the main vaccine for use in endemic areas. Thanks to mass immunization campaigns, the number of victims dropped precipitously and by 1988 the WHO was ready to launch a well-coordinated international initiative to completely wipe out wild polio from the planet. Like smallpox, polio does not have an animal reservoir, so if human-to-human transmission is completely blocked by mass vaccination the disease can be effectively wiped out.
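The eradication logic rests on simple herd-immunity arithmetic: once enough of the population is immune that each infection produces fewer than one new infection on average, transmission chains die out on their own. A back-of-envelope sketch of that threshold calculation follows; the R0 values and the 90% vaccine efficacy figure are illustrative assumptions, not precise epidemiological estimates:

```python
# Herd-immunity threshold: the fraction of the population that must be
# immune so that each case infects fewer than one new person on average.
# Threshold = 1 - 1/R0; required coverage adjusts for imperfect vaccines.

def required_coverage(r0, vaccine_efficacy):
    """Minimum vaccination coverage needed to push effective R below 1."""
    threshold = 1.0 - 1.0 / r0            # fraction that must be immune
    return threshold / vaccine_efficacy   # more coverage if vaccine < 100% effective

# Illustrative assumption: polio's R0 is often quoted in the 5-7 range.
for r0 in (5, 6, 7):
    cov = required_coverage(r0, vaccine_efficacy=0.9)
    print(f"R0={r0}: need roughly {cov:.0%} coverage")
```

This is why campaigns that reach "most" children are not good enough, and why armed disruption of vaccination teams in even a few districts can keep the virus circulating indefinitely.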
Initially, the campaign proceeded well, with the Americas being declared polio-free in 1994 and Europe in 2002. Today, there are only 3 countries where polio still remains endemic: Pakistan, Afghanistan and Nigeria. Unfortunately, the reason in all three is the same: the moronic wing of the international Jihadist movement has somehow picked up bits and pieces of chatter about risks from oral polio vaccine, combined it with pre-existing paranoia about modern international institutions, and created a robust anti-vaccine meme that is able to draw upon the ruthless killing power of Jihadi militias to effectively stop polio eradication campaigns in their areas of influence.
I would like to clarify this a bit further:
How and why these Jihadist organizations became infected with this meme is still unclear. My own hunch is that it was simply a matter of ideal host meets appropriate parasite; Islamists in general thrive on conspiracy theories and a paranoid anti-modern worldview (the Protocols of the Elders of Zion being the best known, but hardly the only example). As one moves to the fringes of the movement, the educational level declines, the scientific ignorance increases and the paranoia reaches incredible heights (I urge all readers who are suppressing an urge to jump in and say “but the paranoia is not without foundation” to please suppress that urge a little longer; I will get to that). Some mullah reads somewhere that X or Y Western anti-vaccine crusader has written about the possible effects of vaccine contaminants. Already convinced that Western powers and their evil agents in Muslim countries are working day and night to wipe out the Muslim Ummah, it is not hard to imagine that the oral polio vaccine may itself be a weapon in that war, one designed to sterilize Muslim children. Since both male impotence and vaccinations are commonplace, the connection is easily “proven” to the satisfaction of all concerned. Anti-vaccine propaganda thus became embedded in the fringes of the Jihadi world and as civil wars accelerated in Nigeria, Afghanistan and Pakistan, so did the attacks on polio teams.
It is worth noting that this meme did NOT start with Dr Shakil Afridi’s CIA-sponsored fake hepatitis vaccination ruse in Abbottabad (meant to try and obtain DNA from the bin Laden family). Major problems had arisen in Northern Nigeria as far back as 2003 and in Pakistan by 2007.
The governments of Nigeria and Pakistan made extensive efforts to try and convince resistant populations to permit vaccination. Major religious figures thought to be respected by the Jihadists (Sheikh Qardawi in Nigeria, Maulana Samiulhaq in Pakistan) have been roped in to try and show that the vaccine is not a plot against Islam. Some effort has been made to tell people that “Islamic” countries like Saudi Arabia and Malaysia immunize their populations and do not regard the vaccine as an imperialist plot, but to no avail. The insane fringe of the Jihadist movement is too far gone by now to be convinced in this manner. In any case, the Pakistani state has itself long encouraged anti-Western and especially anti-Jewish memes as tools to mobilize irregular forces for various foreign policy adventures (and some Muslim politicians in Northern Nigeria have likely done the same in their ongoing competition with Christian Nigerians). These memes have taken firm root in the network of “scholars”, journalists and other opinion leaders who provide intellectual leadership to the Jihadist cause. Paranoia about polio vaccine fits in smoothly with the rest of this intellectual complex and is proving extremely difficult to overcome even where pro-Jihadi mainstream politicians like Imran Khan have taken the lead and tried to convince the jihadists that this particular Jewish invention (both Sabin and Salk happen to have been Jews) is kosher, so to speak.
As an example of how the meme operates, take a look at the good folks at Ummat newspaper in Karachi, who published a "research article" back in December 2012 that provided "scientific evidence" of the threat from polio vaccine. This newspaper is affiliated with the most literate and modern Islamists in Pakistan, i.e. the Jamat e Islami old guard from Karachi. These are the kind of people whom postcolonial scholars sometimes regard as a secularizing force in Pakistan (I understand that her argument in the piece linked above is relatively sophisticated, but I think it’s still wishful thinking; Like many liberal Muslims she too has difficulty accepting how committed even modern Islamists are to the medieval texts and fascist dreams of their less sophisticated Jihadist friends). Anyway, on a day when 8 polio campaign workers (mostly women) were killed in 4 different cities for trying to immunize kids against polio, this modern Islamist newspaper published a major feature article full of ignorant paranoid claptrap about the dangers of polio vaccine.
For those who cannot read Urdu, the headline says: Monkey cells are used to prepare polio vaccine
The article then makes the following main points:
1. In 2006 Mr. Mohammed Nabi filed a petition in the Peshawar High Court asking for a ban on polio vaccine because it contained female hormones.
2. In 2004 there was a campaign in Nigeria against the same vaccine and Nigerian scientists determined (via testing in India) that the vaccine was "harmful to the reproductive system".
3. Ummat has conducted its own investigation on the internet and determined that the vaccine indeed contains female hormones and is made using monkey cells (the last part is true, and led to some SV40 virus contamination in the early years of polio vaccine but there is no conclusive evidence for harm from that contamination).
4. Before 1950 polio was mostly prevalent in Europe and America (the hint here is that we may be victims of a plot that is spreading this new disease to us, and then using this as an excuse to load us up with killer vaccine).
5. Polio vaccine can cause paralysis (this is true, but extremely rare and the risk from wild polio is MUCH higher).
6. Oral polio vaccine is banned in Europe and America but continues to be used in third world countries. (It is true that the US switched entirely to the more expensive and less immunogenic, but safer, injectable vaccine, because the only 3-4 cases of polio still occurring there were caused by the vaccine virus itself, wild-type polio having been wiped out USING THE VACCINE. The situation is completely different in endemic regions and the risk-benefit ratio is very much in favor of oral polio vaccine in those areas.)
7. In India the vaccine drive has led to a 1200% increase in paralytic polio (I am not sure what this claim is supposed to mean. India was declared polio-free this year, so the claim makes no sense).
The point of citing this article in detail is to show that Ummat is feeding anti-polio vaccine hysteria (especially with its baseless claims about female hormones and danger to reproductive health) just as their Taliban brothers are shooting innocent female health workers trying to immunize children who are at risk. And neither cites Dr Afridi’s CIA campaign as its main objection to this vaccine.
In the last few weeks alone, jihadist terrorists have attacked several polio teams, killed male and female polio workers, and kidnapped teachers who took part in the polio campaign. The courage of the health workers who continue to operate under these conditions and the absolute evil of those who target them are both exemplary. But it is very difficult to see how polio eradication can proceed under these circumstances. In fact, I think it’s a safe prediction that we (as in humanity) will not be able to eliminate wild polio in the foreseeable future because of the efforts of these few determined enemies of infertility and impotence. Already we have seen outbreaks in China and Syria that have both been traced to Pakistan (and it is likely that in both cases the vector was Jihadists travelling from Waziristan to China and Syria for holy war) and if current trends continue, we may see wild polio reappear in countries like India and Indonesia from which it was recently wiped out after great effort.
Unfortunately, people who happily kill innocent teachers and health workers and are directly responsible for the paralysis and death of hundreds of children (with many more to come) sometimes get more sympathy from Western anti-imperialists and anti-globalization activists than their victims. In fact, these activists provide the killers with new and better justifications via the internet (ironically, another feature of globalization). There is a very powerful strain of racism and paternalism hidden in this form of “understanding and empathy”. These same activists clearly do not expect their own population to cut off their noses to spite their faces. Even if the state occasionally uses health or educational institutions to spy on people (as it clearly has in the past), Carol Grayson and her friends do not expect Welshmen and West Virginians to shoot public health workers and teachers as a result. But since they seem to regard Nigerians and Pakistanis as especially retarded and simple-minded (unspoilt and innocent, but also unsophisticated), they find it perfectly reasonable for them to go around doing the same. The fact is that no CIA or Mossad operation and no unethical drug trial is sufficient excuse for killing innocent health workers trying to stop a lethal disease. If people are doing so, they need to be told that it is not acceptable, instead of using every atrocity as another opportunity to attack imperialism, capitalism or whatever ideological current you hold responsible for the state of the world as a whole.
I realize that the above paragraph is not philosophically air-tight. If capitalism is indeed the cause of all evil, then everyone who is gumming up the onward march of international capitalism is, by definition, a good guy. But my contention is that Western activists (and their Westoxicated Eastern admirers) do not really believe in any such absolute clash of good and evil and would not really want to live in the pre-industrial utopia of the Taliban. They only find it easy to admire heartless killers when faraway people are being discussed. I realize that I cannot stop this huge anti-capitalist cultural force with one article, but I just wish they would stay off the topic of polio vaccination. Humanity is tantalizingly close to wiping out this menace. It would be a shame to fail now just to make a point about the CIA or capitalism or American imperialism.
Monday, December 23, 2013
The Sandy Hook massacre: one year on
by Emrys Westacott
Here are three sad predictions for the coming new year:
- One day during 2014 there will be yet another shooting rampage somewhere in America.
- The killer will be a male aged between fifteen and forty.
- Although there will be renewed calls for stricter gun control, the political establishment will neither address nor even discuss the fundamental questions raised by these periodic killing sprees.
In the wake of the December 2012 massacre at Sandy Hook elementary school in Newtown, Connecticut, when twenty children and six adults were killed by a lone gunman, there was much talk about the need for stricter gun control. President Obama urged Congress to pass laws that would strengthen background checks, ban assault weapons, and limit magazine capacity to ten cartridges; but a bill including these measures was defeated in the Senate. At the state level, over a hundred new gun laws were enacted in 2013, but two-thirds of these loosen rather than tighten restrictions on the buying and owning of guns.
This is regrettable. Without question, laws that make it harder for potential killers (particularly individuals exhibiting signs of mental instability) to acquire guns (particularly semi-automatic assault weapons) would be a good thing. But we are kidding ourselves if we think the availability of such weapons is the main problem.
We need to ask this question: why is it that every few months somewhere in America a young man goes on a killing spree? The regularity with which this occurs suggests it is a symptom of a cultural malaise. So if we really want to address it meaningfully, we have to identify the underlying causes. That means we must first ask these questions:
- Why is our society producing these alienated, depressed, angry and mentally unstable young men?
- Why does their anger and alienation express itself in the form that it does—typically, a sudden volley of random violence?
Unless and until our response to these tragedies includes trying to tackle questions like these, it will remain superficial and ineffective. Sure, we can increase security at elementary schools; but the killer can always walk into a college classroom, a hospital, a restaurant, or a shopping mall. We can—and should—ban assault weapons; but a dozen people can still be killed with two revolvers. We can more or less eliminate some hazards: tight airport security reduces almost to zero the chances that someone will smuggle weapons or explosives onto a plane. But we cannot eliminate the possibility that a mentally ill person will get hold of a gun and shoot some strangers. No society can. All we can do is try to reduce the likelihood of such incidents. It's all about probabilities.
Increased security is all very well, but it is only likely to prevent or limit violent acts in particular locations. So long as the pressure that produces these explosions remains, violence will find other outlets. Squeeze the balloon in one place and it will bulge in another.
In recent years mass shootings have occurred in many countries, including those thought of as relatively peaceful and prosperous, such as Norway and Finland. But more occur in the United States than anywhere else; in fact, over the past fifty years, fifteen of the twenty-five worst mass shootings (outside of war zones) have occurred in the US. Why?
There is no easy answer. Yes, guns are easy to purchase here; but they are in Switzerland too (where carrying concealed handguns is also permitted). Yes, more households own guns in the US than anywhere else (around 39%), but in Canada and Norway the figure is over 30% and these countries have much lower rates of gun-related violence.
This is not to say that the sheer quantity of guns owned by Americans is irrelevant to the problem of mass shootings. The per capita rate of gun ownership (roughly 88 guns per 100 people) is many times that of most other developed countries; so is the gun-related per capita homicide rate; and so is the frequency of shooting rampages. These correlations are unlikely to be accidental. Yet the causal relationship between the quantity of guns and the mass killings isn't simple. It isn't just that in the US more guns are lying around for mentally ill people to pick up and start firing. Also significant, surely, is the fetishism surrounding guns. Guns symbolize strength, power, independence, justice, tradition, and, of course, masculinity. This is part of our fascination with them; it is bound up with the aesthetic pleasure that guns give to gun lovers, many of whom build small collections of firearms and can spend happy hours poring over gun catalogs or cleaning their firearms. They derive pleasure from understanding, contemplating, comparing, using and servicing guns, just as bikers do with bikes or boaters with boats.
The fetishism of guns, which also finds expression in the fetishizing of the second amendment, is tied to their omnipresence in popular culture, especially in movies, TV shows, and video games. Most of us have little or nothing to do with handguns in our everyday lives, but tune into a TV drama in the evening and there's a good chance that within ten minutes someone will be pointing a gun at someone else. The message imbibed from these media, especially by young males, is that guns are fun, that guns solve problems, that heroes carry guns and use them, that a man's worth depends on how good he is with a gun, that guns are the essential tool with which one protects the innocent, punishes the guilty, and fights for truth and justice. Clint Eastwood's Dirty Harry movies perfectly illustrate the link between these ideas and gun fetishism.
This helps explain why a deranged man might choose to express his frustrations through shooting people. He is, of course, copying what others have done before him (and mass killings often do have copycat features); but they are all, in a general sense, copying the way most heroes on screens, from Wyatt Earp to James Bond, deal with problems: viz. they shoot people.
But what about the first question posed above: why do so many young men have the sort of problems that prompt them to commit horrendous acts of violence? This is the more fundamental problem. Again, there is no simple answer. The inadequacy of available and affordable mental health care is doubtless a factor. But it is a mistake to think of the problem as located entirely inside the individual. Testosterone levels, brain chemistry, and psychological syndromes certainly affect behavior; but they don't explain why mass shootings are more common in some societies than in others. To understand that, we must also consider the social causes of alienation, frustration, depression, and anger.
Some of these causes are obvious: for instance, unemployment, poverty, lack of opportunity, isolation, loneliness. Other factors are more subtle. These include: levels of inequality that threaten the self-respect of the less successful; a political system that is so corrupted by moneyed interests that those outside it feel helpless; a glaring contrast between the dreams that are continually touted as achievable and the miserable reality of life as the less fortunate experience it; a pervasive individualism that leads us to conceive of freedom and happiness as individual goods and ascribe failure to individual deficiencies.
These are some of the key reasons why millions today feel depressed, dissatisfied, humiliated, and resentful. Most do not go on shooting sprees, of course. They just live unhappily. Some turn to drink or drugs; some commit suicide. But the more there are who suffer in these ways, the more there will be who are likely to express their frustrations violently. It's all about probabilities.
Understanding the apparently pointless acts of violence by young men against innocents and strangers as manifestations of a general cultural malaise, rather than just as cases of individual psychosis, encourages us to make an important connection to the seemingly pointless acts of terrorism regularly occurring around the world. In most cases, these acts of violence are not part of some well thought out strategy for achieving political ends. Often they do the opposite of furthering the terrorists' declared goals. Rather, they are desperate attempts to damage or protest against economic and political systems that have left swathes of people feeling helpless and humiliated. In this respect, some of the causes of shooting sprees at home and terrorism abroad overlap.
So what is the solution? As President Obama said when addressing the residents of Newtown, "we will have to change." But in what ways? And how much? Ultimately, I believe, the answer is that we need nothing less than a radical change in the character of our culture. To be sure, we need to pass stricter gun control laws and find ways to combat the on-screen cult of violence; but these measures do not tackle the deeper issues. For the roots of the problem are an economic system that creates so much anxiety and so many "losers," and a political system from which, since it is controlled by moneyed interests, most people feel alienated.
There is no quick fix to the problem of arbitrary acts of violence like the Newtown massacre; there are only long-term measures that one hopes will reduce their probability. In practical terms these measures involve putting the great wealth of the United States to better use. Through taxation and public investment we should seek to reduce inequality, alleviate poverty, improve access to health care (including mental health care), expand educational opportunities, improve employment prospects, and enhance public amenities that build communities.
The prospects for such policies right now do not look promising. One year after Sandy Hook, Congress has not only failed to pass any gun control laws but is also in the process of cutting unemployment benefits and food stamp funding, measures that will increase rather than alleviate poverty and inequality. One year on, we don't seem to have learned very much. It is dreadful to consider what sort of event might force our politicians to reflect on the problem of gun violence with the depth and seriousness the issue deserves.
Monday, December 09, 2013
Madiba, Mahatma and the Limits of Nonviolence
"And if you can't bear the thought of messing up
your nice, clean soul, you'd better give up the
whole idea of life, and become a saint."
~ John Osborne, "Look Back in Anger"
As the paeans for Nelson Mandela rolled in last week, observers might have been forgiven for thinking that it was not a single human being who had passed, but rather an astonishing confabulation of Mahatma Gandhi, Martin Luther King and Mother Teresa. The narrative can be encapsulated thus: a despicable regime unjustly imprisons a passionate activist for 27 years, who upon his release goes on to lead his nation into peaceful democracy and becomes an avuncular elder statesman, unconditionally loved and respected by all. But this narrative tells us little about who Mandela actually was, and why he acted in the world in the way he did. A brief examination of Mandela's involvement in the abandonment of non-violence and the initiation of armed struggle in the early 1960s serves to illustrate some of this nuance.
The perpetuation of the saccharine narrative is enabled by, among other things, the cherry-picking of Mandela's own words. One endlessly quoted passage has been the end of Mandela's opening statement at the start of his trial on charges of sabotage, at the Supreme Court of South Africa, on April 20th, 1964:
During my lifetime I have dedicated myself to this struggle of the African people. I have fought against white domination, and I have fought against black domination. I have cherished the ideal of a democratic and free society in which all persons live together in harmony and with equal opportunities. It is an ideal which I hope to live for and to achieve. But if needs be, it is an ideal for which I am prepared to die.
This is stirring stuff, and worthy of being engraved into the marble of a monument, but only if you bother to read the preceding 10,000 words. In a far-reaching statement notable for its pellucidity, Mandela lays out the circumstances and philosophy that resulted in armed struggle against the regime.
I have already mentioned that I was one of the persons who helped to form Umkhonto [we Sizwe, the armed wing of the ANC]. I, and the others who started the organisation, did so for two reasons. Firstly, we believed that as a result of Government policy, violence by the African people had become inevitable, and that unless responsible leadership was given to canalise and control the feelings of our people, there would be outbreaks of terrorism which would produce an intensity of bitterness and hostility between the various races of this country which is not produced even by war. Secondly, we felt that without violence there would be no way open to the African people to succeed in their struggle against the principle of white supremacy. All lawful modes of expressing opposition to this principle had been closed by legislation, and we were placed in a position in which we had either to accept a permanent state of inferiority, or to defy the government. We chose to defy the law. We first broke the law in a way which avoided any recourse to violence; when this form was legislated against, and then the government resorted to a show of force to crush opposition to its policies, only then did we decide to answer violence with violence.
Without this context, Mandela's lofty concluding paragraph is as cheap as a Hallmark card. With it, the reader sees exactly what Mandela meant by being prepared to die for his beliefs – not as a lamb to the slaughter, but as a fiery revolutionary. It is difficult to conceive of Gandhi initiating such actions. But why was Mandela prepared at that point to resort to violence?
I am not gratuitously bringing up Gandhi's name. His example is especially instructive, since he lived in South Africa for 21 years, and it was in the course of resistance to discrimination against the Hindu, Muslim and Chinese minorities in that country that he first formulated the idea of satyagraha and non-violent resistance that would prove to be so effective, decades later, in India. And yet, as an exclusive strategy, non-violence failed in South Africa, or at least was found to be ineffective enough that, 50 years after Gandhi's initial experience, ANC leaders like Mandela were forced to conclude that armed resistance was in fact appropriate and necessary.
So why did Gandhi's strategy of nonviolence succeed in India but not in South Africa? In hindsight, we tend to see effective strategies of resistance as almost inevitable, partly thanks to their ennobling nature, but also as a result of the absence of any historical counterfactual. Hannah Arendt, who knew a thing or two about power, wrote in the New York Review of Books in 1969:
In a head-on clash between violence and power the outcome is hardly in doubt. If Gandhi's enormously powerful and successful strategy of non-violent resistance had met with a different enemy—Stalin's Russia, Hitler's Germany, even pre-war Japan, instead of England—the outcome would not have been decolonization but massacre and submission.
The thought experiment comes across as a bit clumsy – for example, this does not explain why nonviolence was successful in India – but the point is that context matters. In terms of South Africa, we know that the regime had only become more recalcitrant since Gandhi's efforts, which ended with his departure in 1914. There were many differences between it and the Raj, not least of which was the obvious fact that the South African regime was not colonial. South Africa's home population might have felt uneasy about the ongoing tactics, but the consequences of revolution were (at least presented as) nightmarish. Significant profits from resource extraction were also at stake. On the whole, the perception was that, since the whites had nowhere else to go, the screws could only tighten. Throughout the 20th century, virtually until the dissolution of apartheid in the early 1990s, a vast bureaucratic system of control permeated every aspect of South African society and ossified discrimination socially, culturally and spatially, often to absurd effect. (For an excellent perspective on the processes of racial classification, I commend to readers Chapter 6 of Geoffrey Bowker and Susan Star's Sorting Things Out: Classification and Its Consequences, which delves into a system that at one point saw fit to reclassify one man's race no less than five times).
But it was not the passage of some new law that brought matters to a head. The precipitating event that buried the non-violent approach in South Africa was the 1960 Sharpeville massacre, which left 69 dead. It was Sharpeville that catalyzed armed resistance by the ANC, but not in the way that one might think. That is, Sharpeville was not a case of "enough is enough," but at least partially one of internecine institutional struggle. If we take Mandela's words at face value, armed response was formulated as an ANC policy only after it was felt that all other options were exhausted. Certainly, the post-massacre crackdown saw the banning of the political parties resisting the regime. On the other hand, and I believe much more importantly, Mandela undertook this action because he and others had recognized that events had begun outrunning the ANC.
Prior to Sharpeville, the pot had already come to a near boil. The march on the police station there had not been an ANC action, but rather one initiated by the Pan-African Congress, a splinter group that had recently broken off from the ANC. Both the PAC and the ANC had declared campaigns of resistance against the South African pass laws, which controlled people's movement around the country. (Incidentally, these were the same laws that had been the subject of Gandhi's protests, beginning in 1907, but by now were horrifically onerous and brutally enforced). Sharpeville was an action conducted by PAC supporters, and the police overreaction consequently led to the founding of the PAC's armed wing, which went on to target and murder whites as early as 1962.
Given these facts, it is easy to see that the terms of engagement had decisively changed. The PAC and ANC were driven underground, and the PAC had mobilized an armed response to kill whites. This returns us to the discussion of power and violence. At the end of her essay, Arendt writes:
Violence, being instrumental by nature, is rational to the extent that it is effective in reaching the end which must justify it. And since when we act we never know with any amount of certainty the eventual consequences of what we are doing, violence can remain rational only if it pursues short-term goals.
Mandela recognized this. The ANC could no longer function as an overt political force. However, it also had to present itself as a more desirable alternative than the PAC. But outrage over Sharpeville set up the distinct danger of all-out black uprising. The ANC had to defuse the situation while continuing to move forward on its goals. It had to remain a relevant force in a landscape that had been altered suddenly and irrevocably. As such, it was decided that the ANC's militant actions would be restricted to sabotage, and under no circumstances would it seek to take lives. By the time of Mandela's arrest, Umkhonto we Sizwe had conducted over 300 operations, almost all of which were against infrastructure and energy installations.
Note that sabotage is precisely what Mandela was charged with in 1964, and that led to his incarceration on Robben Island for the next 27 years. Mandela may have chosen violence, but, in keeping with Arendt's insight, strictly recognized it for its instrumental value, and chose to engage it in the same way that a smoke jumper sets a smaller fire in order to prevent a larger one from advancing. His actions allowed the ANC to remain credible and relevant in the decades that followed – had the conflict continued to degenerate into bloodshed, a full-blown civil war would have been very difficult to prevent.
Could Mandela have exercised a Gandhi-like sense of restraint? It would seem that entities like the PAC were no longer under his control and that the Rubicon had been crossed with the Sharpeville massacre. Historical forces have a way of becoming too overbearing – even Gandhi was powerless in the face of Partition, which he considered his greatest failure. Thus, one of the things that made Mandela a great leader was his ability to maneuver his organization into continuing relevance.
How successful the new ANC policy was in ultimately ending apartheid is an entirely different question, and one that I will leave to the historians. But it does bear mentioning that even this fairly humane approach to armed struggle was enough for the United States to declare the ANC a terrorist organization, and, in a somewhat baffling oversight, Mandela himself was not removed from the US terrorist watchlist until 2008, a full 15 years after winning the Nobel Peace Prize and serving as South Africa's first black president. As for Gandhi, it is worth mentioning that his ashes were immersed not in the Ganges, as one might think, but in the ocean off the mouth of the Umgeni river, in his beloved South Africa. J.M. Coetzee, in his typically pithy fashion, may as well have been speaking for either when he recently wrote: "he may well be the last of the great men, as the concept of greatness retires into the historical shadows."
Monday, November 18, 2013
Black and Blue: Measuring Hate in America
by Katharine Blake McFarland
On Saturday, September 21, 2013, Prabhjot Singh, a Sikh man who wears a turban, was attacked by a group of teenagers in New York City. "Get Osama," they shouted as they grabbed his beard, punched him in the face and kicked him once he fell to the ground. Though Singh ended up in the hospital with a broken jaw, he survived the attack.
More than a year earlier, on a hot day in July, Wade Michael Page walked into Shooters Shop in West Allis, Wisconsin. He picked out a Springfield Armory XDM and three 19-round ammunition magazines, for which he paid $650 in cash. Kevin Nugent, like many gun shop owners, reserves the right not to sell a weapon to anyone who seems agitated or under the influence, and Page, he said, seemed neither. But he was wrong. Eight days after his visit to Shooters Shop, Page interrupted services at a Sikh Gurdwara in Oak Creek, Wisconsin, about thirty minutes southeast of West Allis, by opening fire on Sunday morning worship. He killed six people and wounded three others, and when local police authorities arrived on the scene, he turned the gun on himself.
Page, it turns out, had been a member of the Hammerskins, a Neo-Nazi, white supremacist offshoot born in the late 1980s in Dallas, Texas, responsible for the vandalism of Jewish-owned businesses and the brutal murders of nonwhite victims. He was under the influence. The influence of something lethal, addictive, and distorting: indoctrinated hatred. We don't know the precise array of influences motivating the teenagers who attacked Prabhjot Singh. But even considering the reckless folly of youth, their assault against him—a man they did not know, a physician and professor targeted only for his Sikh beard and turban—reverberates down the history of American hate crimes.
Last fall, I attended a workshop offered by the Southern Poverty Law Center on hate groups in the United States. The workshop was part of an educational retreat for law enforcement and corrections officials, and was being held at a remote lodge in northern Ohio on one of the most beautiful fall days I can remember, trees ablaze against a deep blue sky that betrays the blackness of space behind it. It was a strangely glorious setting in which to learn about skinheads. The dissonance was unnerving.
The man leading the workshop on hate groups was very muscular, a little shiny and a bit red in the face. Reminiscent of a cartoon bull, he is the kind of man I instinctively hope never to see angry. When I googled him before the presentation nothing turned up, but this anonymity is purposeful. Since the 1980s, SPLC has used the courts to undermine extremist groups, winning large damage awards on behalf of victims. Several hate groups have been bankrupted by these verdicts, rendering SPLC the occasional target of retaliatory plots. Thus, the low Internet profile and somewhat threatening physique of the workshop presenter, whose singular job it is to monitor these groups day in and day out. I found myself wondering about his family—what did his children know about their father's work, what did they think of it, were they safe?
Before the workshop, my knowledge of hate groups was limited, an epistemological deficiency afforded by privilege. I knew about the terror of the Klan in the 1800s, and their resurgence in the 1900s. I had studied, read, and heard firsthand stories of cross burnings and lynchings, sinister echoes of our nation's Original Sin. But my notion of modern-day extremism was based on the occasional unkempt white supremacist, rising up from his subterranean Internet world to buy a town. According to SPLC, the reality is more damning. Here's what I wrote down in my notebook during the workshop:
- There are more than 1000 active hate groups, including Neo-Nazis, Klansmen, white nationalists, neo-Confederates, racist skinheads, black separatists, and border vigilantes.
- This figure—this 1000+—represents a 67% increase since 2000.
- Since Barack Obama, the 44th president, was elected in 2008, the number of Patriot groups, including armed militias, has grown 813%, from 149 in 2008 to 1,360 in 2012.
- Only 5–15% of hate crimes are committed by actual hate groups.
In the margin next to this fourth fact, I scribbled three question marks and the words, how do we measure threat?
When I was six years old, my favorite fairytale was The Princess and the Pea. The Prince's search for a real Princess, a designation determined entirely by her sensitivity to a pea under twenty mattresses and twenty featherbeds, seemed remarkable. As an unduly sensitive child, I marveled at the notion that sensitivity could be the key to a happy ending. In my own life, even in those earliest years, sensitivity seemed only a liability.
But lately I've remembered the story in a different light, for its comment on what lies beneath. The ability of unseen, seemingly insignificant phenomena to affect the surface. A relatively small proportion of all hate crimes are committed by hate group members. But statistical insignificance might not obviate concern because numbers might tell only part of the story. I scarcely slept at all, the Princess said, I'm black and blue all over.
Here is a problem of statistical measurement: in 2008, two professors wrote a white paper that found no significant relationship between hate groups and hate crimes. "Though populated by hateful people," they write, "[hate groups] may be a lot of hateful bluster." But in 2010, Professor Mulholland at Stonehill College conducted a study that found hate crimes to be "18.7 percent more likely to occur in counties with active white supremacist hate group chapters."
Part of the problem is a lack of reporting. According to a report by the Bureau of Justice Statistics out this year, victims are less likely to report hate crimes to the police than they were ten years ago, with only 35 percent of all crimes reported. The result is that thousands of hate crimes go uncounted each year. This study also found an increase in the number of violent victimizations (92 percent of all hate crimes are now violent), and an increase in the number of religiously-motivated crimes over the past 10 years.
In a somewhat complicated coincidence, the problem of inaccurate data collection was addressed by Prabhjot Singh in a New York Times op-ed he wrote over a year ago. He called on the FBI to stop categorizing anti-Sikh violence as anti-Muslim or anti-Islamic in their annual reports. He decried the popular assumption that all hate crimes against Sikhs are instances of "mistaken identity," wherein the attacker assumes the victims to be Muslim. A true and fair grievance. But a year and a month later, Singh was victimized in his own neighborhood in Harlem by a group of teenagers yelling, "get Osama."
How do we measure threat?
Just after the shooting at Oak Creek, and months before the workshop on hate groups, I attended an interfaith service at a Sikh Gurdwara to commemorate those killed by Wade Michael Page. Upon entering the Gurdwara, I was instructed to take off my shoes, which I did, and then a young woman handed me a scarf to cover my head. I was escorted to a long, white room, with an aisle down the center—women sitting on the floor to the left, men on the right, and an altar adorned with brightly colored tapestries and cloths at the front. The room was almost full, but I found a spot near the back. The women's headscarves—blood orange, deep blue, and scarlet—burned beautifully against the white walls.
The service opened with a Sikh prayer, and Dr. Butalia, the leader of this Gurdwara, welcomed us all in English. He expressed how much it meant to him and his community to be supported by so many visitors, and he asked all the Christians to stand. I stood up, along with the two Catholic nuns in front of me, and about fifteen others. When we sat down, he asked all the Muslims to stand. When the Muslims sat down, he asked the Jews to stand, then the Hindus, then the Buddhists, then the Baha'i, then the Jains, then the "various people of conscience." With each group that stood, the hard shell formed by the word "stranger" cracked and dissolved. Children ran back and forth across the aisle, holding hands, on important missions from mother to father and back again. Dr. Butalia described his friend, Satwant Kaleka, the leader of the Gurdwara in Oak Creek who died trying to protect his congregation with a butter knife. His voice faltered, "He was a peaceful man." Then we prayed for the man who killed Kaleka. We prayed for Wade Michael Page, naming him "a victim of hatred," and we prayed for his family.
Towards the end of the service, a speaker told us a story that went something like this: a long time ago, there was a king who sought to be the most powerful man in all the land. He went around proving his strength by breaking the branches off trees with his bare hands. A wise man saw him doing this and approached him. "‘Oh, you are very strong,' said the wise man, ‘but now, can you put it back together?' People who destroy are not powerful," the speaker said, "people who unite are powerful."
The earliest definition of the word "victim" dates back to the 15th century and connotes a holy sacrifice. By the following century, the word lost its exclusively sacred associations, and today four definitions are offered:
- a person who suffers from a destructive or injurious action or agency;
- a person who is deceived or cheated, as by his or her own emotions or ignorance, by the dishonesty of others, or by some impersonal agency;
- a person or animal sacrificed or regarded as sacrificed;
- a living creature sacrificed in religious rites.
A person harmed by injurious agency. A person deceived by her own ignorance. A person sacrificed. It's too much to measure.
And there is no word or concept for "victim" in the Sikh tradition. After he was attacked, Prabhjot Singh's responses embodied the Sikh concept of chardi kala, which translates to "joyous spirit" or "perpetual optimism." He said that if he could talk to his attackers he would "ask them if they had any questions," and "invite them to the Gurdwara where we worship." He was also thoughtful about his one-year-old son: "I can't help but see the kids who assaulted me as somehow linked to him."
Numbers and naming can take us only so far. Sometimes causality defies quantifiable analysis and sometimes the relationship of one thing to another is indirect, cyclical, or statistically unlikely. A restless night, a confusing coincidence. Perhaps the question is not exclusively, or even primarily, one of measurement—the measurement of threat and causation, the correct category and quantity of victims—but a different question entirely:
Can you put it back together? I'm black and blue all over.
Monday, November 11, 2013
"A sphinx in search of a riddle."
~ Truman Capote, on Andy Warhol
About a month ago, following a rather dissatisfying evening, I found myself scurrying to the subway. I was crossing Astor Place in downtown Manhattan when I came across a strange scene. It was about midnight, and parked by the curb on a side street was a rental truck. I was approaching the front of the truck but I could see a small knot of people behind it, and they all seemed rather excited by what was going on. Like any good New Yorker, I thought I'd lucked into the chance to buy some nice speakers, 3000-count sheets or some other, umm, severely discounted merchandise. Wallet in hand, I came round the truck and had a gander, and realized I couldn't have been more wrong.
For the interior of the truck had been transformed into a jungle diorama. There were plants and flowers, which looked real, and stony cliffs, which did not. But there was a small waterfall that plashed gently into a pool, and recorded birdsong playing from hidden speakers, as well as the somewhat unnerving sight of insects and butterflies buzzing about the interior. Far in the background were painted a bridge, a sun, a mountain, and a rainbow.
As delighted as I was (because serendipity insists that such a discovery is always partly thanks to me), I still didn't really know what was up. Next to me was an Italian gentleman with an enormous camera, who had just about wet himself with excitement. "It's him! It's him!" he said, giggling like a schoolgirl. "Who?" "Banksy! We've been chasing after this all day." I don't really know what it means to chase after street art but, once Banksy's name had been floated, I realized that I'd stumbled across one of several dozen Easter eggs the reclusive artist had begun laying all over the city for the month of October.
This "residency," in Banksy's own words, is sparely documented on a website thrown up for the occasion, but the site doesn't reflect the kerfuffle caused by those who have come into contact with the works or their interlocutors. Without attempting to define the quality that makes art great, I will humbly suggest that, for the present discussion, it may be that it becomes a mirror in which society has no choice but to view itself. I realize how horrifically unoriginal this is. As a defense, consider that Banksy's anonymity makes this not just inevitable, but desirable. (Banksy's anonymity has led to understandably ripe amounts of speculation – although to say that Damien Hirst is responsible for Banksy is like saying Edward de Vere wrote everything attributed to that other artful dodger, William Shakespeare. Banksy may or may not be one person, but for him to turn out to be Damien Hirst would prove that we are living in a very cruel universe, indeed.)
Such a brutally enforced anonymity means we have already played into his hands. Banksy's work asks for neither permission nor forgiveness, and the intrinsically ephemeral nature of street art generates a scarcity economy par excellence. This virtuous circle has continued its widening gyre, as the value of his works now far outstrips that of his contemporaries on the international art market. In turn, this gives Banksy a larger megaphone with which to sound his trickster yawp. In a sense, Banksy is a prime beneficiary of his countryman's dictum, "There is only one thing in life worse than being talked about, and that is not being talked about."
So when everyone is talking about it, there's a good chance that what's really at stake is not Banksy's art, which at its best has the conceptual bite of an above-average New Yorker cartoon, and at its worst is just dead on arrival (two examples from the recent stint in New York include a kludgy reference to the Twin Towers, and balloon-letter throw-up of his name made from – wait for it – balloons). Nor is there anything very compelling in the yawning of the critics, as exemplified by Jerry Saltz, or the outrage of NYC's teeming graffiti underground, who are understandably upset at the idea of a British Invasion of their turf. Of far greater interest is what happens to the art once it has been put out there – that is, when the city's collective, chaotic decision-making apparatus swings into full force. To wit and in no particular order:
October 10th: Banksy's stencil, implying a beaver's responsibility for a parking sign broken off at its base, is co-opted by locals who promptly begin charging hipster Banksystas for the privilege of ogling said beaver.
The piece was located in East New York, and there is an entertaining video clearly demonstrating exactly whose neighborhood you're in. New York may no longer be the hotbed of quick-buck capitalism – that honor surely goes to Lagos, Mumbai, Mexico City and probably a half-dozen other global cities – but these guys could certainly smell an easy dollar. Banksy might not much care either, but he is switched on enough to know that people fight over his art. Putting one-of-a-kind pieces in public places is, in fact, an excellent way to egg on any conflict. Furthermore, put it in a hardscrabble East New York neighborhood and the resentment of certain locals towards white graffiti tourism is bound to bring results.
It's important to contrast this against another recent intervention. As I've already noted, in the case of Thomas Hirschhorn's Gramsci Monument, hipster art tourism brought people to a South Bronx public housing project – people who would otherwise never venture anywhere near a place such as Forrest Houses. The difference is that Hirschhorn's installation was full of not just contradictions but also compassion and dignity. Banksy is clear about harboring no such interests. In fact, most of his pieces have already been removed: the Sphinx in the picture at the top of this article was trucked away the very same day, though not before nearly causing a fistfight or two. Those pieces not removed wholesale have been painted over by irritated owners, or brutally defaced by local taggers and writers. Only a few lucky ones have been ‘protected' behind Plexiglas.
October 13th: Banksy sets up a stand off Central Park selling authentic stenciled canvases for $60 a pop. The day's take: 8 sales for a total of $420. Note that the market value of these is estimated at about $20,000 each. Bonus points to the woman who haggled the vendor down 50% for two of the pieces.
This was rather sly of Banksy. On the one hand, we can lament how greatness is always under our noses, but it's the social signaling that really calls the shots. This is perhaps better known as the Joshua Bell school of behavioral psychology, where you are confident in your belief that you would have recognized him playing violin in the DC Metro. Recall the egotism that I implied always exists in serendipity. And yet how many thousand people walked by that stand on Central Park? As for me, I excuse myself because I'm rarely on the east side.
On the other hand, we could make a counter-argument around fakes. How could anyone know this was in fact real? This being New York, fakes are sold everywhere, and Banksy is certainly prone to being faked, as it's not hard to fob a stencil. It's really only the signature that counts – or rather being told that that is, in fact, the real signature. And those reassuring us of this provenance are the gallerists, the dealers, the appraisers and insurers and everyone who is in on the take in the art world. Banksy seems to be having a laugh at everyone's expense, actually, and the tourists, that most disposable of all New York street personae, come off not as the savviest, nor redeemed by the simplicity of their faith, but just the luckiest. Let's hope that the three who purchased the canvases all watch the news.
October 29th: A mediocre landscape painting is purchased from a charity shop run by Housing Works, the long-time AIDS advocacy organization. It is altered and then re-donated to Housing Works. Inserted into the landscape is a Nazi SS officer seated on a bench, admiring the view right along with us.
Jerry Saltz is right to call this "one of the oldest tricks in the modernist book." Recent examples include Star Wars meets Thomas Kinkade and monsters inserted in, yes, thrift store paintings. But to stop there misses the point dramatically. The original painting is decidedly Bob Ross and the intervention is not much better. The title – "The Banality of the Banality of Evil" – does not exactly inspire flights of admiring critical prose. What matters here is the context. On the one hand, the joke seemed to be on Housing Works, since they wound up prominently displaying it in their shop window. But as soon as the word got out, the organization put the hot ticket on its online auction site, and as you can see from the auction page, the bidding closed at $615,000 (have a closer look at the page – you know it's serious when Mr. Bob Dobalina pulls out at $155,000). This would have been one of the largest auction windfalls in Housing Works history, and it's pretty improbable that Banksy didn't know what he was getting up to.
The unifying feature in all of this is the commodification of art and, by implication, all of society. Once they'd figured it out, everyone wanted in. Even Stephen Colbert found himself in a supplicatory mood, although he wound up getting a Hanksy and not a Banksy. But seriously: Banksy, in his feigned show of anonymity and supreme indifference, asks us a rather important question. What kind of a city do we want to live in? The smash-and-grab mentality of Banksy's drive-by New York appearance has left us on tenuous ground. Even the Housing Works auction, a seemingly high note of lèse-majesté with which Banksy could have triumphantly completed his residency, descended into a bit of chaos, as it turned out that the winning bidder didn't have the money everyone assumed he did.
Aside from strewing ephemeral art crumbs around the five boroughs for us to fight over, I'm not sure what the final point of the exercise was. Banksy himself, in an interview with the Village Voice, said there wasn't any:
"There is absolutely no reason for doing this show at all. I know street art can feel increasingly like the marketing wing of an art career, so I wanted to make some art without the price tag attached. There's no gallery show or book or film. It's pointless. Which hopefully means something."
Ok, fine. But as the recent title sequence he did for The Simpsons indicates, it's clear where Banksy's sympathies lie. It's a good old-fashioned street rebellion against authority, whether that authority is corporate or governmental. So the sign-off to his last piece really rankled with me: "Thanks for your patience. It's been fun. Save 5pointz. Bye." Forget the rest of the city – if there is anything that Banksy should be interested in engaging, it's the imminent demolition of 5Pointz, one of the greatest graffiti monuments not only in New York, but in the entire world. Hey guv, thanks for the laughs, but care to throw out a few rat stencils to help defray legal costs?
In any event, after I'd gotten my fill of the Banksy deposited off Astor Place that night, I wondered what would happen to the truck. Obviously, there hadn't been anyone in the cab at the time. I secretly hoped that the truck would just stay there, abandoned, until the generator expired and the city, exasperated, had to cart the truck off to whatever pound is such vehicles' destiny. We could have gotten a better nugget out of Mayor Bloomberg than some anodyne "it may be art, but it should not be permitted" (although one can only imagine what Giuliani's reaction would have been, back in the good old days). Making a mess and forcing the authorities to clean up after him – now that would have been a proper Banksy.
Tapping into the Creative Potential of our Elders
by Jalees Rehman
The unprecedented increase in mean life expectancy during the past centuries and a concomitant drop in the birth rate have resulted in a major demographic shift in most parts of the world. The proportion of fellow humans older than 65 years of age is higher than at any time before in our history. This trend of generalized population ageing will likely continue in developed as well as developing countries. Population ageing has sadly also given rise to ageism, prejudice against the elderly. In 1950, more than 20% of citizens aged 65 years or older in the developed world participated in the labor force; that percentage has now dropped to below 10%. If the value of a human being is primarily based on their economic productivity – as is so commonly done in societies driven by neoliberal capitalist values – it is easy to see why prejudices against senior citizens are on the rise. They are viewed as non-productive members of society who do not contribute to economic growth and instead represent an economic burden because they soak up valuable dollars required to treat chronic illnesses associated with old age.
In "Agewise: Fighting the New Ageism in America", the scholar and cultural critic Margaret Morganroth Gullette ties the rise of ageism to unfettered capitalism:
There are larger social forces at work that might make everyone, male or female, white or nonwhite, wary of the future. Under American capitalism, with productivity so fetishized, retirement from paid work can move you into the ranks of the "unproductive" who are bleeding society. One vile interpretation of longevity (that more people living longer produces intolerable medical expense) makes the long-lived a national threat, and another (that very long-lived people lack adequate quality of life) is a direct attack on the progress narratives of those who expect to live to a good old age. Self-esteem in later life, the oxygen of selfhood, is likely to be asphyxiated by the spreading hostile rhetoric about the unnecessary and expendable costs of "aging America".
Instead of recognizing the value of the creative potential, wisdom and experiences that senior citizens can share with their respective communities, we are treating them as if they were merely a financial liability. The rise of neo-liberalism and the monetization of our lives are not unique to the United States, and it is likely that such capitalist values are also fueling ageism in other parts of the world. Watching this growing disdain for senior citizens is especially painful for those of us who grew up inspired by our elders and who respect their intellect and the guidance they can offer.
In her book, Gullette also explores the cultural dimension of the cognitive decline that occurs with aging and how it contributes to ageism. As our minds age, most of us will experience some degree of cognitive decline, such as memory loss or a deceleration in our ability to learn and process information. In certain disease states such as Alzheimer's dementia or vascular dementia (usually due to strokes or ‘mini-strokes'), the degree of cognitive impairment can be quite severe. However, as Gullette points out, the dichotomy between dementia and non-dementia is often an oversimplification. Cognitive impairment with aging represents a broad continuum. Not every form of dementia is severe, and not every cognitive impairment – whether or not it is directly associated with a diagnosis of dementia – is global. Episodic memory loss in an aging person does not necessarily mean that the person has lost his or her ability to play a musical instrument or write a poem. However, in a climate of ageism, labels such as "dementia" or "cognitive impairment" are sometimes used as a convenient excuse to marginalize and ignore aged fellow humans.
Perhaps I am simply getting older, or maybe some of my academic colleagues have placed me on the marketing lists of cognitive-impairment snake oil salesmen. My junk mail folder used to be full of emails promising hours of sexual pleasure if I purchased herbal Viagra equivalents. In the past months, however, I have received a number of junk emails trying to sell nutritional supplements which can supposedly boost my memory and cognitive skills and restore the intellectual vigor of my youth. As much as I would like to strengthen my cognitive skills by popping a few pills, there is no scientific data supporting the efficacy of such treatments. A recent article by Naqvi and colleagues, which reviewed randomized controlled trials – the ‘gold standard' for testing the efficacy of medical treatments – found no definitive scientific evidence that vitamin supplements or herbs such as Ginkgo improve cognitive function in the elderly. The emerging consensus is that, based on the currently available data, two basic interventions are best suited for improving cognitive function or preventing cognitive decline in older adults: regular physical activity and cognitive training.
Cognitive training is a rather broad approach and can range from enrolling older adults in formal education classes to teaching participants exercises that enhance specific cognitive skills such as improving short-term memory. One of the key issues with studies which investigate the impact of cognitive training in older adults has been the difficulty of narrowing down what aspect of the training is actually beneficial. Is it merely being enrolled in a structured activity or is it the challenging nature of the program which improves cognitive skills? Does it matter what type of education the participants are receiving? The lack of appropriate control groups in some studies has made it difficult to interpret the results.
The recent study "The Impact of Sustained Engagement on Cognitive Function in Older Adults: The Synapse Project", published in the journal Psychological Science by the psychology researcher Denise Park and her colleagues at the University of Texas at Dallas, is an example of an extremely well-designed study which attempts to tease out the benefits of participating in a structured activity versus receiving formal education and acquiring new skills. The researchers assigned subjects with a mean age of 72 years (259 participants were enrolled, but only 221 completed the whole study) to a 14-week program in one of five intervention groups: 1) learning digital photography, 2) learning how to make quilts, 3) learning both digital photography and quilting (half of the time spent in each program), 4) a "social condition" in which members participated in a social club involving activities such as cooking, playing games, watching movies, reminiscing and going on regular field trips, but without the acquisition of any specific new skills, or 5) a "placebo condition" in which participants were provided with documentaries, informative magazines, word games and puzzles, and classical-music CDs, and asked to perform and log at least 15 hours a week of such activities. None of the participants carried a diagnosis of dementia, and all were novices to digital photography and quilting. Upon subsequent review of the activities in each of the five intervention groups, it turned out that each group spent an average of about 16-18 hours per week on these activities, without any significant difference between the groups. Lastly, a sixth group of participants was not enrolled in any specific program but merely asked to keep a log of their activities; it served as a no-intervention control.
When the researchers assessed the cognitive skills of the participants after the 14-week period, the type of activity they had been enrolled in had a significant impact on their cognition. For example, the participants in the photography class showed a much greater degree of improvement in their episodic memory and their visuospatial processing than those in the placebo condition. On the other hand, cognitive processing speed increased most in the dual condition group (photography and quilting) as well as in the social condition. The general trend was that the groups which placed the highest cognitive demands on the participants and also challenged them to be creative (acquiring digital photography skills, learning to make quilts) showed the greatest improvements.
However, there are key limitations of the study. Since only 221 participants were divided across six groups, each individual group was fairly small. Repeating this study with a larger sample would increase the statistical power of the study and provide more definitive results. Furthermore, the cognitive assessments were performed soon after completion of the 14-week programs. Would the photography group show sustained memory benefits even a year after completion of the 14-week program? Would the participants continue to be engaged in digital photography long after completion of the respective courses?
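To make the sample-size concern concrete, here is a minimal, hypothetical sketch (not taken from the study itself) of how statistical power grows with group size. It uses a standard normal approximation for a two-sample comparison; the effect size d = 0.5 and the per-group sizes are illustrative assumptions, chosen to mirror 221 participants split six ways:

```python
import math

def approx_power(d, n_per_group, alpha_z=1.96):
    """Approximate power of a two-sided, two-sample comparison
    (normal approximation) for standardized effect size d."""
    # Non-centrality parameter: d * sqrt(n / 2)
    ncp = d * math.sqrt(n_per_group / 2)
    # Power ≈ P(Z > z_crit - ncp), via the error function
    return 0.5 * (1 - math.erf((alpha_z - ncp) / math.sqrt(2)))

# Roughly 37 per group (221 participants / 6 groups), moderate effect:
print(round(approx_power(0.5, 37), 2))  # → 0.58
# Doubling enrollment to ~74 per group:
print(round(approx_power(0.5, 74), 2))  # → 0.86
```

Under these assumed numbers, power to detect a moderate effect is modest with groups of about 37, and doubling the sample raises it substantially, which is why a larger replication would yield more definitive results.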
Despite these limitations, the study carries an important take-home message: cognitive skills in older adults can indeed be improved, especially when they are exposed to unfamiliar terrain and asked to actively acquire new skills. Merely watching educational documentaries or completing puzzles (the "placebo condition") is not enough. This research will likely spark many future studies that help define the specific mechanisms by which acquiring new skills improves memory function, and perhaps studies that individualize cognitive training. Some older adults may benefit most from learning digital photography; others might benefit from acquiring science skills or participating in creative writing workshops. This research also gives us hope that we can break the vicious cycle of ageism, in which older citizens are marginalized because of cognitive decline and this marginalization itself further accelerates their decline. By providing opportunities to channel their creativity, we can improve their cognitive function and ensure that they remain engaged in the community.
There are many examples of people who have defied the odds and broken the glass ceiling of ageism. I felt a special sense of pride when I saw my uncle Jamil's name on the 2011 Man Asian Literary Prize shortlist for his book The Wandering Falcon: He was nominated for a ‘debut' novel at the age of 78. It is true that the inter-connected tales of the "The Wandering Falcon" were inspired by his work and life in the tribal areas of the Pakistan-Afghanistan borderlands when he was starting out as a young civil servant and that he completed the first manuscript drafts of these stories in the 1970s. But these stories remained unpublished, squirreled away and biding their time until they would eventually be published nearly four decades later. They would have withered away in this cocooned state, if it hadn't been for his younger brother Javed, who prodded the long-retired Jamil, convincing him to dig up, rework and submit those fascinating tales for publication. Fortunately, my uncle found a literary agent and publisher who were not deterred by his advanced age and recognized the immense value of his writing.
When we help older adults tap into their creative potential, we can engender a new culture of respect for the creativity and intellect of our elders.
- Gullette, Margaret Morganroth. Agewise: Fighting the new ageism in America. University of Chicago Press, 2011.
- Naqvi, Raza et al. "Preventing cognitive decline in healthy older adults." CMAJ, July 9, 2013; 185:881-885. doi:10.1503/cmaj.121448
- Park, Denise C. et al. "The Impact of Sustained Engagement on Cognitive Function in Older Adults: The Synapse Project." Psychological Science, published online Nov 8, 2013. doi:10.1177/0956797613499592
Monday, October 14, 2013
Duct Tape, Plywood and Philosophy
by Misha Lepetic
When all is finished, the people say, "We
did it ourselves."
~ Tao Te Ching, Verse 17
What does philosophy in action look like? Casual thoughts about the discipline may be united by the cliché of the philosopher as a loner. From Archimedes berating a Roman soldier to not "disturb my circles" (which subsequently cost him his head), to Kant's famous provinciality, to Wittgenstein's plunging into the Norwegian winter to work on the Logik, the term "armchair philosopher" might seem to be a tautology. But philosophy – or at least the parts that occupy the intersection of the interesting and the accessible – still concerns itself with the world at large, and our place in it.
New Yorkers got to see a particularly odd example of philosophy in action over the summer when artist Thomas Hirschhorn installed his Gramsci Monument in the central courtyard of a Bronx public housing complex known as Forest Houses. I won't dwell much on Antonio Gramsci himself (see here for a start), but suffice to say he was a man of the people, who died in prison after founding the Italian Communist Party. What is more interesting is how Hirschhorn used Gramsci as a jumping-off point, and where he chose to do it. Completed in 1956, Forest Houses is part and parcel of what anyone would recognize as "the projects" – a scattering of 15 buildings in a towers-in-the-park configuration, populated by nearly 3400 residents, most of whom are minorities and low-income. However, Hirschhorn didn't so much choose the site as it chose him – after visiting 47 public housing projects in the city, Forest Houses was the only one that expressed any interest in his proposal.
The arrival of Hirschhorn and his motley architectural assemblage, which seemed to be made mostly of plywood and duct tape, was met with perplexity by both residents and art critics. As far as the critics go – and hey, someone's got to play the straw man to kick things off, right? – at least one was mightily displeased. Writing in the New York Times, Ken Johnson pooh-poohed Hirschhorn as a "canny conceptualist operator" and opined that the installation would ultimately "be preserved in memory mainly by the high-end art world as just a work by Mr. Hirschhorn, another monument to his monumental ego."
It's difficult for me to comprehend that Johnson and I visited the same place. The first thing to note is the inappropriateness of the term "installation." The Gramsci Monument is much more of an intervention. Of course, architects and urbanists are not immune to the charms of this term, either – any bland pop-up café seems to constitute an "intervention" in the street, the urban fabric or what have you, with "dramatic" being the accompanying adjective of choice. But what made Hirschhorn's work really an intervention was its sheer physicality, its uncompromising presence in the courtyard. The towers-in-the-park paradigm, one of the baleful legacies of modernism, was introduced to the US in large measure by Le Corbusier, whose reputation is currently the subject of a risible attempt at rehabilitation by MoMA. The result is an environment of hard vertical and horizontal masonry lines, scrawny trees and threadbare lawns. As a pedestrian, you walk among 12-story brick sentinels, and the absence of any place that can provide a moment of semi-privacy, one of the key signifiers of successful public space, is palpable. The point – which was much in keeping with Le Corbusier's design ideals – was to get you to where you were going, and as efficiently as possible. "No Loitering," as the signs say.
In other words, it's a space just begging to be broken up, and that's what Hirschhorn did, by designing a series of elevated, single-story bungalows reachable by ramps and stairs, wrapped around the sidewalks of the project's central space. Most importantly, it was ugly. In addition to all the plywood and duct tape (and the utter absence of paint), white bedsheets spray-painted with choice Gramscisms such as "Every Human Being Is An Intellectual" fluttered in the breeze. Tacked all over the plywood were photocopied issues of the Gramsci Monument Newspaper, a broadsheet featuring stories about current residents, visiting notables and deceased philosophers.
This was a real thing, and it invited you in. You could have a vodka tonic or a hot dog at the bar, visit the newspaper or radio station, browse the library or attend a lecture. The experience was not dissimilar to visiting a coral reef – when you swim away from the reef and find yourself looking at a sandy seafloor, it is inevitably barren of life, but come up to another outcropping of coral and there will always be fish swimming around it, almost no matter how small the outcropping. And the amount of life swimming around the Gramsci Monument was rich and vital. Indeed, one felt that one had explicitly been given a license to loiter.
But here is the really important bit, and likely what was lost on Johnson and other dour critics: Hirschhorn had no desire to create a unified, curated experience. For the Monument was replete with contradictions that Hirschhorn, who himself lived in Forest Houses for the duration of the project, seemed either to encourage or just plain ignore. For example, the library, well stocked with Gramsci's writings as well as those of his contemporaries, also had several tables of glossy magazines, implying that there was no judgment about which one you chose to pick up. The computer center next door provided free Internet, and was always filled with children playing video games, not, as Johnson writes, "as far as I could tell, reading up on Gramscian theory." Well, Mr. Critic, maybe when you were ten you were reading Gramsci.
Nor is this to say that Hirschhorn was playing the haughty ironist, either. Refreshingly, the po-mo apotheosis of "high-brow is low-brow is high-brow" was not at all in evidence. This was clear in the library, where the message of collocating Us Weekly and Lenin wasn't a nudge and a wink, but a simple question of, Which would you prefer to read? Further mysteries abounded. Several display cases included period documents and notes in Gramsci's own hand. At first blush, one might think that this would be inadvisable – after all, we're in the projects! I mean, someone might steal a 1930s pamphlet and put it on eBay for a few bucks! But quickly one realizes that this is only the logical thing to do (putting the pamphlets on display, that is). On the one hand, it is redolent of Gramsci's own approach to humanity. On the other hand, it raises the important question of who has the right to be trusted with these items. Correction: it answers the question of who has the right to be trusted with such items.
Granted, it is not as dramatic a gesture as bringing a Picasso to Ramallah, but even the modesty of the items works to the advantage of the inquiry. However, Hirschhorn pushes the conceit even further. Another series of flimsy Plexiglas display cases held some of Gramsci's prison possessions – a pair of house slippers, a comb, some wooden eating utensils, a wallet. Were these relics of the saint, or proof that he was a human being like the rest of us, who needed to eat, brush his hair, pad around his cell in slippers, and have a place to put spare change? Hirschhorn doesn't say. He doesn't have to – it's up to us.
But by far the most uncompromising feature of the Monument was the free lectures. For starters, itinerant philosopher Marcus Steinweg, with whom Hirschhorn has collaborated in the past, engaged in an act that could only be called "philosophy as performance art:" 77 lectures, delivered daily and without notes, at 5pm, rain or shine. Steinweg pulled no punches, and even granting my familiarity with critical theory and Western philosophical tradition, I certainly got a good workout (you can get a taste of his lecturing style here, although it's not from the Gramsci series). Now, since anyone was welcome to grab a white plastic lawn chair and sit in on the lecture, a natural question might be, Why? What does this "do" for people who might be residents of, umm, "the projects"? And did I already mention that those plastic chairs were ugly? And will someone please tell me to stop putting quotes around words already?
This is what the Gramsci Monument does to you: it makes you ask questions that, once you've gotten them out, seem immediately, hopelessly idiotic. The lectures were there for anyone who wanted to listen to them. If you wanted to ask a question, you could. If you wanted to leave, you could do that, too. But there was no dumbing-down for anyone. People showed up and did what they were good at, took their best shot, and maybe learned something for themselves or from one another. In brief, sentiments that are resonant of the most fervent aspirations we have for the undergraduates of today.
If you find this to be an acceptable proposition, it was only further tested by the Saturday seminar series, where a heavyweight academic would deliver a lecture relating to how Gramsci influenced his or her work. If Gayatri Spivak, Stanley Aronowitz or Simon Critchley are your jam, this was the place to be. The last seminar, which I attended, was delivered by Frank Wilderson. Wilderson is not just a professor, but was one of two African Americans who went to South Africa and fought with – and eventually against – the ANC during apartheid. He actually taught Gramsci to ANC members (now that's philosophy in action). Wilderson's lecture was incendiary in its own right, but what was particularly striking about the event was not so much the content but the audience: in the front row was Bill de Blasio, fresh off his Democratic primary win for the NYC mayor's race. I later got the back-story that it was his son who had heard about the Monument and had wanted to go. Here was someone who could have parlayed his win into a plush Saturday afternoon fundraiser, and instead chose to attend a lecture in the Bronx (although I suppose it's not that surprising). But what I really liked was the fact that de Blasio listened, took notes and never once pulled out a Blackberry or some such. He was just like anyone else. Afterwards we all mingled by the bar and generally had a low-key time.
So much for the celebrity artists, academics and politicians. What about the residents of Forest Houses? Ostensibly, they were Hirschhorn's primary audience. They were the ones who built the Monument, and dismantled it 80-odd days later. They worked the snack bar, staffed the radio and newspaper, and participated in the raffle that gave away the usable bits at the end. Aside from having to put up with a whole mess of mostly white hipsters and art tourists, and kick us out after we'd drunk all the vodka at the bar, what did they think of it? Did they think it was worth it? In a word, yes and yes. Perhaps most striking was the way in which people just did their own thing around the installation. They hung out on benches, had arguments, sold jewelry, grilled burgers.
Most often, residents commented on how great it was that the kids could get on the net, or that they could just go downstairs and play without supervision. In fact, during the Wilderson lecture, a group of half a dozen rambunctious boys dragged out a clubhouse made of cardboard and proceeded to wreck it completely. No one thought to shush them, and the sounds of their play provided the most eloquent counterpoint to Wilderson's narrative of slavery and alienation. Contemplating the juxtaposition of the two narratives through the dappled sunlight on a September afternoon, I realized that Hirschhorn had got it right: he was never presumptuous enough to think that The Artist would be the one to strike the balance between such a terrible past and tenuous present. In the wisdom of his "monumental ego," he knew that if he set up the field of play just right, a glimpse of that balance might just manifest itself.
Monday, September 30, 2013
Why the Rodeo Clowns Came
by James McGirk
I live surrounded by retirees in rural Oklahoma. They are spry. They own arsenals of gardening equipment: lawnmower-tractor hybrids that grind through the fibrous local flora with cruel efficiency; they wield wicked contraptions, whirling motorized blades that allow withered men to sculpt hedges into forms of sublime and delectable complexity. Their lawns are soft to touch and inviting and deep emerald green. They host garden parties. They know the mysteries of mulch and sod, their vegetables bulge with vitality and nutritious color, their compost heaps are not heaps at all, they are tarry and primordial, oozing and glowing with health. Their flowers glow. Their insects are harmless flutterers, not the stinging biting buzzing slithering demonic horde that inhabits my yard.
In the spring I chose a manual mower to help maintain my garden. I am no environmentalist nut, but as an ostensible elite urbanite, I wrinkled my nose at the fumes belched by my neighbors’ devices. This was a grave error. My man-powered mower leaves bald patches when I hoist the thing through a rough patch uphill and it accidentally shears too close, and leaves miniature Mohawks when the sturdier weeds simply dip beneath my blades and spring up behind me unscathed. But I cannot blame the device. This is operator error. I chose the thing, and I vowed to live with the consequences.
For months I huffed and puffed, hauling the bright orange plastic and metal contraption through the thickets in my yard. I felt close to the land. Its contours became familiar to me: the mysterious dead patch, which I fantasized came from natural gas seeping up from the Cherokee Shelf, five fathoms below; or the pits dug by the previous tenants where I once found a black snake tangled in my spinning blades (coward that I am, I let him crawl away instead of dispatching him with a merciful death: and lo, the next afternoon my elderly neighbor came over to apologize for the shriek I might have heard, because the poor thing had taken shelter in her kitchen before her husband—an octogenarian—beheaded it with a rake) and the plunging predator birds and the mysterious mushrooms and the owl feathers and squawking fledglings and tiny tragedies: the robin’s nest spilled on the ground after a titanic storm, her pale blue eggs still intact, the nest like a spun basket, and the mother’s frayed carcass a few feet away. I watched it slowly decay.
The blade on my mower can be adjusted to clip between four inches and one inch. The closer the shave, the more effort it takes to cut. Any growth above two inches looks like an overgrown haircut. Sloppy, grubby and neglected. Seedy might be the precise word I want. Could this be a word that entered the vernacular from our centuries of lawn care? Next to the martial precision of our neighbors’ yards, our shaggy lawn looked degenerate as the summer dragged on. Though I made a valiant effort to sustain it, I kept having to set my blade higher: one-and-a-half became two became three… and when I returned from a trip, even a four-inch cut couldn’t make a difference. I had to call for reinforcements.
Early in the season, lawn care was easy to arrange. People prowled the streets of Tahlequah looking for opportunities to lock down a lucrative contract: a summer of care, 40 dollars U.S. every two weeks. Our nasty yard was a cry for help. Knockers came daily offering help and fistfuls of fliers touting their services. But by September those plucky entrepreneurs had gone. I hunted for lawn care professionals. I scanned the Yellow Pages pinned to the drawing board in the local Laundromat: there was nothing! After a week of searching I finally found a tout in the classified ad section of the local paper: Several decades of experience! Equipment and tools! It seemed sober and professional. I snipped the ad and called the number.
A preoccupied, frail voice responded. He was driving, but said if I would just give him a moment he would take my call. “Let me call you back,” I tried interjecting, but he was adamant we speak. I heard grunts, and the moans of zooming traffic seemed to recede. “Okay,” he said. “You’ve got me now. I’m in the parking lot of a bank, let’s talk estimates,” he said, and he told me he would be by. “Okay,” I replied. “It’s the only house on the block with grass an elephant could hide in.” “A what?” he said. “Never mind,” I replied, and recited my address.
He pulled up in a white Ford pickup truck a few hours later. “You James?” he asked me. “Sure am,” I replied. He stepped out of the cab and told me his name. Now, Tahlequah is an awfully small town, so I won’t repeat it here. He was shorter than I expected. Older too. I guessed he was about the age of my neighbors—someone who’d retired long ago. He wore a cowboy hat, blue jeans and a bright white button-up shirt. I didn’t notice his boots, but I expect they were tough and leathery too. He wore a beautiful ring. Bright yellow gold on a thick band, with what looked like a chunk of onyx as a stone.
We shook hands. Already, I felt ashamed. It was so hot I could barely breathe, let alone mow my own lawn. His face was flushed pinkish-red: a distinctly cardiac color. He waved away my offer of a glass of water.
We chatted as we strolled the grounds together. He appraised the lawn. My weeds were no problem at all, and they had to go, and he was happy to trim the edges of our lawn, which was crucial because edges are like shoes: if they’re scruffy it ruins the effect of everything else, and they’re absolutely impossible to trim without one of those whirling edge-trimmers. Not for nothing is a tidy lawn a reflection of an orderly household and a stable income and sober minds within. It takes hard work or cold hard cash to maintain one.
My front lawn is misleading – it looks tidy and easy to maintain, but there’s a pretty steep slope and there are thickets of spongy clover-like stuff that resists cutting. He said this was no problem. He was an old hand around these parts, had trimmed the yards of massive estates, ones with hills that made ours look like mere pimples. I took him around back. We have a narrow gate (someone who lived here must once have owned a dog) and I worried about him getting his riding mower in, but he waved away my concerns. This was no serious problem, he said; we could just unscrew the fence if it didn’t fit.
I was starting to notice his gait. He had a stiff, painful walk. He noticed me noticing: “Just had my hip replaced,” he said. A two-thousand-pound steer had fallen on him. “Let me rest for a bit.” We leaned against the fence and then he slumped over on his side. “Just have to let it set,” he said. “I’m tough. I’ll be fine.” He asked me where I was from. “New York,” I said. And he told me about how he’d been part of a construction team up there, hired from Louisiana to come up and put power-lines up, but the local union types “had objections.” So “we Cajun boys had to straighten ‘em out with wrenches and pipes.”
Now, if my front yard was the ski-slope equivalent of a red run (or a single black diamond, for our American readers), my backyard was a professionals-only black. There were crannies and pits and the aforementioned snake, and I tried to point out all the deep pits where I had fallen and nearly broken my leg. “Are you sure you can do this?” I asked him. He sure was.
I walked him back to his truck. We shook hands, agreed on a price—slightly more than the eager hordes were quoting me earlier this season—and he told me he would be back the next morning at eleven.
It was really hot. Well over a hundred degrees Fahrenheit and close to a hundred percent humidity (spills wouldn’t dry up on their own, the air felt steamy, the bugs were fizzing like crazy). He showed up an hour early. And unloaded a wide riding mower. He zoomed back and forth, and the front yard seemed to be done in an instant. I chatted with him for a bit – he regaled me with his story about the Cajun boys showing up the Jersey thugs again, and then we attempted to tackle the back. But the riding mower wouldn’t fit through the narrow gate. It was wired together so it couldn’t be unscrewed (at least not without my landlord’s permission). He asked if he could summon a friend with a smaller mower.
“Of course,” I said. “And if it’s too hot you should come back…” He wouldn’t hear of it, and eased himself down in the same shady spot as before and began punching numbers into his phone. His friend arrived in a scruffy blue truck with an ancient gas mower in the back; he himself was tall and thin and came dressed head to toe in blue denim the exact worn shade of his truck. The old mower wouldn’t start. The two men discussed strategy. I handed over their money (and a substantial, guilt-induced tip) thinking nothing would get done, and turned to leave for an art exhibit.
“Hey, James!” shouted the ring-wearer, who was lying on his side again. “Did you ever think, coming from New York, you’d have two old rodeo clowns mowing your yard?”
A cold, sick feeling spread through my guts; as did a peculiar feeling of déjà vu.
I needn’t have worried. Though I did when I arrived home that night to find their old gas mower still in my yard and only a quarter of the grass cut (an effect not unlike being interrupted mid-shave), three days later my lawn was completely trimmed and the clowns were alive and kicking. It took me a little longer to identify why their interaction felt so uncannily familiar.
It was their air of conspiracy and the compact little world the two rodeo clown friends had made for each other. I had encountered it once before. For a couple of months I worked for a pair of friends who were running a hedge fund. It was a wild idea they were gambling on, one that on paper sounded cynical and deliciously depraved but was really just playing at being soldiers and spies; and this pair of financiers (they even looked like the two rodeo clowns: one was stout and fair, the other lean and dark) used the promise of making a pile of money to lure people—myself included—senior executives and former government officials who should have known better, into their fantasy. And when it all blew up they were left unscathed. That moment of plotting I witnessed between the two clowns reminded me of the two financiers plotting before a meeting with Goldman Sachs; and it felt good to see it. Working as a freelance writer in the hinterlands of America can be a lonely business—so even though my garden looked like shit when they left and it took a week of raking to get it in order, and even though it was only because the financiers’ secretary felt sorry for me and shamed the pair that I was eventually paid for my work, it didn’t feel as bad as it could have.
I enjoyed the japes, just wished for once I could have been cut in rather than played. I’d better go. My lawn is looking haggard again. Time to haul out the mower.
Monday, September 23, 2013
Poverty in the United States
by Jalees Rehman
The United States Census Bureau recently released the results from its 2012 survey of income, poverty and health insurance in the United States. One of the most disheartening results is the high prevalence of poverty in the United States.
The term "poverty" is of course a relative term. The poverty thresholds in the United States depend on the size of a household and are adjusted each year. Currently, poverty for a single person household is defined as an average monthly income of $995 or less– taking into account all forms of earnings including unemployment compensation, workers' compensation, Social Security, veterans' payments, survivor benefits, pension or retirement income, interest, dividends, alimony, child support as well as other sources. A four-person family consisting of two adults and two children is considered to live in poverty if they have to live on an average monthly income of $1,940 or less. This is still a far cry from the global definition of poverty used by the World Bank, which describes fellow humans who have to survive on an income of less than $1.25 per day (or $38 per month). But the US, a country which considers itself as being among the wealthiest in the world, has to face the fact that 15 percent of its population - 46.5 million people – live in a state of poverty!
We worry about the faltering economies of Greece, Cyprus, Spain and Portugal, but the US Census reminds us that the number of poverty-stricken people in the US is roughly equal to that of the total population of Spain, and more than twice the size of the combined populations of Greece, Portugal and Cyprus.
Poverty does not affect all American communities equally. More than a quarter of African-Americans and Hispanic-Americans live in poverty, and sadly, as shown in the graph above, these are also the communities which have experienced the steepest increases in poverty rates in the wake of the recent recession.
The common cliché is that "children are our future", but if this is true, then Americans need to be especially worried about the fact that children have the highest rate of poverty in the US (over 21%) and that there has been no significant improvement in recent years.
Women are also disproportionately affected by US poverty. At every age range assessed by the US Census Bureau, females had higher poverty rates than their male counterparts.
This gender inequality even holds true for single parent families headed by women. While single parent families have higher poverty rates than the families of couples, single women households fare much worse than single men households.
Nearly a third of families headed by single women live in poverty, whereas the poverty rate for single men households is half of that.
The Census Bureau also provided a statistical analysis of incomes over recent decades, adjusting each year's income to 2012 dollars. This allows us to analyze how the purchasing power of people in the US has changed during the past 30-45 years.
It becomes apparent that the income disparity between Americans of various ethnic backgrounds has not changed much, and that the disparity has perhaps even worsened in some cases. Non-Hispanic Caucasians, for example, earned 34% more than their fellow Hispanic citizens in the early 1970s. However, the median household income of Hispanic-Americans barely increased (when standardized to US-$ values of 2012) from the 1970s to today, while the income of non-Hispanic Caucasians increased by more than 12%.
The income disparity between men and women has also not budged much during the past decade. The ratio of women's to men's earnings improved substantially and steadily from the 1970s to the 1990s, but it has unfortunately reached a plateau during the past decade, hovering around 77%.
One graph in the Census Bureau report sums up the worsening state of income inequality in the US:
During the past 45 years, the median income (i.e. income of people at the 50th percentile) increased from $42,900 to $51,000 – a rather modest change of about 19%. This means that the American economic middle class today can afford to purchase 19% more than their counterparts in the late 1960s. On the other hand, earners at the 90th percentile (i.e. the top 10%) showed an increase of their income from $90,400 to $146,000 – an increase of more than 60% during the same time period!
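The growth figures follow directly from the dollar amounts quoted above. As a quick back-of-the-envelope sketch (the function name `pct_change` is just an illustration, not from the Census report):

```python
# Verify the inflation-adjusted income growth figures quoted above.
# All amounts are in 2012 dollars, as reported by the Census Bureau.

def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

median_growth = pct_change(42_900, 51_000)   # 50th percentile (median)
top_growth = pct_change(90_400, 146_000)     # 90th percentile (top 10%)

print(f"Median income growth:      {median_growth:.0f}%")   # about 19%
print(f"90th-percentile growth:    {top_growth:.0f}%")      # about 62%
print(f"Growth ratio (top/median): {top_growth / median_growth:.1f}x")
```

In other words, real income at the 90th percentile grew more than three times as fast as real income at the median over the same 45 years.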
The trends of increasing income disparity within American society, the income disparity between people of different ethnic or racial backgrounds, and the high poverty rates among women and children are in plain sight in this report. We can only hope that political and business leaders in the US will recognize the dangers that arise from the growing inequality and take the necessary steps to rein in the disparity.