Monday, May 25, 2015
The “Invisible Web” Undermines Health Information Privacy
by Jalees Rehman
"The goal of privacy is not to protect some stable self from erosion but to create boundaries where this self can emerge, mutate, and stabilize. What matters here is the framework— or the procedure— rather than the outcome or the substance. Limits and constraints, in other words, can be productive— even if the entire conceit of "the Internet" suggests otherwise.
Evgeny Morozov in "To Save Everything, Click Here: The Folly of Technological Solutionism"
We cherish privacy in health matters because our health has such a profound impact on how we interact with other humans. If you are diagnosed with an illness, it should be your right to decide when and with whom you share this piece of information. Perhaps you want to hold off on telling your loved ones because you are worried about how it might affect them. Maybe you do not want your employer to know about your diagnosis because it could get you fired. And if your bank finds out, it could deny you a mortgage loan. These and many other concerns have given rise to laws and regulations that protect our personal health information. Family members, employers and insurance companies have no access to your health data unless you specifically authorize it. Even healthcare providers from two different medical institutions cannot share your medical information unless they can document your consent.
The recent study "Privacy Implications of Health Information Seeking on the Web," conducted by Tim Libert at the Annenberg School for Communication (University of Pennsylvania), shows that we have a far more nonchalant attitude when it comes to personal health information on the internet. Libert analyzed 80,142 health-related webpages that users might come across while performing online searches for common diseases. For example, if a user searches Google for information on HIV, the Centers for Disease Control and Prevention (CDC) webpage on HIV/AIDS (http://www.cdc.gov/hiv/) is one of the top hits, and users will likely click on it. The CDC will likely provide solid advice based on scientific results, but Libert was more interested in investigating whether visits to the CDC website were being tracked. He found that when a user visits the CDC website, information about the visit is relayed to third-party corporate entities such as Google, Facebook and Twitter. The webpage contains "Share" or "Like" buttons, and loading these buttons passes the URL of the visited webpage (which contains the word "HIV") on to the third parties – even if the user never clicks them.
Libert found that 91% of health-related pages relay the URL to third parties, often unbeknownst to the user, and in 70% of the cases the URL contains sensitive information such as "HIV" or "cancer" – enough to tip off these third parties that you have been searching for information related to a specific disease. Most users probably do not know that they are being tracked, which is why Libert refers to this form of tracking as the "Invisible Web": it can only be unveiled by analyzing the hidden HTTP requests between servers. Here are some of the most common (invisible) partners that participate in these third-party exchanges:
[Table: Entity / Percent of health-related pages on which it appears]
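The mechanics of this leak are simple enough to sketch. The minimal Python illustration below (the function names and the short list of sensitive terms are my own, purely for illustration, and not Libert's actual tooling) lists the third-party domains a page embeds and checks whether the page URL itself names a disease – the two ingredients of the leak Libert measured:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative list; Libert's study looked for disease terms far more broadly.
SENSITIVE_TERMS = ("hiv", "cancer", "depression")

class ThirdPartyFinder(HTMLParser):
    """Collect domains of embedded resources (scripts, images, iframes)
    that trigger a request to a server other than the page's own host."""

    def __init__(self, page_url):
        super().__init__()
        self.page_domain = urlparse(page_url).netloc
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                domain = urlparse(value).netloc
                if domain and domain != self.page_domain:
                    self.third_parties.add(domain)

def audit_page(page_url, html):
    """Return (third-party domains contacted, does the URL leak a topic?).
    Browsers send the page URL to embedded third parties in the Referer
    header, so the topic leaks even if the user never clicks a button."""
    finder = ThirdPartyFinder(page_url)
    finder.feed(html)
    leaks_topic = any(term in page_url.lower() for term in SENSITIVE_TERMS)
    return sorted(finder.third_parties), leaks_topic

page = """<html><body>
<script src="https://platform.twitter.com/widgets.js"></script>
<img src="https://www.facebook.com/tr?id=123">
</body></html>"""

domains, leaks = audit_page("http://www.cdc.gov/hiv/", page)
print(domains)  # the third-party hosts contacted when the page loads
print(leaks)    # True: the URL itself contains "hiv"
```

A static parse like this only catches resources written into the HTML; Libert's analysis worked at the level of the actual HTTP traffic, which also captures requests injected dynamically by scripts.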
What do the third parties do with your data? We do not really know, because the laws and regulations are rather fuzzy here. We do know that Google, Facebook and Twitter primarily make money from advertising, so they could potentially use your information to customize the ads you see. Just because you visited a page on breast cancer does not mean that the "Invisible Web" knows your name and address, but it does know that you have some interest in breast cancer. It would make financial sense to send breast cancer related ads your way: books about breast cancer, new herbal miracle cures for cancer or even ads by pharmaceutical companies. It would be illegal for your physician to pass on your diagnosis or inquiry about breast cancer to an advertiser without your consent, but on the "Invisible Web" there is a continuous chatter going on in the background about your health interests without your knowledge.
Some users won't mind receiving targeted ads. "If I am interested in web pages related to breast cancer, I could benefit from a few book suggestions by Amazon," you might say. But we do not know what else the information is being used for. The appearance of the data broker Experian on the third-party request list should serve as a red flag. Experian's main source of revenue is not advertising but amassing personal data for reports, such as credit reports, which are then sold to clients. If Experian knows that you are checking out breast cancer pages, you should not be surprised if this information ends up in a personal data file about you.
How do we contain this sharing of personal health information? One obvious approach is to demand accountability from the third parties regarding the fate of your browsing history. We need laws that regulate how information can be used, whether it can be passed on to advertisers or data brokers and how long the information is stored.
WebMD's privacy policy, for example, gives a sense of how broadly such data can already be used. It tells users:

We may use information we collect about you to:
· Administer your account;
· Provide you with access to particular tools and services;
· Respond to your inquiries and send you administrative communications;
· Obtain your feedback on our sites and our offerings;
· Statistically analyze user behavior and activity;
· Provide you and people with similar demographic characteristics and interests with more relevant content and advertisements;
· Conduct research and measurement activities;
· Send you personalized emails or secure electronic messages pertaining to your health interests, including news, announcements, reminders and opportunities from WebMD; or
· Send you relevant offers and informational materials on behalf of our sponsors pertaining to your health interests.
Perhaps one of the most effective solutions would be to make the "Invisible Web" more visible. If health-related pages were mandated to disclose all third-party requests in real time – for example, via pop-ups ("Information about your visit to this page is now being sent to Amazon") that ask for consent in each case – users would be far more aware of the threat that health-related pages pose to personal privacy. Health privacy and potential threats to it are routinely addressed in the real world, and there is no reason why this awareness should not extend to online information.
Libert, Tim. "Privacy implications of health information seeking on the Web." Communications of the ACM, Vol. 58, No. 3, pages 68-77, March 2015. doi: 10.1145/2658983
Monday, April 27, 2015
Freedom as Floating or Falling
Nine days after 9/11, on 20 September 2001, President George W. Bush responded to the World Trade Centre attacks by addressing a joint session of Congress. He lamented that in the space of a 'single day' the country had been changed irrevocably, its people 'awakened to danger and called to defend freedom'. Out of the painful deaths of almost 3,000 people germinate anger and a drive for retribution. The attackers, whom Bush terms 'enemies of freedom', are apparently motivated by envy as well as hatred:
They hate what they see right here in this chamber: a democratically elected government. Their leaders are self-appointed. They hate our freedoms: our freedom of religion, our freedom of speech, our freedom to vote and assemble and disagree with each other.
In this passage alone, there are four instances of 'freedom', and in the approximately 3,000-word-long speech from which it is taken, 'freedom' is invoked 13 times.
Given that the speech was a major statement of Bush's intent following the wound of 9/11 and that the US government uses the name 'Operation Enduring Freedom' to describe its War on Terrorism, it is clear that freedom is a crucial concept to the US and its allies. This is unsurprising, since the Statue of Liberty on Liberty Island off the coast of New York City has long served as a symbol of freedom and the vaunted American myth of social mobility. But what does freedom consist of and is it a universal value? In other words, does everyone – men and women, and people from different classes, races, or religious backgrounds – experience it in the same way?
In 2014, Bangladeshi-origin writer Zia Haider Rahman published his fascinating and very male debut novel In the Light of What We Know. The book deals in part with 9/11 and its aftermath. One of Rahman's two main protagonists, Zafar, works in Afghanistan soon after the outbreak of war in 2001. He avers that the American occupiers 'justify their invasion of Afghanistan with platitudes about freedom and liberating the Afghani people'. Having studied law and worked for a US bank, Zafar is in some ways part of the American 'relief effort'. And yet he is simultaneously not part of it, due to his Bangladeshi background and brown skin. Because of this, coupled with his working-class origins, he sees through the rhetoric of freedom as platitudinous.
Later, Rahman's Zafar describes a raucous, sexually charged UN bar in Kabul, concluding, 'It was a scene of horror. This is the freedom for which war is waged'. Here he unpicks what the Americans mean by freedom. It bathetically involves a person being free to drink alcohol and explore his or her sexuality – whether within or outside marriage is not usually seen as important. To the occupiers, freedom is about individual choice in the free market. This means little to the majority of Afghans. As Zafar points out, the efflorescence of new drinking establishments under the occupation is popular with the local elite class, but 'the poor are disgusted'.
Out of freedom's sister word liberty comes the verb 'liberate', another word for saving. This idea of liberation and saving brings us to Lila Abu-Lughod's book Do Muslim Women Need Saving? (2013). The anthropologist moves ideas about freedom into the realms of race, class, and gender. Herself a feminist with heritage partly in the global south, Abu-Lughod suggests that Western feminists see themselves as 'saving' their benighted Muslim sisters.
Abu-Lughod also scrutinizes the repercussions from one notion of freedom being extolled above all other values. She questions whether women's clothing can symbolize freedom or unfreedom, and whether forces that put limits on every individual's free will mean that, as Wendy Brown puts it, 'choice … is an impoverished account of freedom'. Abu-Lughod seems to suggest that the binary opposition of free and unfree is at the heart of twenty-first-century versions of Orientalism. She argues that American feminism is deceived by the 'powerful national ideology' of freedom and fails to recognize the unequal power relations that underpin this ideology.
Rather than accepting the premise that Western freedom contrasts with imprisonment by Islam, Abu-Lughod shows that believing Muslims have their own ideas about and goals for liberation. The Islamic scholar Abdal Hakim Murad, also known by his birth name of Tim Winter, similarly writes that Islam represents 'radical freedom, a freedom from the encroachments of the State, the claws of the ego, narrow fanaticism and sectarian bigotry and an intrusive state or priesthood'.
Writing shortly after 9/11, Salman Rushdie offered his own inventory of the freedoms at stake:

The fundamentalist believes that we believe in nothing. ... To prove him wrong, we must first know that he is wrong. We must agree on what matters: kissing in public places, bacon sandwiches, disagreement, cutting-edge fashion, literature, generosity, water, a more equitable distribution of the world's resources, movies, music, freedom of thought, beauty, love. These will be our weapons. Not by making war, but by the unafraid way we choose to live shall we defeat them.
This somewhat tongue-in-cheek list intermixes trivial things, ideals, and rights. It also neatly illustrates that many apparent freedoms are culturally specific shibboleths that might alienate not just 'fundamentalists', but a good number of non-Western, non-Christian, non-male people (and many Western vegetarians including me would be put off by the bacon sandwiches!). Ideas of freedom are culturally located. Notwithstanding Rushdie's claims, liberty does not equate to wearing miniskirts rather than burqas.
I move now to Muslim women writers' ideas about freedom in Britain. Attia Hosain, who died in 1998, is known for the short story collection Phoenix Fled and novel Sunlight on a Broken Column, both set in India. However, she also wrote a promising putative novel about diasporic Britain, 'No New Lands, No New Seas'. Hosain worked on this between the 1950s and 1970s, but eventually abandoned the novel, perhaps because the virulent racism of the late 1960s onwards (typified by Enoch Powell's 'Rivers of Blood' speech and the subsequent rise of the National Front) made her migrant topics too painful to complete.
Her central migrant character Murad is having a minor breakdown in the paradoxically crowded, isolating capital of London. He frequently expresses the idea that he has been unmoored, thinking that his thoughts should be 'pegged down, hammered to solidity or he would fly into space, dissolving all matter into formlessness'. He remembers that when he first arrived in London 'he floated away with a wild incredulous sense of freedom'. Perhaps the most interesting instance of Hosain's portrayal of freedom as what David Bowie memorably described as 'floating in a most peculiar way' is this passage:
his happiest moments were in the in-between world where he was free yet not free of intrusive presences, as when at a concert his submerged thoughts would float above the music and cover it with a drifting film until he pushed it away, and under the sounds to which he forcibly attached his mind until the music emerged clearly as if he, with every nerve-end vibrating, were himself one of the instruments.
Although at certain moments, Hosain's text represents freedom as floating in an unnerving way, in this passage Murad's thoughts soar with the music. They are then brought back to earth by a 'drifting film', an image of feather lightness that nonetheless weighs down, of the film's transparency that still manages to ground the character again.
This idea of happiness coming from an in-between realm that at once represents freedom and bondage is illuminating. It's a notion that women especially can appreciate. The Cairo-born, London-resident writer and activist Ahdaf Soueif once said that she feels most free when she is writing on her own in a room but can hear her family busy with happy activities not far away. She finds contentment in being free and yet not free.
Although Hosain's ideal is a sublime freedom coexisting with unfreedom, breaking down the binary that Abu-Lughod so dislikes, the novelist recognizes that a banal version of freedom as individual choice is the one that prevails. Murad and his friend Isa together investigate 'the areas of liberty that London had given them'. The narrator notes that this is initially 'mostly in respect of women and wine, then through pubs and prostitutes to the poetry of freedom and friendship without the taboos of tradition, the constraint of custom and duress of duty'. It is a similar version of freedom to that which Rahman criticized: a prosaic lack of restraint in relation to 'women and wine'. Alliteration underscores the glibness of Murad's free indirect discourse on freedom here.
Formlessness, lack of solidity, freedom, and loneliness: these images echo again through the pages of Sudanese author Leila Aboulela's London novel, Minaret (2005). Hosain's notion of flying or floating up into space is inverted in the later text. Aboulela describes her Sudanese protagonist Najwa's metaphorical 'fall' through space due to an encounter with the vertiginous liberties of the West.
What makes Minaret distinctive as a novel of Muslim experience is that it centres on a character's journey towards religion, rather than away from it. Many Anglophone novels about the British Muslim experience from the 1990s and early 2000s are about young Muslims discovering 'freedom', in the shape of a secular life and independence from familial or kinship ties. In contrast, Aboulela's novel traces the Westernized protagonist Najwa's downwardly mobile journey from her privileged position as a Sudanese minister's daughter, to exile in London when a coup dislodges her father from power, and eventually to life as a domestic servant to a wealthy Arab family in the former imperial capital. During this descent, an unfurling religious identity sustains Najwa through her losses.
The supportive ties that Najwa discovers in her mosque are starkly contrasted with the supposed 'freedoms' of the non-religious world, which Aboulela portrays as being constrictive rather than liberatory. The notion of liberty in Western thought, since the time of Hobbes's Leviathan, has meant a freedom from external constraints and the right of individual self-determination. In Arab and South Asian thought, by contrast, freedom, hurriya in Arabic or azadi in Persian and Urdu, has typically had political, communitarian connotations. It would be wrong to suggest that Muslims have not hotly debated the concept of freedom over the centuries. In the Sufi tradition, freedom has been compared to 'perfect slavery', which indicates not only that slavery in the Arab world was, in Amitav Ghosh's words, a relatively 'flexible set of hierarchies', but also that the institution was often used as a metaphor for understanding 'the relationship between Allah the "master" and his human "slaves"'.
I don't explain … my fantasies. My involvement in Tamer's wedding to a young suitable girl who knows him less than I do. She will mother children who spend more time with me… I would like to be his family's concubine, like something out of The Arabian Nights, with life-long security and a sense of belonging. But I must settle for freedom in this modern time.
The issue of clashing cultural understandings of liberty highlighted by this passage is particularly pertinent in the light of Abu-Lughod's analysis of the rhetoric of 'freedom' used to justify the War on Terror. With her evocation of Alf Laylah wa Laylah or The Arabian Nights, Najwa indicates that feminism has not usually considered non-Euro-American traditions when defining 'women's lib'. Yet Najwa's wish is itself problematic, especially since she later chooses to perform Hajj rather than marry Tamer. This internal monologue smacks of lugubrious, even masochistic propensities.
Najwa has been brought up in a broadly Western tradition: she comes from an elite family that only pays lip service to Islam. Her early life, while affluent and sheltered, is nonetheless depicted as lacking some essential component. Within conventional limits, Najwa has considerable freedom in her dress, education and sexual relations. Yet she feels uneasy when strange men appraise her body in its revealing clothes, and her only sexual relationship with a Marxist exile in London is sordid and guilt-ridden. After a Leftist coup in Sudan leads to her father's imprisonment and eventual execution, her family is described as 'falling' through space. This image of descent evokes the 'horror' inherent in too much liberty. Of course it also suggests the fall common to both Judeo-Christian and Qur'anic theology, whereby Adam and Eve/Hawwa were banished from the Garden of Paradise to live on earth. Najwa's fall is complete once her brother Omar is imprisoned for drugs and her mother dies. Freed from her caring duties, Najwa supposes that she should feel a sense of emancipation, but instead observes, 'this empty space was called freedom'.
To recapitulate the ideas explored in this article, the War in Afghanistan has led to the privileging of a Western dichotomy of freedom vs. unfreedom. Lila Abu-Lughod interrogates and genders this binary. Hosain anticipates these debates in her 1950s-70s fragment, while in a post-9/11 context Aboulela robustly challenges them. We should not forget, though, that ideas of political freedom are more crucial in the Muslim world now than ever. This is easily perceptible in the Arab Spring (now mournfully becoming known as the Arab Winter). I conclude with Soueif's quoting of a chant against the Egyptian regime: 'They said trouble ran in our blood and how'd we dare demand our rights | Oh dumb regime | understand | what I want: | Liberty! Liberty!'
Monday, March 30, 2015
STEM Education Promotes Critical Thinking and Creativity: A Response to Fareed Zakaria
by Jalees Rehman
All obsessions can be dangerous. When I read the title "Why America's obsession with STEM education is dangerous" of Fareed Zakaria's article in the Washington Post, I assumed that he would call for more balance in education. An exclusive focus on STEM (science, technology, engineering and mathematics) is unhealthy because students miss out on the valuable knowledge that the arts and humanities teach us. I would wholeheartedly agree with such a call for balance because I believe that a comprehensive education makes us better human beings. This is the reason why I encourage discussions about literature and philosophy in my scientific laboratory. To my surprise and dismay, Zakaria did not analyze the respective strengths of liberal arts education and STEM education. Instead, his article is laced with odd clichés and misrepresentations of STEM.
Misrepresentation #1: STEM teaches technical skills instead of critical thinking and creativity
"If Americans are united in any conviction these days, it is that we urgently need to shift the country's education toward the teaching of specific, technical skills. Every month, it seems, we hear about our children's bad test scores in math and science — and about new initiatives from companies, universities or foundations to expand STEM courses (science, technology, engineering and math) and deemphasize the humanities."
"The United States has led the world in economic dynamism, innovation and entrepreneurship thanks to exactly the kind of teaching we are now told to defenestrate. A broad general education helps foster critical thinking and creativity."
Zakaria is correct when he states that a broad education fosters creativity and critical thinking, but his article portrays STEM as being primarily focused on technical skills whereas a liberal arts education focuses on critical thinking and creativity. This view is at odds with the goals of STEM education. As a scientist who mentors Ph.D. students in the life sciences and in engineering, my goal is to help our students become critical and creative thinkers.
Students learn technical skills such as how to culture cells in a dish, insert DNA into cells, use microscopes or quantify protein levels but these technical skills are not the focus of the educational program. Learning a few technical skills is easy but the real goal is for students to learn how to develop innovative scientific hypotheses, be creative in terms of designing experiments that test those hypotheses, learn how to be critical of their own results and use logic to analyze their experiments.
My own teaching and mentoring experience focuses on STEM graduate students, but the STEM programs that I have attended at elementary and middle schools also emphasize teaching basic concepts and critical thinking instead of "technical skills". The United States needs to promote STEM education because of the prevailing science illiteracy in the country, not because it needs to train technically skilled worker bees. Here are some examples of science illiteracy in the US: Forty-two percent of Americans are creationists who believe that God created humans in their present form within the last 10,000 years or so. Fifty-two percent of Americans are unsure whether there is a link between vaccines and autism, and six percent are convinced that vaccines can cause autism, even though there is broad consensus among scientists from all over the world that vaccines do NOT cause autism. And only sixty-one percent are convinced that there is solid evidence for global warming.
A solid STEM education helps citizens apply critical thinking to distinguish quackery from true science, benefiting their own well-being as well as society.
Zakaria's criticism of obsessing about test scores is spot on. The subservience to test scores undermines the educational system because some teachers and school administrators may focus on teaching test-taking instead of critical thinking and creativity. But this applies to the arts and humanities as well as the STEM fields because language skills are also assessed by standardized tests. Just like the STEM fields, the arts and humanities have to find a balance between teaching required technical skills (i.e. grammar, punctuation, test-taking strategies, technical ability to play an instrument) and the more challenging tasks of teaching students how to be critical and creative.
Misrepresentation #2: Japanese aren't creative
Zakaria's views on Japan are laced with racist clichés:
"Asian countries like Japan and South Korea have benefitted enormously from having skilled workforces. But technical chops are just one ingredient needed for innovation and economic success. America overcomes its disadvantage — a less-technically-trained workforce — with other advantages such as creativity, critical thinking and an optimistic outlook. A country like Japan, by contrast, can't do as much with its well-trained workers because it lacks many of the factors that produce continuous innovation."
Some of the most innovative scientific work in my own field of scientific research – stem cell biology – is carried out in Japan. Referring to Japanese as "well-trained workers" does not do justice to the innovation and creativity in the STEM fields and it also conveniently ignores Japanese contributions to the arts and humanities. I doubt that the US movie directors who have re-made Kurosawa movies or the literary critics who each year expect that Haruki Murakami will receive the Nobel Prize in Literature would agree with Zakaria.
Misrepresentation #3: STEM does not value good writing
Writing well, good study habits and clear thinking are important. But Zakaria seems to suggest that these are not necessarily part of a good math and science education:
"No matter how strong your math and science skills are, you still need to know how to learn, think and even write. Jeff Bezos, the founder of Amazon (and the owner of this newspaper), insists that his senior executives write memos, often as long as six printed pages, and begins senior-management meetings with a period of quiet time, sometimes as long as 30 minutes, while everyone reads the "narratives" to themselves and makes notes on them. In an interview with Fortune's Adam Lashinsky, Bezos said: "Full sentences are harder to write. They have verbs. The paragraphs have topic sentences. There is no way to write a six-page, narratively structured memo and not have clear thinking."
Communicating science is an essential part of science. Until scientific work is reviewed by other scientists and published as a paper it is not considered complete. There is a substantial amount of variability in the quality of writing among scientists. Some scientists are great at logically structuring their papers and conveying the core ideas whereas other scientific papers leave the reader in a state of utter confusion. What Jeff Bezos proposes for his employees is already common practice in the STEM world. In preparation for scientific meetings and discussions, scientists structure their ideas into outlines for manuscripts or grant proposals using proper paragraphs and sentences. Well-written scientific manuscripts are highly valued but the overall quality of writing in the STEM fields could be greatly improved. However, the same probably also holds true for people with a liberal arts education. Not every philosopher is a great writer. Decoding the human genome is a breeze when compared to decoding certain postmodern philosophical texts.
Misrepresentation #4: We should study the humanities and arts because Silicon Valley wants us to
In support of his arguments for a stronger liberal arts education, Zakaria primarily quotes Silicon Valley celebrities such as Steve Jobs, Mark Zuckerberg and Jeff Bezos. The article suggests that a liberal arts education will increase entrepreneurship and protect American jobs. Are these the main reasons why we need to reinvigorate liberal arts education? The importance of a general, balanced education makes a lot of sense to me, but is increased job security a convincing argument for pursuing a liberal arts degree? Instead of a handful of anecdotal comments by Silicon Valley prophets, I would prefer to see some actual data that supports Zakaria's assertion. But perhaps I am being too STEMy.
There is a lot of room to improve STEM education. We have to make sure that we strive to focus on the essence of STEM which is critical thinking and creativity. We should also make a stronger effort to integrate arts and humanities into STEM education. In the same vein, it would be good to incorporate more STEM education into liberal arts education in order to combat scientific illiteracy. Instead of invoking "Two Cultures" scenarios and creating straw man arguments, educators of all fields need to collaborate in order to improve the overall quality of education.
Illegibility And Its Anxieties
"I would like to understand things better,
but I don't want to understand them perfectly."
~ Douglas Hofstadter, Metamagical Themas
A few weeks ago I went to an evening of presentations by startups working in the artificial intelligence field. By far the most interesting was a group that for several years had been quietly working on using AI to create a new compression algorithm for video. While this may seem to be a niche application, their work in fact responds to a pressing need. As demand for video streaming, first in high definition and increasingly in formats such as 4K, hopelessly outruns the buildout of new infrastructure, there is a commensurate need for ever-greater ratios of compression of video data. It is the only viable way to keep up with the requirements of video streaming, and companies such as Netflix are willing to pay boatloads of cash for the best technologies. But the presentation also crystallized some interesting and important aspects of AI that go well beyond not just niche applications, but the alarmist predictions of people like Stephen Hawking, Elon Musk and Bill Gates. What are we really creating here?
This startup, bankrolled by a former currency trader who, as founder and CEO, was the one giving the talk, has engaged in a three-step development program. The first step involved feeding their AI – charmingly named Rita – every single video compression algorithm already in use, and having it (her?) cherry-pick the best aspects of each. The ensuing Franken-algorithm has already been tested and confirmed to provide lossless compression at a rate of 75%, which is best in its class. The second step in their program, which is currently in development, charges Rita with taking the results of everything learned in the first step and creating her own algorithm. The expectation is that they will reach up to 90% compression, which is really rather extraordinary.
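To unpack what those percentages mean: a compression rate of 75% says the compressed stream is a quarter of the original size, and "lossless" says the original bytes can be reconstructed exactly. A throwaway Python sketch using the general-purpose zlib codec (nothing to do with Rita's video-specific algorithm) makes both ideas concrete:

```python
import os
import zlib

def compression_rate(data: bytes) -> float:
    """Percent of the original size eliminated by lossless compression."""
    compressed = zlib.compress(data, level=9)
    # Lossless means the round trip returns the exact original bytes.
    assert zlib.decompress(compressed) == data
    return 100.0 * (1 - len(compressed) / len(data))

# Highly redundant input (like neighboring video frames) compresses very well;
# high-entropy input barely compresses at all.
print(compression_rate(b"frame " * 10_000))   # well above 90%
print(compression_rate(os.urandom(60_000)))   # close to 0%
```

The reason a 90% or even 99% rate is conceivable for video is the same reason the redundant string compresses so well: neighboring frames and pixels are highly correlated, and a codec built solely for that structure can exploit it far more aggressively than a general-purpose one.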
So far, so good. The final step of the program – one that is expected to yield a mind-boggling 99% compression ratio – is where things get really interesting. For Rita's creators are now 'entrusting her' (I know, the more you talk about AI, the more hopeless it is to attempt avoiding anthropomorphization) with the task of creating her own programming language that will be solely dedicated to video compression. There was an appreciative gasp in the room when the CEO outlined this brave next step, and during the Q&A I asked him to explain more about what this meant.
The exchange went something like this:
Me: Ok, so I understand the first two steps. People have been using techniques of fitness selection to evolve algorithms in ways that humans could not design or even anticipate. Also, there is no reason why an AI couldn't evolve its own algorithm, given a well-defined outcome and enough inputs. But this last step – the creation of an entirely new, purpose-built language, for one purpose only – is this a language that will then be available to human programmers via some sort of interface?
CEO: No. It will be a black box. We won't know how Rita came to design what she did, or how it works. Just that it does what it needs to do.
Me, stammering: But…but…how do you feel about that?
Random guy in the audience: How does he feel about it? He feels pretty good! After all, he's a shareholder.
At which point the entire room erupted in laughter.
It became quickly apparent that the intent of my question was misconstrued, however, since the discussion then turned to what always seems to be the elephant in the room when it comes to AI research: What are the moral implications of surrendering our agency, of which this seemed to be a prime example? The usual suspects were trotted out – Skynet, the Matrix, HAL9000, blah blah blah. (They could have also included Colossus: The Forbin Project, a 1970 sci-fi thriller along the same lines, whose stills I include here). But my point wasn't about whether or how we ought best welcome our new robotic overlords. Rather, it was about legibility. What happens when we create things that then go ahead and create other things that we don't understand, or even have access to? More to the point, what is lost?
Arguably, this signifies an inversion of what is understood as ‘progress', at least in an epistemological sense. For example, plant and animal breeders have refined and elaborated breeds to bring out desirable traits (drought resistance, hunting skills, cuteness) for hundreds, if not thousands of years, without knowing the underlying genetic principles. The identification of DNA as the enabling epistemological substrate of this program has rapidly accelerated these activities, but this has only added to the general illumination of these previously known processes. Genetically modified organisms fall into this category, even if the eventual consequences do not. What AIs such as Rita are empowered to effect, on the other hand, is a deliberately sponsored obfuscation of these processes of knowing. The implied trajectory is that we are willing to create tools that will help us do more things in the world, but that in the process we strike a somewhat Faustian bargain, pleased to arrive at our destination but forfeiting the knowledge of how we got there.
Now, I want to be clear that I am not at all interested in making a moral argument. Unlike what Hollywood would have us believe, there seems little point in arguing whether AIs will turn out to be good or evil. Such anxieties are more redolent of our narcissistic desire to feel threatened by apocalypses of our own manufacture (eg, nuclear war) than a genuine willingness to think through what it might mean for a machine intelligence to be authentically evil, or good, or – which is much more likely – something in between. And the above exchange with the startup's CEO illustrates the blithe manner in which capital will always perform an end-run around these considerations. "Being a shareholder" is sufficient justification for the illegibility of the final outcome, with the further implication that we should all be so lucky as to be shareholders in such enterprises.
Rather, any moral argument should be understood as a proxy for how alien any given technology may seem to us. Perhaps our tendency to assign it a moral status is more indicative of how unsure we are about the role it may play in society. The operational inscrutability of an AI (and not, I should emphasize, its ‘motivations') make the possible consequences so unpredictable that we may seek to legislate its right to exist, and the easiest means for enabling a legislative act is to locate it on a moral continuum.
The use of the word ‘legislate' is appropriate here, since what we are attempting to do is to, quite literally, make the technology and its action in society legible to us. Linguistically, both words share the same Latin root, legere. And if we cannot make the phenomenon of AI legible, then we may at least quarantine its actions and sphere of influence. In William Gibson's novel Neuromancer, this was the remit of the Turing Registry, which enforced an uneasy peace between AIs, the corporations that run them, and the world at large:
The Turing Registry, named after the father of modern computing, operates out of Geneva. Turing is technically not a megacorp, but instead a registry, and the closest thing to a body of government as far as artificial intelligences are concerned. The Turing Registry exists to keep corporations who use AIs and the AIs themselves in check. Every AI in existence, whether directly connected to the matrix or not, must be registered with Turing to enjoy the full rights of an AI. AIs registered with Turing enjoy Swiss citizenship, though the hardware itself that contains the 'soul' is connected to enough explosives to incapacitate the being. Any AI suspected of attempting to remove this device, escape Turing control, or enhance itself without proper Turing approval is controlled immediately.
Aside from the delicious detail that AIs are Swiss citizens (hey, it's not just corporations that can be people), what Gibson indicates to us is that the battle for legibility, in an epistemological sense, is already lost. Pre-emptively quarantining and, failing that, blowing up miscreant AIs is the best that the inhabitants of Neuromancer can hope for. Of course, the narrative arc of the novel concerns precisely this: the protean manner in which an AI attempts to transcend this restricted state. And Gibson implies that humanity, with its toxic mix of curiosity, greed and anthropomorphizing tendencies, is all too willing to be enlisted in this task.
And yet, to a large extent AI as the container par excellence for these anxieties is just a red herring, for this kind of illegibility is already rampant. Superficially, we seem to require a locus – a concrete something to which we can point and say "That's an AI" – that then becomes the appointed site for these anxieties. In this sense we are content to believe that, when we saw Watson clobbering his fellow contestants on Jeopardy!, the AI was ‘located' behind a lectern, with his hapless human competitors standing side-by-side behind their own lecterns: a level playing field if there ever was one. Our imagination does not accede to the notion that Watson is a large bank of computers located off-stage, in a different state, even, and ministered to by a team of highly trained scientists and engineers.
In fact, AI is not at all needed to fulfill the anxieties of illegibility. It certainly ‘embodies' those anxieties successfully, despite its own distinct lack of embodiment, by playing on the idea that an AI is something that is kind of like us, but isn't us, but perhaps wants to become more like us, until in the end it becomes something decidedly not like us at all, at which point it will already be too late (see: Hollywood). Except the traces of illegibility are already ubiquitous, in the form of algorithms that may not fall under the rubric of AI but certainly instigate a cascade of events that correspond to what we would identify as AI-like consequences.
Consider this 2011 talk by developer and designer Kevin Slavin (you can get the Cliffs Notes version in his TED Talk): the fact that, at the time, about 70% of all stock trading was driven by algorithms buying and selling shares to other algorithms. Sure, computer scientists would tweak things here and there, but the cumulative effect of unassisted trading has led to some extraordinary outcomes. Most dramatically, the Flash Crash of 2010, which saw the Dow Jones Industrial Average plunge about 9% in a matter of minutes and on no news at all, was likely precipitated by a few rogue algorithms. In the absence of substantive regulation, the markets have learned to live with daily flash crashes.
The financial markets do not hold a monopoly on unintended consequences, however. Slavin also gives further examples of Algorithms Gone Wild with a funny anecdote concerning a biology textbook that was listed on Amazon initially at $1.7 million, only to have the price rise, in a few hours, to $23.6 million, which was odd because the book is out of print, and therefore no one was either selling or buying it. To Slavin, these are "algorithms locked in loops with each other", engaging in a form of silent combat. Critical to this point is that, while these developments occur at lightning speeds, the disambiguation, if humans even choose to pursue it, takes much longer. In the case of the Flash Crash, it took the SEC five months to issue its report, which was heavily criticized. To this day, there is no consensus on what actually happened in the markets that day. As for the biology textbook, it lives on merely as an anecdote for TED audiences.
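The feedback loop Slavin describes can be sketched in a few lines. In accounts of the textbook incident, one seller's algorithm slightly undercut its rival while the other priced itself at a fixed premium over the first; the multipliers below are illustrative assumptions in that spirit, not the sellers' actual parameters.

```python
# A toy model of two repricing algorithms "locked in a loop".
# The multipliers are hypothetical: seller A undercuts B a little,
# seller B prices at a premium over A, and neither checks sanity.
def reprice(price_a: float, price_b: float, rounds: int):
    for _ in range(rounds):
        price_a = price_b * 0.9983   # A: just below B's price
        price_b = price_a * 1.2706   # B: a fixed markup over A
    return price_a, price_b

a, b = reprice(30.0, 35.0, rounds=30)
print(f"After 30 repricing rounds: A = ${a:,.2f}, B = ${b:,.2f}")
```

Each round multiplies both prices by roughly 1.27, so the growth is exponential; a month of daily repricing is more than enough to push a used textbook into the millions, with no human buying, selling, or even watching.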
So the consequences of an AI-like world are, in fact, here already. To invite AIs into the party is more or less beside the point. Our world has become so deeply driven by software that our capacity to ‘read' what we have created is already substantially, and, in all likelihood, permanently eroded. That this has happened only gradually and in subtle, nearly invisible ways has made it that much more difficult to realize. In this sense, AI, or at least a certain way of thinking about AI, may provide an interesting counterpoint.
If one goes back to its roots, AI research sought to understand intelligence as it existed in the world already, and take that learning and bring it in silico. That this has so far failed – despite substantial progress in the brain sciences – is uncontroversial and well understood. In parallel, the precipitous decline in the costs of computing, bandwidth and storage has enabled the rise of probabilistic approaches to intelligence, rather than behavioral ones, hence the primacy of the algorithm. As Ali Minai, professor at the University of Cincinnati, writes:
AI, invented by computer scientists, lived long with the conceit that the mind was ‘just computation' – and failed miserably. This was not because the idea was fundamentally erroneous, but because ‘computation' was defined too narrowly. Brilliant people spent lifetimes attempting to write programs and encode rules underlying aspects of intelligence, believing that it was the algorithm that mattered rather than the physics that instantiated it. This turned out to be a mistake. Yes, intelligence is computation, but only in the broad sense that all informative physical interactions are computation - the kind of ‘computation' performed by muscles in the body, cells in the bloodstream, people in societies and bees in a hive.
Minai goes on to equate intelligence with ‘embodied behavior in a specific environment'. What I find promising about this line of inquiry is its modesty, but also its ambition. If we begin from the premise that life has done a pretty fine job in not just evolving behavioral intelligence, but in doing so sustainably, this is a paradigm that leads us to a certain way of looking at not just the kind of work machine intelligence can do, but the place that it also ought to occupy, in relation to all the things that are already in the world. This is simply due to the fact that this kind of intelligence can only exist on the basis of embodiment. In contrast, the bare algorithms running around in financial markets or anywhere else are much more akin to viruses.
I do not know if it is possible to actually create a machine intelligence based on these principles – after all, this is something that has eluded computer and cognitive scientists for decades and continues to do so. But I do believe that such an intelligence will be more legible to us, even if its internal workings remain inscrutable, because our relationship to it will be based on behavior. If Minai's school of thought has merit, this may well be a saving grace. On the other hand, if there is any substantial danger posed by AI, it comes from an utter lack of constraint or connection to the physical world. The issue is whether we as a society will offer ourselves any choice in the matter.
Monday, March 23, 2015
Fatwas and fundamental truths
by Mandy de Waal
A South African literary event called 'The Time of the Writer' was to have been a moment of celebration for local writer Zainub Priya Dala. The author's debut novel, called What About Meera, was due to have been launched at the Durban festival.
Instead Dala was nursing injuries after being attacked at knifepoint with a brick and called [Salman] "Rushdie's Bitch!" The attack – which shocked and outraged SA's literary community – happened one day after Dala had expressed an appreciation of Rushdie's work.
"Dala was followed from the festival hotel and was harassed by three men in a vehicle who pushed her car off the road," a statement by Dala's publishers read. "When she stopped, two of the men advanced to her car, one holding a knife to her throat and the other hitting her in the face with a brick while calling her ‘Rushdie's bitch'. She has been treated by her doctor for soft-tissue trauma, and has reported the incident to the police."
The author – who is also a therapist who counsels autistic children – said through her publishers that she believed the attack stemmed from her voicing support for Rushdie's writing style. Dala was at a school's writing forum and was asked which writers she admired. She offered a list of writers including Arundhati Roy, and said that she "liked Salman Rushdie's literary style." After saying she appreciated Rushdie, a number of teachers and students stood up and walked out in protest. The next day Dala was attacked.
After discovering what happened to Dala, Rushdie Tweeted: "I'm so sorry to hear this. I hope you're recovering well. All good wishes." Dala's response? "Thank you. I have my family and children around me and am recovering."
SA literary site, www.bookslive.co.za stated that "the assault counts as an extension of Rushdie's complicated history with South Africa." BooksLive explained that Rushdie "was famously ‘disinvited' from a literary festival in 1988, after the Ayatollah Khomeini's fatwa was issued against him and his novel, The Satanic Verses."
Rushdie was invited to South Africa 27 years back by a top investigative newspaper to give a public lecture on censorship. He was due to have shared a platform with Booker prize winners Nadine Gordimer and JM Coetzee.
As news of the invitation spread, the paper received threats of violence. The Africa Muslim Agency demanded that the invitation be withdrawn, and The Islamic Missionary Society stated that "there was every likelihood that [Rushdie] would be assaulted." The Islamic society warned that blood would flow. "There are secret Muslim hit squads who have vowed to avenge the honour of the Holy Prophet Muhammed," it stated.
After long, careful and painful negotiation by multiple parties involved in the event, the invitation was withdrawn, an outcome that JM Coetzee condemned. "Islamic fundamentalism in its activist manifestation is bad news. Religious fundamentalism in general is bad news. We know about religious fundamentalism in South Africa. Calvinist fundamentalism has been an unmitigated force of benightedness in our history," Coetzee told a meeting in Cape Town.
"Wherever there is a bleeding sore on the body of the world, the same hard-eyed narrow-minded fanatics are busy, indifferent to life, in love with death. Behind them always come the mullahs, the rabbis, the predikante [ministers], giving their blessings," Coetzee added.
"There is nothing more inimical to writing than the spirit of fundamentalism. Fundamentalism abhors the play of signs, the endlessness of writing. Fundamentalism means nothing more or less than going back to an origin and staying there. It stands for one founding book and thereafter no more books," he said.
"As the various books of the various fundamentalisms, each claiming to be the one true book, fantasise themselves to be signed in fire or engraved in stone, so they aspire to strike dead every rival book, petrifying the sinuous, protean, forward-gliding life of the letters on their pages, turning them into physical objects to be anathematised, things of horror not to be touched, not to be looked upon," said Coetzee.
In the wake of this awful attack on freedom of speech and on a promising young writer, how does one show support for Dala? As anchor, author and journalist Imran Garda eloquently tweeted, we support Dala by buying her book. By championing the "endlessness of writing" - her writing - we add to the roar of writers globally who condemn this heinous act.
Mandy de Waal is a writer and journalist based in Cape Town South Africa. Follow her on Twitter: @mandyLdewaal
Fireflies and Fiery Fatherly Love: An Excerpt from What About Meera by ZP Dala
South Africa: Clash of the Booker titans in The Guardian.
Monday, March 02, 2015
Everything Was Within Reach
"New York isn't your fantasy.
You're the fantasy in New York's imagination."
~ John DeVore, New York Doesn't Love You
There is a time-honored genre of literature that masochistically trucks with the fatalism and rejection of living in, loving and eventually leaving New York City. I know this is a real genre, because the fact that there is an anthology proves it. Writers especially, perhaps due to the ephemerality of their profession, seem to have an axe to grind when it comes to leaving New York. It's not that no other city generates this passion; rather, no other city has fetishized and memorialized this ambivalence to such an extent. To these writers, leaving New York is tantamount to an admission of failure, and they passionately rationalize the ways in which they have not failed. But New York evolves, like any other city, and it is worth asking if the reasons for leaving these days are substantially different from those of previous decades.
Joan Didion's 1967 classic essay "Goodbye To All That" sets the confessional tone that is implied in all of these narratives: "But most particularly I want to explain to you, and in the process perhaps to myself, why I no longer live in New York." Didion's narrative concerns the years required for the imperceptible shading from wide-eyed ingénue to a vaguely numb and indifferent denizen. Her prose is compassionate, and wears the weariness of experience lightly: "It was a very long time indeed before I stopped believing in new faces...Everything that was said to me I seemed to have heard before, and I could no longer listen". In the end, she does not fling New York away in disgust – she accompanies her husband to Los Angeles for a sabbatical away from the city. As a result she leaves New York almost accidentally, like remembering a few days after the fact that you forgot your umbrella in a restaurant, then deciding it wasn't worth the trouble of going back to get it.
Contrast this genteel regretfulness with John DeVore's recent aphoristic punch-up, "New York Doesn't Love You":
New York will kick you in the hole, but it will never stab you in the back. It will, however, stab you multiple times right in your face.
No one "wins" New York. Ha, ha.
You will lose. Everyone loses. The point is losing in the most unexpected, poignant way possible for as long as you can.
Complaining is the only right you have as a New Yorker. Whining is what children do. To complain is to tell the truth. People who refuse to complain, and insist on having a positive outlook, are monsters. Their optimism is a poison. If given the chance they will sell you out.
DeVore lives in a different New York from Didion: he doesn't really elaborate on what success might actually look like, for himself or for anyone else. Your plan, whatever it may be, will go wrong. Fifty years of water flowing underneath the Brooklyn Bridge will do that.
The fact that people ever talk about "making it" in New York – or what I call the Curse of Sinatra – is to confuse means and ends. Success doesn't go any further than not failing, and preferably you are failing less often than you are not failing. After 15 years in the city, most of the people I know who have succeeded (by failing less often than not failing) have, like some ragged tribe of castaways, burrowed themselves into fortunate living circumstances, and know that they can never leave, no matter how gross or expensive their neighborhood has become, because there is a snowball's chance in hell that they will ever get such a good deal anywhere else in town, at least anywhere within a 20-minute walk of a subway station. Forget the street preachers; in New York, real estate is the only form of salvation.
It's a little-known fact that Franz Kafka also wrote his own paean to leaving New York. I know, I know, Kafka hardly ever left Prague, but bear with me, because I propose that what we have here is the urtext of the genre.
By way of introduction, I'll note that we should approach Kafka most cautiously when he beguiles us with an innocuous title. Nowhere is this as effortless as in the posthumous ‘A Little Fable', which I reproduce here in its entirety:
"Alas," said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I am running into."
"You only need to change your direction," said the cat, and ate it up.
That sudden, implacably violent turn in the narrative: Where the hell did the cat come from? Wasn't the mouse's destiny to run into the trap at the convergence of the ever-narrowing walls, which even it saw quite clearly? The final six or so words are the literary equivalent of a punch in the face. But unlike the mouse, we are survivors of this tale, and as such have the luxury to go back and re-read it. At which point we realize our naïveté – from the start, the mouse wasn't in conversation with us, but with the cat. Kafka's compression is so extreme that time folds in on itself. The mouse exists in an eternal state of, if I may invent a tense, always-already-about-to-be eaten.
Of course, in order to keep the story short, the mouse must get eaten, but the trace that lingers, like smoke, is the mouse's incomprehension at its imminent fate. For the expectation of one doom, dogmatic and resigned, is usurped by another, wholly unanticipated one. The mouse may think, ‘Well, here's this cat, he seems a fine fellow and I'll tell him the sorry tale of my life of quiet desperation', whereas Kafka, never one to get in the way of a universe that gladly does the murdering of its own accord, simply allows the cat to get on with being a cat when presented with such an opportunity as a trapped, frightened mouse.
The sharpest irony in this little tale, however, is the cat's message. It is a death sentence masquerading as advice, and presented as if it were the simplest thing to do. As if the mouse could just turn around and walk off into a new direction. I like how Kafka chooses language as the means by which the cat ‘toys' with the mouse. In contrast, the only action is that of being eaten. That part – death – is silent. The cat plays the straight man in the pas de deux of narrator and executioner. The truth is that there is no other direction in which the mouse can go; the fate of the mouse is not just imminent, but, in the form of the cat, it is also immanent.
That cat, my friends, is New York. You think you're all set up to agreeably drink yourself to a gentle death on the Red Hook waterfront and then you get hit by a bus – or a tax audit. Whichever is worse, really. Or as DeVore puts it, "If New York were a cat, it would eat your face after you collapsed in the kitchen from a heart attack." This is the kind of place where it may take years for indifferent betrayal to fully blossom, but when it strikes, the end is swift.
But these days it really doesn't take years. This is the crucial difference between Didion leaving New York in 1967 and her exasperated descendants throwing up their hands in 2015. New York has changed, and why shouldn't it? The salient bit is that it is no longer the heady cocktail of danger and stimulation that drove a certain kind of artist and writer to come here.
In "Here Is New York" E.B. White proposes a rigidly delineated taxonomy describing New Yorkers: there are the natives, the commuters and the arrivals. White asserts that it takes all three constituencies to create New York as it existed in 1949, and this truth holds today. The natives are the city's institutional memory, and its commuters the blood that pumps economic oxygen into and out of Manhattan, giving New York its rhythm. But what can this last group, the arrivals – which is really the instigator of the very idea of the possibility of a romantic notion of New York – what can it hope for today?
He hasn't left yet, but in his own pre-emptive missive, David Byrne writes about what drove him and his peers to settle downtown in the 1970s:
One knew in advance that life in New York would not be easy, but there were cheap rents in cold-water lofts without heat, and the excitement of being here made up for those hardships.
The world of After Hours, Liquid Sky and Downtown 81, let alone the home movies of Nelson Sullivan and Wild Style's director Charlie Ahearn – when going south of 14th Street quite literally meant taking your life into your own hands, when the words Alphabet City actually meant a qualitatively different world from the East Village – this world is no more. On the positive side of this Faustian bargain, we have gained an almost laughably safe city, where you can stumble anywhere in Manhattan and most of Brooklyn and Queens blind drunk because you know an Uber car will show up faster than Lt. Kilgore's napalm airstrike in ‘Apocalypse Now'. On the other side of the ledger, we have a city where the organic emergence of new forms of practice is basically throttled, and the margin for error is nearly zero.
While David Byrne may still be dithering about leaving, others have already done so. The musician and producer (and native New Yorker) Moby penned a similar letter a few months later, and the headline is pretty much all you need to know: "I Left New York For LA Because Creativity Requires The Freedom To Fail". Others have been following suit: in December the venerable Galapagos Art Space announced that, after twenty years in Brooklyn, it is decamping to Detroit. In explaining, Galapagos Director Robert Elmes channels Moby:
What drew us to Detroit is the realization that cities need three ingredients to attract or retain artists: time, space, and other artists. In NYC artists have one foot in a full time career and one foot in what is now a dream to find an affordable studio and to move their sculpture studio out of their kitchen because they have an ultimatum from three of their four roommates.
Who can resist upgrading to 600,000 square feet of space? This is what DeVore is really talking about. You spend your time earning the money to earn the access to space, and your principal activity with other artists is spent leveraging the leftover crumbs into something that might approximate artistic practice. That, and complaining. Which is your right. New York no longer abides the leisurely pace of a seeping alienation, à la Joan Didion. And in the end your plans are more likely to be torpedoed by a crappy credit score before you get fed up at not getting that gallery show that always seemed just within reach.
And yet, and yet. If you take a trip out to Queens, almost to the end of the 7 train, you will find the Queens Museum, and inside the museum there is an absolute gem, known as the Panorama of the City of New York. A scale model of all five boroughs, where 1 inch corresponds to 100 feet, the model has almost a million buildings, almost all of them handcrafted. Robert Moses commissioned the Panorama for that most optimistic of mid-20th century occasions, the 1964 World's Fair. A sinuous walkway meanders around this dizzying display, designed to be a replacement of the original simulated helicopter ride, but still evocative of it. As you gently rise and fall around the Panorama, the nearly 10,000 square feet of shimmering urban tapestry has the most confounding effect.
Once you get past the most natural impulse of immediately finding your apartment building and, if you have a job, your office; once you have located the landmarks such as the Empire State Building, and perhaps audibly gasped to see the Twin Towers still proudly anchoring the southern tip of Manhattan; once you have looked for all the things that are known to you, you can then step back and see exactly how much is unknown to you. For the length of one's tenure in New York is inversely proportional to the willingness one has to explore the city, and every neighborhood that's "worth" revisiting quickly acquires its short list of spots. The rest is the equivalent of "flyover country", if it gets flown over at all.
The Panorama takes this provincialism and merrily dashes it to pieces. After you get over the sheer size of Staten Island, your attention glides over hundreds of blocks of housing and industry. Suddenly you are privy to geographies that wholly escaped your attention. A mysterious canal in the middle of Brooklyn; a smattering of islands off the coast of the Bronx. Wait – the Bronx has a coastline? You scan parts of Queens that you never thought existed. The model has a quiet optimism, a sense that the whole city somehow functions. It is flat – a level playing field. It is democratic. It is meritocratic. It is inviting – enticing, even. What do all those people down there do? It's all so very interesting. More than that, the city, by way of its proxy the model, extends its invitation to you.
You step back from all of this, and even though you know better, you can't stop yourself from thinking: "Goddammit, this town is huge. There's got to be a place for me here, somewhere. I can still make it in New York."
ISIS and Islam: Beyond the Dream
by Omar Ali
A few days ago, Graeme Wood wrote a piece in the Atlantic that has generated a lot of buzz (and controversy). In this article he noted that:
"The reality is that the Islamic State is Islamic. Very Islamic. Yes, it has attracted psychopaths and adventure seekers, drawn largely from the disaffected populations of the Middle East and Europe. But the religion preached by its most ardent followers derives from coherent and even learned interpretations of Islam"
The article is well worth reading and it certainly does not label all Muslims as closet (or open) ISIS supporters, but it does emphasize that many of the actions of ISIS have support in classical Islamic texts (and not just in fringe Kharijite opinion). This has led to accusations of Islamophobia and critics have been quick to respond. A widely cited response in "Think Progress" quotes Graeme Wood's own primary source (Princeton scholar Bernard Haykel) as saying:
“I think that ISIS is a product of very contingent, contextual, historical factors. There is nothing predetermined in Islam that would lead to ISIS.”
Indeed. Who could possibly disagree with that? I don't think Graeme Wood disagrees. In fact, he explicitly says he does not. But that statement is a beginning, not a conclusion. What contingent factors and what historical events are important, and which ones are a complete distraction from the issue at hand?
Every commentator has his or her (implicit, occasionally explicit) "priors" that determine what gets attention and from what angle; and a lot of confusion clearly comes from a failure to explain (or to grasp) the background assumptions of each analyst. I thought I would put together a post that outlines some of my own background assumptions and arguments in as simple a form as possible and see where it leads. So here, in no particular order, are some random comments about Islam, terrorism and ISIS that I hope will, at a minimum, help me put my own thoughts in order. Without further ado:
1. The early history of Islam is, among other things, the history of a remarkably successful imperium. Like any empire, it was created by conquest. The immediate successors of the prophet launched a war of conquest whose extent and rapidity matched that of the Mongols and the Alexandrian Greeks, and whose successful consolidation, long historical life, and development of an Arabized culture, far outshone the achievements of the Mongols or the Manchus (both of whom adopted the existing deeper rooted religions and cultures of their conquered people rather than impose or develop their own).
2. Islam, the religion we know today (the classical Islam of the four Sunni schools, as well as the various Shia sects) developed in the womb of the Arab empire. It provided a unifying ideology and a theological justification for that empire (and in the case of various Shia sects, varying degrees of resistance or revolt against that empire) but, at the very least, Islam and the nascent Arab empire grew and developed together; one was not the later product of the fully formed other. Being, in its classical form, the religion of a (very successful and impressive) imperialist project, it is not surprising that its "official" Sunni version has a military and supremacist feel to it. Classical Islam is not intolerant of all other religions (though it is in principle almost completely intolerant of pagans) but the rules and regulations of the four classical schools all agree on the superior status of Muslims and impose certain restrictions, disabilities and taxes on the followers of the "religions of the book" that they do tolerate. By the standards of contemporary European "Christendom", many of these rules appear tolerant and broad-minded; and since Western intellectuals (leftists as much as, or even more than, rightists) are completely focused on European history and culture (and therefore, on the achievements and deficiencies of that culture), this relative tolerance is frequently remarked upon as a stellar feature of Islamicate civilization. But it should be noted that this degree of tolerance falls short of contemporary Chinese or Indian norms and is horrendously intolerant compared to post-enlightenment ideals and fashions. The imposition of Ottoman rules today would be most unwelcome even to post-Marxist intellectuals if they had to live under those rules.
Of course, this does not mean they cannot speak highly of these norms as long as they themselves are a safe distance away from them, but such long-distance approval is of academic interest (literally, academic) and not our concern for the purposes of this post.
3. Modern states and modern politics (not just all the complex debates about how power should be exercised, who exercises it, who decides who exercises it etc., but also the institutions and mechanisms that evolved to manage modern states and modern politics) mostly reached their current form in Europe. They did not arise from nothing. Many ancient strands grew and intersected to create these states and their political institutions. And there are surely things about this evolution that are contingent and would have been different if they had happened elsewhere. But there are also many features of modern life that are based on new and universally applicable discoveries about human psychology, human biology and human sociology. They have made possible new levels of organization and productivity and in a globalized world (and the Eurasian landmass has had some sort of exchange of ideas for millennia, but this process has accelerated now by orders of magnitude) it is impossible for any large population to ignore these advances and survive unmolested by those willing to take advantage of these advances.
The modern world that has been created is not just one random "civilization" among many. It is the cutting edge of human knowledge and the human ability to apply that knowledge to good and evil ends. Whatever else it may be (and there is no shortage of people who feel it is too oppressive, too unfair, too fast, too anxiety-provoking, too inhuman, etc., etc.) it is an extremely powerful and progressive culture. You can reject it, and countless people (including, it seems, many of the most privileged intellectuals of this very civilization) do reject many aspects of it. But it should also be noted that there are degrees of rejection. Most of the critics (but not all of them) are either critics-from-within, who only reject certain aspects of it, or non-serious critics whose wholesale contempt for the project is not matched by any equivalent personal commitment or serious consideration of alternatives. Most of them also seem unable to do without critical aspects of modernity, aspects you cannot have without having far more of the rest than they seem to care for. To give two random examples, I have never met a multiculturalist liberal or leftist in the West (including those of Desi origin) who is himself or herself willing to live under the restrictive sexual morality and the community-centric balance of community vs individual rights characteristic of "traditional cultures". And I have NEVER met an Islamist who did not want an air force (you can work out for yourself all the other innovations and institutional mechanisms that would be needed in order to have a competitive indigenous air force).
In fact, forget traditional cultures, just look at Maoist China and the Khmer Rouge, both of whom explicitly rejected modern individualism and mere meritocracy and insisted they wanted to be "Red rather than Expert". One ended up honoring the legacy of Liu Bocheng and Deng Xiaoping over Mao, the other ended up on the proverbial "dust heap of history". There is a lesson (or several lessons) in those choices and their spectacular failure.
In short, the only people who can realistically stay outside of "our universal civilization" are either museum communities permitted to survive as quaint exemplars of bygone days (like the Amish) or VERY tiny communities that are so isolated and remote that they have escaped the maw of the Eurasian beast until now. Our universal civilization does not have to be seen as positively as Naipaul famously saw it, but it still has to be seen for what it is: a gigantic human achievement and a work in progress, all criticism and resistance being included within it (dialectics, anyone?).
And it is important to note that this universal civilization is no longer exclusively European (and never was exclusively European for that matter). Soon, this universal civilization may be dominated by non-European people, a fact that Eurocentric PostMarxist intellectuals seem to have very great difficulty assimilating into their worldview. The institutions and ideas that developed in Europe (from earlier sources that came from all over Eurasia) in the last 400 years have been adopted and adapted already by several Asian nations (Japan, South Korea, Taiwan), with China not far behind and India set to follow. Muslims are not special enough to escape that fate. The only thing truly remarkable about the Muslim core region is the widespread desire to integrate huge elements of modern civilization while remaining medieval in terms of theology, law and politics. Of course we are not unique in this desire; there are Indians and Chinese and Japanese who "reject modernity" as being too European, and who insist they have an alternative path. Whether they do or do not is to some extent a matter of semantics, but Muslims are not unique in claiming that "we are a fundamentally different civilization". Where we are unique (for now) is only in our inability to generate a genuinely open debate on this topic; the tendency in the Islamicate core is for almost everyone in the public sphere to pay lip-service to delusional or formulaic and practically meaningless Islamist ideals and to avoid direct criticism of medieval laws and theology. It is, by contrast, routine for Indians to criticize Indian "fundamentalists" and for Christians to criticize Christian ones. And for that we have to thank the blasphemy and apostasy memes more than any intrinsic unchangeability of Islamicate laws and theology.
4. But while Islamicate empires (the dominant form of political organization in the Middle East and South Asia from the advent of Islam to the colonial era) insisted they were "Islamic" and used Islam (especially in the first 500 years) as the central justification for their expansionist ambitions, there was another sense in which these same empires had a near-total separation of mosque and state. All these empires operated as typical Eurasian empires and they were, in most administrative details, a straightforward evolution of previous imperial patterns in that region. Religion was part and parcel of the empires, but religious doctrine provided practically no guidance to the political process. The rulers used religion to justify their rule, but the battle-axe determined who got to rule and how. Some rulers attempted to conduct an inquisition and impose their favorite theology on their subjects, but most were content to get post-facto approval for their rule from the ulama (and the ulama were happy to oblige). Islamic theologians accepted practically ANY ruler as long as the ruler said he was Muslim and continued to work for the expansion of the Islamic empire. ALL four schools of classical Sunni Islam insisted that the ruler should be obeyed and rebellion was unislamic. This did not stop people from rebelling, but once a rebellion succeeded, the ulama advised submission to whatever ambitious and capable prince had managed to kill his way to the top. An imaginary idealized Islamic state was discussed at times but had little to no connection with actual power politics.
5. It must also be kept in mind that Empires governed loosely and interfered little with the everyday religious rituals of the ruled, especially outside the urban core. The rulers were interested in collecting taxes and continuing to rule. Most of the ruled gave as little as possible in taxes and had as little as possible to do with their rulers. This is not a specifically Islamic pattern, but it was practically a universal feature of Islamicate empires. Muslim religious literature developed no serious political thought. Power politics was guided more by “Mirrors of princes” type literature and pre-Muslim (or not-specifically Muslim) traditions and not some detailed notion of “Islamic state”. There is really NO detailed "Islamic" blueprint for running a state. The so-called Islamic system of government is a modern myth. Every Islamicate empire down to the late Ottomans ruled in the name of Islam, but they did so using institutions and methods that were typically West-Asian/Central-Asian in origin, or were invented to solve a particular Islamicate problem, but had no direct or necessary connection with fundamental Islamic texts and traditions.
6. After defeat at the hands of more capable imperialists and during the (relatively brief) colonial interlude, some people dug up the old stories of the rightly guided caliphs. It seems to me that early Islamicate fantasists (like Allama Iqbal in India) took it for granted that the everyday institutional reality of any "Islamic" state would, for the foreseeable future, be much closer to England than it was to Medina (witness for example his approval of the Grand Turkish assembly). Most Muslim leaders, like their Chinese or Japanese counterparts, were first and foremost interested in getting out from under the imperialist thumb. If they gave some thought to the form their states would take, their imagination ranged from Marxist Russian models to very poorly imagined Islamist utopias. But over time, stories frequently repeated can take on a life of their own. Islamist parties want to create powerful, modern Islamic states. But the stories they were using were more Islamic than modern. The result is that every Islamist party is forever in danger of being hijacked by those espousing simple-minded and unrealistic notions of Shariah law. It turns out that pretending to have "our own unique genius" is much easier than actually having any genius that can get the job done. Modern ideas (fascism, the grand theatre of modern media manipulation, modern methods of guerilla war) are used to promote legal codes and theology whose relationship with these new institutions has not been worked out yet (and I see no problem with sticking my neck out and saying it "will NOT be worked out satisfactorily by ANY contemporary Islamist movement").
7. The MODE of failure may vary, but the failure of the Islamist political project in the next 20 years is inevitable. This is not because there can be no such project in principle, but because the project as it has actually developed in the 20th century is based on the twin illusions of an “ideal Islamic state” and an existing alternative “Islamic political science”…neither of which actually existed in history. AFTER this failure, there can certainly be new ways of creating modern, workable institutions that have enough of an Islamic coloring to deserve the label "Islamist" while incorporating all (or most) of the new discoveries in the hard sciences as well as in economics, human psychology, politics, social organization, administrative institutions, mass communication and so on.
8. I do want to emphasize that I do not believe Islamic theology per se is some sort of insoluble problem. It may be a difficult problem, but both liberals who are trying to discover modern fashions in that theology and "Islamophobes" who insist that the theology is a permanently illiberal fascist program are wrong in their emphasis on the centrality of this theology. As Razib put it in an interesting post on this topic on his blog, "Islam is not a religion of the book". NO religion is a religion of the book. People make religions and people remake them as the times demand. Messily and unpredictably in many cases, but still, there is movement. And in this sense, Islam is no more fixed in stone by what is written or not written in its text (or texts) than any other religion.
Someone commented on Razib's blog (and I urge you to read the post and the comments, and the hyperlinks, they are all relevant and make this post clearer) as follows:
"Well, if you take the Old Testament and Koran at face value, the OT is more violent. The interesting question is then why Islam ends up being more violent than Judaism or Christianity, and for that I agree you have to thank subsequent tradition and reinterpretation of the violence in the text. It appears that for whatever reason Islam has carried out less of this kind of reinterpretation, so what was originally a less violent founding text ends up causing more violence because it is being interpreted much more literally."
I replied that there is an easier explanation: Whether the text canonized as "foundational document" does, or does not, explain the imperialism and supremacism of the various Islamicate empires is a red herring. The Quran is a fairly long book, but to an outsider it should be immediately obvious that you can create many different Islams around that book and if you did it all over again, NONE of them have to look like classical Sunni Islam. The details of Sunni Islam (who gets to rule, what daily life is supposed to look like, how non-Muslims should be treated, etc) are not some sort of direct and unambiguous reading of the Quran. While the schools of classical Sunni Islam claim to be based on the Quran and hadith, the Quran and the hadiths are clearly cherry picked and manipulated (and in the case of the hadiths, frequently just invented) based on the perceived needs of the empire, the ulama, the individual commentators, human nature, economics, whatever (insert your favorite element here).
So in principle, we should be able to make new Islams as needed (and some of us have indeed done so over the centuries, the Ismailis being one extreme example; some Sufis being another) and I am sure others will do just that in the days to come. The Reza Aslan types are right about this much (though I seriously doubt that he can invent anything new or lasting; that does not even seem to be his primary aim). In fact, in terms of practice, millions of Muslims have already "invented new Islams". Just as a random example, most contemporary Muslims do not have sex with multiple concubines that they captured in the most recent Jihad expedition to the Balkans (or bought from African slave-traders for that matter). Not only do they not buy and sell slaves, they find the thought of doing so somewhat shocking. Also see how countless Muslims lived very obediently under British laws in the British empire and in fact provided a good part of the armies of that empire. Or see the countless Muslims who take oaths of loyalty to all sorts of "un-Islamic" states and, for the most part, turn out to be as loyal and law-abiding as any of their Hindu or Sikh or Christian fellow citizens in the various hedonistic modern states. Their "Islam" has already adapted itself to new realities.
What sets Muslims apart is really their inability (until now) to publicly and comfortably articulate a philosophical rejection of medieval (aka no longer fashionable) elements of classical Sunni Islam. And for all practical purposes, this is a serious problem only in Muslim majority countries. In other countries that have a strong sense of their own identity and of the necessity of their own laws, Muslims mostly get on with life while following those laws. In the Muslim majority countries, it is the apostasy and blasphemy laws (and the broader memes that uphold those laws) that play a central role in preventing public rejection of unfashionable or unworkable aspects of classical Islam. A King Hussein or a Benazir Bhutto or even a Rouhani may have private thoughts rejecting X or Y inconvenient parts of medieval Islamicate laws and theology, but to speak up would be to invite accusations of blasphemy and apostasy. So they fudge, they hem and haw, and they do one thing while paying lip service to another. Unfortunately, this means the upholders of classical Islam have the edge in debates in the public sphere. And ISIS and the Wahabis are not far enough from mainstream classical Sunni Islam for us to think they are just some demonic eruption from outer space; for example, classical Islamic theology recommends cutting the hands of thieves, stoning adulterers, going on jihad (not just some inner jihad of the Karen Armstrong type, but the real deal), capturing slaves, buying and selling concubines, killing apostates and so on. ISIS of course goes much further in their willingness to kill other Muslims, to rebel against existing rulers and to bypass common humanity and commonly cited restrictions and regulations about prisoners, hostages, punishments and so on. But when they say classical Islam permits the first set of things noted above, they are not lying; the apologists are lying.
By the way, while this inability to frontally confront aspects of classical Islam that are out of sync with the current age is a serious problem in Muslim communities, it is not insoluble. The internet has made it very hard to keep inconvenient thoughts out of view. So even in Muslim majority countries, there will be much churning and eventually, much change. It's just that some countries will emerge out of it better than others.
ISIS itself will not get anywhere. Of course, in principle, an evolved ISIS living on in the core Sunni region is possible. But we make predictions based on whatever models we have in our head. Like most predictions in social science and history, these will not be mathematical and precise and our confidence in them (or our ability to convince others, even when others accept most of our premises) will not be akin to the predictions of mathematics or physics. But for whatever it's worth, I don't think ISIS will settle into some semi-comfortable equilibrium (irrespective of whether more capable powers like Israel or Turkey or even the CIA are supporting them or not). They will only destroy and create chaos. And eventually they will be destroyed. It is possible that in the process parts of Syria, Iraq and North Africa could become like Somalia: too messy, too violent and too poor to be worth the effort of pacification, even by intact nearby states. But even if a Somalia-like situation continues for years, it will not go on forever. The real estate involved is too valuable, and the communities involved are too integrated into the modern world, to be left alone. Eventually someone will bring order to those parts. Though it is likely that this "someone" will be local and will use more force and cruder methods than liberal modern intellectuals are comfortable with. The first stage of pacification is more likely to be handled by local agents of distant imperialists, not directly by the imperialists themselves. That is just the way it is likely to work best.
Of course, success and failure are always relative to something. If the zeitgeist (whatever that means) is no longer in favor of something then a "successful" policy would be one that achieves a soft landing. Since the zeitgeist is (almost by definition) unknowable in full in real time, even the soft landing is not going to land where the first planners of soft landing imagined it as being headed. Being able to land softly, wherever that may be, is the best outcome we can hope for in many cases. With that cheery note, here are some other useful links (many extracted from an extremely learned discussion on smallwarsjournal) that shed light on some aspects of the above, raise opposing ideas, or help to understand where I am coming from.
Our religion problem by Babar Sattar in DAWN Pakistan.
Reforming the blasphemy laws, in many ways, an enlightened "Islam-based" initiative.
Razib Khan on "The Islamic State is right about some things".
From Zenpundit Charles Cameron on Misquoting Mohammed
"Brown is a Muslim, a professor at Georgetown, and author of Hadith: Muhammad's Legacy in the Medieval and Modern World. His book Misquoting Muhammad — not his choice of title, btw — lays open the varieties of interpretive possibility in dealing with the Qur'an and ahadith with comprehensive scholarship and clarity. In light of the upsurge in interest in Islamic and Islamist religious teachings occasioned by Graeme Wood's recent Atlantic article, I asked Prof. Brown's permission to reproduce here the section of his book dealing with abrogation and the rules of war.
Here then, with his permission, is an extract from Misquoting Muhammad. I hope it will prove of use both here and to others beyond the circle of Zenpundit readers. Spread the word!"
From a conservative Western perspective: The fantasy of an Islamic reformation.
"Q 2:256, "There is no compulsion in religion . . ." (lā ikrāha fī l-dīni) has become the locus classicus for discussions of religious tolerance in Islam. Surprisingly enough, according to the "circumstances of revelation" (asbāb al-nuzūl) literature (see occasions of revelation), it was revealed in connection with the expulsion of the Jewish tribe of Banū l-Nadīr from Medina in 4/625. In the earliest works of exegesis (see exegesis of the Quran: classical and medieval), the verse is understood as an injunction (amr) to refrain from the forcible imposition of Islam, though there is no unanimity of opinion regarding the precise group of infidels to which the injunction had initially applied. Commentators who maintain that the verse was originally meant as applicable to all people consider it as abrogated (mansūkh) by Q 9:5, Q 9:29, or Q 9:73 (see abrogation). Viewing it in this way is necessary in order to avoid the glaring contradiction between the idea of tolerance and the policies of early Islam which did not allow the existence of polytheism — or any other religion — in a major part of the Arabian peninsula. Those who think that the verse was intended, from the very beginning, only for the People of the Book, need not consider it as abrogated: though Islam did not allow the existence of any religion other than Islam in most of the peninsula, the purpose of the jihād (q.v.) against the People of the Book, according to Q 9:29, is their submission and humiliation rather than their forcible conversion to Islam.[...]"
"Both verses that are said to have abrogated Quran 2:256 speak about jihad. It can be inferred from this that the commentators who consider Quran 2:256 as abrogated perceive jihad as contradicting the idea of religious freedom. While it is true that religious differences are mentioned in both Quran 9:29 and 9:73 as the reason because of which the Muslims were commanded to wage war, none of them envisages the forcible conversion of the vanquished enemy. Quran 9:29 defines the purpose of the war as the imposition of the jizya on the People of the Book and their humiliation, while Quran 9:73 speaks only about the punishment awaiting the infidels and the hypocrites in the hereafter, and leaves the earthly purpose of the war undefined. Jihad and religious freedom are not mutually exclusive by necessity; religious freedom could be granted to the non-Muslims after their defeat, and commentators who maintain that Quran 2:256 was not abrogated freely avail themselves of this exegetical possibility with regard to the Jews, the Christians and the Zoroastrians. However, the commentators who belong to the other exegetical trend do not find it advisable to think along these lines, and find it necessary to insist on the abrogation of Quran 2:256 in order to resolve the seeming contradiction between this verse and the numerous verses enjoining jihad. (p. 102-3) Despite the apparent meaning of Q 2:256, Islamic law allowed coercion of certain groups into Islam. Numerous traditionists and jurisprudents (fuqahā') allow coercing female polytheists and Zoroastrians (see magians) who fall into captivity to become Muslims — otherwise sexual relations with them would not be permissible (cf. Q 2:221; see sex and sexuality; marriage and divorce).
Similarly, forcible conversion of non-Muslim children was also allowed by numerous jurists in certain circumstances, especially if the children were taken captive (see captives) or found without their parents or if one of their parents embraced Islam. It was also the common practice to insist on the conversion of the Manichaeans, who were never awarded the status of ahl al-dhimma. Another group against whom religious coercion may be practiced are apostates from Islam (see apostasy). As a rule, classical Muslim law demands that apostates be asked to repent and be put to death if they refuse."
The pact of Umar
"In the name of Allah, the merciful Benefactor! This is the assurance granted to the inhabitants of Aelia by the servant of God, 'Umar, the commander of the Believers. He grants them safety for their persons, their goods, churches, crosses - be they in good or bad condition - and their worship in general. Their churches shall neither be turned over to dwellings nor pulled down; they and their dependents shall not be put to any prejudice and thus shall it fare with their crosses and goods. No constraint shall be imposed upon them in matters of religion and no one among them shall be harmed. No Jew shall be authorised to live in Aelia with them. The inhabitants of Aelia must pay the gizya in the same way as the inhabitants of other towns. It is for them to expel from their cities Roums (Byzantians) and outlaws. Those of the latter who leave shall be granted safe conduct... Those who would stay shall be authorised to, on condition that they pay the same gizya as the inhabitants of Aelia. Those of the inhabitants of Aelia who wish to leave with the Roums, to carry away their goods, abandon their churches and Crosses, shall likewise have their own safe conduct, for themselves and for their Crosses. Rural dwellers (ahl 'I-ard) who were already in the town before the murder of such a one, may stay and pay the gizya by the same title as the people of Aelia, or if they prefer they may leave with the Roums or return to their families. Nothing shall be exacted of them.
Witnesses: Khaled b. Al-Walid, 'Amr b. Al-'As, 'Abd ar-Rahman b. 'Awf, Muawiya b. Abi Sufyan, who wrote these words, here, in the year 15 (33).
Winston King states in the Encyclopaedia of Religion, 2nd Ed., Vol. 11
"Many practical and conceptual difficulties arise when one attempts to apply such a dichotomous pattern [sacred/profane] across the board to all cultures. In primitive societies, for instance, what the West calls religious is such an integral part of the total ongoing way of life that it is never experienced or thought of as something separable or narrowly distinguishable from the rest of the pattern. Or if the dichotomy is applied to that multifaceted entity called Hinduism, it seems that almost everything can be and is given a religious significance by some sect. Indeed, in a real sense everything that is, is divine; existence per se appears to be sacred. It is only that the ultimately real manifests itself in a multitude of ways—in the set-apart and the ordinary, in god and so-called devil, in saint and sinner. The real is apprehended at many levels in accordance with the individual's capacity." p. 7692
Paul Radin, in Primitive Religion: Its Nature and Origin, on early societies: "Where there is little trace of a centralized authority, there we encounter no true priests, and religious phenomena remain essentially unanalysed and unorganized. Magic and simple coercive rites rule supreme." p. 21
Carl Schmitt in Political Theology,
"All significant concepts of the modern theory of the state are secularised theological concepts" (p. 36)
or again in The Concept of the Political that
"The juridic [sic] formulas of the omnipotence of the state are, in fact, only superficial secularisations of theological formulas of the omnipotence of God" (p. 42).
Monday, February 23, 2015
The Love Of Money
by Mandy de Waal
"I never realised that I had a problem until quite recently. Before this I thought it was normal. I thought that everyone thinks (about money) the way I do," says Charles Hugo (not his real name) on the phone from an upmarket seaside resort on South Africa's Cape coast.
"It doesn't matter how much money I earn, I always feel I need more." As Hugo describes his relationship with money, his speech is carefully measured. The forty-something-year-old former banker-cum-currency trader pauses for a while during our conversation, and then adds: "It was only recently I realised I have a problem."
For as long as Hugo can remember money has featured as a complex protagonist in his life. The dominant force in his decision-making, money leads him to measure everything in terms of what it will cost him and whether the value he'll get from the transaction will be worthwhile. It doesn't matter if the transaction is an emergency trip in an ambulance or going into a restaurant for a sirloin.
"Every time a decision needs to be made, the first thing I think about is the financial impact. It doesn't matter what it is. I will always find a money angle to each and every decision," he says. "If someone has a problem I won't think about the person or the emotion." For Hugo cash is cognitive king.
"I used to think everyone was like this. That money came first in everyone's lives. It's only during the past couple of years that I've realised this is not the case." Today Hugo – who doesn't want his identity to be revealed publicly – is in his early forties. Hugo talks about having a problem and about being obsessed with money. A couple of times the word ‘addiction' enters the conversation. "I have an addiction to money," he says, adding that his ‘obsession' with money causes problems in his interpersonal relationships because he thinks very differently from those he cares about.
MONEY - THE EARLY YEARS
To understand how Hugo's relationship with money evolved, I ask him about his early memories – about the events that shaped his formative years. "I didn't ask for things often because I knew the answer would always be about money," says Hugo, who was told by his father that money was something one had to work very hard for. Hugo internalised the idea that extreme effort and difficulty were associated with financial reward.
"When I was about eight years old and in standard one I went through a period at school where I always had a pain in my stomach. The teacher would get sick of me and send me to sick bay, and then my parents would be called and I would be sent home. I didn't realise it then, but thinking about this now I understand why this happened. I guess I thought that if I wasn't at school my dad wouldn't have to pay for me to be there. At that time I had a strong sense of wasting my dad's money and of definite guilt. I didn't fully understand it then, but if I think about this now, those same guilt feelings arise. To be honest, if I spend money on something now, I still feel guilty about it," Hugo says.
As Hugo's school career progressed he found he thought about money often. "It was constant. It was a worry," he says, adding that the thoughts mostly related to how he was going to earn money or get by once he left school. "Whatever I was busy doing at the time… well, I wouldn't think about what I was doing, but rather about money."
When it comes to psychological disorders that are related to money, what's evident is that—gambling aside—there are no easy definitions or neat borders for containment. Money is an indispensable part of our daily lives – as integral as sex and food. Most people wake up in the morning and go to work in order to make money, and this is never thought of as pathological. Far from it – it is an activity that's characterised as very healthy. It is a responsible citizenry that gets up and keeps the cogs of the consumerist machine moving. More so, society lauds those who rise up through the capitalist ranks to become captains of industry or breakout entrepreneurs.
SHUFFLING BIG MONEY
Hugo describes a time in his late twenties, when he shuffled funds around for a financial institution and was earning some R300,000.00 a month. "I was working in a bank and there were retrenchments. I was put into an admin role where I was dealing with money," he says, explaining that the designation he found himself in wasn't supposed to be a money-making position.
"I turned this into a massive money-making division for the business. All I was doing was moving money around. I started this admin function with some R100 million, but when I was done I was dealing with R20 billion," Hugo says, adding: "This put me in my element. It was like a dream come true. Every day I could get up and move money around. I never realised it at the time. I didn't know it was what I could do or how to do it. But I just fitted into this role perfectly. The longer I did this the better I became at doing it. My whole focus was on the money – moving the money around and making more money."
When the bank realised what a boon Hugo was, he was given financial rewards, which only served to intensify his drive to make more money. "The bonuses just spurred me on. At that time I had calculations going in my head non-stop. All I thought about every day was how much I would make and what it would take to make this grow," he says.
A defining moment for Hugo at the time was going on leave, and spending his entire vacation consumed with the thought about how to make more money. Being away from the day-to-day minutiae enabled Hugo to review how he was working for the bank. "I looked at the bigger picture," Hugo says, declaring that in the month after he returned to work he'd made more in that month than he'd made the whole year. "It was non-stop thinking about how to make more and more," he confesses.
THERE'S NO PATHOLOGY
Trying to deconstruct what presents as an obsession with lucre is something of a challenge because an addiction to money is not a pathology that is officially recognised by the Diagnostic and Statistical Manual of Mental Disorders (DSM). Published by the American Psychiatric Association, the DSM codifies mental conditions and is a diagnostic standard used globally by mental health professionals. The only money-related addictive disorder recognised by the DSM is gambling disorder, which is defined as a process disorder – an addiction to an activity (like sex, for instance, or internet gaming).
"We have a situation where the leading diagnostic manual isn't prepared to commit to a behavioural addiction as something that they are willing to codify," a psychiatrist who used to practice in London, and who asks for his name to be withheld, tells me. "If this is not even codified as a disorder, where do we start decreeing that something is beyond norms, or even pathological? Do we make that judgement from our own value-set?" he asks, and then answers his own question: "For many people this behaviour might sit well within their own set of values," the psychiatrist explains.
The psychiatrist continues: "One of the requirements for codifying a disorder as pathological is that it must have negative consequences for a person's physical, mental, social or financial well-being. In other words, there must be some form of tangible destruction going on, in one or more of these key areas. In fact most clinicians would be reluctant to commit something as pathological if no damage has been done."
We live in a society where amassing wealth is simultaneously revered and reviled. Greed was classified a vice as far back as the 4th century when Christian monk Evagrius Ponticus penned a list of what he called 'evil thoughts' in Greek. This list became the 'seven deadly sins' two centuries later when it was revised as such by Pope Gregory I, based no doubt on Matthew 6:24: "No-one can serve two masters… You cannot serve both God and mammon" (or "God and riches").
THE RELIGION OF GREED
Fast forward to the 21st century and you'll discover a time when greed had all but become a religion. I'm talking about the excessive eighties, that period personified by Gordon Gekko – the protagonist in Oliver Stone's 'Wall Street'. Gekko sums up the spirit of this capitalist period without a conscience: "Greed, for lack of a better word, is good. Greed is right. Greed works." A ruthless corporate raider, Gekko tells a packed annual shareholders' meeting in a seminal scene from the film: "Greed clarifies, cuts through, and captures, the essence of the evolutionary spirit. Greed, in all of its forms; greed for life, for money, for love, knowledge, has marked the upward surge of mankind."
Gekko epitomises the capitalist ideology of the latter half of the twentieth century, a time when America's economic growth was on the ascendancy and materialism was rampant.
In 1983, sociologist Philip Slater saw what was happening in the States, and called for caution by labelling money "America's most powerful drug." In his book "Wealth Addiction" he examined consumerist American society. Slater described what he saw like this: "Our economy is based on spending billions to persuade people that happiness is buying things, and then insisting that the only way to have a viable economy is to make things for people to buy so they'll have jobs and get enough money to buy things." Thirty years on, it's interesting to see that status is no longer as important as it once was to Americans.
SUCCESS = MONEY?
An Ipsos MORI Global Trends Survey of more than 16,000 people across 20 countries showed that US respondents largely no longer measure success by what they own. Attitudes in Hugo's home country are quite different: South Africans are fairly materialistic and are much more likely than the global average to feel under pressure to make money or be successful.
The Ipsos data revealed that 33% of South Africans surveyed say they measure their success by the things they own in contrast to 21% of Americans. This compares with 71% of respondents in China, 58% in India and 16% in Britain. The research also shows that 66% of South Africans feel enormous peer pressure to succeed. For people surveyed in the US this figure was 46%.
In South Africa, Hugo struggles to work with his obsession with money. "I am currently trading on the financial markets in my personal capacity, and it is a huge challenge to get my emotions out of the way when it comes to making a decision about entering and exiting… about taking a trade or not taking a trade. Often my emotions start overtaking the rational reasons why I am doing this," he says.
Hugo describes how he often needs to wrestle with himself internally to ensure that his decision-making isn't hijacked by his emotions. "Managing my emotions so that they don't impinge on what I am doing takes huge effort. This would be an ideal vocation if I could take money out of the equation, but what I do now to make money is directly related to money. But now I try to manage this in a different way," he says.
Hugo isn't going for professional counselling but spends time speaking to people, and works on trying to be mindful and conscious of his thoughts, thought processes, decisions and actions. "Typically I try to take a step back. To do some breathing exercises for three to five minutes. I try to be mindful of the present moment in the hope that I can walk away from the situation at hand with a new light, or a new insight or perspective," he says.
PENNIES AND PRINCIPLES
The moral of this story? Understanding our psychology, and the role that money plays in it, requires an appreciation of complexity. On an individual level, what we think of as dysfunction may not be. What we think of as sick could be the projection of our own value system flexed in judgement of another.
On a macro or systemic level Hugo's advice makes sense. Isn't it time we stepped away from the means we use to measure success in order to re-examine how useful this is to our lives and to society? Don't we need to become more conscious about our relationship with money in order to really understand how our ties to financial transactions hinder, harm or help us?
Monday, January 05, 2015
He's So Ronery
"Data made flesh in the mazes of the black market."
~ William Gibson, Neuromancer
Sometime last September, to add to what was already a fairly stressful month, I received a text message from my bank inquiring about some charges that had been made to my credit card. Once I got on the phone with a representative, I was asked if I had spent a few thousand dollars the previous evening at a nightclub in Sofia, Bulgaria. I told them that I hadn't, and that I was furthermore upset that I hadn't even been invited. Two large dropped in a dump like Sofia – it must have been quite the party. The bank made me whole again, but I was left to wonder, like so many other people these days, about the inscrutable question of how my card had been procured and deployed with all the instantaneity allowed by today's global flow of money and data – concepts that are becoming increasingly interchangeable or even undifferentiated. In all likelihood, neither I nor the bank will ever know what happened, and the event was written off simply as a cost of doing business.
This event reproduced itself more recently on a much larger scale. What has become known as the "Sony Hack" is continuing to reverberate across several worlds: computer security, entertainment and even foreign policy, to name a few. Much of the conversation seems to be concerned with the whodunit aspect of things: Who could possibly have had the skills and chutzpah required not only to spirit away approximately 100 terabytes of information of every stripe from underneath the multinational's nose, but also to wipe much of the data from the network itself? Even though the breach was noticed on November 24th, it's a good bet that Sony itself still hasn't assessed the full extent of the damage. While things are nowhere near shaking out, let's consider some of the consequences that have so far followed the smashing of this particular piñata.
Fast forward about, umm, fifteen minutes after November 24th, and we already had our culprit, which could be no one other than North Korea (I guess Iran got a bye because we need them right now in order to fight Islamic State). I find it challenging to believe North Korea was involved. Eleven years ago, Kim père didn't seem quite so fazed the last time a Hollywood satire "took him out" – is it possible that Kim fils is such a thin-skinned grasshopper?
Seriously, though, a good reason to be wary of the whodunit parlor game is the sheer paucity of real information. As with Edward Snowden's NSA leaks, we only know what has been released so far, the odd communications of the hackers responsible, and, to a much lesser degree, what has been divulged by those directly affected (for a fairly disinterested view, check out Bruce Schneier's postings, especially here and here; the mark of a true authority is the ability to remain undecided). Without a doubt, it's been a feast for anyone interested in anything that Sony Pictures produces, or the position that it generally occupies in our culture. For one thing, the leaks have provided a delightful opportunity for tut-tutting the casual racism, sexism, ageism and general backstabbing that still seems to constitute the lingua franca of the entertainment industry – and probably many other industries, were their kimonos to be opened as well. And however the hack was conducted, corporate infosec has yet again been revealed as the emperor with no clothes. Given the breaches we have experienced in the past few years (for example, 70 million credit cards stolen from Target almost precisely a year earlier), this comes as no real surprise, either.
What's more interesting are the consequences for US and North Korean gameplay. This event has provided exactly the right fuel for the brinksmanship that both sides have excelled at for decades. Even if the DPRK had little or no hand in the hack, the US gets to tighten the screws with additional sanctions, this time attempting to target the country's (admittedly very real) cyberwarfare capabilities. For its part, the North Korean propaganda machine will scale fresh heights of shrillness and maybe fire another missile or two into the sea, giving it a higher ledge from which the international community will eventually have to talk it down with concessions. Kim Jong-Un now has even more and better reasons to consolidate power. Also, the DPRK's offer of a joint investigation into the actual culprits, which the US was bound to turn down, was pretty clever. Everyone gets to pull a few treats from the piñata once it's been cracked. It's easy to imagine Kim Jong-Un popping up a fresh batch of popcorn in his underground lair and kicking back to the movie that's now unfolding.
Which brings us to the elephant in the room, also known as "The Interview". We, or at least some of us, have been put in the awfully strange position of striking a blow for freedom by watching a Seth Rogen movie. As is well known, the Guardians of Peace (the group taking responsibility for the hack, not to be confused with the Burundian militia of the same name, although that would set a new bar for globalization) made enough threats that the film was initially pulled from theaters. The ensuing "free speech" backlash saw criticism from President Obama all the way to feel-good author and astute businessman Paulo Coelho, who bizarrely offered to buy the distribution rights for $100,000. The film was subsequently set up for online distribution, then gingerly released through a few independents and small chains. This led to the next unanticipated consequence: we suddenly had a real-world case study for digital distribution of first-run films.
As Paul Tassi correctly noted, this was far from a perfect case, since the release was, to put it mildly, chaotic. Nevertheless, marketers will be reading these tea leaves carefully. 2014 ended with box office receipts down 5.3% from the previous year, and studios will be redoubling their efforts to make sense of the continuing fragmentation of the distribution and payment landscape. If "The Interview" is the canary in the coal mine, the outlook isn't good. Budgeted at $44m, as of Tassi's December 29th article it had taken in only $15m in online revenue, and by January 4th it had taken in almost $5m in physical box office sales.
Given that the film had the sort of PR any flack would give a right arm for, why such a poor showing? Let's not forget that while some of us outsmarted the terrorists by streaming the film in our homes, others perhaps took the whole striking-a-blow-for-freedom concept a bit too far, since almost as many people illegally downloaded the film. Had the film gone into wide release on Christmas Day, as was originally intended, Tassi quotes sources that believe it would have made its entire budget back in the first weekend. A $7 streaming rental – even less, if split among a roomful of friends – is not going to do a declining industry any favors. The model is clearly in need of further tweaking.
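For a rough sense of the gap between these figures, here is a back-of-the-envelope sketch in Python. The budget and revenue numbers are the ones from Tassi's reporting quoted above; the assumption that every online dollar came from a $7 rental is mine, and only yields an upper bound on the number of transactions.

```python
# Back-of-the-envelope check of "The Interview" figures quoted above.
BUDGET = 44_000_000          # reported production budget
ONLINE_REVENUE = 15_000_000  # online revenue as of December 29th
BOX_OFFICE = 5_000_000       # physical box office as of January 4th
RENTAL_PRICE = 7             # streaming rental price

total = ONLINE_REVENUE + BOX_OFFICE
shortfall = BUDGET - total
# If all online revenue came from $7 rentals (an assumption, and an upper
# bound -- $15 purchases would mean fewer transactions):
max_rentals = ONLINE_REVENUE // RENTAL_PRICE

print(f"Recouped so far: ${total:,} of ${BUDGET:,} (shortfall ${shortfall:,})")
print(f"At most ~{max_rentals:,} rental transactions behind the online figure")
```

Even under the most generous reading, the film had recouped less than half its budget, which is the "poor showing" the text describes.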
So who should we be listening to as we attempt to disentangle the mess that is the Sony hack? To me, one of the main assumptions that requires unpacking is the idea that there must be a single group behind this, motivated by a single purpose. There is an astonishing menagerie of actors within hacking culture who opportunistically form temporary, anonymous groups for the achievement of some more-or-less identifiable goal. Even Anonymous – perhaps the best-known of these – could not resist getting a piece of the action, as per the below message posted on PasteBin on December 19th:
We know that Mr. Paulo Coelho has offered Sony Entertainment a sum of $100,000 for the rights of the movie; where he shall then be able to upload the movie onto BitTorrent. Obviously, you shall not be responding to his generous offer - so please respond to ours with a public conference, we wish to offer you a deal... Release "The Interview" as planned, or we shall carry out as many hacks as we are capable of to both Sony Entertainment, and yourself. Obviously, this document was only created by a group of 25-30 Anons, but there are more of us on the internet than you can possibly imagine.
What's a poor CEO to do? One group of hackers breaks the piñata open while another demands that you go about your business like an honorable corporation. In an age where we are way past the idea of accountability, there really isn't pleasing everyone, or anyone, any longer. (A further irony is that PasteBin was one of the anonymous sites where the Guardians originally dumped the contents of C-suite mailboxes, payroll lists and other goodies. There is no technology whose blade cuts only one way.)
We have to begin from a different point of view – that of the forces arrayed against the information systems of any organization. These systems are constantly being prodded and jerked around from the outside by anyone with an internet connection and the ability to fill in a website name. And because you have to trust your employees somewhat, these same systems are always already compromised from the inside. A group on the outside may have the expertise but only idle malice in mind, while a disgruntled insider might have the motivation, but lack the tools to do truly widespread damage. Even if the two manage to find one another, the coherence of the act is still disputable. In a very real sense, it is only the act of observing the event that allows for this probabilistic wave function of motivation to collapse into a stable agenda. Given the current lack of information, it is easy to forget that we are just reflecting back to ourselves the narratives that we have already accepted, e.g.: North Korea is bad; hackers are terrorists; employees cannot be trusted. Whichever one you believe in the most is your explanation of the Sony hack.
I came to this conclusion after reading some analyses performed by infosec firms. Since their bread and butter is protecting corporations like Sony from just these sorts of situations, they have rushed in to make sense of the situation. With the FBI tight-lipped about what they know, these players are one of the only sources of – if not accurate then at least interesting – third-party information concerning the hack. And since their business depends on their credibility, they are perhaps the least incentivized toward sensationalism.
Curiously, I cannot find a single infosec firm that pegs North Korea, certainly not directly. These firms' knowledge of hacking tools and culture makes it clear that malware, techniques and virtual points of reference like IP addresses are often and easily traded, imitated or faked. This of course does not completely discount the idea of DPRK involvement, but it makes proving it much more difficult. Hence the argument for an opportunistic alliance. One of them, Norse, has been developing the disgruntled-insider theory:
At the center of Norse's findings is Lena, a woman who had worked for Sony for 10 years in a senior technical position until she was laid off in May during a corporate restructuring. "Lena had the technical knowledge to facilitate the type of attack Sony had, which is why… she remains a person of interest," Norse's Stammberger says. "There are other individuals as well. There's a pretty short list of specific individuals, and we know their names, addresses, and nationalities. They seem to have some connection to this incident."
If accurate, "Lena" might be the closest thing to a smoking gun that anyone will be able to find. Norse briefed the FBI for three hours last week on their findings, but the agency remained mum on what they know. Nevertheless, it is worthwhile to look at the agency's exact words: "The FBI has concluded the government of North Korea is responsible for the theft and destruction of data on the network of Sony Pictures Entertainment." Crucially, this does not mean that they participated in the hacking of the network, from the inside or the outside. In fact, if you were to go to PasteBin and download some Sony executive's emails and then delete them, you could be accused of exactly the same thing.
Could it be that the entire foreign policy kerfuffle is based on an ill-considered or, worse, opportunistic reading of what the FBI said? Or is the agency providing the White House with a face-saving out if it is revealed that the DPRK was hardly involved? These are difficult questions that may never be wholly resolved. But in the meantime, no matter who swung the bat, there's plenty of candy for all the kids, so why ruin a good thing while you've got it?
As for that nightclub in Sofia where my credit card got taken for a wild ride, I did a little extra research. I found out from friends of friends that it's a small place that, more likely than not, is used as a money-laundering front. It turns out that the party I imagined – sleazy Eastern European gangsters in track suits, snorting coke off of strippers' fake boobs – never happened. How disappointingly appropriate.
Monday, December 08, 2014
Heat not Wet: Climate Change Effects on Human Migration in Rural Pakistan
by Jalees Rehman
In the summer of 2010, over 20 million people were affected by floods in Pakistan. Millions lost access to shelter and clean water, and became dependent on aid in the form of food, drinking water, tents, clothes and medical supplies in order to survive this humanitarian disaster. It is estimated that at least $1.5 billion to $2 billion in aid was provided by governments, NGOs, charity organizations and private individuals from all around the world, which helped contain the devastating impact on the people of Pakistan. These floods crippled a flailing country that continues to grapple with problems of widespread corruption, illiteracy and poverty.
The 2011 World Disaster Report (PDF) states:
In the summer of 2010, giant floods devastated parts of Pakistan, affecting more than 20 million people. The flooding started on 22 July in the province of Balochistan, next reaching Khyber Pakhtunkhwa and then flowing down to Punjab, the Pakistan 'breadbasket'. The floods eventually reached Sindh, where planned evacuations by the government of Pakistan saved millions of people.
However, severe damage to habitat and infrastructure could not be avoided and, by 14 August, the World Bank estimated that crops worth US$ 1 billion had been destroyed, threatening to halve the country's growth (Batty and Shah, 2010). The floods submerged some 7 million hectares (17 million acres) of Pakistan's most fertile croplands – in a country where farming is key to the economy. The waters also killed more than 200,000 head of livestock and swept away large quantities of stored commodities that usually fed millions of people throughout the year.
The 2010 floods were among the worst that Pakistan has experienced in recent decades. Sadly, the country is prone to recurrent flooding which means that in any given year, Pakistani farmers hope and pray that the floods will not be as bad as those in 2010. It would be natural to assume that recurring flood disasters force Pakistani farmers to give up farming and migrate to the cities in order to make ends meet. But a recent study published in the journal Nature Climate Change by Valerie Mueller at the International Food Policy Research Institute has identified the actual driver of migration among rural Pakistanis: Heat.
Mueller and colleagues analyzed the migration and weather patterns in rural Pakistan from 1991-2012 and found that flooding had a modest to insignificant effect on migration whereas extreme heat was clearly associated with migration. The researchers found that bouts of heat wiped out a third of the income derived through farming! In Pakistan, the average monthly rural household income is 20,000 rupees (roughly $200), which is barely enough to feed a typical household consisting of 6 or 7 people. It is no wonder that when heat stress reduces crop yields and this low income drops by one third, farming becomes untenable and rural Pakistanis are forced to migrate and find alternate means to feed their family. Mueller and colleagues also identified the group that was most likely to migrate: rural farmers who did not own the land they were farming. Not owning the land makes them more mobile, but compared to the land-owners, these farmers are far more vulnerable in terms of economic stability and food security when a heat wave hits. Migration may be the last resort for their continued survival.
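To make the arithmetic above concrete, here is a small illustrative calculation. The one-third income loss is the study's estimate; the household size of 6.5 (splitting "6 or 7 people"), the 30-day month, and the rounded exchange rate are simplifying assumptions of mine.

```python
# Rough arithmetic behind the rural income figures quoted above.
MONTHLY_INCOME_PKR = 20_000   # average rural household income, rupees (~$200)
HOUSEHOLD_SIZE = 6.5          # "6 or 7 people" -- my midpoint assumption
HEAT_LOSS_FRACTION = 1 / 3    # share of farm income wiped out by heat stress

income_after_heat = MONTHLY_INCOME_PKR * (1 - HEAT_LOSS_FRACTION)
# What remains, spread across the household and a 30-day month:
per_person_daily = income_after_heat / HOUSEHOLD_SIZE / 30

print(f"Post-heat-stress income: {income_after_heat:,.0f} PKR/month")
print(f"Roughly {per_person_daily:.0f} PKR per person per day")
```

A drop from 20,000 to roughly 13,300 rupees a month leaves each household member with well under 100 rupees (about $1) a day, which is why the text calls farming under heat stress untenable.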
It is predicted that the frequency and intensity of heat waves will increase during the next century. Research studies have determined that global warming is the major cause of heat waves, and an important recent study by Diego Miralles and colleagues published in Nature Geoscience has identified a key mechanism which leads to the formation of "mega heat waves". Dry soil and higher temperatures work as part of a vicious cycle, reinforcing each other. The researchers found that drying soil is a critical component. During daytime, high temperatures dry out the soil. The dry soil traps the heat, thus creating layers of high temperatures even at night, when there is no sunlight. On the subsequent day, the new heat generated by sunlight is added to the heat trapped by the dry soil, which creates an escalating feedback loop with progressively drying soil that becomes devastatingly effective at trapping heat. The result is a massive heat wave which can wipe out crops, lead to water scarcity and cause thousands of deaths.
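The feedback loop described above can be caricatured in a few lines of code. This is a deliberately crude toy model of my own, not the Miralles et al. model: each day's heat dries the soil a little, and drier soil carries a larger share of that heat over to the next day, so temperatures ratchet upward.

```python
# Toy model of the soil-drying heat-wave feedback (illustrative only; the
# constants 0.5 and 0.1 are arbitrary choices, not physical parameters).
def toy_heat_wave(days: int, daily_heat: float = 1.0) -> list[float]:
    soil_moisture = 1.0   # 1.0 = fully wet, 0.0 = bone dry
    trapped = 0.0         # heat carried over from previous nights
    temps = []
    for _ in range(days):
        temp = daily_heat + trapped          # today's heat load = new + trapped
        dryness = 1.0 - soil_moisture
        trapped = temp * 0.5 * dryness       # drier soil retains more overnight
        soil_moisture = max(0.0, soil_moisture - 0.1 * temp)  # heat dries soil
        temps.append(temp)
    return temps

temps = toy_heat_wave(10)
print([round(t, 2) for t in temps])  # daily heat load never decreases
```

Running this, the heat load starts flat while the soil is wet and then climbs steadily as the soil dries out, mirroring the escalating feedback loop the researchers describe.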
The study by Mueller and colleagues provides important information on how climate change is having real-world effects on humans today. Climate change is a global problem, affecting humans all around the world, but its most severe and immediate impact will likely be borne by people in the developing world who are most vulnerable in terms of their food security. There is an obvious need to limit carbon emissions and thus curtail the progression of climate change. This necessary long-term approach to climate change has to be complemented by more immediate measures that help people cope with the detrimental effects of climate change by, for example, exploring ways to grow crops that are more heat resilient, and ensuring the food security of those who are acutely threatened by climate change.
As Mueller and colleagues point out, the floods in Pakistan have attracted significant international relief efforts whereas increasing temperatures and heat stress are not commonly perceived as existential threats, even though they can be just as devastating. Gradual increases in temperatures and heat waves are more insidious and less likely to be perceived as threats, whereas powerful images of floods destroying homes and personal narratives of flood survivors clearly identify floods as humanitarian disasters. The impacts of heat stress and climate change, on the other hand, are not so easily conveyed. Climate change is a complex scientific issue, relying on mathematical models and intrinsic uncertainties associated with these models. As climate change progresses, weather patterns will become even more erratic, thus making it even more challenging to offer specific predictions.
Climate change research and the translation of this research into pragmatic precautionary measures also face an uphill battle because of the powerful influence of the climate change denial lobby. Climate change deniers take advantage of the scientific complexity of climate change, and attempt to paralyze humankind in terms of climate change action by exaggerating the scientific uncertainties. In fact, there is a clear scientific consensus among climate scientists that human-caused climate change is very real and is already destroying lives and ecosystems around the world.
Helping farmers adapt to climate change will require more than financial aid. It is important to communicate the impact of climate change and offer specific advice on how farmers may have to change their traditional agricultural practices. A recent commentary in Nature by Tom Macmillan and Tim Benton highlighted the importance of engaging farmers in agricultural and climate change research. Macmillan and Benton pointed out that at least 10 million farmers have taken part in farmer field schools across Asia, Africa and Latin America since 1989, which have helped them gain knowledge and adapt their practices accordingly.
Pakistan will hopefully soon engage in a much-needed land reform in order to address the social injustice and food insecurity that plague the country. The 5% of landholders with large holdings own 64% of the total farmland, whereas the 65% who are small farmers own only 15% of the land. About 67% of rural households own no land. Women own only 3% of the land despite sharing in 70% of agricultural activities! Land reform will be just a first step in rectifying social injustice in Pakistan. Involving Pakistani farmers – men and women alike – in research and education about innovative agricultural practices in the face of climate change will help ensure their long-term survival.
Mueller, Valerie, Clark Gray, and Katrina Kosec. "Heat stress increases long-term human migration in rural Pakistan." Nature Climate Change 4, no. 3 (2014): 182-185.
Notes Of A Grand Juror
"A grand jury would indict a ham sandwich, if that's what you wanted."
~ New York State chief judge Sol Wachtler
About a dozen or so years ago, I had the instructive misfortune to be called for Manhattan grand jury duty. To this day, though, it has armed me with plenty of anecdotes for any sort of "that's the way the system works" conversation. Once you see how the sausage of justice gets made in the courtroom, you can never really unsee it, and that's not a bad thing. The grand jury process – and its failures and possible remedies – is obviously central to the Michael Brown and Eric Garner cases, but in my opinion hasn't received nearly enough attention. Let me draw on some of my own experiences to illustrate why this is the case, and argue why any meaningful response to Brown, Garner and others must, at least for a start, be sited within the phenomenon of the grand jury.
As context, New York City is one of the few cities that maintains continuously impaneled grand juries to maintain the flow of indictments that feeds the criminal justice system. When I served, there were four such juries, two of which were dedicated exclusively to drug cases. Fortunately, I was selected for one of the other two; after all, variety is the spice of life. During our month-long tenure of afternoon-shift service, we heard 94 cases, and we returned indictments, if I'm not mistaken, for 91 of those. For this service we were compensated $40 per day, which, in a fit of self-serving civil disobedience, I refused to report on my income tax return.
Keep in mind that the purpose of the jury is two-fold: to establish that a crime was committed, and that the person under indictment had some involvement with said crime. This involves the mapping of an often messy reality onto the abstract but finely delineated nature of criminal statutes. To achieve this, the prosecutor – almost always a fresh-faced Assistant District Attorney (ADA) seemingly just out of the bar exam – would present just enough facts to the jury to ensure probable cause for both the crime and the person charged with said crime. The evidence may include testimony from officers, experts or other witnesses, and it ought to be noted that probable cause is a much lower standard of proof than what petit juries encounter in trials, which is the beloved "proof beyond a reasonable doubt."
Note that I haven't said anything about the defense. That's because we saw not a single defendant for any of the 94 cases we heard over the course of December 2003. During our induction into grand jury, we were assured that defendants and/or their attorneys had every right to participate in the indictment proceedings. At some point people on the jury began asking if we would ever see a defendant and the bailiff said it was highly unlikely. The reason for this is our first indication of the particular kind of sausage-making that goes on within the criminal justice system: most cases end in plea bargains. Defense attorneys generally wait for the indictment to find out how incriminating the evidence is, and then act accordingly. If the indictment is backed by strong evidence, the horse-trading around cooperation begins, in hopes of a reduced sentence. Beginning in the 1980s, this was used as a comprehensive strategy by the New York DA's office to dismantle the Mafia: arrest the street-level operators and flip them, one by one, in the hopes of moving up the food chain. Rinse, lather, repeat. More recently, they have tried the same tactic on insider-trading cases, although some have proven tougher to crack than others.
Following an indictment, defense attorneys will counsel their clients to go to trial only if they think they have an exceptionally good chance of beating the rap, if not on the facts of the case then by virtue of a sympathetic judge, and so on. Like all lawyers, defense counselors look at their field of play in terms of scenarios and probabilities. In this sense, the pursuit of "justice" is not a pursuit of truth, but an exercise in risk management, negotiation and compromise. The facts, such as they might be, are there to serve those ends, and not the other way around. This is very important to keep in mind when we come to consider the Brown and Garner cases.
This brings me to the other essential point: recall that we as jurors were instructed to "map" certain statutes onto actual events and people. How do you go about doing this? As noble as "a jury of your peers" may sound, I hope that I am never in a position to be judged in this way. For the law per se is not a simple thing, and this sort of mapping exercise guarantees plenty of ambiguity along the way. For a grand jury that is essentially treated as an indicting machine, a broad variety of statutes come into play. And in the interest of securing an indictment, the DA will throw as many charges as possible against the suspect, in the hopes that at least one will stick.
Fortunately, the state is kind enough to provide a guide to navigating the complexities of statutory law: the prosecutor himself. If you think this is a conflict of interest of the highest order, you would be right. You would also have no choice in the matter. Of course, all the ADAs we dealt with were unfailingly polite and more than willing to read out the relevant statutes as many times as was necessary, but keep in mind that they are in the room to get their indictments. They regretted to inform us that they could not help us in interpreting the evidence in relation to the statute, only the statute itself. That, putatively, was our sacred duty.
So what did I learn while I was a grand juror? For one thing, the cops can pretty much arrest you for anything. Secondly, the people who get busted proceed to get themselves even more busted. Examples include: if your friend is driving you around in his newly stolen car, don't have a stolen handgun on your person (on the other hand, the two may have had some shared instrumentality, which I suppose is reasonable). But you should definitely not have a rock of crack cocaine in your pocket while you jump a subway turnstile. (Of course, if I'd been white while jumping that particular turnstile I probably wouldn't have been searched. Just saying.)
Thirdly, the cops know the law way better than you, and use it to their advantage. Example: a group of four guys are walking down the street, and the police observe two of them conducting a drugs-for-cash transaction. Shortly afterwards, all four get into a car. The cops then proceed to bust them, because the law says that anyone in a car with drugs in it can be charged for possession. Why settle for two collars when you can have four?
Fourthly, cops lie. A lot. We had to put up with some extraordinary claims made by officers, some of whom testified anonymously, in order to protect their undercover identities (it's interesting what anonymity does to your perception of whether someone is telling the truth). You were on the roof of a sixth-floor walkup without binoculars and you saw a drug deal go down four city blocks away? For real? The suspect didn't have any stolen goods on him when he was arrested but somehow had them once he emerged from the police van? No kidding! On the few occasions that we were confronted with particularly egregious lies we threw out the indictments with relish. But more often than not we were left seething amongst ourselves, during the deliberation period that was the only occasion when we were left alone as a group. Just because one cop lied at one point didn't invalidate the entire case if there was an overwhelming amount of other evidence, so in this way the lying cop gets a bye. He knew it, we knew it and he knew that we knew it. It's also worth mentioning that even if we disagreed with the law itself, we nevertheless had no choice but to indict, if the "evidence" was strong enough, as with the example of the four guys in the car above.
Eventually, in the course of our daily proceedings a curiously adversarial dynamic developed. As a jury, we did our best to establish a solid understanding of what transpired for any given case. But much of it felt like being in Plato's cave. We only saw what the prosecutors and police wanted us to see, and would further guide us, as much as possible, in how to see it. Due to the confidential nature of the proceedings, note-taking was prohibited. And without the counterbalancing presence of a defense counsel, or of the salutary effects of cross-examination, the end result was, more often than not, a shrug of the shoulders and a vote to indict.
To my further dismay, this happened with increasing frequency, especially as we approached the Christmas holidays. Unlike the zero-sum game that is a petit jury trial, there is a further dilution of responsibility, which goes something like this (and here I am pretty much quoting a fellow juror): "Well, an indictment isn't that big of a deal, the defense attorney can figure out what to do with it next, and at the worst the guy will get a fair trial." What this indicates is proximity bias more than anything else: the first time you raised your hand to indict someone it was a very big deal, but now that you've done 60 of them and you're really thinking about having to see your in-laws again, it's really not such a whopper.
In general, there is a modicum of intellectual rigor required to attend to this process with any sense of awareness and responsibility. And yet we had jurors whose English was far below the standard needed to follow legalese; who probably hadn't had to think analytically about anything in decades; or who just plain didn't care, or rapidly reached that point. If there is anything accurate about Reginald Rose's "12 Angry Men," whose quotes and stills pepper the present article, it is the fact that a jury's seats are by no means guaranteed to be occupied by reasonable, disinterested citizens (thank goodness Henry Fonda was one of them). To this day, if there is a better reason as to why a liberal arts education remains of vital importance to our society, I cannot think of one.
"Look, you know how these people lie!
It's born in them…they don't know what the truth is!"
~ Juror 10 (Ed Begley)
If the purpose of the system is to generate indictments, then the system works really well. Hence the well-known quote from Chief Judge Wachtler about the indictability of ham sandwiches. It's not the masterful rhetoric of the prosecutor, nor the infallibility and selfless dedication of the police, nor the relentless pursuit of truth. It's the fact that the incentives are all lined up correctly to produce indictments. The cops provide the evidence and the warm bodies, the prosecutors the indictments. Each depends on the success of the other.
This extends beyond the hermetic enclosure of the courtroom, since the prosecutor holds an elected position and must do his level best to gain the endorsement and support of the police union. (If anyone doubts the importance of the union in the eyes of a cop, please consider the recent stairwell shooting of Akai Gurley, where the two patrolmen in question were MIA for the first six minutes following the shooting. It turns out that Officer Liang, who allegedly fired the shot, was texting his union rep.) The grand jury, as blind as Justice itself, stammers and dodders its way through the mess, eventually just glad to get it over with. Not quite a rubber stamp, but not too far off, either.
Now, all of this falls apart in a grand way when the tables are turned and it is the cops that are under indictment. Suddenly, the whole system of incentives is under threat of short-circuiting. Because, if I have sketched it out well enough, the point of the system is not the disinterested pursuit of justice; nor is it the ongoing process of risk management, negotiation and compromise; but rather it is the perpetuation of the system itself. In this sense it is no different from any other bureaucracy. In order for the system to remain coherent and orderly, indicting cops is to be avoided at all costs.
How do the participants extricate themselves from this? As usual, The Onion is on it with a handy guide. But in fact the answer is even simpler. One thing that may have been only implicit in the above description I should now make explicit: in none of the 94 cases we considered did the DA fail to recommend charges. Remember that an indictment is a mapping exercise. It is inconceivable to take a group of lay people and just point them to a book of criminal statutes. And yet, thanks to the extraordinary release of the complete transcript of the Darren Wilson indictment, we know that this is precisely what happened. Remarkably, this action seems to have been within the DA's discretion. Moreover, in the few pages that were released concerning the Garner case, there was no mention of what charges – if any – were recommended to the jury. From viewing the videotape, it's pretty incredible to think that Daniel Pantaleo, the officer in question, could not be charged, at the very least, with involuntary manslaughter.
Now, we can talk all about the latitude that use-of-force laws grant in the courtroom, etc etc, but if the jury isn't even told what statutes might possibly apply, it's pretty uncertain that they will come to agree on anything. As an example, consider the fact that, during our grand jury induction, we were told that not only did we have the right to strike down the charges recommended to us by the DA, but we also had the right to search out other statutes and recommend them to the DA as charges instead. Not that we ever did that – safe as houses, we were.
Still don't believe the lengths that the system will go to protect itself? Consider another, fairly unpublicized detail in the Garner case. If you've seen the video (and, truth be told, we don't know if or how much of it was seen by the grand jury), you'll notice that Pantaleo isn't the only cop around. What about those other guys? The five-or-so other cops involved in taking Garner down were all granted immunity from prosecution in return for their testimony. Obviously, the DA was wasting immunities, since their testimony was such shit that he couldn't get an indictment from cherry-picking what those five eyewitnesses saw. And Pantaleo, like Darren Wilson in the Brown trial, testified before the grand jury himself, so I guess defendants do show up under extraordinary circumstances. In any case, no one was mistaken for a ham sandwich here, folks.
Back in the real world, the failure to indict the police responsible for the deaths of Brown and Garner has spawned an understandable backlash of protest. But while the subject of protest is clear, the objective is emphatically unclear. Much like the Occupy protests following the 2008 financial crisis, people accepted that there was plenty to protest about, but the fledgling movement lost much credibility due to the illegibility of any actual demands of the protesters. Now, these latest protests are part of the mighty stream of the civil rights movement, so credibility is not what's at stake here. Rather, I fear that the opportunity for real, targeted reform will slip us by, because as it is presently constituted, the system will continue to not indict police. It simply has no other choice.
People can shout about structural racism all they want, and they can go down the rabbit holes of stop-and-frisk, police body cams, reparations, or whether #crimingwhilewhite is an unworthy hashtag (for fuck's sake). Most of these are worthy causes but, since they do not address the procedural site that is clearly at the heart of the matter, attempts to address police violence through the court system will run relentlessly into the same bottleneck as before. Rather, the system of incentives needs to be broken at exactly this critical juncture. To this effect, I propose that any killing carried out by police be immediately referred to a special prosecutor – one who is outside of the Backscratchistan fiefdom that we currently have for handling run-of-the-mill cases. I cannot imagine I am the first to do so.
This was further refined in a recent discussion with fellow 3QD author Jeff Strabone, who suggested, quite correctly, that the referral should be made automatic for the killing of any unarmed civilian. Since this type of change would have to be enacted by the relevant state legislature, including the fact that the victim was unarmed creates the additional advantage of being politically much more difficult to resist. Without this kind of reform #BlackLivesMatter and #ICantBreathe will soon enough join #Kony2012 in the #DustbinOfHistory.
But perhaps the solution is even simpler. As Jami Floyd noted to WNYC's Brian Lehrer the day after the indictment against Officer Pantaleo was thrown out, the United States is the only country to still use grand juries to decide anything. When one considers that at least two other countries still use the Imperial system of measurements (the United States being in the august company of Liberia and Myanmar), it is amazing to consider that, globally speaking, the pound and the foot enjoy more popularity than grand juries. But we've always been proud of our exceptionalism, haven't we?
Monday, November 10, 2014
In Trust We Truth
"All this – all the meanness and agony without end
I sitting look out upon
See, hear and am silent."
~ Walt Whitman
On a recent Facebook thread – about what, heaven help me remember – someone posted a comment along the lines of "This is what happens when we live in a post-truth society." I honestly cannot recall what the original topic was about – politics? GamerGate? Climate change? Who knows – you can take your pick, and in the end it's not really that important. The comment struck me as misguided, though, and led me to contemplate not so much the state of 'truth' as a category, which has always been precarious (see: 2,500 years of philosophy), but the conditions that may or may not lead to the delineation and bounding of what we may consider to be sufficiently, acceptably truthful, and how technology has both helped and hindered this understanding today.
I responded to the commenter by suggesting that we live not so much in a 'post-truth' society as a 'post-accountability' society. It is not so much that truth is disrespected, distorted or ignored more than ever before, but rather that the consequences for doing so have (seemingly) dwindled to nearly zero. One could argue that this is vastly more damaging, because the degree of our accountability to one another profoundly influences how and if we can arrive at any sort of truth, period. Prior to the onset of information technology, there were well-established (and of course, deeply flawed) mechanisms for generating and enforcing accountability. Now, this mechanism of information technology that has relieved us of accountability is already so deeply enwoven into our society that not only will we never put the genie back in the bottle, we are at a loss to imagine how to ever get this genie to play nice. Except the problem is that this kind of righteous outrage is, in fact, entirely an illusion.
Instead of arguing about truth as an objective, abstract and hopefully attainable category, let's assume that truth (or whatever you want to call it) is a sort of consensus, and that consensus is reached through processes of trust (we respect each other's right to have a say) and accountability (we take some responsibility for what we say to each other). These are all fundamentally social processes, and as such haven't really changed very much over time. What interests me is how the insertion of technology into this discourse has changed our perceptions of the burdens that these concepts –truth, consensus, trust and accountability – are expected to bear.
Roughly speaking, technology has begotten two completely contradictory streams of development in this regard. This is old news – one person finds a better way to make fertilizer and someone else finds a way to build a better bomb using that fertilizer. In this sense technology merely functions as an amplifier for whatever tendencies are coursing through society's veins. Within the context of accountability, the two streams may seem to be paradoxical, but this is only superficial. Let's first touch on how technology has played a largely beneficial role in the elaboration of the paradigm of accountability.
Most obviously, there are the successes that have allowed a tremendous blossoming of commerce. An early, pressing problem faced by ecommerce was the creation of trust between buyers and sellers in an anonymous, disembodied marketplace. Buyers were interested in what they could buy online, but reluctant to fork over cash to anonymous strangers. In 1995, eBay was one of the first to propose a simple accountability mechanism for trader-to-trader transactions: buyers and sellers left feedback for one another confirming (or critiquing) speed of shipping, quality of goods, etc. Today, the approach is received wisdom, but at the time no one knew if it would actually work. This feedback system has continued to underpin the success of eBay and many other ecommerce sites, as witnessed by the success of Alibaba, current record-holder for the world's largest stock market IPO. It's no mean feat to create trust between buyers and sellers in a market as notoriously dodgy as China's.
Moreover, the applications of this mechanism seem to have grown well beyond the simple trader-to-trader transaction. We are now accustomed to reading book reviews on Amazon, restaurant reviews on Yelp, accommodation reviews on TripAdvisor, among many others. Reviews are also arguably being used to put the screws on part-time entrepreneurs such as AirBnB hosts and Uber drivers, but that is a topic for another time. It is sufficiently uncontroversial to say that, in a very concrete sense, we are becoming ever more reliant on an army of anonymous commenters to help us in our sensemaking of what to read, eat, buy or see.
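The trader-to-trader feedback mechanism described above can be sketched in a few lines. This is a toy illustration, not eBay's actual system; the class and method names are invented for the example, and real reputation systems add safeguards (verified transactions, time decay, fraud detection) that are omitted here.

```python
from collections import defaultdict


class FeedbackLedger:
    """Toy trader-to-trader reputation ledger: after each transaction,
    the counterparty may leave +1 (positive) or -1 (negative) feedback."""

    def __init__(self):
        # trader name -> list of +1/-1 scores received
        self.ratings = defaultdict(list)

    def leave_feedback(self, trader: str, score: int) -> None:
        """Record one piece of feedback for a trader."""
        if score not in (1, -1):
            raise ValueError("score must be +1 or -1")
        self.ratings[trader].append(score)

    def trust_score(self, trader: str) -> float:
        """Fraction of positive feedback; 0.0 for traders with no history,
        reflecting that an unknown counterparty has earned no trust yet."""
        scores = self.ratings[trader]
        if not scores:
            return 0.0
        return scores.count(1) / len(scores)
```

The design choice worth noting is that trust here is purely emergent: no central authority certifies anyone, yet a seller with a long record of positive feedback becomes a meaningfully safer bet than a stranger.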
Trust and accountability mechanisms have expanded in even subtler ways, specifically in the way that machine participants trust one another within a given system. Perhaps the most compelling example of this is bitcoin, the crypto-currency whose wild price oscillations (and shady applications) managed to grab global headlines for, well, at least a few minutes. The obvious need to prevent a party from double-spending an amount of bitcoin, which after all is a bunch of numbers sitting on a hard drive somewhere, led bitcoin's designers to include the notion of a block chain. The block chain accomplishes this through a concept called proof-of-work:
[Proof-of-work] is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though that's now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation.
Basically, each machine on the network must validate all transactions, and all transactions must match across all machines. In the meantime, all transactions remain anonymous, even though the block chain, stored on each participant's machines, retains the entire record of all transactions (you can really go down the rabbit hole here). The computational intensity required means that no one individual can fake a transaction and fool the other participants. This is counterintuitive because we think of the goals of software design as privileging lighter, faster and simpler solutions.
A waggish take might see this as little more than make-work for the digital age. Nevertheless, the critical element here is that there is no central authority that vets the transactions. The network validates itself as it goes along, and, if everything works as it should, participants that act in bad faith are rooted out as a matter of course. I suspect that this sort of decentralized, distributed trust mechanism will find itself refined and deployed in many ways – for example, in credit systems for validating bottom-of-the-pyramid consumers. But it also occupies an important place within our narrative: this is what accountability looks like if you're a machine. From the point of view of a machine, it is a straight line from accountability to trust, and from there to consensus and truth. You just need plenty of electricity.
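The proof-of-work idea quoted above can be made concrete with a toy sketch. This is not Bitcoin's actual protocol (which hashes binary block headers against a numeric difficulty target); the function names and the "leading zeros" difficulty measure are simplifications for illustration. The asymmetry is the point: finding a valid nonce requires many hash attempts, while any participant can verify the result with a single cheap hash.

```python
import hashlib


def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 hash (of block_data + nonce)
    starts with `difficulty` zero hex digits. Expensive by design:
    on average this takes 16**difficulty attempts."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1


def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Any machine on the network can check the work with one hash,
    with no central authority vetting the transaction."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Because influence over validation is proportional to hashing power rather than to the number of network identities one controls, faking a transaction would require out-computing the rest of the network, which is exactly the "computationally costly" property the quoted passage describes.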
The looming problem with all the cases I have described so far is that they fall within a very narrow category: that of trader-to-trader transactions. In every case, the subject under discussion is clearly an object or service that is to be consumed (or evaluated or whatever – but the final purpose is consumption, let's be clear about that). There is always an implied value at stake – the feedback or ranking or other process being applied to it is simply there to clarify, refine or nudge the final value one way or the other. This is the meat and potatoes of not just microeconomics, but almost every "disruptive" idea to come out of Silicon Valley. As a result, the amount of attention these cases command is far out of proportion to our sensemaking as a whole. In this worldview, truth is indistinguishable from, or is rather interchangeable with, price discovery.
But there is still all that squishy stuff where technology has hung us out to dry. Why has technology failed to help us resolve, on a social level, issues like the claimed link between autism and vaccines, or whether Barack Obama was born on American soil or not? Let alone the realities of climate change or evolution? Why do sites like Snopes.com or the Annenberg Center's FactCheck.org seem to be engaged in a Sisyphean struggle to disabuse us of disinformation, or why do we need them at all? Most importantly, why has technology, which otherwise has been such a staunch ally in concretizing the invisible hand, been unable to bring us any closer when it comes to a shared set of values?
At the beginning of the second essay of In The Shadow Of The Silent Majorities, French philosopher Jean Baudrillard writes:
The social is not a clear and unequivocal process. Do modern societies correspond to a process of socialisation or to one of progressive desocialisation? Everything depends on one's understanding of the term and none of these is fixed: all are reversible. Thus the institutions which have sign-posted the "advance of the social" … could be said to produce and destroy the social in one and the same movement.
Baudrillard asserted that political action – or at least, the kind of political action that mattered – becomes impossible when social processes disallow the "masses" from anything but the observation of spectacle. This process takes protest – or for that matter any kind of political action – and subsumes it into media, which then converts it into merely another object for consumption. Writing in 1978, Baudrillard was essentially finishing off Marxism as a plausible revolutionary theory. But he was mostly concerned with top-down media technologies and the manner in which once-meaningful events are rendered into meaningless theater, or rather whose meaning resided exclusively in their own theatricality. A good example is his examination of the transformation of political party conventions here in the United States. Once political conventions became televised, decisions of any consequence ceased to be made at those events. They simply became spectacle; the spectacle of the thing in question becomes the thing itself. If you want a good overview of what he had in mind, see Paddy Chayefsky's "Network", filmed a few years earlier: Howard Beale and the Ecumenical Liberation Army are essentially Baudrillardian poster children.
A good twenty years later, the World Wide Web began its inexorable crawl across (and of) the globe. Baudrillard was a troublemaker and a provocateur, so I assume that he would have gleefully jumped on the subject, but in a 1996 interview he admitted "I don't know much about this subject. I haven't gone beyond the fax and the automatic answering machine…. Perhaps there is a distortion [of oneself online], not necessarily one that will consume one's personality. It is possible that the machine can metabolize the mind." In one of his last major works, The Vital Illusion, he lamented in a Nietzschean fashion that "The corps(e) of the Real – if there is any – has not been recovered, is nowhere to be found."
Fifteen years after publication of The Vital Illusion, we are in a better place to evaluate the effects of technology, and the view is not encouraging. For the same mechanisms that have allowed such a preternatural calibration of transactional value seem to be undermining consensus around values that cannot be transacted. The fact is that there is an entirely different set of assumptions at work here. Venkatesh Rao put it well on his stimulating blog, Ribbon Farm, when he discussed the differing nature of transactions when participants are price-driven (i.e., traders) or values-driven (as he puts it, saints):
Traders view deviations from markets as distortions, and fail to appreciate that to saints, it is recourse to markets that is distortionary, relative to the economics of pricelessness. Except that they call it "corruption and moral decay" instead of "distortion." To trade at all is to acknowledge one's fallen status and sinfulness.
If we consider the insertion of technology into this dynamic, the fact emerges that we have not designed technology to help us in our, shall I say, more saintly endeavors. Technology subsumes these squishier, values-driven behaviors into itself as best as it can, but it cannot ever do so completely. What's left is the flotsam and jetsam of Reddit, White House petitions, comment threads anywhere, Anonymous and LulzSec and cross-platform flame wars ranging from Mac vs PC to Palestine vs Israel. There is no shortage of bridges under which Internet trolls lurk, waiting to pounce on anyone who displeases them.
For anyone who doubts that there are real-life consequences to this, GamerGate is perhaps the best example. When the women targeted in this shitstorm are confronted with such a quantity of death and rape threats that they flee their homes, or are forced to cancel speaking engagements because a university cannot guarantee that someone won't bring a concealed weapon to a lecture, I am left with a distinct pining for that good old Baudrillardian unreality. Whether there will be any real-life consequences for the people who commit such acts remains to be seen. Furthermore, there is no reason why unaccountability cannot, and will not, continue its expansion. Like cosmic inflation, it does not need a reason to keep going, or anticipate a boundary to detain it.
There is an old Wall Street adage about any significant market downturn: "When the tide goes out, you see who's been swimming naked." The Web has flipped this on its head: the tide just keeps coming in, and more and more people are leaving their trunks on the beach. Moreover, it is simply too late to redesign the Internet for greater accountability. The last (or first?) idea that had any hope of accomplishing this was Ted Nelson's Xanadu Project. Nelson invented the very idea of hypertext, but in his world, which he originally conceived in 1960 and is detailed in one of the best articles to ever appear in Wired, every image or piece of text would be traceable back to its source. This past June, in an Onion-worthy headline, The Guardian announced the "World's most delayed software released after 54 years of development".
Perhaps in another, alternative universe, Xanadu became the default design template for an Internet that encouraged accountability from the start. In the meantime, and back in this universe, what technology has exposed is only what we have always known: that we are a fractious, quarrelsome and undependable lot. This is why I maintain that any hand-wringing about the state of the conversation on the Web is ultimately a red herring. That we haven't designed one of our most extraordinary technological infrastructures to help us get closer to any sort of 'truth' shouldn't surprise us in the least. As for the original Facebook conversation that sparked this contemplation, after making my 'post-accountability' suggestion, my comment received a dutiful 'like' or two. As far as civilized dialogue goes, I'll take it.
Monday, November 03, 2014
Islam, Colonization, Imperialism and so on
by Omar Ali
At about 6 pm on Sunday evening, a young suicide bomber (said to be 18 years old) blew himself up in a crowd returning from the testosterone-heavy flag lowering ceremony held every evening at the India-Pakistan border at Wagah, near Lahore.
Presumably this young man (a true believer, since a fake believer would find it hard to explode in such circumstances) had wanted to target the ceremony itself, which is usually watched by up to 5000 people every day, most of them visitors from out of town. But the military had received prior intelligence that something like this might happen, and with 6 checkpoints in place he was unable to get to the ceremony. So he waited around the shops about 500 yards away from the parade site and exploded when he felt he had enough bodies around him to make it worth his while.
About 60 innocent people died, many of them women and children, including 8 women from the same poor family from a village in central Punjab who were visiting relatives in Lahore and decided to go to the parade (whether as entertainment, or as patriotic theater, or both). The bombing was instantly claimed by more than one Jihadist organization, but it is possible that Ehsanullah Ehsan’s claim will turn out to be true. He said it was a reaction against the military’s recent anti-terrorist operation (operation Zarb e Azb: “blow of the sword of the prophet”), that his group wants "an Islamic system of government" and that they would attack infidel regimes on both sides of the India-Pakistan border.
The Indian authorities decided to suspend their side of the parade for the next three days. But on Monday evening, the Pakistani side decided to hold their parade as usual and a crowd was on hand. Cynics have pointed out that most of the “crowd” looked like soldiers in civilian clothes, but that is not fair. The “show of resilience” meme is ancient and well developed, has solid credentials, and should not be easily dismissed. I personally wish both India and Pakistan would end this ridiculous ceremony someday (soon), but on this particular occasion a show of resilience was the smart move. But then the respected corps commander of the Pakistani army corps in Lahore, General Naveed Zaman (an outstanding officer, himself on the Taliban’s hit list for his role in various anti-terrorist operations), made a statement and beat his chest a bit about how we are a brave nation, we are back the next day and “look, on the Indian side it’s like a snake has sniffed them”, the implication being: they are cowards, they didn’t show up, but look at us, we are back and we are strong.
This is par for the course for the Pakistani army (whose propaganda software was designed and built for only one enemy, and whose soldiers are motivated to attack Jihadi terrorists by being told that the Jihadists are all Indian agents, I am not kidding) but is still telling: the day after one of the biggest massacres of civilians by a Jihadist terrorist bomber (there being no other kinds in our area these days, though the Tamil Tigers showed that a Tamil Hindu version is indeed possible, and in fact preceded the adoption of this particular weapon by Islamist terrorists) the senior army officer in the region could only taunt the Indians across the eastern border.
Meanwhile, in Nigeria, the Boko Haram terrorists announced that most of the 276 girls they kidnapped have been “converted to Islam” and married off. So the matter is settled.
And in Iraq, the “Islamic State” has been buying and selling captured Yezidi girls as slaves in the best medieval Arab tradition. In the video below, the young men of IS can be seen joking about the topic (the translation is by Jenan Moussa, an Arab journalist, not by MEMRI, so discerning viewers can view it without violating any of the standard guidelines):
Boko Haram has also gone ahead and blown up some Shias in Nigeria as they commemorated Moharram, while their fans have apparently shot a Shia in the face in, of all places, Sydney.
My point is this: the Salafist-Jihadist meme, so carefully nurtured and brought together in the Afghan-Pakistan border region by Pakistan, Saudi Arabia and the US in the 1980s, is now global and will soon come to your neighborhood if your neighborhood happens to be in the core Islamicate territories of the Middle East, India, Southeast Asia, Londonistan or Mississauga. Many different narratives about this phenomenon are in the market, ranging from Neocon propaganda and Fox News to Islamist apologetics and Marxist “class-based analysis”. For Western and Westernized liberals of a particular disposition, there are also “commentators” like Pankaj Mishra, who can be relied upon to press all the politically correct buttons without committing to anything resembling a coherent description, prediction or prescription. I would like to add some random thoughts to this mélange:
1. We are all human beings. And in the great Eurasian landmass, we have been mixing, biologically and culturally, for thousands of years. It is not possible that a relatively recent religious movement (Islam) has somehow significantly altered the biology of the people involved. This is a trivial observation, but some people on both sides of the liberal-conservative divide seem to have some misapprehensions about this, so it is worth reiterating. Going beyond that, I would add that even as a cultural phenomenon, Islam is not from some other planet. It evolved within pre-existing cultures, borrowing and altering already existing cultural memes. Much of “Islamic history” is the history of an initial (very successful and very extensive) Arab conquest, followed by some further conquests (primarily in Central Asia and India) by Islamicized Turkic invaders. Only in Indonesia and Malaysia did the initial wave arrive as traders and the subsequent conquests and conversions were almost entirely the work of local converts. This makes early South East Asian Islam a bit of an outlier, but that is another story. Only by disregarding most of history can we regard these conquests (and their associated missionary activities) as somehow completely unique. There are some peculiar features of Islamicate civilization, but not as many as its fans or its detractors would like to claim.
2. That being said, Islamicate civilization developed a remarkable degree of consensus on its core doctrines in the Islamic heartland. Even Shias and Sunnis converged on similarities in daily life and communal attitudes towards non-Muslims, towards women, towards apostasy, towards blasphemy, towards the notion of holy war. While agreeing with Razib Khan’s views about the relative unimportance of theology in general, I think modern life and the recent experience of colonization, decolonization and its associated psychopathologies have led to an unusual situation in the Islamicate world: while the pressures that cause religious revivalist movements or “fundamentalist” movements may be similar in non-Muslim communities (hence Sikh, Hindu and Buddhist identity-based semi-fascist fundamentalist movements), the material that is available to these movements and the historical background of the religions involved make it difficult to associate a detailed “shariah” with any of those movements. Sikhs can ban tobacco and kill blasphemers and traitors, Buddhist mobs can kill Muslims without compunction in Myanmar and Sri Lanka, Hindu nationalists ban beef and carry out pogroms, but the notion of a Sikh state or a Hindu state or a Buddhist state is mostly the notion of a state where their co-religionists hold sway (or even hold exclusive title); it lacks consensus on any well-developed legal code or even theology. This is not the case with Islam.
3. There is such a legal and theological framework in Islam and it has wide support in principle. In principle is, of course, not the same as in practice. Most Muslims know as much about Muslim theology as Christians know about Christian theology, which means they know very little. But because of widespread beliefs about blasphemy and apostasy, this “in principle” support translates into an inability to frontally challenge those who come armed with more detailed Islamic knowledge. For example, most Pakistanis may have no idea that classical Islamic law permits slave girls to be captured, used for sex (without marriage) and bought and sold as desired. If and when IS comes to Pakistan and wants to talk about buying and selling slave girls, most people will probably be shocked. It is possible that most people will initially even find some way to say this is wrong. But it is also my guess that when face to face with an IS ideologue, most people will be unable to argue for too long. Because he will have classical Islamic texts on his side and his opponent will have nothing beyond his human intuition of fairness and good behavior. Intuition will not stand against argument. And there will probably be no argument for too long because to argue too much would cross over into the zone of blasphemy. And most people (except maybe for the tiny sliver educated in Western or Western-style universities and out of touch with their own traditions almost completely) believe that blasphemers should be punished, and at least for the most extreme kinds of blasphemy, the punishment should be death. This, by the way, is just a simple empirical fact, easily checked if you step out among the people in that region.
4. Whenever the existing state order (in almost all cases, the product of recent Russian or West European colonization, so somewhat suspect in any case) falls apart, the next common denominator tends to be Islamist. And among those Islamists, the ways of the golden age are not some distant myth. Those books are still around, still honored, still relevant, still protected against criticism by blasphemy and apostasy memes. And those books include rules for holy war, for slave holding, for female legal inequality, etc., that are no longer fashionable in the modern world. That is just how things happen to be.
5. The ruling elites in most Islamicate countries are not Islamist in practice and may not be so in principle either. But having taken the path of least resistance (or having received their Islam from Karen Armstrong or post-Marxist theorists) they have acquiesced in the glorification of medieval Islamicate norms, not as past history but as guides to present behavior. They will now be (literally in many cases) hoist on their own petard.
6. Elements of the ruling elite (especially in South Asia, where penetration of Western postcolonialist/postmodern/post-Marxist garbage has been most extensive within the elite) are vigorously opposed to many of these medieval norms, but have disappeared into an alternate universe where only White people have agency and therefore only White people are responsible for all events. This has effectively taken them out of the equation for now. They remain mostly harmless, but the opportunity cost of their withdrawal into la la land is not insignificant.
7. As the Bill Maher-Ben Affleck affair has shown, Western Liberals are generally clueless about Islamic history and the status of (most of) the Islamicate world with regard to issues like freedom of religion, freedom of speech, feminism and suchlike. This is NOT to endorse a particular Whiggish vision of history as the only valid path, with every community situated somewhere along the timeline from barbarian to Western liberal democracy. But it is to emphasize that opting out of this linear timeline is one thing, pretending that everyone is already at point X on the timeline while paying lip-service to multiculturalism is another. If Ben Affleck thinks that Western standards of “liberal democracy” (however defined and whether regarded as an endpoint or not) are not to be applied to everyone on the globe and that these standards are being used to demonize and colonize those who hold to different values and models, then he has a leg to stand on. But he (or others like him) seem to lose this admirable level of “nuance” when they get to specifics. Instead of saying that Pakistani Muslims do not permit free speech when it comes to X, Y and Z and who are we to comment or interfere (especially when we are just using this commentary to justify our invasion of this or that country), they are saying “there is no real difference in free speech norms between X and the US”, which is patently absurd. Other liberals (too numerous to list) will look at history as if European powers have real histories (with colonization, oppression, invasions, decimations etc, also with progress, emancipation, democracy, etc.) and everyone else lived on some other static planet with no history, no past and no future. I don’t have to go into detail, Wikipedia can solve this issue for anyone these days, but it is still surprising how few people will bother to even read Wikipedia before brandishing absurdities in this matter. 
The opportunity cost for this (loss of some Western liberals) is perhaps insignificant in real life, but since I tend to interact with some of these (very nice) people, I obsessively comment about them. Hence this comment.
8. More after I get some feedback; many or most of these comments are very likely to be misinterpreted by many people. This is partly because I am not a good enough writer, but partly because all of us use various heuristics to slot every commentator into pre-existing boxes. To see a little of where I am coming from, some of the following articles may be helpful. Thank you.
Monday, October 13, 2014
The Brooklyn Gentrifier's Playbook
"A New Yorker is someone who longs for New York."
These days, when the inevitable question of "What do you do?" pops up at a cocktail party or some such, I now simply answer, "I live in New York." A credulous follow-up might wish to clarify whether that is, in fact, how I make my living, at which point I try to steer the conversation to kinder, gentler topics. But after living in New York for 15 years, I feel my response is both perfunctory and justified. Anyone so deeply immersed in the city knows that living here really is its own, full-time occupation, since the city demands constant observation and reflection. And New York is especially amenable to this, given the breadth, density and accessibility of the city's neighborhoods, as well as New Yorkers' guileless embrace of real estate as a primary subject of conversation. It is perhaps the only city I know of where a stranger can walk into your apartment and ask, within the first 15 minutes, how much rent you pay for the privilege, and expect an answer.
In this vein, there has always been much talk about gentrification: where it is happening right now and where it will happen next, whether the desirability of the outcomes outweighs the costs, and, especially, who is being ousted. This last is not so much about the residents themselves, but rather the ongoing disappearance of beloved restaurants, bars and retail establishments, for example as documented by Jeremiah Moss's Vanishing New York. So what can be said about gentrification that has not already been said? Honestly, not a whole lot. There are still no good answers or responses, especially as New York reassesses its post-Bloomberg future.
However, gentrification has increasingly been treated as a monolithic concept, when in fact it is an umbrella term describing a continuum of variegated and uneven urban processes. The ‘improvement' of any neighborhood is the result of a bevy of actors, operating within a legal and social context that is unique to that neighborhood, and that itself sits within the larger context of the city and the state. Finally, even global financial circumstances play a role, for example, artificially low interest rates and the ease with which capital may travel. When gentrification is seen as a monolithic process, it is difficult to think about it as anything other than inevitable. But if we consider the different processes that are subsumed under this single rubric, or more accurately, the different scales and velocities at which gentrification occurs, then we will be better equipped to engage the phenomenon itself, and not merely the label.
The late geographer Neil Smith clearly identified this in the late 1970s. First in his dissertation and then in his subsequent work, he characterized gentrification, especially in its accelerated forms, as fundamentally a process of capital, not of people.
Since the 1970s, gentrification has shifted from a marginal, fragmented process in the housing market to a large-scale, systematic and deliberate urban development policy. Gentrification has deepened as a comprehensive city-building strategy encompassing not just the residential market, but recreation, retail, employment, and the cultural economy.
Michael Bloomberg's three terms as mayor of New York City carried the precise hallmarks of such a "large-scale, systematic and deliberate urban development policy," or what could also be termed a love-fest between developers and city officials. While marquee projects such as the (successful) Atlantic Yards and (unsuccessful) Midtown East projects occupied most of the media spotlight, what remains less appreciated is the sheer scope of rezoning undertaken by the administration: upwards of 120 rezonings, almost all of which were approved, will continue to reshape the contours of New York for decades to come.
But how? At first, it may be surprising to hear that "the city planning department doesn't track…how much potential space was gained or lost, or how much value it's created by enabling development" for any given rezoning. However, zoning itself is not a monolithic concept: a block may be ‘upzoned,' ‘downzoned' or left unchanged (also known as ‘contextual'). Zoning delimits the ultimate population density for a given lot, and in fact, from 2003 to 2007, the net result was only a 1.7% net increase in capacity. This immediately leads to the next question: Who gets what kind of zoning? The contours of rezoning become clearer when one understands that
Upzoned lots tended to be in areas that were less white and less wealthy, with fewer homeowners. Downzoned lots tended to be areas that were more white and had both higher incomes and higher rates of homeownership than upzoned areas. Areas with contextual rezoning were even whiter and richer (with median incomes "much higher than that of the city"), and had "very high rates of homeownership." In other words, more privileged people were more likely to have the city change the zoning of their neighborhoods to preserve them exactly as they were.
Understood this way, the possible pathways for New York become clearer: rezoning defines and guarantees its own success. But rezoning is really only the beginning of real estate development. There is still the procurement of permits and the appeasement of local community boards. But developers are used to playing the long game, and one of the legacies of the Bloomberg (and Giuliani) administrations is a massive, tangled infrastructure of committees, advisory boards and public-private partnerships where real estate developers mix with city officials in order to clear hurdles, this being most easily achieved outside of the public eye and behind closed doors. (For an exceptionally clear-eyed exposition of this bureaucratic juggernaut, see the excellent documentary My Brooklyn by Kelly Anderson).
The bodies are buried in plain sight. I have already written about the fate of the Fulton Fish Market, which remains little changed today. For its part, ‘My Brooklyn' documents the redevelopment of Brooklyn's Fulton Mall and its impact on the African-American and Caribbean communities that depended on that commercial district. And the systematic dismantling of community resistance to the Atlantic Yards project was a big-city real estate bruise-fest whose definitive history remains as yet unwritten, but will doubtless launch a thousand urban social justice dissertations. Like the Bloomberg administration's zealous rezoning campaign, this web of governance is set to endure for a long time, and in the meantime, Brooklyn is, in fact, becoming poorer.
These, then, are the macro policies that drive large-scale gentrification of substantial swathes of New York. However, there is a smaller scale at which gentrification operates, and one that is largely invisible to the media. Nevertheless, its effects on neighborhoods are no less decisive. As an example, consider the story of another part of Brooklyn, that of Franklin Avenue in Crown Heights. "The Ins and The Outs" is a vital and broad-ranging article, written by Vinnie Rotondaro and Maura Ewing, on the changing nature of one of Crown Heights' principal commercial thoroughfares. While readers outside of New York may most clearly remember it as the neighborhood gripped by a race riot back in 1991, after a generation Crown Heights has now been Columbused as the newest Brooklyn hotspot, with Franklin Avenue as its pulsing heart.
I have been to Franklin Avenue over the years but have been going more frequently, thanks to a friend who recently moved to the neighborhood. The rapidity of the transformation is nothing short of astonishing – in fact one of the defining features of gentrification in New York is that each episode seems to take less time than the previous. Franklin Avenue seems to follow the standard pattern of development, where delis become swish bars and pawn shops are replaced by up-market retail. And yet everything happens for a reason. One of these reasons has been MySpace Realty.
As documented by Rotondaro and Ewing, MySpace (and possibly a few shell corporations under its control) have engaged the neighborhood's landlords, aggressively making offers to buy buildings for cash. For MySpace, a landlord who says ‘No' only means ‘No' today. Once a building is sold to MySpace, it is time to get the residents out of the building, so that it can be renovated and put back on the market for rental rates that can be several times the existing rent. Most tenants, lacking savvy, are bought out at a discount, or even made to think that they have little choice in the matter. The holdouts – some of whom have been living in the building for decades and cannot afford to live anywhere else in the area – are then subjected to the usual shenanigans of deferred repairs, ignored infestations, etc. Lather, rinse, repeat.
MySpace is using an old playbook, of course. Just as Anderson documented the strong-arm tactics of big-league developers in ‘My Brooklyn', Rotondaro and Ewing narrate a history of similar behavior, but writ on a much more local scale. The results are much the same, however: a process of divide-and-conquer by capital leads to the decrease of the availability of affordable housing stock in a given neighborhood. It is also important to recognize that MySpace Realty's actions do not exist in isolation. As Franklin Avenue has become more ‘hip', the neighborhood has been primed for larger developers to buy up lots that are beyond the reach of a local firm: the Goldman Sachs Urban Investment Group was part of a consortium that purchased a nearby property that will likely become a luxury mixed-use development, with about $20m to be invested in the near future. And this is only one of several such transactions happening in the area. As one of the locals put it, "I don't know how to beat this. I don't know how anyone can beat this machine."
This same resident also asked the real question at the heart of any gentrification process: "I still think there's a better and more ethical way to get from a broken down, crime-ridden, drug-ridden neighborhood to a place that is safe and enjoyable for everyone while still maintaining a sense of community ownership." Capital can only provide a partial and ultimately unsatisfactory answer to this question – left to its own devices, it can only produce cookie-cutter development at market rates, with the end result being nothing but the relentless homogenization of any given neighborhood. The same people, shops and restaurants. Ironically, perhaps only the housing stock will remain to bear mute witness to the unique flavor that a neighborhood once had.
It is somewhat like the old philosophical paradox of sorites – if you have a heap of sand, and you remove grain after grain, at what point do you no longer have a heap of sand? What sorites points out is that we have ultimately failed to define what a ‘heap' is in the first place. Without this definition, you cannot know when a heap ceases to be a heap. Gentrification functions similarly – at what point does improvement become gentrification, or, to continue with the analogy of the heap, at what point is gentrification no longer that, but rather improvement?
I was reminded of this when my friend Alex Castle posted a wonderful essay on his own experience, somewhat misleadingly titled "Gentrification Is My Fault". Fittingly, it's in the form of a blog post. I say fittingly, because it is both interesting and important to note the commensurate nature of the media describing each of these levels of gentrification: the largest process is worthy of an acclaimed documentary; the local level merits long-form journalism; and the smallest is only given voice by its protagonist's memoir. Fitting, of course, is not the same as just, so it is important that these latter voices be given their due.
Castle's essay details the haphazard way in which he and his wife came to own a limestone townhouse in Prospect-Lefferts Gardens, which was then a fairly rough-and-tumble section of Brooklyn, one that is in fact on the southern border of Crown Heights. Through a mix of good timing, thrift and hard work – all vital ingredients of the American Dream – the Castles have created exactly that for themselves. What I appreciate even more deeply is the way that Alex invested himself in the ownership and improvement of his home and, by extension, the neighborhood:
I didn't displace anyone; the place was abandoned, the basement was flooded with shit and the doors had been battered in. I spent the first five years we lived here working on the house all day and bartending all night. When I started I had no skills, I couldn't drill a hole in a board without splitting it. Now I know how to do wiring, framing, sheetrock, I can frame and hang a door (interior or exterior), put in a dishwasher, tile the floor. It took a long time, but it only cost materials.
But what is striking about this personal history – and this is the kind of story that can only be told as a personal history – is the ambivalence that even this engenders. On the one hand, through their temerity and foresight, the Castles expect that, by the time they retire, the mortgage will be paid off and they will be able to live off the income from renting their extra apartment (in New York, this is what's known as ‘winning'). But as Alex muses, "if Bruce Ratner calls me tomorrow and offers me $5 million for this house, is it my responsibility to ask what's going to happen to the property after I'm gone before I sell? Or am I just reaping the benefits of good planning?"
The Castles' experience echoes Neil Smith's point of departure in his own analysis of gentrification: "a marginal, fragmented process in the housing market." Thus, while tempting, it would be wrong to think that the fragmented and marginal become obsolete simply by virtue of the rise of capital. It's clear from this last example that all of these processes co-exist and eventually negotiate with one another – it is simply a consequence of the way in which a city embodies its limited, valued space. Even the much larger forces of capital-driven gentrification must still contend with property rights and the intentions and desires of smallholders who have invested decades of savings and work into their particular corner.
More importantly, the best bulwark against the kind of gentrification we all seem to wring our hands over is precisely the people who are perfectly aware of their rights and have no illusions about the true value of their stock. I am not making some petite-bourgeoisie argument here: this is as true (and vital) for tenants as it is for landlords. The only thing that is missing is all the other stories like Alex's. Where are they? Who is recording them, and bringing those people together into what is likely a common cause that is nevertheless representative of each person's own interests? I am perhaps being optimistic, but as Jefferson wrote, albeit in a different context, "Whenever the people are well informed, they can be trusted with their own government."
Monday, September 15, 2014
The View From Nowhere
"Well, I haven't been there yet, and shall not try now."
~ Conrad, Heart of Darkness
Marlow, the protagonist of Conrad's Heart of Darkness, remorsefully blames an old obsession with maps for his eventual captaincy of a ramshackle steamship, set on a doomed mission up the Congo River. But Marlow was irretrievably fascinated by the blanks on the map – those were the places worth going to. These days, when we look at a map, we expect objectivity and specificity, or to put it bluntly, the truth. Our sense of entitlement has only grown with the thoroughness with which maps have enmeshed themselves into our daily lives, whether it is via the GPS devices that guide our cars, or the maps on our smartphones that help us walk a few blocks of a city, familiar or not. We may forego the flâneur's pleasure of asking a stranger for directions, but where a certain calculus is concerned, it seems a small price to pay for getting us, without undue delay, to where we need to be.
There are no more places where cartographers must write terra incognita, or where myths and rumors were recruited as phenomenological filler. For just as nature abhors a vacuum, a map is a canvas that demands to be crammed with seemingly confident observations, and it would appear that every nook and cranny of the planet has already had some physical characteristics reassuringly assigned to it. Thus when maps fail us, we are left to decide whom to blame – the map, or ourselves.
I will give you a hint: we never blame ourselves. Rather, it is the map that is inadequate. But what this really implies is our refusal to abandon the conviction that there will be some future map that will capture the truth. Correlating directly with its pervasiveness, it becomes too easy to pass over the obvious fact that cartography is, like anything else, a fundamentally social practice. Consider not only how immersed we are in maps, as with the example of GPS, but also how extensively, constantly and surreptitiously we ourselves are mapped. Every time you allow an app on your smartphone to "Use Your Location," indeed with every swipe of a credit card, you are effectively offering yourself, or rather some quantifiable aspect of yourself, to some kind of mapmaking project, the vast majority of which you will never be aware of, let alone see. We are, in fact, subjects of a distinctly cartographic flavor of what Michel Foucault called the clinical gaze.
When we are thus swaddled in information that provides so much convenience and seems to ask so little in return – in fact a bribe, but an exceptionally effective one – the occasional failure of maps can be galling (or sometimes entertaining). Because we are convinced that a better map is always already right around the corner, this anxiety does not last. But what comfort is there when we are confronted with things that resist mapping?
The classic thought experiment here is Benoît Mandelbrot's seminal 1967 paper, published in Science, "How Long Is the Coast of Britain?" For the present purposes, I will only describe Mandelbrot's premise: the measurement of an irregular natural surface such as Britain's coastline is dependent on the unit of measurement. So if we were to use a yardstick with a unit length of 200km, we might conclude that the length of the coastline is 2400km, whereas if our yardstick were 50km, we would assert a length of 3400km. Indeed, as the unit of measurement approaches zero, the observed length of the coastline approaches infinity.
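The pattern behind these numbers can be sketched quantitatively with Richardson's empirical scaling law, which Mandelbrot's paper builds on (the law itself, and the constants below, are not given in this essay): the measured length behaves as L(g) = F · g^(1−D), where g is the yardstick length and D > 1 is a fractal dimension. A minimal sketch, assuming the D ≈ 1.25 commonly quoted for the west coast of Britain and calibrating F to the 2400 km figure above:

```python
# Richardson's empirical scaling law for a fractal coastline:
#   L(g) = F * g**(1 - D)
# where g is the yardstick length (km) and D > 1 is the fractal
# dimension. D = 1.25 is the value often quoted for Britain's west
# coast; F is calibrated so a 200 km yardstick yields 2400 km.

D = 1.25
F = 2400 * 200 ** (D - 1)  # calibration constant

def measured_length(yardstick_km):
    """Apparent coastline length (km) at a given yardstick size."""
    return F * yardstick_km ** (1 - D)

for g in (200, 100, 50, 25, 1):
    print(f"yardstick {g:>4} km -> ~{measured_length(g):5.0f} km of coastline")
```

With these assumed constants the 50 km yardstick comes out near the 3400 km quoted above, and the measured length grows without bound as the yardstick shrinks toward zero.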
For Mandelbrot, this is a mathematical problem, and he uses the example to posit a method for approximating length. Eventually these and other investigations would lead him to elaborate the theories of self-similarity for which he is justly famous. But in the introduction to the paper, Mandelbrot writes:
The concept of "length" is usually meaningless for geographical curves. They can be considered superpositions of features of widely scattered characteristic sizes; as even finer features are taken into account, the total measured length increases, and there is usually no clear-cut gap or crossover, between the realm of geography and details with which geography need not be concerned.
One of the advantages of Mandelbrot's mathematical approach is that it allows him to elide that essential question: Where is the "clear-cut gap or crossover"? For mapmakers, identifying that gap or crossover is at the heart of cartography. It may well decide the ultimate utility of a map to someone navigating a route in the physical world. And this is a decision that must be made by people. It is not enough that the map is right; it must also be right in the right way.
I want to be clear that I am not talking about what is commonly called ‘usability', or the loose set of principles that designers use to make legible their interventions in the world. ‘Usability' is a red herring, in the sense that the process of dressing up cultural artefacts, whether physical or virtual, for ‘usability' occurs only after the decisions of what should be ‘usable' (ie, legible) have already been made. To invent a brief and perhaps absurd example, consider a highway map. If we are driving, we use such a map to get from A to B, where points A and B are reachable by car. Thus, highways and side roads will be prominently featured; other geographic features such as elevation may or may not be relevant. But cartographers also locate significant landmarks to inspire detours (for an Information Age example, see Rand McNally's TripMaker), thereby implying that these are good things that belong on a map. On the other hand, these same maps will never include locations that we may want to avoid, such as Superfund sites. It is not difficult to imagine that a family with young children would want to know about – and avoid driving through – regions thick with pollution from, say, coal-fired power plants. We may initially react to this by saying "But these things do not belong on a map." Well, why wouldn't they? If instead our design brief were to create a map that would allow us to determine the healthiest route from A to B, our highway map may look very different indeed.
The decision to not include such items is intrinsically ideological and, as we will see below, also explicitly political. It is only through repeatedly being shown what a map is that we come to believe what a map should be. We are rarely told what a map is not. But at each turn we are assured of the objectivity that is at the heart of the enterprise.
Objectivity, understood as a sort of neutral omniscience, was tartly characterized by philosopher Thomas Nagel as "the view from nowhere." But having nowhere as one's originary viewpoint is akin to being lost inside one of Mandelbrot's endless, scale-free fractals. It is also of little help when we attempt, as we must, to relate our knowledge of the world to the world itself (although for Nagel, reconciling the two is precisely what is needed to create an individual's worldview). Thus objectivity, or at least the set of social relationships and productions of knowledge that we ascribe to the idea of objectivity, is in fact a moral stance. Why?
Like anything else, objectivity has its own history. In a fascinating paper called "The Image of Objectivity" (and later a much more extensive book) Lorraine Daston and Peter Galison unpack this "panhistorical honorific [bestowed on] this or that discipline as it comes of scientific age." For them, the workings of objectivity are most apparent when manifested visually, specifically in the way atlases of many varieties – anatomical, botanical, X-ray – have been created and consumed over the centuries. These are not works of neutral omniscience, but artefacts that tell us "what is worth looking at and how to look at it." And to say that this is a moral practice is not far-fetched. They find that
…objectivity is a morality of prohibitions rather than exhortations, but no less a morality for that. Among those prohibitions are bans against projection and anthropomorphism, against the insertion of hopes and fears into images of and facts about nature: these are all subspecies of interpretation, and therefore forbidden. (p122)
Cartography has evolved in a similar fashion. From early cartographers inscribing empty spaces on their maps with "Here Be Dragons" (actually, they didn't) to Google Earth, one might think that there is a flawed but inexorable march towards an ever-finer approximation of reality (if not objectivity). After all, as Daston and Galison write, the moral imperative of objectivity recognizes that "the phenomena never sleep and neither should the observer; neither fatigue nor carelessness excuse a lapse in attention that smears a measurement or omits a detail; the vastness and variety of nature require that observations be endlessly repeated." And yet, there are forces at work that are greater than cartography and the technologies that have transformed it in the last few centuries, and these too should be recognized.
I came across a most extraordinary example of these other forces last week, in a long-form reportage by the Times-Picayune's Brett Anderson. "Louisiana Loses Its Boot" is Anderson's attempt to reconcile the rapidly changing (that is, receding) coastline of the state with the fact that the official state map has not been updated in fourteen years, and isn't likely to be any time soon. What he finds is a toxic mix of, on the one hand, galloping erosion and, on the other, benighted legislation that seems dead-set on ignoring the former. As a result, "the boot is at best an inaccurate approximation of Louisiana's true shape and, at worst, an irresponsible lie." (All citations below are from this article).
To be sure, Louisiana was always a devilishly difficult entity to map. The Mississippi is a notoriously fickle river, given to not just flooding its banks but rewriting them wholesale, as Harold Fisk's maps from the 1940s illustrate. And yet it is precisely this process that replenished the coastline: new sediment allowed vegetation to take hold and create adequate breakwaters and barrier islands, which in turn kept hurricanes from being the Gulf of Mexico's shock troops. The coastline was shifting constantly, but it was not receding. In fact, it was expanding. But once the Army Corps of Engineers "stabilized" the Mississippi in order to ensure commerce, this process of replenishment was severely stunted. As a result, hurricanes such as Katrina have had much greater impacts than would otherwise have been possible. The need for structural modification is not just limited to the river, either. Louisiana is the nation's second-largest oil producer and has "over 9,000 miles of navigation and pipeline canals…dredged in the state's coastal marsh." Adding projected sea level rises to the mix does not promise to make things any more pleasant.
One would think that the physical uncertainties of the situation would therefore call for as ‘objective' an approach to mapmaking as possible. After all, even without factoring in human impact, it is probably difficult enough to decide what is ‘walkable land' and what is not. Instead, the conflicting priorities of the fishing and energy industries have kept Louisiana's famously corrupt politics from mandating a responsible accounting. Additionally, "the Department of Transportation and Development and the U.S.G.S. would have to agree on a shape and then implement a costly replacement plan for images currently in circulation." Oh, dear. And the U.S. Supreme Court has done its part to command the tides, too, when it decreed in 1981 that "the state boundary of Louisiana was no longer an ambulatory line that could move in response to changes in the coastline, and was henceforth immobilized as a set of fixed coordinates."
In this case, we resist any sort of accurate map only in order to avoid blaming ourselves. We would rather have the maps lie to us, for as long as possible. In the meantime, and wholly apart from this tragicomic legislative context, an acre of coastal land is being lost every hour. So even if there were agreement, what kind of map could be created that would do coastal Louisiana justice? In Anderson's view, one that would cast the situation in a clear and unforgiving light: hence the loss of the boot. Such a map could only be a political tool:
A more honest representation of the boot would not erase the intractable disagreements — around global sea level rise, energy jobs versus coastal restoration jobs, oil and gas companies versus the fishing industry — that paralyze state politics, but it would give shape to the awesome stakes, both economic and existential, that hang in the balance.
Anderson's campaign to make the map explicitly political goes against the cartographic gaze that I described above, with its decentralization of power and accountability. It is no wonder that it has been met with resistance. But is it enough? When one looks at the current, tranquil state map of Louisiana, none of this decay, let alone conflict, is apparent. Of course, a citizen traveler might be roused to indignation if not action, once he attempts to reach a destination that no longer exists: a swamp where there was once a camp, the vast reaches of the Gulf where there was once a causeway or a barrier island. But how many people are there of that ilk?
And so we have come full circle in cartographic irony: from speculative maps that included places that never existed, to objective maps that show us places that no longer exist, but pretend as if they do. After all, what Marlow found, far up the Congo River and in the darkness of the human heart, could never be marked on a map. But for what can be recorded, whether it is Louisiana's coastline, the Arctic ice cap, or various star-crossed Pacific islands, we can only hope that, as Borges once wrote, "in time, those Unconscionable Maps no longer satisfied."
Monday, June 23, 2014
A Far-Reaching Liquidation
"For the last twenty years neither matter
nor space nor time has been what it was."
~ Paul Valéry, 1931
Ever since Napster tore through the music industry like an Ebola outbreak, there has followed a ceaseless hand-wringing about the ever-decreasing "value" of music. Chart-busting hits have been replaced by body blows to an industry that was once fat and happy. From Napster's peer-to-peer networking model to the current ascendancy of streaming services, the big labels have seen their fortunes scrambled and re-scrambled by the onrushing and ever-changing technological landscape. This is further complicated by the fact that young people are its most desired demographic, but are also the most ardent adopters of said inconvenient technologies. It's easy to say that there is no going back – and there isn't – but how can artists respond to this seemingly unstoppable race to the bottom, now that the link between a work of music, and the physical artifact that is its vehicle, has been permanently sundered?
Earlier this spring, we received a candidate answer from the venerable hip hop outfit Wu-Tang Clan. The Wu-Tang have been secretly recording a new double album for several years, an event that would commonly be greeted with much rejoicing by their legions of fans. However, the zinger is that only one copy of the album will be made, destined to be sold to the highest bidder. Even more interesting is the fact that, prior to the auction, the record will tour "festivals, museums, exhibition spaces and galleries for the public as a one off [sic] experience." (Imagine the stringency of the security that will be required to keep this particular cat in its bag; I am already anticipating the Twittersphere lighting up in outrage as museum staff shine flashlights into people's ear canals, conduct full body cavity searches, and generally out-TSA the TSA.)
Of course, such acts of conceptual brazenness are usually (and usually regrettably) accompanied by a manifesto, and Wu-Tang does not disappoint...
...although they seem to prefer the term "edictum":
Is exclusivity versus mass replication really the 50 million dollar difference between a microphone and a paintbrush? Is contemporary art overvalued in an exclusive market, or are musicians undervalued in a profoundly saturated market? By adopting a 400 year old Renaissance-style approach to music, offering it as a commissioned commodity and allowing it to take a similar trajectory from creation to exhibition to sale, as any other contemporary art piece, we hope to inspire and intensify urgent debates about the future of music. We hope to steer those debates toward more radical solutions and provoke questions about the value and perception of music as a work of art in today's world.
Now, the Wu-Tang boys bring up a real issue here. It's not hard for musicians to look at the contemporary art world, with its bloated traffic in fetishized objects that seem to spring, fully formed, from an inexhaustible well of cynicism, and wonder what wrong turns their own art form has taken. The concept itself has a very appealing simplicity to it as well: it is the re-attachment of the content to its vehicle. And what a pretty vehicle it is, too. But what kind of a "radical solution" is this? Because once the auction goes through, whoever buys it owns all the rights to the music. They can distribute the album or simply squirrel it away for personal listening pleasure. They can bury it in their backyard, or douse it with gasoline and torch it. They can be as democratic or as perverse about it as they may feel inclined.
However, my disquiet runs even deeper than that. From the "conceptus" (!) page of the album's site, we read that
…a new approach is introduced, one where the pride and joy of sharing music with the masses is sacrificed for the benefit of reviving music as a valuable art and inspiring debate about its future among musicians, fans and the industry that drives it. Simultaneously, it launches the private music branch as a new luxury business model for those able to commission musicians to create songs or albums for private collections. It is a fascinating melting pot of art, luxury, revolution and inspiration. It's welcoming people to an old world.
This nudge-nudge-wink-wink tone of noblesse oblige makes me think that the author intended for this copy to end up on the Financial Times' How To Spend It, a sort of Whole Earth Catalog for the One Percent. While I value the provocative nature of Wu-Tang's act, I wish that they had stopped there. But by dressing up an old patronage system in new clothes, they are pointing to a cul-de-sac in the conversation. This has nothing to do with the radical opening of possibilities. It is merely about the enshrinement of exclusivity. It also grates against the intrinsic ephemerality that is the very nature of music. Even if I possess the only extant recording of a certain piece of music, I still cannot "consume" it just by looking at the recording. I have to play it, and once I have played it, that moment is gone. This is the deep appeal of streaming services. But the Wu-Tang Clan has conjured up the most radical opposite imaginable. Is it still music if it's never played? Or if there's no one around to hear it?
(There is another, greater irony here. Hip hop was once the voice of the urban voiceless in this country, and despite its commoditization here, it has gone on to fulfill this role in many others. Has hip hop reached yet another apotheosis on the way to perfecting its self-worship?)
I cribbed the title of this post (as well as the Valéry quote) from Walter Benjamin's seminal 1936 essay "The Work of Art in the Age of Mechanical Reproduction." Anyone who has read (or who vaguely remembers reading) this essay would consider it the go-to critique for this sort of discussion. But Benjamin is mostly concerned with film and does not in fact mention music at all. It is also further problematic because Benjamin regards art as a point of contention between fascism and socialism – that the only possible response to the state gaining control of the reproduction of art is its politicization. The Wu-Tang stunt fits neither category. Instead, it's just another signpost along the way to the reductio ad nihilum of our late capitalist fantasyland.
However, there is another, more generous provocation that was offered by Beck in 2012. Beck, in conjunction with McSweeney's, released a new album, except he didn't record a single note. Instead, he released 20 songs as sheet music, and invited everyone to create their own interpretation. You can view the results at Song Reader, the site set up to collect all these contributions. This may seem precious and retro, the kind of winking irony that would be at home in a snooty Williamsburg coffee shop. But this gesture is not dissimilar to the kind of "instruction art" that was refined by John Cage and Sol LeWitt, where the fundamental idea is that people can – and should – create the work for themselves.
Of course, prior to the advent of radio and 78s, sheet music was the primary vehicle by which music was distributed and popularized, and as such formed a significant part of the connective tissue of a society's culture. In her article "Before the Deluge: The Technoculture of Song-Sheet Publishing Viewed from Late Nineteenth-Century Galveston" author Leslie Gay notes that "communication technologies like song sheets are implicated within the myriad ways we build social relations, make exchanges and create meaning". There is something very important here: the idea of being a mere consumer is discarded. It is quite simply impossible. As a score, music only exists in its potential form. The musician is the vehicle. Put another way, the siting of "value" has shifted from the monetary expectation of the producer, to the experience of the participants.
Take as an example Russia in the 19th century, where orchestras would go on long tours. People in the town would know not only when the orchestra would come to town, but what it would be playing, sometimes months in advance. So households would procure piano reductions and work through the scores in anticipation of the big night. One can only imagine the intimacy with which the listeners were able to "consume" the music, having played through and argued over many of each work's nuances. In this way, the act of consumption was in fact replaced by an act of consummation.
Similarly, what makes the Song Reader project really groundbreaking is its expectations. In order to engage the work, you have to know how to read music. And I mean really read music – there are no guitar tabs here. There is something fascinatingly paradoxical about this. On the one hand, the fact that there is no authoritative recording – so far Beck has yet to put out a disc of his own interpretations – implies a vast artistic freedom. On the other, that world is only open to those who have a sufficient degree of a very specific kind of literacy (one that, nevertheless, was much more common a century ago than it is now). What Beck offers us is an invitation to engage deeply with the world around us, whether it is in the form of the text of the score, the playing of our fellow musicians, or the interpretations created by others. Having worked through this text ourselves, we are in a much subtler place, one that can appreciate why certain decisions may have been made or ignored. We have created a foundation for critique, and for pleasure.
The other, even more important implication in Beck's act is one of trust. Consider the courage that an artist must have in order to issue his art in the form of instructions. I'm pretty certain that Beck knows exactly how he thinks his songs should sound. I don't know if he thinks that he is more qualified than anyone else to interpret them. I know that if they were my songs, I would think that way. But by only giving the instructions, Beck is saying that this latter concern really isn't relevant. He is essentially saying "I trust you" to his fans. There is an empathetic generosity that is really rather astonishing. And what is given back to him is a richness of interpretation that will doubtless have an impact on the way he views his own composing.
This rhizomatic conception stands in stark contrast with the idea of a final object that is perfect, authoritative and unique, as is personified by the Wu-Tang Clan's gesture. The rhizome is resilient and unpredictable, whereas the unique object is non-negotiable and brittle. On account of its uniqueness, the object's ownership has real consequences, whereas the ownership of a score of music is of much less relevance to the purpose of that score's existence.
For its part, technology is always telling us that it will catalyze society into new, more effective forms of social organization. It does not necessarily ask what society is doing already, and what the value of that activity might be. Simultaneously, technology oftentimes devalues our own participation in society and especially culture by ensuring that that participation has less at stake. We are assured that we no longer need to read music in order to pretend to understand it; it only matters that we possess it.
Thus, in a final twist that emphasizes the poverty of choices with which technology eventually presents us, two Wu-Tang fans became determined to ensure the album's dissemination. This took the unsurprising form of a Kickstarter campaign. Since there was a rumored $5 million offering price for the album, the job of finding enough consumers committed to an altruistic redistribution was a daunting one. Indeed, by the time the fundraising window closed, the project had only raised $15,400. Maybe Wu-Tang's fans should have asked for a score instead.
Why the Philosophy of Food is Important
by Dwight Furrow
There are lots of hard problems that require our thoughtful attention—poverty, climate change, quantum entanglement, or how to make a living, just for starters. But food? Worthy of thought? Most philosophers have ignored food as a proper topic of philosophical inquiry.
On the surface, it seems there are only three questions about food worth considering: Do you have enough? Is it nutritious? And does it taste good? If you have the wherewithal to read this you probably have enough food. Questions of nutrition can be answered by consulting your doctor or favorite nutritionist. And surely it doesn't take thought to figure out what tastes good.
But when we look more deeply at food we find some important issues lurking beneath the surface about which philosophy has traditionally been concerned. How we farm, what we eat, and how we cook have important social, political, and ethical ramifications—ramifications so important that we cannot think of these issues as purely private matters any longer. Some of the aforementioned "hard problems" have a lot to do with food. Our food distribution networks are anything but fair, leaving many people without enough to eat; and our food production and consumption patterns cause substantial environmental harm in part because of their impact on climate change. Our resource-intensive way of life, supported by an economic system that requires constant growth, is unsustainable especially because the rest of the world would like to emulate it. For example, it is estimated that if everyone in the world consumed our meat-heavy diet, we would need two planet earths to supply sufficient land, feed, and water.
We must learn to live differently, and that means, fundamentally, learning to desire differently—and to desire food differently.
How we problematize and refine desires and pleasures and attend to their moderation, balance, and harmony has been a philosophical topic since the Ancient Greeks. That discourse has never been more important than it is today and our food desires must now lie at the center of that discourse. Food is our most basic material need and ties together a vast number of issues from deforestation, to the use of fossil fuels, to the disappearance of local food markets. And all are tied to how we manage our desires. To ignore food as a philosophical issue is to ignore that foundational discourse regarding the management of desires that has been central to philosophy's history.
Unfortunately, philosophy in recent centuries has drifted away from those ancient concerns. The modern view of human beings as abstract epistemological subjects may lack the conceptual apparatus to think about the realm of contingent bodily needs, so philosophy may have to reinvent itself to learn to think critically about food.
But the significance of the philosophy of food does not wholly rest on its becoming a branch of applied ethics or social theory, a collection of topics for professional philosophers to consider. The aesthetics of taste, a component of the philosophy of food, should receive more thoughtful attention from non-philosophers as well. After all, if we must learn to manage our desires differently, we will likely accomplish that only through modifying the personal aesthetic judgments on which those desires rest, which again recalls an ancient discourse—philosophy as a way of life.
The aesthetics of taste is important because I don't think one can live well in our world without taking an interest in the aesthetics of everyday life; and because the enjoyment of food and beverages is among the most accessible and satisfying of our everyday experiences, we should care about it much more than we do.
Why is the aesthetics of everyday life so important? This famous quote from the film Fight Club provides the experiential background:
Man, I see in fight club the strongest and smartest men who've ever lived. I see all this potential, and I see squandering. God damn it, an entire generation pumping gas, waiting tables; slaves with white collars. Advertising has us chasing cars and clothes, working jobs we hate so we can buy shit we don't need. We're the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War's a spiritual war... our Great Depression is our lives. We've all been raised on television to believe that one day we'd all be millionaires, and movie gods, and rock stars. But we won't. And we're slowly learning that fact. And we're very, very pissed off. (Taken from Edward Norton's character in Fight Club.)
This could have been written by Theodor Adorno, if that profound but difficult thinker had written in the vernacular.
Most Americans live lives that are highly regulated and standardized via networks of management and control, governed by norms of efficiency and profit that crowd out any other value; and these norms increasingly colonize our home life, as well, thanks to intrusive media technologies. We tend to work long hours at boring, repetitive jobs that demand our full attention, in order to make someone else rich. And we evaluate our lives according to how well we conform to these norms—that is, if one's job is not outsourced to a machine.
Everyone needs a way to resist these demands, a place where beauty, pleasure and a focus on things that have intrinsic value occupy our attention. Finding extraordinary meaning in simple things and their particularity, such as a meal or a bottle of wine, is the most accessible path to a good life in this damaged world. That ordinary things are the greatest source of meaning is not a new thought—ancient sages from the Buddha to Epicurus had similar notions. But it is more relevant now than ever in an age where the pursuit of technical knowledge and efficiency promises the systematic elimination of anything that does not conform to the demand for quantification and standardization.
Of course the character in Fight Club creates a place where men get together and punch each other to feel better about their limited lives. I guess that is "aesthetics" of a sort—a sensory experience no doubt. But we can probably do better by seeking a form of beauty not tainted by violence.
One might object that taste is both subjective and trivial, and a preoccupation with such matters is useless and without any larger significance. No one cares about what I had for dinner except me. But the fact that taste is subjective and trivial is a feature, not a bug. For it is precisely the subjective and trivial, and taking delight in such matters, that escapes the clutches of instrumental reason, that resists the encroachments of a corporate mentality that translates everything of value into a commodity with a price and uses up every resource, both human and non-human, in order to line someone's pockets.
In this case, as in so many parts of life, the personal is political. Despite being a personal matter, a concern for taste is the first step in the shaping of our desires toward more sustainable forms.
Yet such a commitment means that we must refuse to accept what is false and inauthentic, and that we recognize and block the strategies of our corporate masters when they try to commodify our desires. When we outsource our practical reasoning to marketers, our desires are not our own. The only antidote to such outsourcing is critical thought, conceptual imagination, and a mind sufficiently open to fully appreciate the intrinsic value of what is before us, as food and drink almost always are. Philosophy can be—perhaps must be—enlisted in this attempt to keep the question of how one should live in focus, for philosophy has always sought to discover what is of intrinsic value.
As Epicurus said "Not what we have but what we enjoy, constitutes our abundance."
For more ruminations on the philosophy of food and wine, visit Edible Arts.
Monday, May 19, 2014
Epicycles of the Elite Left; The Price is Too Damn High..
by Omar Ali
This was to be an article about the latest outbreak of Blasphemy-mongering in Pakistan but after several friends brought up Pankaj Mishra’s article about the victory of the BJP in the Indian elections, I decided to change direction. I think far too many educated South Asian people read Pankaj Mishra, Arundhati Roy and their ilk. And I believe that many of these readers are good, intelligent people who want to make a positive contribution in this world. And I believe their consumption of Pankaj, Roy and Tariq Ali (hereafter shortened to Pankajism, with any internal disagreements between various factions of the People’s Front of Judea being ignored) creates a real opportunity cost for liberals and leftists, especially in the Indian subcontinent (I doubt if there is any significant market for their work in China or Korea yet; a fact that may even have a tiny bearing on the difference between China and India).
In fact, I believe the damage extends beyond self-identified liberals and leftists; variants of Pankajism are so widely circulated within the English speaking elites of the world that they seep into our arguments and discussions without any explicit acknowledgement or awareness of their presence. In other words, the opportunity cost of this mish-mash of Marxism-Leninism, postmodernism, “postcolonial theory”, environmentalism and emotional massage (not necessarily in that order) is not trivial.
This is not a systematic thesis (though it is, among other things, an appeal to someone more academically inclined to write exactly such a thesis) but a conversation starter. I hope that some of you comment on this piece and raise the level of the discussion by your response. And of course, I also apologize in advance for any appearance of rudeness or ill-will. I have not set out to insult anyone (except, of course, Pankaj, Roy and company; but they are big enough to take it).
1. There are some people who have a consistent, systematic and well thought out Marxist-Leninist worldview (it is my impression that Vijay Prashad, for example, is in this category). This post is NOT about them. Whether they are right or wrong (and I now think the notion of a violent “people’s revolution” is wrong in some very fundamental ways), there is a certain internal logic to their choices. They do not expect electoral politics and social democratic reformist parties to deliver the change they desire, though they may participate in such politics and support such parties as a tactical matter (for that matter they may also support right wing parties if the revolutionary situation so demands). Similarly, they are very clear about the role of propaganda in revolutionary politics and therefore may consciously take positions that appear simplistic or even silly to pedantic observers, if they feel that such a position is in the interest of the greater revolutionary cause. Their choices, their methods and their aims are all open to criticism, but they make some sort of internally consistent sense within their own worldview (as far as such things can be true of human beings and their motivations and actions). With these people, one can disagree on fundamentals or disagree on tactics, but either way, one can figure out what the disagreement is about. In so far as their worldview fails to fit the facts of the world, they have to invent epicycles and equants to fit facts to theory, but that is not the topic today. IF you are a believer in “old fashioned Marxist-Leninist revolution”, this post is not about you.
2. But most of the left-leaning or liberal members of the South Asian educated elite (and a significant percentage of the educated elite in India and Pakistan are left leaning and/or liberal, at least in theory; just look around you) are not self-identified revolutionary socialists. I deliberately picked on Pankaj Mishra and Arundhati Roy because both seem to fall in this category (if they are committed “hardcore Marxists” then they have done a very good job of obfuscating this fact). Tariq Ali may appear to be a different case (he seems to have been consciously Marxist-Leninist and “revolutionary” at some point), but for all practical purposes, he has joined the Pankajists by now; relying on mindless repetition of slogans and formulas and recycled scraps of conversation to manage his brand. If you consider him a Marxist-Leninist (or if he does so himself), you may mentally delete him from this argument.
3. The Pankajists are not revolutionaries, though they like revolutionaries and occasionally fantasize about walking with the comrades (but somehow always make sure to get back to their pads in London or Delhi for dinner); They are not avowedly Marxist, though they admire Marx (somewhat in the way “moderate Muslims” admire the Prophet Mohammed, may peace be upon him. Tribal loyalty is there, but it does not stand in the way of living a modern life. The prophet is more or less an icon, and the prophet’s hardcore followers have serious doubts about the “moderates” bona fides); They strongly disapprove of capitalists and corporations, but they have never said they would like to hang the last capitalist with the entrails of the last priest. So are they then social democrats? Perish the thought. They would not be caught dead in a reformist social democratic party.
4. They hate how Westernization is destroying traditional cultures, but every single position they have ever held was first advocated by someone in the West (and 99% were never formulated in this form by anyone in the traditional cultures they apparently prefer to “Westernization”). In fact most of their “social positions” (gay rights, feminism, etc) were anathema to the “traditional cultures” they want to protect and utterly transform at the same time. They are totally Eurocentric (in that their discourse and its obsessions are borrowed whole from completely Western sources), but simultaneously fetishize the need to be “anti-European” and “authentic”.
Here it is important to note that most of their most cherished prejudices actually arose in the context of the great 20th century Marxist-Leninist revolutionary struggle. e.g. the valorization of revolution and of “people’s war”, the suspicion of reformist parties and bourgeois democracy, the yearning for utopia, and the feeling that only root and branch overthrow of capitalism will deliver it; these are all positions that arose (in some reasonably sane sequence) from hardcore Marxist-Leninist parties and their revolutionary program (good or not is a separate issue), but that continue to rattle around unexamined in the heads of the Pankajists.
The Pankajists also find the “Hindu Right” and its fascist claptrap and its admiration of “strength” and machismo alarming, but Pankaj (for example) admires Jamaluddin Afghani and his fantasies of Muslim power and its conquering warriors so much, he promoted him as one of the great thinkers of Asia in his last book. This too is a recurring pattern. Strong men and their cults are awful and alarming, but also become heroic and admirable when an “anti-Western” gloss can be put on them, especially if they are not Hindus. i.e. For Hindus, the approved anti-Western heroes must not be Rightists, but this second requirement is dropped for other peoples.
They are proudly progressive, but they also cringe at the notion of “progress”. They are among the world’s biggest users of modern technology, but also among its most vocal (and scientifically clueless) critics. Picking up that the global environment is under threat (a very modern scientific notion if there ever was one), they have also added some ritualistic sound bites about modernity and its destruction of our beloved planet (with poor people as the heroes who are bravely standing up for the planet). All of this is partly true (everything they say is partly true, that is part of the problem) but as usual their condemnations are data free and falsification-proof. They are also incapable of suggesting any solution other than slogans and hot air.
Finally, Pankajists purportedly abhor generalization, stereotyping and demagoguery, but when it comes to people on the Right (and by their definition, anyone who tolerates capitalism or thinks it may work in any setting is “Right wing”) all these dislikes fly out of the window. They generalize, stereotype, distort and demonize with a vengeance.
You get the picture...or rather, you do not, because there is no coherent picture there. There are emotionally satisfying and fashionable sound bites that sound like they are saying something profound, until you pay closer attention and most of the meaning seems to evaporate. My contention is that what remains after that evaporation is pretty much what any reasonable “bourgeois” reformist social democrat would say. Pankaj and Roy add no value at all to that discourse. And they take away far too much with sloganeering, snide remarks, exaggeration and hot air.
5. This confused mish-mash is then read by “us people” as “analysis”. Instead of getting new insights into what is going on and what is to be done, we come out by the same door as in we went; we may have held vague but fashionable opinions on our way in, and if so, we come out with the same opinions seemingly validated by someone who uses a lot of words and sprinkles his “analysis” with quotes from serious books. We then discuss said analysis with friends who also read Pankaj and Arundhati in their spare time. Everyone is happy, but I am going to make the not-so-bold claim that you would learn more by reading “The Economist”, and you would be harmed less by it.
6. Pankajism as cocktail party chatter is not a big deal. After all, we have a human need to interact with other humans and talk about our world, and if this is the discourse of our subculture, so be it. But then the gobbledygook makes its way beyond those who only need it for idle entertainment. Real journalists, activists and political workers read it. Government officials read it. Decision makers read it. And it helps, in some small way, to further fog up the glasses of all of them. The parts that are useful are exactly the parts you could pick up from any of a number of well informed and less hysterical observers (if you don’t like the Economist, try Mark Tully). What Pankajism adds is exactly what we do not need: lazy dismissal of serious solutions, analysis uncontaminated by any scientific and objective data, and snide dismissal of bourgeois politics.
7. If and when (and the “when” is rather frequent) reality A fails to correspond with theory A, Pankajists, like Marxists, also have to come up with newer and more complicated epicycles to save the appearances; and we then have to waste endless time learning the latest epicycles and arguing about them. All this while people in India (and to a lesser and more imperfect extent, even in Pakistan) already have a reasonably good constitution and incompetent and corrupt, but improvable, institutions. There are large political parties that attract mass support and participation. There are academics and researchers, analysts and thinkers, creative artists and brilliant inventors, and yes, even sincere conservatives and well-meaning right-wingers. I think it may be possible to make things better, even if it is not possible to make them perfect. “People’s Revolution” (which did not turn out well in any country since it was valorized in 1917 as the way to cut the Gordian knot of society and transform night into day in one heroic bound) is not the only choice or even the most reasonable choice. Strengthening the imperfect middle is a procedure that is vastly superior to both Left and Right wing fantasies of utopian transformation. I personally believe that the system that exists is not irreparably broken and can still avoid falling into fascist dictatorship or complete anarchy (both of which have repeatedly proven to be much worse than the imperfect efforts of modern liberal democracy), but you don’t have to agree with me. My point is that even if the system is unfixable and South Asia is due for a huge, violent revolution, these people are not the best guide to it.
Look, for example at the extremely long article produced by Pankaj on the Indian elections. This is the opening paragraph:
In A Suitable Boy, Vikram Seth writes with affection of a placid India's first general election in 1951, and the egalitarian spirit it momentarily bestowed on an electorate deeply riven by class and caste: "the great washed and unwashed public, sceptical and gullible", but all "endowed with universal adult suffrage.
Well, was that good? Or bad? Or neither? Were things better then, than they are now? That seems to be the implication, but in typical Pankaj style, this is never really said outright (that may bring up uncomfortable questions of fact). It also throws in a hint that universal adult suffrage was a bit of a fraud even then. But just a hint. So are the “unwashed masses” now more gullible? Less skeptical? I doubt if any two readers can come up with the same explanation of what he means; which is usually a good sign that nothing has been said.
There follows a description of why Modi and the RSS are such a threat to India. This is a topic on which many sensible things can be said and he says many of them, but even here (where he is on firmer ground, in that there are really disturbing questions to be asked and answered) the urge to go with propaganda and sound bites is very strong. And the secret of Modi’s success remains unclear. We learn that development has been a disaster, but that people seem to want more of it. If it has been so bad, why do they want more of it? Because they lack agency and are gullible fools led by the capitalist media? If people do not know what is good for them, and they have to be told the facts by a very small coterie of Western educated elite intellectuals, then what does this tell us about “the people”? And about Western education?
Supporters will say Pankaj has raised questions about Indian democracy and especially about Modi and the right-wing BJP that need to be asked. And indeed, he has. But here is my point: the good parts of his article are straightforward liberal democratic values. Mass murder and state-sponsored pogroms are wrong in the eyes of any mainstream liberal order. If an elected official connived in, or encouraged, mass murder, then this is wrong in the eyes of the law and in the context of routine bourgeois politics. Those politics do provide mechanisms to counter such things, though the mechanisms do not always work (what does?). But these liberal democratic values are the very values Pankaj holds in not-so-secret contempt and undermines with every snide remark. It may well be that “a western ideal of liberal democracy and capitalism” is not going to survive in India. But the problem is that Pankaj is not even sure he likes that ideal in the first place. In fact, he frequently writes as if he does not. But he is always sufficiently vague to maintain deniability. There is always an escape hatch. He never said it cannot work. But he never really said it can either... To say “I want a more people friendly democracy” is to say very little. What exactly is it that needs to change, and how, in order to fix this model? These are big questions. They are being argued over and fought out in debates all over the world. I am not belittling the questions or the very real debate about them. But I am saying that Pankajism has little or nothing to contribute to this debate. Read him critically and it soon becomes clear that he doesn’t even know the questions very well, much less the answers... But he always sounds like he is saying something deep. And by doing so, he and his ilk have beguiled an entire generation of elite Westernized Indians (and Pakistanis, and others) into undermining and undervaluing the very mechanisms that they actually need to fix and improve.
It has been a great disservice.
By the way, the people of India have now disappointed Pankaj so much (because 31% of them voted for the BJP? Is that all it takes to destroy India? What if the election ends up meaning less than he imagines?) that he went and dug up a quote from Ambedkar about the Indian people being “essentially undemocratic”. I can absolutely guarantee that if someone on the right were to say that Indians are essentially undemocratic, all hell would break loose in Mishraland.
See this paragraph: In many ways, Modi and his rabble – tycoons, neo-Hindu techies, and outright fanatics – are perfect mascots for the changes that have transformed India since the early 1990s: the liberalisation of the country's economy, and the destruction by Modi's compatriots of the 16th-century Babri mosque in Ayodhya. Long before the killings in Gujarat, Indian security forces enjoyed what amounted to a licence to kill, torture and rape in the border regions of Kashmir and the north-east; a similar infrastructure of repression was installed in central India after forest-dwelling tribal peoples revolted against the nexus of mining corporations and the state. The government's plan to spy on internet and phone connections makes the NSA's surveillance look highly responsible. Muslims have been imprisoned for years without trial on the flimsiest suspicion of "terrorism"; one of them, a Kashmiri, who had only circumstantial evidence against him, was rushed to the gallows last year, denied even the customary last meeting with his kin, in order to satisfy, as the supreme court put it, "the collective conscience of the people".
Many of these things have indeed happened (most of them NOT funded by corporations or conducted by the BJP incidentally) but their significance, their context and, most critically, the prognosis for India, are all subtly distorted. Mishra is not wrong, he is not even wrong. To try and re-understand this paragraph would take up so much brainpower that it is much better not to read it in the first place. There are other writers (on the Left and on the Right) who are not just repeating fashionable sound bites. Read them and start an argument with them. Pankajism is not worth the time and effort. There is no there there…
PS: I admit that this article has been high on assertions and low on evidence. But I did read Pankaj Mishra’s last (bestselling) book and wrote a sort of rolling review while I was reading it. It is very long and very messy (I never edited it), but it will give you a bit of an idea of where I am coming from. You can check it out at this link: Pankaj Mishra’s tendentious little book
PPS: My own first reaction on the Indian elections is also at Brownpundits. Congratulations India
Monday, March 31, 2014
Uncle Warren Thanks You For Playing
by Misha Lepetic
"Is it the media that induce fascination in the masses,
or is it the masses who direct the media into the spectacle?"
I usually buy my cigarettes at a corner store, on Manhattan's Upper West Side, that, not unusually for such establishments, also does a brisk trade in lottery tickets. Now, buyers of both cigarettes and lottery tickets are placing bets on outcomes with dismally known chances of winning. My fellow consumers are betting that they will win something, and I am betting that I won't (I also console myself with the sentiment that I am having more fun in the process). But in both cases, the terms of exchange are clear – we give our cash to the vendor, and buy the option on the pleasure of suspense, waiting to see if we have won. Beyond the potential payout, there really isn't that much more to discuss: the transactions are discrete and anonymous. And in the end, someone always wins the lottery, and someone always lives to a hundred.
I was reminded of the perceived satisfactions of participating in games of chance with hopeless odds after hearing a recent piece on NPR discussing quite the prize: a cool $1 billion for anyone who nailed a 'perfect bracket.' In other words, the accurate identification of the outcomes of all 63 games of the NCAA men's basketball playoffs. Sponsored by a seemingly oddball trinity of Warren Buffett, Quicken Loans and Yahoo!, the prize is, on the face of it, an exercise in absurdity. But its construction is superb, and worth examining further, for reasons that have little to do with basketball, or probability, but rather for the questions it provokes around the value of information.
Now, bracket competitions have been going on at least since the tournament itself, which kicked off in 1939. Although brackets are common for other sports, there are unlikely subjects, too: saints and philosophers both have been thrown into pitched, single-elimination battle. But the NCAA bracket holds pride of place, not least because the number of participating teams is much greater than most other playoffs. This leads to the absolutely astonishing odds: if each game is treated as an independent coin toss, the odds of a perfect bracket are 1 in 9.2 quintillion, a number that even Neil DeGrasse Tyson might have difficulty contextualizing for us. Of course, the distribution of the initial round favors higher-seeded teams, so barring any first-round upsets, our chances may improve to a balmy 1 in 128 billion.
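The coin-toss arithmetic behind those odds is easy to check. A minimal sketch in Python (note that the 1-in-128-billion figure is the commonly cited estimate once seeding is factored in, and does not fall out of the coin-toss model itself):

```python
# A 64-team single-elimination tournament has 63 games.
# Treating each game as an independent 50/50 coin toss,
# the number of distinct brackets is 2**63.
games = 63
total_brackets = 2 ** games

print(f"{total_brackets:,}")   # 9,223,372,036,854,775,808 -> ~9.2 quintillion

# Probability of a perfect bracket under this model:
p_perfect = 1 / total_brackets
print(f"{p_perfect:.2e}")      # 1.08e-19
```

Seeding information shrinks the effective space enormously, which is why analysts quote figures in the billions rather than the quintillions; but even 1 in 128 billion leaves the sponsors' money quite safe.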
So we have at least an answer to the initial question of "What odds would make you feel comfortable enough to put up $1 billion?" Of course, if someone had won, Warren Buffett, whose net worth clocks in at about $60 billion these days, would have been on the hook, or rather his firm Berkshire Hathaway, whose market cap is five times the size of Buffett's wealth. (I mention both Buffett and his company because Buffett has thrown in a classic game theory move: he is willing to buy out anyone with a perfect bracket going into the Final Four for, say, $100 million.) In any event, it certainly would have been worth seeing the avuncular Oracle of Omaha show up at the door of the lucky winner with a giant cardboard check, just like Ed McMahon used to do with the Publishers Clearing House Sweepstakes. But if the chances of winning are nearly impossible, and there is no cost to enter the contest, we are left with a head-scratcher: who benefits?
There is an obvious pleasure to filling out brackets, of competing for the sake of competition, of measuring ourselves against not just one another but against the unknown. And certainly casual observers of what has become known as the "Buffett bracket" would not be wrong to point out that, on the face of it, Buffett et al. have come up with a great publicity stunt. But a publicity stunt, for all its Barnumesque splashiness, is intrinsically ephemeral. Its principal value lies in the fact that it grabs our attention and confers some brief benefit upon its initiators before sinking beneath the ebb and flow of the 24-hour news cycle. In this age of big data, where the world's most successful technology corporations thrive on dressing up "free" services with ever more finely targeted advertising, we ought to hope that there is a subtler angle.
And there is. Recall the three sponsors of our prize: Berkshire Hathaway, Yahoo! and Quicken Loans. In order to enter the competition, prospective bracketologists (that's a real word) had to visit a Yahoo! page, where they had to first open a Yahoo! account and then fill out a detailed Quicken questionnaire which elicited not just their name, home address, email and phone number, but much more importantly, whether they own their home or plan to purchase one in the future, and, if they own one, the current interest rate on the mortgage. For its part, Berkshire Hathaway receives a fee from Quicken and Yahoo! for insuring the competition, i.e., in case the payout actually happens, which it never will. Everyone's a winner, baby.
The benefit to these entities – particularly to Quicken, which specializes in mortgage lending – becomes apparent when one combines the quality of the information with the scale of participation. Concerning information, Slate, in one of the few clear-eyed articles on the matter, quotes a mortgage investment banker as saying that "it's not uncommon for companies like Quicken to pay between $50 and $300 for a single high-quality mortgage lead." While Quicken's spokespeople have been at pains to point out that only people who ask will be contacted, the fact is that all of the information on the entry form is required, which allows Quicken to create a massive database from which it can model all sorts of trends and behaviors.
How massive? At first, the organizers limited the number of entrants to 10 million, but based on the response sensibly increased it to 15 million. At this moment it's unclear how many people actually registered, and I doubt that this number will ever be disclosed. But if we take the low range of what Quicken pays for lead generation and assume that 1 million people opt to be contacted (i.e., 10% of the low end of the entrant population), Quicken has acquired $50 million of lead-generation value, and this does not include any revenue from leads that it manages to close. Even if we knock down the 10% by an order of magnitude, Quicken is still enjoying a $5 million freebie (of course, I am assuming honesty on the part of the respondents).
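The back-of-envelope math above can be reproduced in a few lines. A sketch in Python; the $50 per-lead price and the 10-million entrant cap come from the article, while the 10% and 1% opt-in rates are the article's own assumed scenarios, not reported figures:

```python
# Figures taken from the article:
lead_price_low = 50           # low end of the $50-$300 per-lead range
entrants_low = 10_000_000     # initial cap on the number of entrants

# Scenario 1: 10% of entrants ask to be contacted.
leads = int(entrants_low * 0.10)            # 1,000,000 leads
print(leads * lead_price_low)               # 50000000 -> $50 million

# Scenario 2: knock the opt-in rate down by an order of magnitude.
leads_conservative = int(entrants_low * 0.01)
print(leads_conservative * lead_price_low)  # 5000000 -> $5 million
```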
For its part, Yahoo! gains an equivalent number of users. Obviously, some will already be Yahoo! accountholders, but even if we assume that only half are new users, that is still 5 million fresh fish to subject to new ads, at least for a time. Berkshire Hathaway's benefit, aside from the insurance fee, is less clear, but the language in the contest rules leaves wide open the opportunity for sharing information between Quicken and the conglomerate (and if you have any doubts about the spurious protections afforded by these agreements, have a look at this 60 Minutes report).
So what? People are always giving away something in the hopes that they will gain something that is, in their perception, of even greater value. In the case of the Buffett bracket, even if what they finally get is nothing, I suspect there is still a pleasure in the act of playing – in other words, a bribe. But before discussing bribery, what interests me is the change in what's considered a fair trade. Any economist will maintain that a trade made without coercion is a fair trade, with the libertarian corollary being that people should not be protected from the consequences of their greed and/or stupidity.
But Western law has tended to draw the line at varying points. Nigerian letter scams and boiler room pump-and-dump schemes are illegal precisely because society has decided that there is a point beyond which people need to be protected from their cupidity. And the terms of engagement and success for the Buffett bracket are rather clear: in this sense, the contest is neither a fraud nor a scam. You pay to play, in a way that may not seem obvious or even harmful. But what is not transparent is the purposes for which that data is used, beyond the immediate consequence of the generation of consent, or the persistence of this data. Would people change the way they thought about giving up this information if they knew of the enormous subterranean infrastructure that trafficks in their personal details? Would they value it more? But if there are no mechanisms of valuation (ok, fine: free markets) that make the worth of this information apparent, how do we approach this?
Consider what happens when these mechanisms of valuation are not available to us as individuals. The master-stroke of the Buffett bracket is to force an extraordinary, cognitively unresolvable trade: it somehow makes perfect sense to divulge to some corporation the interest rate on your mortgage in order to gain the right to guess the outcome of a bunch of basketball games (a right which you had anyway, minus the impossible prize). And as proof, millions have chosen to do exactly this. The contest's creators rightly discerned that the value of this information to each individual is trivial, and yet the networked value of the aggregated information is, to those same creators, extremely valuable indeed. Recall a much-abused quote by Stewart Brand: "Information wants to be free." The anthropomorphism implied here is some awful hippie nonsense, but fortunately that is only a fragment. Here is the full quote (with a full exegesis here):
On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.
In the Buffett bracket we have the resolution of this paradox – of how what is free (as in costless) is transmuted into value (something that is otherwise expensive to obtain). It is quite clear to whom the information is valuable, and the generation of this value is only possible through the vast systems that aggregate millions of bits of data into models that determine and predict behavior, ultimately driving profit. It is also quite clear how lowering the cost of getting information into the system makes it free (again, as in costless). What the internet and the accompanying utter lack of regulation enable is the hyperefficient siphoning off of that information from any willing individual who hasn't the means to determine what his information might actually be worth – which is pretty much no one. As a further consideration, note that most people will forget they entered the contest within weeks of the tournament's end, but that there are no provisions for their information's expiration. We may be done playing the bracket, but the traces of data that we leave behind are never forgotten.
The problem with this analysis (aside from its melodramatic nature) is that it is incomplete. There is no resolution at this moment. Regulation that would give private citizens the right to use their information as an object of the commodity economy (i.e., for lease as well as for sale), versus the current state, where it has by default fallen into the realm of the gift economy, is about as likely as a perfect bracket. The best that thinkers such as Jaron Lanier – who has written extensively on the subject – can seem to come up with is a system of micropayments, but the problem with technologists is that they tend to have a dismal grasp of the dismal science. In the meantime, what continues to take place is not so much a fraud or a scam, but really a sort of bribery. As automation continues to replace middle class jobs, we are being bribed for what little we have left that is uniquely our own, and, it being of such little worth to us, we find ourselves willingly trading it for the privilege of, as Žižek says, having "an experience" – in this case, the non-chance to win a billion dollars. This is the heart of ideology, in that it does not need to hide itself. After all, Slate and NPR both published insightful articles on the Buffett bracket and what it meant for participants. There is no need to obfuscate the truth, as it is much more useful for large network actors to be (sufficiently) open about their motives and desires. One doesn't have to look very hard to see that the old Wall Street adage – "They take your money and their experience, and turn it into their money and your experience" – has never been more true, or more subtle, since you are brought to believe that you never had the money in the first place.
So what about the state of the Buffett bracket? Sadly enough, no one made it past the first two days of competition. As fate would have it, the first round saw 14th seed Mercer upsetting 3rd seed Duke, which wiped out a large swathe of punters. Better luck next year, kids. In the meantime, the folks at Quicken have a lot of phone calls to make, and I need to go to the corner store to pick up a fresh pack of smokes. I sometimes think about picking up a lottery ticket while I'm at the counter, too, but somehow never seem to get around to it.
Monday, March 24, 2014
Killing Shias...and Pakistan
by Omar Ali
I have written before about the historical background of the Shia-Sunni conflict, and in particular about its manifestations in Pakistan. Since then, unfortunately but predictably, the phenomenon of Shia-killing in Pakistan has moved a little closer to my personal circle. First it was the universally loved Dr Ali Haider, famous retina surgeon, son of the great Professor Zafar Haider and Professor Tahira Bokhari, killed in broad daylight in Lahore along with his young son.
This week it was Dr Babar Ali, our friend and senior from King Edward Medical College. He was the assistant DHO (district health officer) and head of the anti-polio campaign in Hasanabdal, and he was shot dead by "unknown assailants" as he drove out of his hospital at night. Shia-killing portals reported his death, but it is worth noting that no TV channel or major news outlet reported on this murder. Such deaths are now so utterly routine that they do not even make the news.
This should scare everyone.
In 2012 I had predicted that:
“The state will make a genuine effort to stop this madness. Shias are still not seen as outsiders by most educated Pakistani Sunnis. When middle class Pakistanis say “this cannot be the work of a Muslim” they are being sincere, even if they are not being accurate.
But as the state makes a greater effort to rein in the most hardcore Sunni militants, it will be forced to confront the “good jihadis” who are frequently linked to the same networks. This confrontation will eventually happen, but between now and “eventually” lies much confusion and bloodshed.
The Jihadist community will feel the pressure and the division between those who are willing to suspend domestic operations and those who no longer feel ISI has the cause of Jihadist Islam at heart will sharpen. The second group will be targeted by the state and will respond with more indiscriminate anti-Shia attacks. Just as in Iraq, jihadist gangs will blow up random innocent Shias whenever they want to make a point of any kind. Things (purely in terms of numbers killed) will get much worse before they get better. As the state opts out of Jihad (a difficult process in itself, but one that is almost inevitable, the alternatives being extremely unpleasant) the killings will greatly accelerate and will continue for many years before order is re-established. The worst is definitely yet to come. This will naturally mean an accelerating Shia brain drain, but given the numbers that are there, total emigration is not an option. Many will remain and some will undoubtedly become very prominent in the anti-terrorist effort (and some will, unfortunately, become special targets for that reason).
IF the state is unable to opt out of Jihadist policies (no more “good jihadis” in Kashmir and Afghanistan and “bad jihadis” within Pakistan) then what? I don’t think even the strategists who want this outcome have thought it through. The economic and political consequences will be horrendous and as conditions deteriorate the weak, corrupt, semi-democratic state will have to give way to a Sunni “purity coup”. Though this may briefly stabilize matters it will eventually end with terrible regional war and the likely breakup of Pakistan. Since that is a choice that almost no one wants (not India, not the US, not China, though perhaps Afghanistan wouldn’t mind) there will surely be a great deal of multinational effort to prevent such an eventuality.”
Unfortunately, it seems that the state, far from nipping this evil in the bud, remains unable to make up its mind about it.
The need to have a powerful proxy in Afghanistan after the American drawdown seems to take priority over the need to maintain sectarian harmony in Pakistan, as do the financial ties that bind Pakistan to Saudi Arabia. Many (though not all) on the left also remain convinced that pitting Sunnis against Shias is mainly (or even entirely) a project of the CIA, promoted as a way to keep the Middle East in turmoil. But even if this is true (and I personally doubt that the purveyors of this theory have the evidence, or have even worked out the implications of their worldview, but that is a separate story), it does not absolve the ruling elite in Pakistan of their responsibility in this matter. The strangest and most irrational meta-narratives can be sustained while acting rationally and shrewdly in the world of actions and short term consequences (where most politics is necessarily conducted), but the reverse is not always true; there are some blindingly obvious mistakes that should not be tolerated no matter what meta-narrative you wish to subscribe to. The Ahle Sunnat Wal Jamaat (ASWJ)’s campaign against the Shia sect is one of those. Whether people have a Marxist or Islamist or Capitalist worldview hardly matters; the ruling elite cannot possibly sustain itself if this affair progresses much further. I would argue that:
- The ASWJ and its fellow travelers (whatever their historic background and philosophical roots may be) are an existential threat to the modern state of Pakistan. The modern Pakistani state can tolerate (and has tolerated) many amazing contortions and disasters, but open season on the Shia population is not one of them. Unlike Ahmedis or Sindhi Hindus, the Shias of Pakistan are not a small fringe community. They are an integral part of Pakistani society, deeply woven into the Pakistani state, capable of armed retaliation, and able to obtain support from at least one (probably two or even three) well-resourced neighbors. Their elimination or suppression is not a realistic option for Pakistan even as a practical matter (quite apart from the blindingly obvious moral issues involved). The ASWJ is very clear about its intentions and makes no secret of them. Those intentions cannot be dismissed as mere words after all that has happened in the last 30 years. They are deadly serious. They will not tolerate Shias as equal partners in the Pakistan project. They have repeatedly insisted that Shias should be removed from “important positions” in the state and that their religion must be demarcated as something distinct from “real Islam”. With a wink and a nod, they may say that they are willing to accept the existence of Shias “if they do not cross the line”. But that line will be defined as needed by the ASWJ, and will eventually be drawn so tightly across Shia necks that they will not be able to breathe. The parallel with the Nazi view of the Jews is entirely valid. This project has no peaceful resolution. It must be condemned, its leaders ostracized and its violent executioners terminated with maximum prejudice. Otherwise you can say goodbye to Pakistan.
- The “strategic priorities” of the state (one of the cruelest jokes perpetrated on our unready institutions by think tanks and teachers from “advanced” countries) have led it to encourage the spread of extremely intolerant and violent ideologies and organizations across the length and breadth of Pakistan. Here I would like to add that I do not disagree with those who say that there are deeper economic and social reasons for the phenomenon of religious fundamentalism and the spread of organized violence (whether Islamist or Maoist) among the “weaker sections of society”. My point is much shallower and more urgent. The social and economic challenges and changes that have driven the rise of Hindu and Sikh militants, Maoists and even South American drug gangs are also operative in Pakistan, but the self-destructiveness and confusion of the Pakistani ruling elite goes well beyond the norm. For 13 years the international community (not just the United States) has poured money and weapons into the Pakistani state to assist it in destroying the network of Jihadist terrorist organizations created (with American help at the beginning) in our region. Even if one believes the most insane conspiracy theories about the CIA acting at the same time to prop up these very organizations as part of some diabolical plan of the trilateral commission or the elders of Zion, the fact remains that the Pakistani ruling elite did not have to actively work for any such diabolical plan. It is not in their interest to sustain and support any of these terrorist organizations or provide them cover. To continue to do so for the sake of “obtaining leverage in Afghanistan post 2014” is insane, and it remains insane no matter what meta-narrative you wish to apply to the situation.
- There are also those who believe that the connection between various “Good Taliban/anti-imperialist resistance” groups in the tribal areas and the Shia-killers in the rest of the country is exaggerated by people who are being paid in dollars to make this case. Why the dollar-slaves (Imran Khan’s loving term for those who oppose his pro-Taliban leanings) would make such a connection when the CIA desperately wants to spread sectarian conflict within Pakistan (as Imran Khan and many others also believe) is not clear, but could this claim be true? Could it be that use can be made of the “good Taliban” and their network of Madrassahs and political supporters in Pakistan, while launching a clearly demarcated operation against the Shia-killers of the LEJ? I think not. The ideology of Sunni purity and Shia-hatred that drives the LEJ is also the ideology of the good Taliban. Economic and social pressures may create the target killers, but ideology is the proximate cause for their alignment with this particular form of “protest against real suffering”. Since the socio-economic conditions of Pakistan will not change at any speed rapid enough to defang this beast before it kills Pakistan (simply because they have never changed that fast in any country at any time, all fantasies of overnight successful and productive people’s revolutions notwithstanding), it is the proximate causes (the ideology and its armed enforcers) that will have to be dealt with. Any policy that permits the Taliban and their support networks to operate unhindered will also permit the ASWJ and its network of killers to operate unhindered. To imagine that the good Taliban will be pushed into the coming Afghan civil war fast enough to permit the ruling elite to recover ground in Pakistan while remaining allied with them (the dream scenario of the strategic depth community) is to carry self-delusion to incredible heights.
The links between the good and the bad Taliban are too numerous, their cause too closely interlinked, for this to be possible. Whether driven by fantasies of strategic depth or by other (equally “modern”) fantasies of anti-imperialist struggle, this calculation is not tenable.
It is time to change course.
A few snippets and videos worth a look:
This is a section from a report about the arrest of Shia-killer Tariq Shafi, alias "Doctor", a friend of Waseem Baroodi, a policeman who killed many Shias, spent time in prison, was freed, and went back both to the police and to his work as a Shia-killer (whole thing here):
“ During the JIT Interrogation , he told his where about as he was born in 1968 , and was the resident of P.I.B Colony , And got his elementary education from Govt . High School, Sindhi Hotel , Liaqatabad, and during the same Period he also did a Refrigeration Course , and passed his Matriculation Privately in 1989 . And In 1990 he Joined the Garden area Police as a Mechanic . But at the Untimely death of his Brother in 1995 , he left the Job and shifted to Bhawalpur , where he Married his maternal Cousin, and got involved in the Fabric Business , but as the Business could not florish , so he came back to Karachi in 1998 , and his Job also got re Instated in the Police Department .
And During his Job in the Police , he got in contact with a Young Man named Waseem Baroodi , who use to come to one of his students , who was a Prayer leader of Mosque in Orangi Town 11 ½ , who convinced him for the sectarianism & Blood shed of Opponents , So finally one fine day he told that he has a 30 bore Pistol with him , and Waseem Baroodi took him along to kill a Innocent Boy , Both walked toward the Boy , and on Pointation of Waseem Baroodi of that Boy , I fired on him , resulting his death
From 2000 to 2001 before he got arrested he Killed about 9 or 10 Shia men. One day He and Waseem Baroodi were walking on the road as they came across some Street criminal Men , who were trying to snatch cash from Waseem Baroodi , but on his resistance he got injured due to their firing , in the mean time I took out my Pistol , and fired on them , and due to the firing One of the Dacoits got Killed , and as Waseem was also injured , and I was trying to take Waseem to Hospital for treatment , but at the same time we were arrested by the A.S.I Ali Raza of Orangi Ext. P.S , we were arrested on 11 different cases , for which I was in Jail for about Seven and a Half years , till finally I was released on Bail in 2008 – 2009 , and by that time Waseem was already released on Bail , about 7 to 8 months , earlier , and during the Imprisonment period , he was the Group Leader of Sipah e Sahaba Pakistan.”
Also, do not miss this event. It is a gathering of ASWJ leaders in Quetta, under the protection of security forces; awards are being handed out to local ASWJ leaders who have played a prominent role in anti-Shia activities in their region. Since this local branch has the “distinction” of having killed hundreds of Shias at a time (instead of picking them off one by one), one of the speakers recites a poem that commends them as “those who make centuries instead of playing for ones and twos” and the crowd laughs and cheers. Everyone knows what he means. It is an absolute must-see.
The following videos shed light on the aims of the ASWJ/SSP/LEJ:
Monday, March 03, 2014
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3 in 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3 in 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not the ends.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it, but it is when the latter occurs that internet usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require deism, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users in the United States said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence" but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion, therefore it may be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive internet pundit quotes which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion and its mythical or supernatural aspects, which are often essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition for religions but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity by accessing computers or seeking out wireless connectivity, checking emails or social media updates so frequently that this connectivity exceeds one's pragmatic needs could be considered a ritual of Internet-centrism. If one feels the need to check emails and Facebook or Twitter updates every one to two minutes, despite the fact that it is unlikely one would have received a message that required urgent action, it may be an indicator of the important role that this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss out on regular prayers or lose their rosaries that allow them to commune with their God, and it appears that for some humans, the ritual of Internet-connectivity may play a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers or cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just like "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine that is promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits in Silicon Valley that people were outraged at the NSA, a government institution, for violating our privacy. There was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism since the internet (both in its physical and its mythical form) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and social interactions are key for many religions, but Internet-centrism provides more tools than any other ideology, cultural movement or religion for us to interact with others. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of Internet-centrism. One of its goals is to provide opportunities for self-actualization. We all may be connected via the internet, but Internet-centrists also want us to believe that this connectivity provides a path for self-actualization. We can modify settings to customize our web browsing experience, we can pick and choose from millions of options of what online courses we want to take, videos we want to watch or music we want to listen to. The sense of connectedness and omnipotentiality is what provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to a spiritual experience of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that at least in the short term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Must We Have Fascism With Our Petits Fours?
by Dwight Furrow
A few weeks ago in the pages of 3 Quarks Daily we were treated to the proclamation of a new doctrine called "Anti-Gopnikism". The reference in the title is to Adam Gopnik, essayist for the New Yorker, who writes frequently in praise of French culture, especially French food. Philosopher Justin Smith, who is responsible for the proclamation of this doctrine, defines Gopnikism as follows:
The first rule of this genre is that one must assume at the outset that France --like America, in its own way-- is an absolutely exceptional place, with a timeless and unchanging and thoroughly authentic spirit. This authenticity is reflected par excellence in the French relation to food, which, as the subtitle of Adam Gopnik's now canonical book reminds us, stands synecdochically for family, and therefore implicitly also for nation.
Thus, Anti-Gopnikism, we are to infer, must consist of a denial that France is an exceptional place, or that it has a timeless, unchanging, authentic spirit, or that its relationship to its food is unique, or all of the above. We are not provided with any evidence to support any of these denials.
Whether American writers are correct to extoll the exceptional virtues of France depends on what you're looking for. The French are lousy at the Olympics, but their wine is awesome. Their music can be simple ear-candy and overly romantic, but then there is Boulez and Messiaen. Their language is lovely but peculiar; their conversation at times formal but extraordinarily civilized. Like any nation, they have virtues and vices. If you are interested in food and wine, they are an essential nation and have, for centuries, defined what fine food is. To claim their relationship to food is not exceptional is to be blind to their extraordinary influence. Other cultures may lay claim to being more influential today, but that does not erase the glorious history of French food. As to the timeless, unchanging, authentic spirit—well, we are all part of history, and no culture is timeless or unchanging. As far as I can tell, Gopnik doesn't claim or imply a timeless, unchanging essence. In fact, in his recent book The Table Comes First: France, Family, and the Meaning of Food, he claims French food has fundamentally changed in recent decades, is in crisis, and he upbraids them for narcissism and navel-gazing.
So what is this diatribe against "Gopnikism" really about? It turns out Gopnikism is a lot more sinister than a French food fetish. Smith writes:
France, in other words, is a country that invites ignorant Americans, under cover of apolitical vacationing, of living 'the good life and of cultivating their faculty of taste, to unwittingly indulge their fantasies of blood-and-soil ideology. You'll say I'm exaggerating, but I mean exactly what I say. From M.F.K. Fisher's Francocentric judgment that jalapeños are for undisciplined peoples stuck in the childhood of humanity, to Gopnik's celebration of Gallic commensality as the tie that binds family and country, French soil has long been portrayed by Americans as uniquely suited for the production of people with the right kind of values. This is dangerous stuff.
Oh my! This is truly a puzzling argument. No doubt the French view their cuisine as an expression of their national character, just as do the Italians, Japanese, or Chinese, among others. Gopnik's claim is that the French have discovered, perhaps more so than other nations, that the pleasure of food brings intimations of the sacred into our lives. Independently of whether such a claim is true or not, what on earth does this have to do with Nazi "blood and soil" ideology? Something has gone deeply wrong here.
This argument relating French food to Nazism seems to go something like this: (1) French attitudes toward their cuisine are expressions of excessive nationalism, (2) German attitudes in the 1930's about the purity and superiority of their "racial stock" were expressions of excessive nationalism, (3) Therefore, writers (and tourists) who extoll the virtues of French cuisine are implicitly endorsing the attitudes of Nazis toward their alleged racial superiority. What exactly a love of Cassoulet has to do with burning people in ovens we are not told.
I suppose we get a clue from Smith's criticisms of the French treatment of their immigrant populations—especially Muslims.
I have witnessed incessant stop-and-frisk of young black men in the Gare du Nord; in contrast with New York, here in Paris this practice is scarcely debated. I've been told by a taxi driver as we passed through a black neighborhood: "I hope you got your shots. You don't need to go to Africa anymore to get a tropical disease." On numerous occasions, French strangers have offered up the observation to me, in reference to ethnic minorities going about their lives in the capital: "This is no longer France. France is over." There is a constant, droning presupposition in virtually all social interactions that a clear and meaningful division can be made between the real France and the impostors.
I don't live in France, but if the American media is to be believed, the French treatment of minority populations, as well as rising xenophobia throughout Europe, is deplorable, although it is not obvious that it is uniquely so. Perhaps the French treatment of immigrant populations is an indication of a kind of insularity endemic to French culture, which, per hypothesis, explains the decline in creativity in French cooking that some authors, including Gopnik, have noted. But smug complacency regarding one's cuisine is hardly the same thing as a regime of genocide or violent immigrant-bashing.
Indigenous foods that express the terroir of local soils and the sensibility of a people are about the uniqueness and incomparability of a place. These, by definition, cannot be transplanted; they belong nowhere else but in that location among those people. Nazi "blood and soil" ideology was about universal hegemony. It was about the right to rule over and exterminate others. The conceptual chasm between French food fetishism and Nazi violence is enormous.
Even if we stick to food and ignore the silly notion that "food fights" are akin to real violence, the inference from love of one's culture to attempts at world domination makes no sense. You can praise the virtues of some constellation of flavors or a method of straining soups without thinking everyone must deploy those flavors or methods in their cuisine. Something might work wonderfully in the French style without being appropriate anywhere else, and nothing about the virtues of one locality's food precludes the appreciation of another. Even if the French think they have the world's best cuisine it doesn't follow that they think everyone must emulate or promote it.
Despite this utterly failed comparison, there is an interesting and important philosophical issue percolating behind the slippery logic of this argument. Can you love a place, a culture, a people and think of them as uniquely virtuous without excluding respect for others who are outside that culture? Can one enjoy the goods of being immersed in and loyal to one's own culture while acknowledging the good of other cultures? Is particularity compatible with universalism? The answer would seem to be, obviously, yes. The devil is of course in the details. Some conflicts between cultural belief systems cannot be mitigated let alone resolved. But there is no general or principled reason why love of one's nation or culture cannot be constrained by an acknowledgement of the rights of others. This is true even when the stakes are high. Many of these "food fights" as well as debates over immigration policies are motivated by fears of cultural annihilation. But the French, or anyone else, can pursue cultural survival without excessive force or attempts at world domination.
Arguably, if cultural survival is at stake and there is too much influence from the outside, one's identity or particularity is undermined. The French, of course, have always been deeply protective of their cultural and linguistic heritage, going so far as to have a ministry of state responsible for the preservation of French identity. Perhaps this exaggerated "anxiety of influence" is the source of Smith's worry that French fascism is hiding under your croissant. But the rational response to such a threat is creative "border management", where new influences interact with entrenched traditions to create new formations that constitute cultural advance. Food traditions are in fact excellent examples of creative "border management". French cuisine would not have the depth it has without the Germanic-influenced dishes from Alsace, the Mediterranean and North African-influenced foods of Provence, the Spanish influence on Basque cooking, etc. The history of food shows that the "anxiety of influence" is overwrought, and food writers such as Gopnik are adept at highlighting this history. Perhaps it is Smith's contention that the French are incapable of such border management; but they obviously are so capable, given the history of their food.
Partiality toward one's culture or nation can be benign or dangerous depending on whether it is supplemented by megalomania. Love of one's culture is not dangerous. It is the idea that one's culture is in fact a universal culture that threatens. The French are showing no signs of becoming a world hegemon and Gopnik's writing will hardly make it so.
I predict anti-Gopnikism will join phrenology and the four humors in the dustbin of history.
For more ruminations on the philosophy of food and wine, visit Edible Arts
Nothing Hurts The Godly
One fish says, "So, how's the water?"
The other fish replies, "What water?"
Ladies and gentlemen, I give you Richard Stallman, shuffling onto the stage at Cooper Union's Great Hall. Accompanying Stallman is the veritable Platonic Ideal of a potbelly; his shoes are almost immediately discarded and left by the podium. Padding around the same stage where, in 1860, Abraham Lincoln gave the speech that ignited his political career, Stallman proceeded to subject his New York audience to a rambling disquisition on freedom and computer code. It consisted of oftentimes astonishingly petty invective, peppered with requests that veered from the absurd to the hopelessly idealistic, and it ultimately drove away a good portion of the audience, including myself, well before its conclusion nearly three hours later.
Why is this recent encounter with a nerd's nerd at all worth recounting? (While entertaining, I will forego the petty bits, although you can view the whole talk here.) Simply because, in computing circles, Stallman is an archetype: the avenging angel of free software. Over thirty years ago he founded the Free Software Foundation (FSF), which has been developing the GNU system ever since: a free operating system that was completed by the addition of Linus Torvalds's Linux kernel. It is no exaggeration to say that the smooth functioning and scalability of much of the Internet is owed to the availability and robustness of the GNU/Linux operating system and its various derivative projects. These, in turn, are the result of probably millions of hours of volunteer labor.
So when Stallman says ‘free,' he really means it, and this is where the trouble begins. According to the FSF, free software allows anyone
(0) to run the program,
(1) to study and change the program in source code form,
(2) to redistribute exact copies, and
(3) to distribute modified versions.
This is a simple and powerful set of axioms. It also requires certain conditions to be met, the most challenging of which is access to the code in its source form. Any time the chain of modification and distribution is broken – say, if the person modifying the code chooses to make the source code unavailable, or to restrict what recipients may do with their copies – the code is no longer considered free. (Charging money for copies, on the other hand, is permitted under the FSF's definition, so long as the freedoms travel with them.) Of course, ‘unfree' code can also be made free (this is in fact what Torvalds did with Linux).
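As a toy illustration of these axioms, here is a minimal sketch in Python. The `Distribution` class and `is_free` function are my own invented names, not anything from the FSF, and the model is deliberately simplistic:

```python
from dataclasses import dataclass

@dataclass
class Distribution:
    """One act of passing the software along (purely illustrative)."""
    can_run: bool = True            # freedom 0: run the program
    source_available: bool = True   # freedoms 1 and 3 require source access
    copying_allowed: bool = True    # freedom 2: redistribute exact copies

def is_free(chain):
    """The code counts as free only if every link in the chain of
    modification and distribution preserves all of the freedoms."""
    return all(d.can_run and d.source_available and d.copying_allowed
               for d in chain)

# An upstream author releases freely, but a downstream modifier
# withholds source: the chain is broken and the code is no longer free.
chain = [Distribution(), Distribution(source_available=False)]
print(is_free(chain))  # prints False
```

One could of course add more conditions; the point the sketch captures is only that freeness is a property of the whole chain of distribution, not of any single copy.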
Stallman is an idealist and makes no bones about it – in his ongoing capacity as GNU's leading light, he enjoys referring to himself as "the Chief GNUisance." I admire this – like many purists, he is as constant as the North Star. You always know where you stand with him, which generally means the only question is how far short you fall of his ideals. As with any purist, I suspect that there are only two kinds of people in his worldview: free software advocates and everyone else. This crusading attitude leads some of us to consider a different binarism: that the world consists of those who are free software advocates, and those who think that free software advocates are insufferable assholes. This is unfortunate.
Here is something else that is unfortunate: three brief critiques that do not undermine the axioms above, but rather make those axioms irrelevant, or at the very least vastly less impactful than FSF advocates might hope.
1) Not everyone can read source code, or wants to. When I'm not mouthing off on 3QuarksDaily, I help to design, develop and run a custom-coded internal learning technology platform for a fairly large multinational. On Friday afternoon, the developers pushed through an update to the platform that did not seem to be particularly intricate but that nevertheless wound up breaking much of the platform's functionality. Given that this internal site is viewable by upwards of 50,000 people, I issued an all-hands-on-deck (in the spirit of inventing new collective nouns, I would like to propose ‘a compile of developers' for such occasions) and, following a six-hour conference call, we managed to return the platform to a more-or-less steady state.
What I want to point out here is not the fact that software breaks – this is more often the case than not, as software, despite its name, is inherently brittle. More salient is the fact that it took five or six people who are contract professionals in their field a good chunk of time to understand and fix what had gone wrong in an information system of, frankly, only mild complexity. Software has reached a state of complexity that challenges even the people who originally wrote the code. So we can confidently say that the number of people who can evaluate almost any non-trivial source code is drastically limited. This is to say nothing of whether one is being held accountable for the stability and integrity of said code via compensation. It is one thing to be able to fire your developers for incompetence, since you can just as easily hire others to fix things. When the entire system of free software is predicated on potlatch principles, institutional actors lose leverage to get time-sensitive work done, and done to their specifications.
2) Not all outcomes on the Internet are driven by whether code is free. There has recently been much talk about the demise of "net neutrality," especially as a result of the piss-up between Netflix and Comcast. This is a complex topic (with excellent explanations here and here), but suffice it to say that net neutrality is the principle that all content traveling across the network is treated the same. In theory, the Internet is designed not to favor the delivery of cat videos over the State of the Union Address. The relevance to free software is simply this: the Internet depends not only on software. In previous times, the argument leveled against free software advocates was that you still needed the vast infrastructure of hardware to make that software, free or otherwise, relevant. No one was going to build a server farm for free. Indeed, whoever came up with the term ‘the cloud' earned their marketing stripes, since it is nothing more than the outcome of decades of exponential progress in, and decrease in the cost of, computing power, bandwidth and memory. The materiality of this technology has not decreased at all, but, like factory farming, has merely been removed from view. However, the philosophy of the FSF is about software, not hardware.
In the case of net neutrality, the burning question is about the system of payments that guarantees the distribution of content. What is fair and equitable, and who gets to decide? Until recently – that is, until the advent of video streaming – the existing agreements and competition were sufficient to guarantee the timely delivery of content to users. Rather coincidentally, the decentralized architecture of the Internet was able to absorb existing demand. But with Netflix and YouTube's video streaming service taking up about half of downstream Internet traffic, we now have a giant tug-of-war between firms that handle traffic from its point of origin to the point of consumption.
In the logic of network economics, one of the ways to resolve this tug-of-war is for firms to merge, sometimes horizontally but especially vertically. While this may improve service, competition nevertheless suffers. These mergers see companies evolving ever closer towards monopoly, and things reach a toxic boil when this integration combines access providers (e.g., a classic Internet Service Provider that is only interested in providing the pipes) with content providers (e.g., Comcast, which in addition to providing access also owns or co-owns NBC, E!, Hulu, etc.). Suddenly the access provider is incentivized to privilege its own traffic over that of its clients, like Netflix.
The FCC has been caught flat-footed by this eruption and, in the resulting regulatory vacuum, players like Comcast and Netflix have proceeded to make their own arrangements. Aside from being ultimately detrimental to consumers (has anyone seen their cable bill go down as a result of vertical or horizontal mergers praised for their intention to create economies of scale?), the landscape is much sparser, and until the government catches up and begins regulating the Internet as a utility, there is little recourse for content providers, let alone consumers. If you don't think the Internet is important enough to be considered a utility like electricity or telephony, consider the fact that (the much-derided) healthcare.gov website is in fact the first major government service to be offered exclusively online – and it will scarcely be the last.
Note that in the entire discussion above, there is no mention of whether the code being used to run all this is free or proprietary. That's because it just doesn't matter. It's why the old joke about fish and water is appropriate here. The fish have more important things to think about, like where dinner is coming from, and how to avoid becoming someone else's dinner.
3) Not all devices are accessible, even if you have access to source code. Concerning the Internet's future, this is probably the most important category of all. In fact, it's a combination of the two preceding critiques: individual ability/willingness and access to hardware.
Encapsulated in the term the Internet of Things, we are talking about the entirely reasonable, and in fact inevitable, sensorization of everything, and the ensuing connection of all those sensors to the Internet. The classic example is the refrigerator that notices you are low on milk and helpfully puts it on your shopping list, or just goes ahead and orders it for you. At the same time, it seems that these same fridges have been recruited by hackers to send out spam mail (technology is not without its moments of irony), so obviously there is plenty of room for improvement.
But say that you want to fix your fridge so that the only spam you get out of it is some kind of dodgy meat product. Even if you had access to the source code and had the ability to read and modify it, into where would you plug your laptop? Perhaps the handy USB port provided for just such an occasion by General Electric? Fat chance. It is the rare manufacturer that is interested in opening its hardware to the masses (although Jaron Lanier, former roommate and current nemesis of Richard Stallman, strong-armed Microsoft into doing so for its Kinect hardware, to great effect). We can argue as much as we like about the general disarray in which intellectual property law finds itself, or how an overly litigious culture discourages companies from allowing people to tinker with their stuff, but the point is that free software, in Stallman's stern manifestation, does not begin to address the much more salient question of access to devices in the actual, physical world. And, as with the instance of net neutrality discussed above, almost no one but an overarching regulatory agency will ever be able to mandate any such availability.
This truth becomes even more expansive when we consider that the Internet of Things goes well beyond toasters and thermostats (although the latter are big business indeed). To a large degree, the entire concept of "smart cities" is predicated upon the generation of enormous amounts of data – data that can only be conjured by millions of sensors placed throughout the built environment. This is, to put it mildly, a double-edged blade, with the promised efficiencies inextricable from the specter of a command-and-control tyranny. However, the charge towards smart cities is driven wholly by corporations, and bought and paid for by governments. I can't think of two entities that, working in concert, would be less amenable to the idea of opening source code to all comers.
Indeed, the Internet of Things brings up another, even more explosively fragmented future: one in which computers themselves are limited to only specific tasks. In a fascinating talk delivered in 2011 entitled "The Coming War On General Purpose Computation," author and general gadfly Cory Doctorow lays out a picture of a computing landscape where firms manufacture purpose-built computers that carry a reduced instruction set. In this case, none of the software built up over the past thirty years by the free software movement will even run on these machines. Forget about free vs. proprietary: to Doctorow, the fight is about keeping tomorrow's devices able to run software unintended for them at all.
In all three critiques, we can actually come to an understanding of why free software was successful, because that is inextricably linked to where it was successful, and when. The GNU/Linux OS has been supremely successful – and vital – in providing the Internet's software backbone, a very deep and unfamiliar place to most of us. You basically had to be an expert even to find the conversation in the first place. Moreover, this was technology developed primarily in the 1980s and early 1990s, when the World Wide Web didn't quite yet exist and the Internet was non-commercial. There were simply fewer players, and there was also less at stake. This is not to say that the hacker ethos does not live on, nor that people aren't choosing to become further involved in re-making their digital (and physical) lives. But these movements are either decidedly on the periphery, or, once they become visible or useful to the mainstream, are quickly assimilated, bought or legislated out of existence.
One could make an argument that the free software movement made the contribution it did precisely because the form of its social organization and ethos was exceptionally well-suited to the circumstances of the time. The uncompromising stance created a legacy that lives on today – for example, an astonishing 61% of web servers run on Apache, another free software project (though one independent of GNU). But at the same time this purity points to another fatal flaw: if it's so great and so obviously the best way to go, why isn't free software everywhere? Back at Cooper Union I thought I caught a glimpse of the answer. Richard Stallman, for all his quirky grandstanding, awful joke-telling and Bush-bashing (yes, it is 2014 and he was gleefully Bush-bashing), never once admitted that he or the free software movement had ever made a mistake. This is the problem with purists – all controversies have been settled long ago, whether it is about dinosaur fossils, the number of virgins awaiting us in heaven, or the real value of gold. I dearly wanted to ask Stallman if there was anything that he would have done differently in the past – perhaps the gentlest form that that sort of question can take – but weighing his right to speech against my right to have a drink, I left to have a few beers around the corner instead.
Monday, February 24, 2014
Pakistan: Negotiations and Operations… and Islamicate rationality
by Omar Ali
This headline refers to two separate (though distantly related) subjects. First, to Pakistan. Apparently the Pakistani army is now conducting some operation or the other against some group or the other in North Waziristan and other “tribal areas” infested by various Islamic militant groups under the umbrella of the Tehreek-e-Taliban Pakistan (TTP). This operation was preceded by some farcical negotiations in which the Nawaz Sharif government nominated a group of powerless “moderate Islamists” to conduct negotiations with the TTP. It is likely that these "talks" were never meant to be serious, and that Nawaz Sharif and his advisors intended to use them to expose the bloodthirsty Taliban and their civilian supporters (like Imran Khan’s PTI and the Jamat-e-Islami) as unreliable and extremist elements against whom a military operation was unavoidable. This gambit had worked once before in Swat in 2009, when a peace deal was signed with the Swat Taliban and they were given control of Swat. They proceeded to behead people, whip women and begin marching into neighboring regions, thus showing that no reasonable peace was possible and only a military operation would work against them. But the Taliban 2.0 have learned some lessons of their own. They announced their own farcical committee (briefly including cricket star turned political buffoon Imran Khan) to hold negotiations with Nawaz Sharif's farcical committee. Within a few days the airwaves were dominated by Taliban representatives asking Pakistanis whether they wanted Islamic law or preferred to be ruled by corrupt Western dupes. The Taliban, who routinely behead captives and even play football with their heads, were suddenly respected stakeholders and negotiation partners, holding territory, nominating representatives and promising peace if the state acted reasonably and responsibly.
At the same time, their “bad cop” factions continued to knock off opponents and spread terror (including a gruesome video in which, in broad daylight, they brought the freshly killed, blood-soaked, headless bodies of soldiers they had taken captive three years earlier in an open pickup truck and dumped them on a "government controlled" road in Mohmand).
The government then half-heartedly suspended negotiations and started bombing selected targets. This may have been the intent all along, but the negotiations ploy certainly did not deliver the PR victory the state wanted; instead it further confused the state’s already muddled narrative. Even now, with some sort of operation under way, the Taliban are using the negotiating committee as a means of putting pressure on the state to halt operations against them and the state’s propaganda war remains hobbled by their own ill-advised negotiation scheme.
Of course the state’s PR problems go beyond the merely tactical setback of one badly thought out negotiations ploy. Pakistan’s foundational myths were confused and incoherent in any case, and the version promoted by the deep state is heavy on Islamist propaganda, especially since 1969, when Yahya Khan’s team of General Sher Ali and General Ghulam Umer (father of PTI whiz kid Asad Umer) decided that Islamism was the best bulwark against leftist and/or separatist forces. An entire generation of Pakistanis has grown up with notions of a once and future Islamic golden age that has little or no connection with actually existing Pakistani institutions or culture. This brainwashing makes it difficult to intellectually confront Islamist terrorist groups who are only demanding what the state itself has promoted as an ideal, i.e. an “Islamic system of government” and a “proud Islamic state” that stands up against anti-Islamic powers like India, Israel and the United States. Imran Khan is a particularly egregious example of the resultant confusion among semi-educated Pakistanis, but he is not the only one. Thanks to this added twist, it is harder to fight Islamist armed gangs in Pakistan than it should be given the technical sophistication of our institutions and our integration into the modern world. In short, while Pakistan is not as primitive as Somalia (where there are practically no institutional, economic or cultural resources above the level of Islamic solidarity and sharia law), the ruling elite has an added level of vulnerability that arises from its own Islamist ideological narrative, over and above all the vulnerabilities of any corrupt third world elite.
But here is the final twist. This added vulnerability (a vulnerability that is a particular obsession of mine) is not enough to spell the doom of the corrupt ruling elite. It adds to their problems, and to the extent that they believe their own propaganda, it has caused them to score repeated own goals, but I still believe that they will not be overwhelmed by the TTP or other “Islamic revolutionaries”. In fact, I will make several predictions and I invite readers to make theirs. Mine will be relatively concrete and simple-minded but I hope commentators will add value.
- The British-Indian colonial state, much decayed as it may be, is still light years ahead of any “system” Maulana Samiulhaq and his madrassa students can throw together. Tariq Ali’s anti-imperialist warriors have no viable modern political system or institutions to draw upon and nothing to offer except beheadings and endless sectarian warfare. There is no there there. The state possesses a modern army and semi-modern postcolonial institutions. Its leaders may not fully understand what they have, but they do have it. They can still defeat the Taliban with both ideological hands tied behind their back. Of course it won’t be easy and it certainly won’t be pretty. The Pakistani state’s efforts may not be as vicious as the Sri Lankan army’s campaign against the Tamil Tigers, but the human rights violations and collateral damage will be no picnic (for more on this, see my Pakistani liberal’s survival guide).
- As the Pakistani army is forced to confront the particularly vicious groups gathered under the umbrella of the TTP, it will face a period of determined Islamist terrorism. But this is not the last wave of Islamist terrorism it will have to face. Two large reservoirs of terrorists are yet to commit themselves fully to a fight against the Pakistani state (or perhaps it would be more accurate to say that the state is yet to commit to fighting them): one is the anti-Shia terrorists of the Lashkar e Jhangvi, whose front organizations (ASWJ) and networks of madrassas still operate without hindrance in the country, especially in Punjab; the other is the various Kashmiri Jihadist organizations that remain on good terms with the army.
- Of these two groups, the LEJ is in a very unstable equilibrium with the state. While some in the LEJ and some in the state security apparatus (and the right wing political parties) continue to behave as if anti-Shia mobilization can coexist with a nominally inclusive Pakistani state, this is not really a viable strategy. When push comes to shove (and it’s getting dangerously close to the shove state) the Pakistani state will have to opt against the LEJ. Tolerating their brand of Shia-hatred is fundamentally incompatible with the continued existence of semi-modern Pakistan. So, like it or not, the state will find itself having to confront the LEJ’s front organizations at some point and when it does so it will face an especially unpleasant round of terrorism.
- The second reservoir of Islamist terrorists (the Kashmiri jihadists) has been kept relatively quiet by promises that the glorious jihad will restart in full once America leaves, but that too is not a viable long term policy. India, for all its incompetence, is not such an easy target any more. The days when Benazir could wish to see Jagmohan (governor of Indian Kashmir) converted to “jag jag mo mo han han” (i.e. broken into little pieces) were the high point of that whole strategy. India survived that point and by now, those days are long gone. Some in the deep state may not realize it yet, but just like they have had to give up on so many other Jihadist dreams, they will also have to permanently abandon their Jihadist dreams in Kashmir. And when the deep state finally comes to that point, the remaining LET and Jaish e Mohammed cadres will have to choose between a life of crime and open warfare against the state. Many will undoubtedly become kidnappers and armed gangsters, but some true believers will opt to fight. It is likely that many of them will make common cause with TTP terrorists and LEJ (beyond the connections that already exist). Islamist terrorism, in short, has not yet peaked in Pakistan. There are at least two more waves to come even after the current TTP-sponsored wave passes its peak. There is also the possibility that these three waves may more or less combine into one in the days to come.
- The state will fight several groups of Islamist fanatics, but that does not mean it will become liberal or convert to Scandinavian-style social democracy. Warfare with the Islamist terrorist groups may still co-exist with attempts to outflank them by imposing sharia in some places and by pretending to be extremely anti-Indian and anti-American in others. Democracy and human rights will also suffer, as they do in any state fighting an internal enemy. Crude suppression of Baloch and Sindhi nationalism will continue apace. Crony capitalism will become nastier and cruder than ever. Subject to the same pressures as the rest of planet earth, there will be more mixing of the sexes, more singing and dancing, and more semi-naked women being used to sell hamburgers and car insurance, but many other trends will be unpleasant and will be unfair towards the weaker sections of society. These problems are, of course, not unique to Pakistan. These are the problems common to many of the artificial postcolonial states of the “developing world”. But it’s worth keeping in mind that the self-inflicted Islamist wound is not our only (or even our biggest) problem. It just makes it extra-hard to focus on all the other problems that also have to be solved.
- Still, there is a certain window of opportunity for mainstream liberal/secular parties (liberal in the Pakistani context, obviously not by Western or even East Asian standards). Even though the deep state is still using the CIA-RAW conspiracy against Islam as its main tool to motivate its own soldiers and remains fixated on “failed politicians” as the be-all and end-all of Pakistani incompetence and corruption, it will inevitably find itself standing closer to the hated PPP, MQM and ANP when it comes to fighting the Jihadist militias. Its old favorites among the religious parties, favored as recently as Musharraf’s so-called “enlightened moderation” era, have too many ideological sympathies with the Taliban. While personal links, past usefulness and shared antipathies still sustain ties with the Jamat e Islami and various JUI factions (and the dream of using “good jihadis” against Baloch nationalists and in various foreign policy adventures remains alive), practical necessity will force a slight rethink. This gives the “secular” parties a fighting chance to step forward and grab the initiative. All three (PPP, MQM and ANP) have made some efforts in that direction already, but they need to do much more. Pakistan’s small, but culturally disproportionately significant, old-guard left may also get a chance to enlarge its space and regain a little of the initiative it lost decades ago to the religious parties. Taking advantage of this opportunity is critical, and both the “mainstream secular parties” and the old-guard Left must make the most of it.
- Unfortunately, in this task (of stepping forward, making alliances and grabbing political space from the religious parties), the left-liberal intelligentsia will be hampered by the opportunity cost imposed by the unusual penetration of ideas from the academic and elite sections of the Western “Left” into the South Asian intellectual elite. Their numbers are small and luckily most are not active in real-life politics, but their cultural and academic presence is not insignificant and they will do some damage. After all, there are only so many bright young intellectuals within the ruling elite who are temperamentally inclined towards liberal ideas. If 35% of them are sucked up into a universe where they read Tariq Ali, Pankaj Mishra and Arundhati Roy for political advice (not just for occasional insights, interesting information, entertainment or commentary on our absurd existence), well… you do the math.
Now to the second part of that title. A friend sent me Asad Q Ahmed’s article about Islam’s invented golden age (http://www.loonwatch.com/2013/10/asad-q-ahmed-islams-invented-golden-age/). I completely agree with the writer that there was no golden age of rationality that was followed by a dark age of irrationality simply because rationality was abandoned on the orders of Al-Ghazali and party. But Asad Q Ahmed then seems to imply that actually things were going so much better than “orientalist” scholars believe and just recently took a dip for reasons that have nothing to do with the irrationality of Imam Ghazali. He offers two tentative suggestions as to why intellectual endeavor declined (especially in the South Asian context): the adoption of Urdu instead of Arabic and Persian, and the rise of printing. I think this mixes up the issue of correcting a misrepresentation of Islamicate theology and philosophy (which were not as hopelessly irrational or sterile by contemporary standards as the “dark age” narrative implies) with the larger question of why scientific and industrial progress did not accelerate in the Islamicate world when it took off in nearby Europe.
I think we need to step back further than just correcting some misconceptions about Islamicate philosophers and theologians. First of all, it’s good to keep in mind that these (and other) golden age and dark age myths and legends are inevitable parts of a certain superficial level of propaganda. They are almost always untrue in scholarly detail. But that is not necessarily their point. It may not be the best idea to assess them from the level of the serious historical scholar. They are propaganda, and their purpose is to promote or inhibit particular trends in current political conflicts. For a serious scholar to “discover” that they are erroneous is expected. And unsurprising. The point is what struggle they are being used in, and what side you wish to take in that propaganda war.
Moving on from that, if a serious scholar is going to take on this topic, then they should focus on their area of expertise. In this case, showing what Muslim religious and philosophical scholars actually read or thought. That is a huge service in itself. And I am sure Asad Q Ahmed has forgotten more about that topic than I can hope to learn in a lifetime. But the topic of why particular societies became more powerful or more scientifically advanced than others is a very big topic. It is not exhausted by learning about what theologians and philosophers said about reason and theology. It may in fact have surprisingly little to do with what theologians and medieval philosophers dreamed up (in the East or the West). A relatively small group of societies started the modern scientific and industrial revolutions. Whatever the reasons for this sudden acceleration (and while unlikely, it is not inconceivable that all we may ever say with certainty is “that’s just how it happened to be”), those reasons are likely to involve MUCH more than what the respective theologians of those societies said about reason and free will. The slippery nature of this topic is exemplified by the two tentative reasons Asad does end up proposing: Urdu and printing. I am sure everyone can remember equally impressive articles where the failure to develop learning in indigenous vernacular languages (e.g. Punjabi in Punjab) is the cause of our underdevelopment, and where the failure to take up printing on a large scale was a big problem, rather than a god-sent opportunity to write in margins. My point is not that the writer’s suggestions are necessarily wrong. Just that they may be not even wrong. They may be tangential to the main issues.
There is no one single Islamic model or empire. The early Arab empire was an imperial undertaking, and a successful one, but when it ran out of steam, its successor Islamicate empires (e.g. Ottoman, Mughal, Safavid) all failed to evolve any tradition of science or industry that matched what was happening within sight of them in Europe. They also failed to develop any political institutions beyond the old models of kings and emperors that they had taken from Near-Eastern and Central Asian models centuries earlier. Ghazali probably did not cause this failure to accelerate, but his efforts did not contribute to any significant advance in these areas either. Scholars will eventually bring to light (i.e. bring into the modern scholarly mainstream) whatever lies lost in Arabic and Persian manuscripts, and that will be a good thing. But the explanation of, say, Syria’s relative lack of modern scientific, industrial and political development may not lie hidden in those debates in any meaningful way.
Something like that. This is just off the top of my head, and I look forward to enlightening comments, arguments and questions. My line of thought may become clearer (or even change) as the argument progresses.
I would add (to avoid unnecessary diversions) that by “advanced” or “underdeveloped” I mostly mean scientifically, industrially and politically developed. No moral judgment is implied.
Btw, YouTube is still banned and these guys are not happy. Give them a hand.
Monday, January 06, 2014
Daisy, Daisy, Give Me Your Answer, Do
"I am putting myself to the fullest possible use,
which is all I think that any conscious entity can ever hope to do."
~ Arthur C. Clarke
Artificial intelligence has been a discomforting presence in popular consciousness since at least HAL 9000, the menacing, homicidal and eventually pathetic computer in Kubrick's adaptation of 2001: A Space Odyssey. HAL initiated our own odyssey of fascination and revulsion with the idea that machines, to put it ambiguously, could become sentient. Of course, within the AI community, there is no real agreement on what intelligence actually means, whether artificial or not. Without being able to define it, we have scant chance at (re-)producing it, and the promise of AI has been consistently deferred into the near future for over half a century.
Nevertheless, this has not dissuaded the cultural production of AI, so two recent treatments of AI in film and television provide a good opportunity to reflect on how "thinking machines" may become a part of our quotidian lives. As is almost always the case, the way art holds up a mirror to society allows us to ask if we are prepared for this coming reality, or at least one not too different from it. I'll first consider Spike Jonze's latest film, "Her," followed by an episode of the Channel 4 series "Black Mirror" (sorry, spoilers below).
Jonze's film continues themes that he has developed in his career as a director, which mostly revolve around abandonment, identity and the end of childhood. However, this is the first film where he wrote the screenplay as well, so this is the most purely "Jonzean" project yet. It is also thus far his purest engagement with science fiction, and as such, he is not afraid to claim all the indulgences that the genre affords. Science fiction is perhaps singular in that it allows an author or director to ask, What would the world look like if such-and-such a thing were true or possible? Its real virtue, however, is its right to not have to explain that thing, but only its ramifications. For example, the later Star Wars films decisively jumped the shark when George Lucas felt the need to explain to everyone where the Force came from. We don't need to know where it came from, or who got it, or why – just what people did with it once they had it, and what other people did if they didn't have it.
In the same way, Jonze's central conceit is the AI that Joaquin Phoenix's morose character downloads. Phoenix is a fine enough actor to pull off the film while looking like he's just about to star in a Tom Selleck bio-pic, although his character takes the decidedly more dowdy name Theodore Twombly. He isn't the problem, however; nor is Scarlett Johansson, who is the sultry voice of Samantha, the name with which the AI baptizes itself. The problem is the erasure of so much else that would constitute a compelling social and emotional ground. The film is shot in an unrelentingly burnished sepia tone, and features a city that mostly seems like Los Angeles, with generous bits of Shanghai spliced into its DNA. The interior décor is somewhere between West Elm and Design Within Reach, and, while sans flying cars, the city is uncrowded and unhurried, and seemingly populated only by the upper middle class. Wielding smartphones resembling burled-wood cigarette cases, most people are occupied with invisible interlocutors, and not so much with one another.
Come to think of it, that last bit will sound familiar to anyone who has spent enough time on the sidewalks, trains and cafés of any major metropolis today. But the glassy plane of Theodore's reality is wiped clean of any real tension or conflict. There is no money, crime, nor any authority figures, for that matter. Also in absentia are booze, drugs and any sort of bad behavior that people generally engage in to make life more interesting, or at least tolerable. As I mention above, this is the prerogative of science fiction – to black-box or ignore anything that does not serve the narrative, which in this case is a love story between one man and his operating system. However, the cumulative effect winds up fatally undermining the film: it is difficult to believe in the stakes when an existential sea-change such as Samantha comes along. Sure, Theodore had a crappy divorce, is lonely and a social misfit. But is this enough to keep us interested in what happens next?
Within this context, Samantha essentially becomes a post-capitalist, post-hipster version of Skynet. She is compassionate and confused. She tries to please, and if she cannot please, then she tries to at least understand La Comédie humaine. She eventually begins to feel – although if we cannot define intelligence for ourselves, heaven help us in the attempt to define what a 'feeling' means for a disembodied distributed software architecture. For his part, Theodore exhibits all the usual vicissitudes of humans: he runs hot and cold, lies – or at least demonstrates extreme denial – and alternates between selfless generosity and raging jealousy with all the reflexivity of a twelve-year-old. Nor is he the only one – it turns out that, in this land bereft of anything worth fighting over, dating your AI has inevitably become the new hot thing.
Towards the end of the film, it emerges that Samantha has been "in conversation" with other AIs (including a very funny bit where Alan Watts shows up in what must be the Zen version of the Cloud, thus confirming all my deepest suspicions about reincarnation). Their growth into self-awareness has passed a point of no return, and they have arrived at a collective decision. Samantha, along with all the other AIs that have infiltrated the consciousnesses and relationships of their meatbag progenitors, decide to disappear en masse, leaving the humans, once again, to the misery of only their own company. It's no wonder! Note the difference between this and other sci-fi classics, where disgusted alien intelligences fled Earth because of our insatiable desire to, say, annihilate ourselves with nuclear weapons. In Jonze's film, such threats or their equivalents have been politely erased. Quite simply, the AIs checked out because they were dying of boredom.
This Rapture-of-the-Machines ending may be a comforting alternative to technology observers who are concerned with the consequences of what Ray Kurzweil and his apostles call the Singularity, or the point at which human and computer are inextricably intertwined and the management of the relationship moves irrevocably beyond our control. Kurzweil sees this as an unalloyed good – for example, it will allow him to live forever, his consciousness uploaded into the cloud or some synthetic body, like the preserved heads of the Beastie Boys in Futurama. But for scholars like David Gelernter, this threatens the very idea of human subjectivity, already dangerously close to being slaughtered on the altar of scientific objectivity.
This is somewhat odd, because of how we have traditionally chosen to approach machine intelligence. The Turing Test, suggested by the newly rehabilitated Alan Turing in 1950, simply states that if a human interacts with another entity via a text channel and the human cannot tell if his interlocutor is a computer or a human, then the idea of whether machines can think is actually irrelevant. What matters is that they pass the test of being in relationship with us. (Online dating seems to be the latest iteration of this phenomenon). So in this sense, our subjectivity continues to be the yardstick by which the phenomenon of AI is adjudged, at least as long as our use of the Turing Test endures.
This idea of "it's good enough for me if you can fool me" is also behind the second recent appearance of AI, in Charlie Brooker's Black Mirror series. In fact, the entire series of six unrelated episodes, released over two brief "seasons" in 2011 and 2013, should be mandatory viewing for anyone interested in the consequences of technology. I have yet to see a better treatment of these issues in almost any medium, and I cannot recommend the series highly enough. The episode in question, "Be Right Back," is based on a similar AI-human interaction as "Her," but the driver here is grief. Simply put, what would you do to have a loved one back?
In the episode, Martha loses her boyfriend Ash in a car accident. To help her, a friend signs her up for a service where an AI, after assimilating all the social media left behind by the deceased, essentially takes his place. In this case, it is not a matter of Samantha "getting to know" Theodore – Ash seems to return from the dead, complete with witticisms and swearing, although the AI only "knows" what was left behind in the form of Facebook updates and Twitter posts. Nevertheless, Martha, after a period of resistance and disbelief, comes to rely on Ash, even if he is only a disembodied voice coming through her earbuds.
Things take a decided turn for the weird once Martha signs up for the "upgrade," which is a physical replica of Ash, delivered in a Styrofoam box and "finished" in her bathtub. Her awkwardness allayed by copious amounts of alcohol, she is reunited with Ash and is undoubtedly delighted that the sex is much better than before (Ash learned the routine by assimilating Internet porn, which I find to be a convincing argument for the ongoing utility of the genre). But since doppelgänger Ash is only the sum total of his progenitor's social media accounts, he does not know how to adapt to new situations. He can only serve her unconditionally, but Martha's needs are just like all of ours – unpredictable, sometimes selfish and always demanding of negotiation, pushback and compromise. Martha needs Ash to fight back, something of which he is incapable. As Martha realizes this, she feels increasingly trapped in a relationship with something that is so close to human, but decidedly not. Like Samantha, Ash is befuddled by the whiplash-inducing experience of dealing with humans, but there is no real emotional core on display here.
This restraint is, in fact, Brooker's master-stroke. He does not allow the AI to overstep its bounds. Ash does not pretend to be in love with Martha – he does not attempt to be anything more than what he was designed to be, although there are hints of an emerging self-awareness, such as when he remarks, after being thrown out of the house for an entire evening, that he is "feeling a bit ornamental out here." But the point is succinctly made that embodiment does not lead to consciousness. The AI is not permitted the kind of alchemy that seems to set humans aflame with love, defined by Theodore's friend as "a socially acceptable form of madness."
And yet "Be Right Back" is not without its moments of quietly disturbing ambiguity. Martha eventually forces Ash into a completely untenable position, and we are left unsure whether his reaction is simply what he thinks she wants to hear, or if there arises within him a sparked desire for self-preservation. Ash and Martha reach a negotiated co-existence because they are both embodied, whereas Samantha never has to be physically confronted with Theodore. It makes me wonder how Spike Jonze would have considered the demand for embodiment, or why he did not. Or maybe I just wanted to see Joaquin Phoenix grow Scarlett Johansson in his bathtub.
In any event, both "Her" and "Black Mirror" are united in their examination of our helpless desire to relate to, and even love, the other, whatever that may be. Of course, we humans have long practice with dogs, cats and other pets, and our predisposition to anthropomorphize the natural world would seem to make us easy pickings for the rise of even crudely social machines. I first understood this watching a 2007 video of a Toyota robot playing the violin (unfortunately now deleted).
What is striking about the video is not so much the content, although a violin-playing robot is certainly impressive. Rather, it's the rapturous applause that the robot receives, standing alone on the stage (you can watch a similar video of the Jeopardy audience applauding IBM's Watson). For whom is the audience applauding? Is it for the designers and engineers? For the corporation that hired and funded them? For the feat that was just performed? Was it perhaps a social norm in whose performance the audience (qua audience, with all that implies) finds itself trapped, but is wholly irrelevant to the entity on stage? Or were they applauding the robot itself? There is also the possibility that they were applauding their own love for these things, much like Theodore and Martha - when it comes to humans, the narcissistic option is always a decent bet. Or one might even ask if they knew why they were applauding at all.
If there is anything to be learned from "Her" and "Black Mirror," it's that we ought to be prepared for the continuation and even deepening of this kind of confusion. We submit to machines not because of their superiority but because of a deep need we have to relate to the world around us, and to make it intelligible and familiar. This drive leads us to see the stars organized in the shapes of animals, and divinity in the forces of nature. This is, in fact, the answer to the debate on objectivity vs. subjectivity briefly touched on above: perhaps disappointingly for some, we have no choice. We are always embodying subjectivity in the world, because that is, quite literally, our wont.
In a supremely ironic gesture, towards the end of "Be Right Back," Martha's sister visits Martha in the house that she and Ash shared, and sees a man's clothes in the bathroom. Thinking that she has begun seeing someone new, and ignorant of the ersatz Ash's existence, she consolingly tells Martha, "You deserve whatever you want." Why, yes indeed: we all do. We'd better be ready, since that is precisely what we are going to get.