Monday, March 03, 2014
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3, 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs. "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3, 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs. "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not an end.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it; it is only when such mythical powers are ascribed that internet usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require deism, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users in the United States said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence", but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion, so it may be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of a general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive internet pundit quotes which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion or its mythical and supernatural aspects, which are essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition of religion but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity, whether by accessing computers, hunting for wireless networks, or checking emails and social media updates so frequently that the connectivity exceeds one's pragmatic needs, could be considered a ritual of Internet-centrism. If one feels compelled to check email, Facebook or Twitter every one to two minutes, even though it is unlikely that a message requiring urgent action has arrived, it may indicate the important role this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss their regular prayers or lose the rosaries that allow them to commune with their God, and for some humans, the ritual of internet connectivity appears to play a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers or cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just as "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits of Silicon Valley that people were outraged at the NSA, a government institution, for violating our privacy, while there was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism since the internet (both in its physical and its mythical form) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and social interaction are key to many religions, and Internet-centrism provides more tools for interacting with others than any other ideology, cultural movement or religion. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of it: one of its goals is to provide self-actualization through connectivity. We can modify settings to customize our web browsing experience, and we can pick and choose from millions of options of which online courses to take, which videos to watch or which music to listen to. This sense of connectedness and omnipotentiality provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to the spiritual experiences of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that at least in the short term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Monday, February 03, 2014
The Impossibility of Satan
by Scott F. Aikin and Robert B. Talisse
God is, by definition, the greatest possible thing.
If God is the greatest possible thing, then He cannot fail to manifest any perfection — otherwise, there would be a possible thing greater than He.
Existence is a perfection; that which does not exist lacks something that would improve it.
Therefore, God must exist.
The conclusion can be strengthened, further, with the thought that necessary existence is a greater perfection than contingent existence, and so it is necessary that God necessarily exists. Now, that's a pretty heavy conclusion derived only from some strikingly lightweight premises. This is what makes the Ontological Argument so interesting – it seems clear that something's gone wrong, but it turns out that it's very hard to explain what it is.
In our Reasonable Atheism and elsewhere, we've held that the Ontological Argument is a kind of litmus test for intellectual seriousness concerning God's existence. We've claimed further that atheists in particular had better grapple with it. Here's why: Every atheist thinks the argument goes wrong; moreover, they think it's obvious that it fails. But saying that the argument's failure is obvious is not yet to identify what the failure consists in. Yet very few atheists offer much more than simple derision of the argument. Now, that's not intellectually serious – especially if the whole point of any argument is to articulate reasons for the sake of guiding belief. Saying that an argument is obviously wrong and then not having anything substantive to say about its failure is contrary to what honest argument is all about. Smug dismissals of the Ontological Argument as insipid or mere wordplay are themselves mere blather. On top of that, it is exactly the sort of thing anyone devoted to the Enlightenment project should avoid. If you're committed to reason and think the Ontological Argument isn't any good, you've got to wrestle with it and devise an account of its flaws. And while you're at it, you had better bother to consult the most sophisticated versions of the argument available. Otherwise, you're just a poser and a hypocrite.
Now, the Ontological Argument has its critics, and some of the more trenchant objections have been devised by theists. One longstanding objection has been that the Ontological Argument proves too much – specifically, that it overpopulates the world with strange but necessarily existing entities. And so, to St. Anselm's version of the Ontological Argument, the Catholic monk Gaunilo ran the counterargument that the same reasoning could prove that there is a Perfect Island. Atheists have gotten in on the game too. Michael Martin has argued that the Ontological Argument can prove that there must be a perfectly evil being (1990: 93). Richard Dawkins claims that by identical reasoning he can prove that pigs can fly (2006: 84), and Christopher Hitchens argued that it allowed him to prove that there are dragons (2007: 265). We joined the game in our Reasonable Atheism, where we argued that the same reasoning at work in the Ontological Argument can be extended to prove that God can't be the thing that necessarily exists (2011: 88).
Here's another run at the "proves too much" critique, one that takes the existence-is-a-perfection premise in a quite different direction. Here, we are not concerned to show that the Ontological Argument overpopulates the Christian's world, but rather that it underpopulates it in a crucial respect.
Call it the Ontological Proof for the Impossibility of Satan. To start, we employ a definitional setup similar to that of the Theistic Ontological Argument. Let's say that Satan is, by definition, the worst possible thing. If something is the worst possible thing, then it not only must have lots of bad properties, but it must not have any perfections; it must be the kind of thing that could not be made any worse than it already is. If it had a perfection, it would be better, not worse, than a thing that lacked that perfection, and thus would not be the worst possible thing. Next, we adopt the Ontological Argument's premise that existence is a perfection. And the conclusion swiftly follows: Satan must lack existence. Further, assuming that a possibly existing thing is better than a necessarily non-existing thing, it must follow that it is necessary that Satan necessarily does not exist. The Christian's world just got a whole lot smaller.
At first blush, this argument might be excellent news for theists and atheists alike. That there's no Satan is, morally speaking, an excellent outcome. It is a proposition that atheists already knew, but it will also relieve theists of the threat of all-encompassing eternal torture.
But now consider a troubling consequence of the argument. If one accepts the Ontological Argument for the Impossibility of Satan, one must hold that there are some evils that, in virtue of not existing, are worse than evils that do exist. Consider a case of evil – say, the kidnappings in Cleveland, Ohio. An implication of our Ontological Argument for the Impossibility of Satan is that morally identical copies of those kidnappings that might have happened in Pittsburgh but did not are worse than the ones that actually occurred in Cleveland. This looks twisted. The implication is that the world is made better when evils actually occur, as existing evils are less bad than nonexistent ones. How could that be? The culprit is the premise that existence is a perfection. That's the premise driving our Ontological Argument for the Impossibility of Satan, and some version of it features in every version of the Ontological Argument for God's Existence that we know of. Indeed, it strikes us that some such premise is a sine qua non of Ontological Arguments as such. Alas, this premise must be rejected if the theist wants a world populated by both a God and Satan. Perhaps the better course for the theist would be to just abandon the Ontological Argument.
Monday, December 09, 2013
A Refutation of the Undergraduate Atheists
by David V. Johnson
In "San Manuel Bueno, Martir," the Spanish philosopher Miguel de Unamuno tells the fictional story of a parish priest in Valverde de Lucerna, a small Spanish town, and his successful conversion of a sophisticated favorite son, Lazaro, who had left to seek his fortunes in America and returned an atheist.
"The main thing," San Manuel says, in summarizing his ministry, "is for the people to be happy, that everyone be happy with their life. The happiness of life is the main thing of all."
When Lazaro arrives from the New World, he dismisses the town's medieval backwardness and begins confronting villagers about their superstitions. "Leave them alone, as long as it consoles them," San Manuel tells him. "It is better for them to believe it all, even contradictory things, than not to believe in anything."
Lazaro confronts San Manuel with a mixture of curiosity and respect, since San Manuel is not only beloved by Lazaro's family for his piety but also because he appears educated. Over time, the two become friends and, eventually, Lazaro rejoins the Church and takes communion, to the tearful delight of all.
The twist: Like Lazaro, San Manuel doesn't believe the articles of faith. ("I believe in one God, the Father Almighty, Creator of heaven and Earth, of all that is seen and unseen …") What he believes in, rather, is ministering to the needs of the villagers, in putting on such a convincing performance of dedication to Christ that they all believe he is a saint and have their faith in the Church and in life everlasting sustained. Lazaro's "conversion," then, is one consistent with atheism. He becomes a lay minister of sorts under San Manuel and eventually dies a Catholic.
I think of this story when I hear the arguments against religion of the late Christopher Hitchens, Richard Dawkins and Sam Harris. If Unamuno's story were updated, I could imagine Lazaro coming home to Valverde de Lucerna with a copy of God Is Not Great under his arm, ready to do battle with San Manuel. And if the story makes sense, we can imagine someone who has imbibed the arguments of Hitchens, yet converts to the faith under the saint's arguments.
The question is why.
* * *
I like to follow the practice of philosopher Mark Johnston and label the Hitchens-Dawkins-Harris trinity and their followers the "undergraduate atheists." And at the risk of oversimplification, I would summarize their view with the following statement:
Humanity would be better off without religious belief.
This view — call it the Undergraduate Atheists' Thesis (UAT) — asks us to compare two different lines of human history, one in which the vast majority of human beings have held and continue to hold religious beliefs, and one in which they haven't and don't. Their argument is that the world would be better off in the latter scenario.
I am an atheist who was raised Catholic and, like Lazaro, I am also someone who frets about the public's general lack of scientific understanding. Yet I am deeply skeptical of UAT.
First, demonstrating the truth of UAT would require an enormous calculation of the two competing scenarios. It demands that we add up all the good and bad consequent on human beings being religious, from the beginning to the end of human history, and all the good and bad consequent on human beings not being religious. We are then supposed to compare the two totals and see which version of human history winds up better.
My impression of UAT advocates is that they think it obvious that human beings would be better off without religion. Their typical mode of argument suggests this. They tend to argue by piling up a litany of anecdotes that, in total, suggest such a massive sum of evil from religion that it tips the scales so strongly toward the negative that a more careful weighing is unnecessary. But I remain unconvinced. In fact, I suspect the scales might tip the other way.
Why? For the same reasons as San Manuel Bueno's. The psychological consequences of religious faith — the deep satisfaction, reduction of existential anxiety and feeling of security and meaning it provides — would represent an enormous and underappreciated part of the calculation. Imagine the billions of believers that have lived, live now, or will live, and consider what life is like for them from the inside. Consider the tremendous boon in happiness for all of them in knowing, in the way a believer knows, that their lives and the universe are imbued with meaning, that there is a cosmic destiny in which they play a part, that they do not suffer in vain, that their death is not final but merely a transition to a better existence. This mental state is, I submit, so important to human happiness that people are willing to suffer and die for it, and do so gladly.
As someone who knows what it's like from the inside to be a believer, I suspect that I'm better able to appreciate this point than the undergraduate atheists, who perhaps never grew up as part of a faith. For them, the only thing worth calculating is the objective consequences of religious superstition. But that would represent a gross error.
Under the comparative scenario on which UAT rests, we are to imagine, as far as we are able, a course of human history without religious belief. This is exceedingly difficult to do, since religion is nearly universal across cultures. Yes, in this alternate universe, there would be no religious wars — but I suspect there would be wars. There would be no superstition — but I suspect there would be nonsense and folly all the same. But what this universe would lack is the ability of human beings to have religious faith and reap its subjective psychological benefits. I submit that this would be a huge net negative for humanity, even if we granted that the religious universe would have more war, more intolerance and more folly than the non-religious one — something I'm not willing to grant.
* * *
A related problem for this alternative scenario: Some researchers have suggested that there is a natural tendency in human beings, perhaps even grounded in their neuropsychology, that leads them to form religious beliefs. If this is true, the alternative scenario under UAT would have human beings like us — i.e. ones who have a tendency (dare I say a need?) for religious belief — who nevertheless lacked the resources to form such beliefs. That sounds to me like a recipe for mass misery.
Or suppose that in the alternative universe, human beings would not have this tendency towards religion. They would not be quite like us. Let us call them "Dawkinsians." They would be like human beings in every respect, including their stupidity, impulsiveness and tribalism, but they would lack any tendency toward forming religious beliefs. They would certainly lack the psychological boon from religion, but they would also somehow not have the need for it. They couldn't all be like David Hume, meeting death without blinking — that would be unfair. (Of course humanity would be better off if everyone were like David Hume!) What would it be like, from the inside, to be a Dawkinsian in a world of fellow Dawkinsians? To be a human-like creature, but to be satisfied with the rational belief that there is no God, no ultimate meaning or goodness to the universe, no life after death, and so on. Would Dawkinsians dread their own deaths? Would they have any capacity for mystical feeling? Would they suffer existential angst? Would they worry about the ultimate grounds of good and evil? If they did, then they would likely be worse off, I submit, than a world of human beings with religion. If they didn't, then Dawkinsians are a species that is so unlike ours that it's not a fair comparison.
Note that I do not need to secure agreement with the conclusion that humanity with religion is better off than without. All I need to put UAT in doubt is the consideration that a full investigation into its truth would require calculating not only all the good and bad objective consequences of religious belief versus the good and bad of a world without belief — wars, intolerance, violence, etc. — but also the subjective psychological consequences of human beings with religious belief versus humans without. If you believe that this is a hopelessly complicated task, you have reason to suspend judgment about UAT.
Hitchens, Dawkins, Harris, and their followers have something remarkably in common with religionists: they claim to know something (UAT) that cannot, in fact, be known and must be accepted on faith. The truth is that we cannot know what humanity would be like without religious belief, because humanity in that scenario would be so much unlike us that it would be impossible to determine what it would be like in that alternate universe. Their inability to acknowledge the immense calculation that would be required is unscientific. Their conclusion is as intolerant and inimical to the liberal tradition as the ranting of any superstitious windbag.
Monday, November 18, 2013
Black and Blue: Measuring Hate in America
by Katharine Blake McFarland
On Saturday, September 21, 2013, Prabhjot Singh, a Sikh man who wears a turban, was attacked by a group of teenagers in New York City. "Get Osama," they shouted as they grabbed his beard, punched him in the face and kicked him once he fell to the ground. Though Singh ended up in the hospital with a broken jaw, he survived the attack.
More than a year earlier, on a hot day in July, Wade Michael Page walked into Shooters Shop in West Allis, Wisconsin. He picked out a Springfield Armory XDM and three 19-round ammunition magazines, for which he paid $650 in cash. Kevin Nugent, like many gun shop owners, reserves the right not to sell a weapon to anyone who seems agitated or under the influence, and Page, he said, seemed neither. But he was wrong. Eight days after his visit to Shooters Shop, Page interrupted services at a Sikh Gurdwara in Oak Creek, Wisconsin, about thirty minutes southeast of West Allis, by opening fire on Sunday morning worship. He killed six people and wounded three others, and when local police authorities arrived on the scene, he turned the gun on himself.
Page, it turns out, had been a member of the Hammerskins, a Neo-Nazi, white supremacist offshoot born in the late 1980s in Dallas, Texas, responsible for the vandalism of Jewish-owned businesses and the brutal murders of nonwhite victims. He was under the influence. The influence of something lethal, addictive, and distorting: indoctrinated hatred. We don't know the precise array of influences motivating the teenagers who attacked Prabhjot Singh. But even considering the reckless folly of youth, their assault against him—a man they did not know, a physician and professor targeted only for his Sikh beard and turban—reverberates down the history of American hate crimes.
Last fall, I attended a workshop offered by the Southern Poverty Law Center on hate groups in the United States. The workshop was part of an educational retreat for law enforcement and corrections officials, and was being held at a remote lodge in northern Ohio on one of the most beautiful fall days I can remember, trees ablaze against a deep blue sky that betrays the blackness of space behind it. It was a strangely glorious setting in which to learn about skinheads. The dissonance was unnerving.
The man leading the workshop on hate groups was very muscular, a little shiny and a bit red in the face. Reminiscent of a cartoon bull, he is the kind of man I instinctively hope never to see angry. When I googled him before the presentation nothing turned up, but this anonymity is purposeful. Since the 1980s, SPLC has used the courts to undermine extremist groups, winning large damage awards on behalf of victims. Several hate groups have been bankrupted by these verdicts, rendering SPLC the occasional target of retaliatory plots. Thus, the low Internet profile and somewhat threatening physique of the workshop presenter, whose singular job it is to monitor these groups day in and day out. I found myself wondering about his family—what did his children know about their father's work, what did they think of it, were they safe?
Before the workshop, my knowledge of hate groups was limited, an epistemological deficiency afforded by privilege. I knew about the terror of the Klan in the 1800s, and their resurgence in the 1900s. I had studied, read, and heard firsthand stories of cross burnings and lynchings, sinister echoes of our nation's Original Sin. But my notion of modern-day extremism was based on the occasional unkempt white supremacist, rising up from his subterranean Internet world to buy a town. According to SPLC, the reality is more damning. Here's what I wrote down in my notebook during the workshop:
- There are more than 1000 active hate groups, including Neo-Nazis, Klansmen, white nationalists, neo-Confederates, racist skinheads, black separatists, and border vigilantes.
- This figure—this 1000+—represents a 67% increase since 2000.
- Since President Barack Obama, the nation's 44th, was elected in 2008, the number of Patriot groups, including armed militias, has grown 813%, from 149 in 2008 to 1,360 in 2012.
- Only 5–15% of hate crimes are committed by members of actual hate groups.
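For readers who want to check the arithmetic behind these notes, the 813% figure follows directly from the two raw counts quoted above. A minimal sketch in Python (the helper function is mine, not SPLC's):

```python
def percent_increase(old, new):
    """Percentage increase from an old count to a new count."""
    return (new - old) / old * 100

# Patriot groups, per the SPLC figures quoted above:
# 149 groups in 2008, 1,360 groups in 2012.
growth = percent_increase(149, 1360)
print(round(growth))  # prints 813, matching the cited 813%
```

The same formula applied to the 67% figure would require the year-2000 baseline count, which the notes do not record.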
In the margin next to this fourth fact, I scribbled three question marks and the words, how do we measure threat?
When I was six years old, my favorite fairytale was The Princess and the Pea. The Prince's search for a real Princess, a designation determined entirely by her sensitivity to a pea under twenty mattresses and twenty featherbeds, seemed remarkable. As an unduly sensitive child, I marveled at the notion that sensitivity could be the key to a happy ending. In my own life, even in those earliest years, sensitivity seemed only a liability.
But lately I've remembered the story in a different light, for its comment on what lies beneath. The ability of unseen, seemingly insignificant phenomena to affect the surface. A relatively small proportion of all hate crimes are committed by hate group members. But statistical insignificance might not obviate concern because numbers might tell only part of the story. I scarcely slept at all, the Princess said, I'm black and blue all over.
Here is a problem of statistical measurement: in 2008, two professors wrote a white paper that found no significant relationship between hate groups and hate crimes. "Though populated by hateful people," they write, "[hate groups] may be a lot of hateful bluster." But in 2010, Professor Mulholland at Stonehill College conducted a study that found hate crimes to be "18.7 percent more likely to occur in counties with active white supremacist hate group chapters."
Part of the problem is a lack of reporting. According to a report by the Bureau of Justice Statistics out this year, victims are less likely to report hate crimes to the police than they were ten years ago, with only 35 percent of all hate crimes reported. The result is that thousands of hate crimes go uncounted each year. The study also found an increase in the number of violent victimizations (92 percent of all hate crimes are now violent) and an increase in the number of religiously motivated crimes over the past 10 years.
In a somewhat complicated coincidence, the problem of inaccurate data collection was addressed by Prabhjot Singh in a New York Times op-ed he wrote over a year ago. He called on the FBI to stop categorizing anti-Sikh violence as anti-Muslim or anti-Islamic in their annual reports. He decried the popular assumption that all hate crimes against Sikhs are instances of "mistaken identity," wherein the attacker assumes the victims to be Muslim. A true and fair grievance. But a year and a month later, Singh was victimized in his own neighborhood in Harlem by a group of teenagers yelling, "get Osama."
How do we measure threat?
Just after the shooting at Oak Creek, and months before the workshop on hate groups, I attended an interfaith service at a Sikh Gurdwara to commemorate those killed by Wade Michael Page. Upon entering the Gurdwara, I was instructed to take off my shoes, which I did, and then a young woman handed me a scarf to cover my head. I was escorted to a long, white room, with an aisle down the center—women sitting on the floor to the left, men on the right, and an altar adorned with brightly colored tapestries and cloths at the front. The room was almost full, but I found a spot near the back. The women's headscarves—blood orange, deep blue, and scarlet—burned beautifully against the white walls.
The service opened with a Sikh prayer, and Dr. Butalia, the leader of this Gurdwara, welcomed us all in English. He expressed how much it meant to him and his community to be supported by so many visitors, and he asked all the Christians to stand. I stood up, along with the two Catholic nuns in front of me, and about fifteen others. When we sat down, he asked all the Muslims to stand. When the Muslims sat down, he asked the Jews to stand, then the Hindus, then the Buddhists, then the Baha'i, then the Jains, then the "various people of conscience." With each group that stood, the hard shell formed by the word "stranger" cracked and dissolved. Children ran back and forth across the aisle, holding hands, on important missions from mother to father and back again. Dr. Butalia described his friend, Satwant Kaleka, the leader of the Gurdwara in Oak Creek who died trying to protect his congregation with a butter knife. His voice faltered, "He was a peaceful man." Then we prayed for the man who killed Kaleka. We prayed for Wade Michael Page, naming him "a victim of hatred," and we prayed for his family.
Towards the end of the service, a speaker told us a story that went something like this: a long time ago, there was a king who sought to be the most powerful man in all the land. He went around proving his strength by breaking the branches off trees with his bare hands. A wise man saw him doing this and approached him. "Oh, you are very strong," said the wise man, "but now, can you put it back together?" "People who destroy are not powerful," the speaker said, "people who unite are powerful."
The earliest definition of the word "victim" dates back to the 15th century and connotes a holy sacrifice. By the following century, the word lost its exclusively sacred associations, and today four definitions are offered:
- a person who suffers from a destructive or injurious action or agency;
- a person who is deceived or cheated, as by his or her own emotions or ignorance, by the dishonesty of others, or by some impersonal agency;
- a person or animal sacrificed or regarded as sacrificed;
- a living creature sacrificed in religious rites.
A person harmed by injurious agency. A person deceived by her own ignorance. A person sacrificed. It's too much to measure.
And there is no word or concept for "victim" in the Sikh tradition. After he was attacked, Prabhjot Singh's responses embodied the Sikh concept of chardi kala, which translates to "joyous spirit" or "perpetual optimism." He said that if he could talk to his attackers he would "ask them if they had any questions," and "invite them to the Gurdwara where we worship." He was also thoughtful about his one-year-old son: "I can't help but see the kids who assaulted me as somehow linked to him."
Numbers and naming can take us only so far. Sometimes causality defies quantifiable analysis and sometimes the relationship of one thing to another is indirect, cyclical, or statistically unlikely. A restless night, a confusing coincidence. Perhaps the question is not exclusively, or even primarily, one of measurement—the measurement of threat and causation, the correct category and quantity of victims—but a different question entirely:
Can you put it back together? I'm black and blue all over.
Monday, April 29, 2013
The Folly of Perpetual Victimhood
by Jalees Rehman
I grew up in a culture of guilt. One of the defining characteristics of post-war Germany was the "Vergangenheitsbewältigung", a monstrous German word that describes the attempts to come to terms with the horrors of Nazi Germany and World War II. How could Germans have abandoned all sense of humanity and decency? Why had millions of Germans actively or passively engaged in the mass murder of millions of Jews, gypsies, homosexuals, socialists and so many other minorities? This Vergangenheitsbewältigung resulted in a deep-rooted sense of collective shame and guilt, one which transcended the generation which had lived through the war and even engulfed Germans born after the war and Germans with immigrant backgrounds, whose families obviously had no historic link to the atrocious crimes committed in Nazi Germany. We did not feel blameworthy in the sense of having to answer for the Nazi crimes, but we did feel that the burden of history had foisted a responsibility on us. We felt that it was our responsibility to be continuously vigilant, watching for any signs or symptoms indicating a recurrence of right-wing extremism, anti-Semitism, fascism, racism, militarism, nationalism, discrimination or other characteristics of Nazi Germany. Our obsession with collective introspection at times became so excessive that it paralyzed us, such as when we developed a general paranoia about expressing any form of German patriotism, because it might set us on a path to Nazism. Many Germans also had near-hysterical responses to any discussions about genetic engineering, because it evoked haunting memories of Nazi eugenics. But despite these irrational excesses, I think that we Germans greatly benefited from our post-war soul-searching, which helped us build a mostly peaceful country – no small feat, considering our past.
Roughly one decade ago, "mirror neurons" were among the hottest items in neuroscience research. These neurons in the brain of an individual were thought to fire upon observing behaviors in other individuals: When I see someone eating a delicious piece of chocolate, my mirror neurons fire and help create a proxy sensation or awareness in my brain that mirrors the observed behavior, so that I might have some sense of eating the chocolate myself. If this were true, mirror neurons would play a central role in generating a sense of empathy. Newer scientific research has questioned whether "mirror neurons" truly exist, but there is little doubt that our brain has some neurobiological substrate that enables empathy, even if it does not consist of the exact same set of anatomically defined neurons as was previously suspected. I therefore still like to use the "mirror neurons" metaphor, because it aptly evokes the image of a neurobiological mirror in our mind. I would like to extend that mirror metaphor and also propose that our mind might contain "guilt neurons", which fire when we observe some degree of resemblance between ourselves and perpetrators of crimes. Part of being immersed in the post-war German tradition of collective guilt and soul-searching is that it endowed me with ultra-sensitive hypothetical "guilt neurons". When I hear about a tragedy or crime, I not only feel the natural empathy with the victims, but in a reflex-like manner ask whether I bear some degree of responsibility – not blame – for this tragedy or crime and how best to work towards preventing it in the future. This "guilt neuron" activity is strongest when I sense that the perpetrator is a member of an in-group that I also belong to, such as crimes being committed by fathers or husbands, by Germans, by scientists or physicians, by Muslims, by people with a South Asian heritage and so forth.
When Anders Breivik in Norway committed his mass murder in 2011, I felt a very deep sadness, because I could really empathize with the victims and their families. He killed teenagers and young adults attending a youth camp of the Social Democratic party. His victims could have been my children, and a couple of decades ago, I might have attended a similar youth camp in Germany. My guilt neurons were silent – I did not feel much of a responsibility because I had little to nothing in common with the perpetrator. He despised everything that I supported – diversity, feminism, progressive-liberal thought and the environmental movement. But I felt that there were people who ought to have felt some degree of responsibility. His manifesto quoted extensively from far-right bloggers and authors in the United States and in Europe, who seemed to have shared his world-views. Does this mean that everyone promoting anti-immigrant or far-right views in Europe and the US should have been blamed for the deeds of this mass killer? Of course not! They did not directly provide him with the weapons and they did not ask him to murder the social democratic youth – but shouldn't one take some responsibility for promoting hateful messages that denigrated immigrants, Muslims and citizens with progressive-liberal views? The responses of the far-right politicians and authors who might have unwittingly influenced Breivik were quite disappointing. Instead of undergoing an introspective analysis, the far right just issued perfunctory condemnations, stating that they would never have endorsed the murders. The politicians and far-right bloggers continued to engage in their hateful rhetoric and even tried to seize the opportunity to portray themselves as unfairly maligned victims. The Breivik acts of terror did not seem to have activated the "guilt neurons" of the far right.
The week following the Boston Marathon bombings on April 15, 2013 was a very sad week for me. Boston is one of my favorite cities in the world. It is the first US city that I ever visited, and I spent many months there when I was a student in the 1990s. Boston eased me into American culture by cushioning the culture shock that Europeans experience when they first visit the US, mostly because it reminded me a lot of my home town Munich, famously known as the "Weltstadt mit Herz" ("city of the world with a heart"). Like Munich, Boston is wonderfully suited for long city walks. The Bostonians were extremely hospitable and friendly. I remember seeing beautiful sunsets in Boston, spending hours in the wonderful bookstores in Cambridge and Boston and being thrilled by the plethora of universities and their libraries in the Boston area, which seemed like an endless treasure trove of knowledge. I was thus devastated when I saw the tragedy of the bombings unfold more or less live on the Internet, with Twitter revealing painful descriptions of victims who had lost their limbs at a marathon. I was haunted by the image of the young boy Martin Richard holding up a sign in 2012 which said "No more hurting people" – only to be murdered the following year in the Boston Marathon bombings. The idea of this beautiful city, normally bustling with activity and creativity, being forced into a lockdown because of some psychopathic killers was heartbreaking.
On Friday morning, I heard the news that the perpetrators had been identified: two Muslim immigrants of Chechen origin. They were brothers; the older one, 26-year-old Tamerlan Tsarnaev, had been killed in a shoot-out, while the younger one, 19-year-old Dzhokhar Tsarnaev, had not yet been captured, and an ongoing manhunt was still paralyzing the city of Boston. There were vague reports of "Islamist connections" of the older brother, based on his alleged YouTube video playlists. The younger brother was a college student at the University of Massachusetts and had a Twitter account with the handle @J_tsar, from which he had sent his last tweet on April 17, two days after the Boston Marathon bombing. His last tweet was a re-tweet of the conservative Muslim cleric Mufti Ismail Menk: "Attitude can take away your beauty no matter how good looking you are or it could enhance your beauty, making you adorable." Dzhokhar Tsarnaev's last self-authored tweet, sent one day after the bombing, was "I'm a stress free kind of guy" – both tweets seem rather cynical coming from someone who had helped inflict so much suffering. His Twitter feed of the past months consisted largely of mindless blather, evoking the traditional cliché of the banality of evil, but it also contained a number of tweets indicating that he saw himself as a Muslim, even quipping about how Muslims at his mosque thought he was a convert to Islam instead of being born a Muslim.
The specific motives of the two brothers were not yet known when the news broke. Did they murder and maim their fellow citizens because they felt it was consistent with or even mandated by their view of Islam? Was it a political statement regarding the war in Chechnya and they just happened to choose innocent civilian targets in Boston because it was easier than planting bombs in Chechnya or Russia? Were they psychopaths seeking notoriety and infamy without any specific religious or political goals? Were they aided by a terrorist organization or acting as individuals?
Multiple Muslim organizations and prominent Muslims strongly condemned the Boston Marathon bombings, expressed their condolences for the victims and made it very clear that such acts of terror were inconsistent with Islam. Muslim organizations routinely issue such statements when Muslims commit acts of terror, but the question remains whether such statements are enough. Since I possess overactive German guilt neurons, I feel that as members of the Muslim community in the US, we have a deeper responsibility to undertake an introspective analysis and explore why US Muslims engage in forms of violence. Some might argue that there is no need for such introspection, since we do not yet know whether the motives of the Tsarnaev brothers were in any way linked to Islam. Even apart from the Tsarnaev brothers' motives, US Muslims need to understand that there is an unfortunately high level of tolerance for suicide bombings or violence against civilians. A Pew survey conducted in 2011 revealed that 13% of US Muslims thought suicide bombings or violence against civilian targets could be justified to defend Islam (rarely justified: 5%; sometimes justified: 7%; often justified: 1%). The Pew survey compared the results to those obtained from surveying Muslims in Pakistan, of whom only 7% felt that such violence could be justified in the name of Islam. Sadly, this degree of acceptance of suicide bombings or violence against civilians among US Muslims has not budged since 2007. This suggests that there is a disconnect between US Muslim organizations (which categorically condemn all attacks against civilians) and the US Muslim community.
Even though the 13% represent a small minority within the larger US Muslim community, they might be the ones who are most likely to be radicalized, and it is thus important to understand what motivates them to endorse suicide bombings and violence against civilians. One hypothesis that can be explored is whether the Muslim self-perception of collective victimhood may contribute to this willingness to endorse violence. During the past 15 years that I have lived in the US, I have noticed that in Friday sermons (khutbahs), discussions, lectures, articles and books, American Muslims often perceive themselves as collective victims. Khutbahs routinely end with prayers for people in need, but in my experience, there is a rather one-sided portrayal of the global Muslim community as victims: khateebs (khateeb = person who gives the Muslim Friday sermon or khutbah) frequently mention the plight of Muslims who are oppressed and persecuted in regions such as Palestine, Chechnya, Kashmir or, more recently, Burma. However, there is little mention of prayers for victims in situations where Muslims are the primary perpetrators, such as when Sunni Muslims murder Shia or Ahmadiyya in Pakistan, or when they kill or persecute Christians, Jews or atheists. The buzzword "Islamophobia", which is not really a phobia in the psychiatric sense, is frequently used to depict Muslims as victims. There are many cases of anti-Muslim hate speech and discrimination, but the haphazard use of "Islamophobia" to bludgeon legitimate criticisms of Muslims or Islam is rendering this term useless. An exaggerated "Islamophobia" view of the world also perpetuates the one-sided portrayal of Muslims as victims instead of promoting a more balanced view, one which would also include some discussion of the anti-Western hostility found among Muslims ("Occidentophobia", incidentally, is also not a true "phobia").
Is there any evidence that such a sense of collective victimhood could affect one's moral judgment? A remarkable study conducted by the social psychologists Michael Wohl and Nyla Branscombe lends credence to this idea. In a paper entitled "Remembering historical victimization: Collective guilt for current ingroup transgressions", published in 2008 in the Journal of Personality and Social Psychology, Wohl and Branscombe examined the acceptance of Israeli acts of violence against Palestinians by Jewish Canadians. Using a web-based questionnaire, they surveyed Jewish Canadians in two different conditions: one in which participants were shown a website that reminded them of the Holocaust and the suffering of Jews, and one in which participants just saw a neutral website. Importantly, participants who were reminded of the suffering of Jews in the Holocaust (prior to answering the questions) experienced significantly less guilt about Israeli actions against Palestinians. In a different set of experiments, Wohl and Branscombe then asked Americans how they felt about the harm inflicted by American troops on Iraqis. The American participants felt far less guilt regarding the American attacks when they were reminded of the September 11, 2001 attacks. Interestingly, they also felt less guilt about the Iraq war when they were reminded of the Pearl Harbor attack. This suggested that it was not a causal link between September 11, 2001 and Iraq that had made them endorse American violence, but merely the sense of collective victimization – independent of whether the perpetrators were the Japanese military or Muslim terrorists.
Considering these data, it might be important to study whether continuously reminding Muslims of historical or ongoing collective victimization – as victims of "Islamophobia" or of military actions in Palestine, Kashmir or Chechnya – could promote a justification of violent acts, quite similar to the participants studied by Wohl and Branscombe. Conversely, a more balanced and realistic view of history and current affairs, one which would depict Muslims as both victims and perpetrators, might lower the likelihood of Muslims endorsing violence.
On the Friday after the Boston Marathon bombings, prior to heading to the Friday sermon, I wondered whether the newly disclosed information that the bombers were US-based Muslims would help promote a process of soul-searching in the American Muslim community. Unfortunately, the Twitter feed of one of the most popular English-language Muslim blogs, MuslimMatters.org, known for its überconservative or right-wing ideas, did not suggest that this would occur. Some of its tweets and re-tweets on Friday morning suggested an all-too-familiar reaction among American Muslims. The religion of the Tsarnaev brothers was supposedly not relevant and had no bearing on the attacks; "only the perpetrator is responsible for the crime"; "If it wasn't you, then don't feel guilty. Do not take the burden of others upon your shoulders when they are wrongfully placed there"; and there were tweets about how Muslims might need to be vigilant about potential "Islamophobic" backlashes: "Please contact your local CAIR chapter if you experience any type of violence as a result of the tragedy in Boston: cair.com."
The idea that somehow "only the perpetrator is responsible for the crime" is puzzling, since we routinely look at the context of a crime. When Adam Lanza went on a shooting rampage, murdering children and terrorizing an elementary school, American society did not just respond with "only the perpetrator is responsible for the crime". There was an extensive effort made to re-evaluate gun laws and the mental healthcare system, and there was a general shift in public opinion on gun control. It may be important to clarify the difference between "blame" and "responsibility". As a society, we should take responsibility to help each other and care for each other, and when we fail to do so, there is no shame in taking responsibility for that failure. That does not necessarily mean that we are all to "blame" for the acts committed by the Tsarnaev brothers or by Adam Lanza. Nor should we expect that only Muslims have a responsibility to act in response to the Tsarnaev crimes. One should explore all the factors that resulted in the tragedy, such as failures of law enforcement to detect the planned plot, how the brothers accessed the weapons and training that enabled them to commit their crimes, and whether there had been warning signs that could have alerted family members, friends and colleagues. Muslim soul-searching is just part of the greater soul-searching process that involves society-at-large in response to the tragedy.
As I headed towards our Friday khutbah in Chicago, I wondered whether the khateeb would broach this difficult subject. The first part of the khutbah was about Moses and David, and how these two prophets should be our role models because they exemplified steadfastness in their faith, gratitude and prayer, thanking God even under the most difficult circumstances. The second part of the khutbah specifically addressed the Boston bombings. The khateeb strongly condemned the terror attacks and said that Muslims are never allowed to kill innocent civilians. He then explained the horrors of the Chechen wars and how Muslims suffered at the hands of the Soviet and Russian military. However, instead of an analysis and introspection addressing how we could help reduce the recurrence of such acts, the khateeb indicated that he wanted to mention one other event in this context. He said that after the Boston attacks, an interfaith service had been planned and that the initially proposed Muslim representative had been vetoed by some members of the Boston community. The objection to this particular choice stemmed from the suggested imam's alleged ties to Islamist groups, and a different Muslim representative was then chosen for the interfaith service. Our khateeb then made a rather bizarre statement in a defiant tone, saying that Muslims should choose their own leaders instead of allowing "Zionists" to make decisions for the Muslim community! Rather than look in the mirror and think about potential reasons why some US Muslims justify violence with religion, Muslims were again being portrayed as victims of alleged "Zionists". The promising first part of the khutbah had focused on Moses and David and emphasized the shared Abrahamic traditions of Islam, Judaism and Christianity, but the same khutbah ended with the spreading of unnecessary conspiracy theories and a regurgitation of the image of the victimized Muslim. I left the khutbah with a heavy heart.
In the subsequent days, I observed how Muslims attempted to downplay the Muslim connection of the Tsarnaev brothers but I also saw how right-wing, anti-Muslim American groups began asking for massive profiling of Muslims merely based on their faith or ethnicity. We need to move beyond the two extremes - the one-sided portrayal by anti-Muslim hate-mongers of Muslims as purely evil perpetrators and the equally one-sided portrayal of Muslims as perpetual victims. We can then achieve a balanced and honest view of the role of Muslims in American society with a realistic and equitable distribution of responsibilities and expectations.
As with most unfathomable crimes, there are probably many factors that come together; there is no single all-explanatory cause. The vast majority of supporters of far-right ideology do not go on shooting rampages like the Norwegian terrorist Anders Breivik did. The vast majority of homes containing an arsenal of guns do not give rise to child murderers such as Adam Lanza. The vast majority of Muslims who watch Islamist YouTube videos do not commit terrorist attacks. In all of these cases, we have to carefully analyze the risk factors that lead to such tragedies and work together to reduce the risk of recurrences. I do not want to live in a libertarian heaven with dormant "guilt neurons", where everyone is exclusively responsible for their own actions and where we can expediently shrug off any responsibility for the suffering of fellow humans or for the crimes committed by others. The strength of a society depends on the willingness of its members to engage in introspection and shoulder responsibilities.
Monday, September 17, 2012
Civility and Public Reason
Scott F. Aikin and Robert B. Talisse
According to a prevailing conception among political theorists, part of what accounts for the legitimacy of democratic government and the bindingness of its laws is democracy’s commitment to public deliberation. Democracy is not merely a process of collective decision in which each adult citizen gets precisely one vote and the majority rules; after all, that an outcome was produced by a process of majoritarian equal voting provides only a weak reason to accept it. The crucial aspect of democracy is the process of public reasoning and deliberation that precedes the vote. The idea is that majoritarian equal voting procedures can produce a binding outcome only when they are engaged after citizens have had ample opportunity to reason and deliberate together about matters of public concern. We claimed in last month’s post that democracy is all about argument; this means that at democracy’s core is public deliberation.
In a democracy, public deliberation is the activity in which citizens exchange reasons concerning which governmental policies should be instituted. This activity is necessary because democratic decision-making regularly takes place against a backdrop of disagreement, where different conceptions of public interest conflict. It is important to note that although reasoning always has consensus among its goals, democratic deliberation is aimed primarily at reconciling citizens to the central reality of politics, namely that in a society of free and equal individuals, no one can get everything he or she wants from politics. As democratic citizens, we disagree about which policies will best serve the public interest, and so, when democracy makes collective decisions, some of us will lose – our preferred policy will fail to win the requisite support. Yet democratic laws and decisions are prima facie binding on us all, even when they conflict with our individual judgments about what is best.
Public deliberation is that component of the democratic process in which citizens show each other respect: When democracy decides, some will win, and others will lose, but everyone has the opportunity in advance of the voting to present reasons and arguments in favor of their preferred outcome and against its competitors. Even though democracy requires each of us to live under some laws and policies that we individually oppose, we nonetheless can see ourselves as something more than mere subjects; because we each are able to contribute to the deliberation leading up to the vote, we can see ourselves as authors of the laws and policies that result, even when our individual judgment opposes those results. In short, public deliberation enables us to see democratic policies as justified even in cases where we individually think them mistaken. We cannot require unanimity in a society of free and equal citizens, but we can nonetheless respect each other by resolving to live together under only those laws and policies that can be justified. Public deliberation is the means for exhibiting this kind of respect.
Given that public deliberation is supposed to manifest respect among citizens who disagree, there are a few desiderata that processes of public deliberation must satisfy. The most obvious is egalitarianism. Those who affirm a view about the public good must be open to questions and challenges from any quarter; every citizen has the right not only to assert views, but also to voice objections. “Because I said so” and “you don’t count” are never valid moves in public deliberation. This leads naturally to an additional feature of proper public deliberation, namely, reasonableness. If processes of affirming views and voicing objections are going to manifest respect among citizens, then what gets exchanged must be reasons rather than threats, commands, and insults. At the very least, this means that public deliberators must argue in accordance with basic rules of critical thinking and proper inference. But reasonableness also requires that citizens exchange reasons of a certain kind. More specifically, in order to be reasonable, public deliberation must be conducted by means of reasons that are themselves public. Public reasons are those reasons that are recognizable as reasons by democratic citizens as such. They are reasons that could be acknowledged from the perspective of any democratic citizen, rather than only from some particular individual perspective or other. Accordingly, in public deliberation, citizens must reason from a public perspective rather than from the perspective of their individual moral or religious convictions. Just as “Because I said so” does not count as a reason in public deliberation, neither does “Because my church says so.”
Here’s why. We noted above that the main function of public deliberation is not to prove that one’s views about the public good are true, but rather to show one’s fellow citizens that one’s views about the public good are justifiable. And to show one’s fellow citizens that one’s views about the public good are justifiable is to show that they are justifiable to them. In order to do this, one must articulate the case for one’s views in terms that do not presuppose one’s own particular moral, metaphysical, or religious commitments. For one’s fellow citizens may reject these commitments without thereby disqualifying themselves for democratic citizenship.
An example will help. Imagine a fellow citizen affirming that the state ought to prohibit same-sex marriage because God forbids homosexuality. Here, what has been offered is a reason that could count as a reason only for those who hold certain religious convictions. But free and equal citizens of a democratic society are not required to have any religious convictions at all. So the proposed justification fails to show that the position is justifiable to them. Contrast this with the case of a fellow citizen who affirms that the state ought to prohibit same-sex marriage because permitting it would weaken the stability of the family, thereby weakening the most basic institution of all human society. Social stability is a concern for democratic citizens as such. Accordingly, in response, a critic will challenge the claim that allowing same-sex marriage will undermine the stability of the family, and thus social stability overall. But the important thing is that the social stability argument proposes a reason of the right kind. Those who support same-sex marriage cannot simply say in response, “Who cares about social stability?” They instead need to engage with the reasons offered by the same-sex marriage opponent. To be sure, we are confident that the social stability argument against same-sex marriage falls short, but that is a different matter from what is now at issue, namely, which reasons are properly public.
We may say that public reasons are of the kind that cannot be dismissed as irrelevant or unintelligible by democratic citizens. Thus there is a fundamental difference between a reason such as “The Bible forbids it” and “Equality requires it.” One who dismisses the former does not thereby disqualify himself for democratic citizenship; one who dismisses the latter does. Accordingly, a group of citizens that insists on a public policy that can be supported only by means of nonpublic reasons thereby shows disrespect for their fellow citizens. Put otherwise, to affirm a public policy that cannot be supported by public reasons is in effect to say to one’s fellow citizens “Because I said so.” And that’s to deny that one’s fellow citizens are one’s equals. That’s disrespectful.
Indeed, it’s uncivil. The moral core of democracy consists in the project of enabling citizens to live together socially as equals, despite the fact that they disagree deeply about fundamental moral and religious matters. This democratic moral vision can be realized only when citizens recognize a duty to respect each other as fellow citizens, equal sharers in political power. This respect requires citizens to recognize what John Rawls called the duty of civility, which is the duty to offer one’s fellow citizens public reasons when deliberating with them about the public good. Knowing that deliberation occurs against the backdrop of deep disagreement, we must on the one hand be willing to recognize the diversity of religious, philosophical, and ethical commitments available to democratic citizens. On the other hand, we must be able to explain the basis for any policy we advocate with reasons we can expect any of those diverse individuals to endorse as consistent with their status as a fellow free and equal citizen. That’s the tightrope of democratic justification. Democratic deliberation, then, requires us to argue from a public perspective.
Accordingly, we see that civility is indeed all about being respectful. But the relevant kind of respect is not that of the calm tone and cool demeanor. The respect proper to democratic citizens has to do with the ways in which we acknowledge our fundamental equality as sharers in self-government. And this is in turn a matter of whether we reason together even when our reasons conflict.
Monday, July 23, 2012
The Argument from Ugliness
by Scott F. Aikin and Robert B. Talisse
1. The universe (or parts of it) exhibit property X.
2. Property X is usually (if not always) brought about by the purposive actions of agents who created objects in order that they be X.
3. The cases mentioned in Premise 1 are not explained (or fully explained) by human action or non-intentional events internal to the universe.
4. Therefore: The universe is (likely) the product of a purposive agent who created it to be X, namely God.
The variety of teleological arguments is as broad as the range of properties one can reasonably substitute for X. Traditional teleological arguments plug in for X the claim that some feature of the universe is fine-tuned for life, or that the universe has whatever is required to support creatures capable of consciousness or moral responsibility. The argument from beauty, by contrast, begins from the premise that the universe exhibits beauty. This, the argument runs, entails that the universe must have been created by God, and thus that God exists. But teleological arguments have what we call evil twins. These are arguments that are teleological in structure, but proceed from premises concerning the imperfection or nastiness of the universe to the conclusion that there is no God. The universe, after all, is a mixed bag. Thus, for every theistic argument from, say, fine-tuning, there’s an atheistic challenge beginning from the observation that precious little of the universe is inhabitable and that living creatures are actually poorly designed. Similarly, for every theistic argument from consciousness, there’s an opposing atheistic argument contending that consciousness is deeply flawed and in any case not much of a boon. And for every theistic argument from the fact of moral responsibility, there’s an atheistic argument from immorality. This is what we call the evil twin problem: if theists contend that teleological arguments are valid in their logical form, then they must confront the atheist versions.
Here we will pose the evil twin problem for the argument from beauty: the argument from ugliness.
The theistic argument from beauty has been around at least since Hesiod, who explains the grandeur of the world as a product of Gaia and Ouranos’ love. Plato, too, invokes the divine to account for beauty in the Symposium. Augustine gives an explicit version of the argument in his Confessions:
We look upon the heavens and earth, and they cry aloud that they were made. . . . It was You, Lord, who made them: for You are beautiful, and they are beautiful; You are good, and they are good: You are, and they are. (XI.4)
In the twentieth century, F.R. Tennant proposed a version of the argument, noting that the world is “saturat[ed]” with beauty (1930, 91). He continued, “Nature is sublime or beautiful, and the exceptions do not but prove the rule” (1930, 91-2). Nature, Tennant then infers, must be the product of a mind with the purpose of aesthetic fulfillment, intent on producing something beautiful. Mark Wynn, extending Tennant’s line of thought, notes that:
Most believers, it seems to me, are more likely to be impressed by the beauty of nature, when considering whether the world answers to providential purpose, than by mere regularity or order. (1999, 15)
Wynn, however, is modest about how much the case from beauty can actually prove; he holds that it cannot be “persuasive in isolation from other [theistic] arguments” (1999, 36). Nonetheless, Wynn does take it to be a positive case. Finally, Richard Swinburne holds that “God has a reason to create a beautiful inanimate world – that is, a beautiful physical universe” (2004, 121). Swinburne claims that God, being the source of good, will be instrumental in producing as much good in as many varieties as possible. So, he reasons, if God creates a universe, it will be beautiful.
Since the universe is beautiful (and a universe without a creative god would likely not be quite as beautiful as this one), we therefore have reason to believe that God exists and has aesthetic values (2004: 190).
Again, the general problem for theistic teleological arguments is that the world is a mixed bag. Yes, there is order, pleasure, goodwill, and beauty aplenty. But there’s also disorder, suffering, hate, and ugliness. Now, if we are reasoning from effects to a cause, then the cause of the mixed-bag universe must be a mixed bag as well. But God can’t be a mixed bag. You get the idea. Like the argument from beauty, the argument from ugliness proceeds from a few key cases. Consider terrible art. We have some in mind, for example songs by the 1980s rock group Ratt and Thomas Kinkade paintings. They are schlocky and stupid, things merely to endure. Yet these are human products. So consider instead the harsh call of crows, or the unsightly leaking of sap from a splintered tree limb. Or take the human form and the insipid and unwieldy elbow – even the most graceful can only manage its awkwardly hinged angularity. The anglerfish of the deep and the arowana of the Amazon are hideous creatures, and spiders are so awful that many can hardly contain themselves when up close to them. In addition, there are sticky and stinky swamps, boring groupings of trees, misplaced shrubbery, and intermittent villages filled with sticky and stinky children. Yuck. A friend of ours recently stopped smoking, and he remarked that, as an unfortunate consequence, his sense of smell had returned: “Most of the smells in the world are disgusting.” And, of course, there’s also vomit, pus, bile, phlegm, and feces. The world is tolerable only in small and selective doses, or perhaps from very far away. Why so much ugliness? Is it that there’s a God who has inverted aesthetic sensibilities and wishes to impress them upon us? Is God ugly and so causes earthly ugliness? The argument from beauty contends that since there is beauty in the world there must be a God who is beautiful or prefers beautiful things. By similar reasoning, one might conclude that God either is ugly or likes ugliness.
But there’s a third possibility: Perhaps God created such an ugly world because he hates us. Consequently, given the amount of ugliness in the world, we have reason to believe that God either is ugly, likes ugliness, or hates us and torments us with ugliness. However, given that God must be a perfect unity of all good things, a being that either is ugly, likes ugliness, or inflicts ugliness on others cannot be God. Therefore there is no God.
Monday, May 28, 2012
Only Philosophers Go to Hell
by Scott F. Aikin and Robert B. Talisse
The Problem of Hell is familiar enough to many traditional theists. Roughly, it is this: How could a loving and just god create a place of endless misery? The Problem of Hell is a special version of the Problem of Evil, which is the general challenge that a just and loving God would not intentionally create a world with excessive misery, and yet we see the excesses all around us. Hell, on its face, seems like it is actually part of God’s plan, and moreover, the misery there far exceeds misery here. At least the misery here is finite; it ends when one dies. But in Hell, death is just the beginning. Those in Hell suffer for eternity. Hell, so described, seems less the product of a just and loving entity than a vicious and spiteful one. That’s a problem.
There are two standard lines in defense of Hell. The first is the retributivist line, and the second is the libertarian line. We think that if either succeeds, only philosophers could go to Hell. This is because only someone who understands exactly what she is doing in sinning or rejecting God could deserve such a fate as Hell, and only a philosophical education could provide that kind of understanding. So, it follows, only philosophers can go to Hell.
Retributivism with regard to Hell runs as follows: Those in Hell are sinners, and sin demands punishment. Therefore, Hell is necessary; it is the place where that punishment is delivered. This seems reasonable as far as it goes, and it does work as a nice counterpoint to the regular complaint that sometimes the wicked prosper in this life – they will suffer appropriately in the next. But retributivism about Hell ultimately seems problematic. Grant that sinners deserve punishment. Nonetheless, the amount of punishment being visited upon those in Hell is objectionable. Sinners can’t do infinite harm, no matter how bad they are. But they get an eternity of torment. Punishment is just only when it is proportionate to the wrongs committed by the guilty. So even if Hell’s express purpose is to enact retribution on those who are guilty of sin, and even if the guilty do get what’s coming to them in Hell, making that punishment eternal is moral overkill. Again, disproportionate punishment is morally wrong, and Hell is guaranteed to be exactly that for everyone there.
Take a moment to consider some moral wrong you’ve done. Perhaps you stole a piece of bubblegum from the corner store. That was wrong. You know that. Now imagine that you were caught in the act, and you were given a beating for doing that wrong. And we’re not talking just any beating – we’re talking about a real drubbing, one that ranges from your legs, up to your torso, and then to your face. And it doesn’t stop. The people who caught you keep hitting you. For a week. For a month. For a year. Now, for sure, you got punished for your moral error. The problem with the punishment is that it was out of proportion to the seriousness of the wrong you committed. You stole bubblegum, but you got a year-long beating in return. The beating was much worse than the moral harm done in stealing the bubblegum. Now consider: Every sin is only a finite harm, but punishment in Hell is eternal. No matter how bad the sins of sinners are, they will always be punished disproportionately in Hell. That’s unjust.
One response might contend that the sin of those in Hell isn’t in the temporal wrongs they have committed in sinning, but rather, the sinners in Hell commit the wrong of rejecting God, the greatest good. That is their infinite error. Consequently, the sin of those in Hell is infinite, and so they deserve eternal (hence proportionate) punishment.
Notice that in order to deserve the full measure of punishment in Hell, a sinner who rejects God must know exactly what she’s doing. If, say, the person who rejects God does so because she did not understand Him properly or because she did not know what she was rejecting, then she cannot deserve the full punishment of Hell. She has made an error, but it is an error not of character; it consists in her failure to grasp the divine. She didn’t fully understand her actions. Only those who understand exactly what they are doing deserve proportionate retribution.
It seems clear that only someone with appropriate philosophical acumen could have that kind of understanding. Being familiar with a textual tradition is clearly insufficient, as the art of interpreting those texts is what’s required to take them appropriately. (No one takes Solomonic wisdom to consist in threatening to chop up anything in contention.) Philosophy is what constitutes those interpretive moves. So, on the retributive theory of Hell, only a philosopher could justly go there.
The other going justification for Hell is libertarianism, the view that those in Hell have freely chosen it, embracing an eternity away from God. God made Hell as a place where those who want to be away from Him can go. As C. S. Lewis put it, “the doors of Hell are locked from inside.”
Again, choosing is not simply a matter of what gets chosen, but it is also a matter of what the chooser thinks she’s choosing. A person who freely drinks a cup of petrol while believing it to be a cup of water does not really choose to drink petrol. Consequently, only those who know who and what God is can properly choose to be without Him. And only those with accurate philosophical understanding of God can be in this position. Again, only philosophers can go to Hell.
All this seems excellent news for non-philosophers. Socrates may have been right that the unexamined life is not worth living, but at least it keeps you out of Hell. But there’s some bad news, too. By way of the same kind of arguments presented above, we should hold that Heaven is reserved only for philosophers. If Heaven is our loving communion with God, it must be something we’ve knowingly chosen. God could not want us to enter into an eternity of loving communion with Him without our knowing what we are doing. And, again, only philosophers could understand what that choice amounts to. Only philosophers can go to Hell. And only philosophers can go to Heaven. Maybe that’s not such good news for non-philosophers. But perhaps there’s some comfort in the thought that non-philosophers might be able to avoid going anywhere for eternity.
Aikin and Talisse's Reasonable Atheism is available from Prometheus Books.
Monday, January 16, 2012
On the Areopagitica: Why Milton’s Defence of Free Speech Remains Almost Unsurpassed but Not Secular
by Tauriq Moosa
In 1643, the English Parliament instituted the Licensing Order, imposing pre-publication censorship on all printed writings, aimed mostly at newspapers. This followed the abolition, two years earlier, of the Star Chamber, which according to Kevin Marsh “had been the monarchy's most potent tool of repression for centuries: a court that held secret sessions, without juries, and produced arbitrary judgments... all to please the king.” With the Star Chamber gone, blanket censorship lapsed, and Parliament took action to restore it with the Licensing Order. The next quilt of authority was simply knitted from the frayed threads of the previous.
Arrests, searches and seizures of books, book burnings and all the other classic displays of authoritarian hatred were the outcome of this Order. The Stationers’ Company, a guild of booksellers, printers and so on, established by Queen Mary in 1557, was put in charge of enforcing the Order. Hindsight makes those fires brighter, the stupidity greater and the fear lesser; curled pages invite our anger at oppression, but in the eyes of the moralisers it meant something called order.
The great poet John Milton published a speech in 1644 called Areopagitica (full title: Areopagitica: A speech of Mr. John Milton for the liberty of unlicensed printing to the Parliament of England). In it, he made an impassioned plea that rings out today, calling for free thought, speech and reason, for “when complaints are freely heard, deeply considered and speedily reformed, then is the utmost bound of civil liberty attained, that wise men look for.”
His most powerful argument is encapsulated in what is surely one of the most beautiful sentences ever written:
A man may be a heretic in the truth, and if he believe things only because his pastor says so, or the assembly so determines, without knowing other reason, though his belief be true, yet the very truth he holds becomes his heresy.
Here, Milton cut to the heart of the problem.
Belief is not knowledge; it is merely the formation of a viewpoint on a particular subject. Belief backed by evidence, reason, engagement and self-criticism is the ideal of any thinking person – but we cannot expect all our beliefs to follow suit, though we ought, as much as possible, to test our beliefs against these forms of self-engagement, since we could be wrong.
Milton highlights that even if a belief be absolutely true – “the planet is not on the back of a tortoise” – it is the basis of that belief that determines whether one is a heretic or not. If the basis of your belief is that some pastor or assembly dictates it, then anything can be believed. A pastor could claim that condoms increase the spread or danger of AIDS, an assembly could determine that public spending on stem cells is wrong – but no one should accept these claims just because the pastor or assembly has so determined.
If a group of people decide that a particular piece of writing violates what they consider appropriate morals, attitudes or views, they will then censor that piece of writing, whether through complete obliteration or, worse, modification tailored to the tastes of the mindful moralisers; its existence is one aspect but it is also the idea’s distribution that concerns censors. An idea or viewpoint’s contrarian view will be locked inside its author’s head, forced to rot, since it is denied the sustenance of fellow minds. This is the goal, in any case, of every form of censorship.
But it doesn't work.
The “heresy” that Milton refers to is not Biblical antagonism; it’s not defying the orders of the ruling religious authority (though obviously that’s the definition we assume). Milton’s heresy is about complete domination of thought.
Truth and understanding are not such wares as to be monopolized and traded in by tickets and statutes and standards. We must not think to make a staple commodity of all the knowledge in the land, to mark and license it like our broadcloth and our woolpacks.
Milton, however, must not be viewed as a secularist fighting to untangle religion's roots in political decision-making. He was not against the status quo; he was simply advocating that if a view or opinion truly is against it, then so be it. Blasphemy, he held, can be discovered after publication and the offending books done away with then: blasphemy will reveal itself, so it should not concern us beforehand, since pre-publication censorship risks lumping legitimate, albeit controversial, inquiries that could benefit us all among the things we ought never to see. The Areopagitica is filled with justifications based upon Biblical mandates to seek out “God’s work” in order to understand him. His suggestion was that works should not be censored before publication. There will be many failures and offences, he said, “ere the house of God can be built.”
It is this that makes Milton's argument seem strange. After all, Milton has just indicated that one ought not to believe based on an appeal to authority – but is defending free speech because God has said so. However, Milton can overcome this by indicating that the purpose of life is to discover his god’s purpose, which can only be found by constantly engaging with ideas, forcing them apart, seeking what is true. Indeed, the idea of knowledge leading to proper engagement also made it easier to separate good from evil, since, as Milton says, “good and evil… grow up together almost inseparably.” Milton claims that to fight Adam’s curse, humans require better knowledge overall, despite knowledge being the basis of the curse. In order to know good, Milton says, we must know evil.
Therefore the state of man now is, what wisdom can there be to choose, what continence to forbear, without the knowledge of evil? He that can apprehend and consider vice with all her baits and seeming pleasures, yet abstain, and yet distinguish, and yet prefer that which is truly better, he is the true warfaring Christian.
There is little wonder then that Milton’s most famous character is his Satan, in the celebrated long poem Paradise Lost. Satan and what he embodied is so potent, Alasdair MacIntyre says, that this character alone “brought Blake over to the devil’s party, and has been seen as the first Whig.” Satan’s motto is, after all, Non Serviam, which, continues MacIntyre, is “not merely a personal revolt against God, but a revolt against the concept of an ordained and unchangeable hierarchy.”
The point is that the fight for individual liberty means distancing oneself from the security of a larger dominance. Security does not necessarily mean safety, though: it only means one is not in danger of intrusion – of having one’s views, opinions and therefore life upended by radical alternatives (that, possibly, might be better). Overarching infringement on individuals was carried out to maintain, as we have seen, “order” (for the common folk – also known as “power” for the rulers). Satan upset this order as set by God by “rebelling” – though this is in itself quite a complicated matter – and forever serves as the catalyst for thought against overarching domination, even when, as all domination claims, it is for the individual’s own good because he is part of a larger group. Milton was evidently in two minds about it, but saw the necessity in both areas.
The beauty of the Areopagitica is that it eloquently outlined and began a conversation from the lips of one of our greatest word-users. Even if, as I’ve highlighted, Milton only began a conversation for free thought – and did so within the narrow confines of religious thought – he was spurred on not by anti-religious sentiments but by what he perceived to be a twisting of the very religious sentiments which should make humanity curious, knowledgeable and able to engage with varying and new concepts. Milton feared that due to our inherent ignorance, which can only decrease (or increase, if we take a Socratic stance, given our awareness of our ignorance) with more knowledge, we are not even in the right position to know whether something should be banned or censored:
He who thinks we are to pitch our tent here, and have attained the utmost prospect of reformation that the mortal glass wherein we contemplate can show us, till we come to beatific vision, that man by this very opinion declares that he is yet far short of truth.
How does someone know that we need not attain more knowledge, simply because the idea appears heretical? Milton’s worry, though apt, was driven by the desire to learn more so that humanity could be closer to God. Milton thought that opposing the acquisition of knowledge was, essentially, to oppose humanity's most important mission, since “he who destroys a good book, kills reason itself, kills the image of God, as it were in the eye.”
Only God is so infallible, Milton could claim, as to know what is and is not allowed to be considered. Humans, being fallible and ignorant and full of sin, would be going against their very nature and design to deny knowledge, since they would be claiming to have that knowledge anyway: how can we know if it is good or bad unless we know what it is?
The irony should be obvious now: Humanity’s fall was supposedly through its acquisition of knowledge in the Garden. For Milton, the fruit of our failure becomes the seeds of our salvation.
The reason for highlighting Milton’s motivations and justifications is to keep us from painting a secular portrait of this religious man. This does not discredit his brilliance, talent and genius, nor should it lessen the power of the Areopagitica. In order to know our history of fighting for freedom of thought and speech, we should count the Areopagitica among the most important documents; but in so doing, we should be as fully aware of its origins and justifications as possible. As Milton himself tried to do, knowing an origin can help clarify a path for the future.
The Areopagitica remains one of the best documents on freedom ever conceived. But the one that remains central to me is a little book by another John, published in the same year as Darwin’s Origin of Species: On Liberty.
Monday, December 12, 2011
The Case Against Santa
As we have noted previously on this blog, Christmas is a drag. The holiday’s norms and founding mythologies are repugnant, especially when compared to its more humane cousin, Thanksgiving. The story of the nativity doesn’t make much sense; moreover, it seems odd to celebrate an occasion that involved the slaughter of innocent children. And the other founding myth - the myth of Santa and the North Pole - is one of a morally tone-deaf autocrat who delivers toys to the children of well-off parents rather than life-saving basic goods to the most needy. But, when you think about it, the Santa myth is far worse than even that.
To start, the Christmas mythology has it that Santa is a being who is morally omniscient - he knows whether we are bad or good, and in fact keeps a record of our acts. Additionally he is somnically omniscient – he sees us when we’re sleeping, he knows when we’re awake. Santa has unacceptable capacities for monitoring our actions, and he exercises them! In a similar vein, Santa takes himself to be entitled to enter our homes, in the night and while we’re not looking, despite the fact that we have locked the doors. In other words, Santa does not respect our privacy. He watches us, constantly.
This is important because the moral value of our actions is largely determined by our motives for performing them. Performing the action that morality requires is surely good; however, when the morally required act is performed for the wrong reasons, the morality of the act is diminished. Acting for the right reasons is a condition for being worthy of moral praise; and, correlatively, the blame that follows a morally wrong action is properly mitigated when the agent can show the purity of her motives.
The trouble with Santa’s surveillance is that it affects our motives. When we know that we are being watched by an omniscient judge looking to mete out rewards and punishments, we find ourselves with strong reasons to act for the sake of getting the reward and avoiding the punishment. But in order for our actions to have moral worth, they must be motivated by moral reasons, rather than narrowly self-interested ones. In short, under Santa’s watchful eye, our motivations become clouded, and so does the morality of our actions.
The exclamation at the end of Santa Claus is Coming to Town captures the moral ambiguity that the Santa myth imposes on us: “You better be good for goodness sake.” Could there be a more confused moral prescription? On the one hand, if the expression aims to exhort us to act on the basis of properly moral motives (for the sake of goodness itself), then the Myth of Santa undercuts our reasons to be moral. Apparently, the account runs as follows: Santa keeps tabs on what you do and when you sleep. He will punish or reward you on the basis of your performance. So you should be good for purely moral motives. The trouble, again, is that after having given a variety of non-moral, strictly self-interested reasons to act, it is a perfect non sequitur to conclude that we must act on the basis of purely moral motives. In fact, if we’re right, the Santa myth undermines the idea that we should act on the basis of our moral reasons. By accepting the Santa myth, then, we nearly ensure that we will not be good.
On the other hand, the expression “be good for goodness sake” may be simply a form of emphatic interjection, like “Do your homework, for Chrissakes!” or “Exercise and eat right, for Pete’s sake!” And in light of the story of Santa’s monitoring practices and the consequent rewards and punishments, this interpretation seems more in keeping with the overall Santa myth, and seems like a more psychologically plausible bit of advice. This reading, however, embraces the usurpation of moral motivation. It impels its listener to be bribed for good behavior; in fact, it places bribery at the heart of morality.
So far, we’ve presented a broadly moral argument against Santa. He doesn’t respect our privacy, and our knowledge of that fact, in light of his role in punishing and rewarding us, distorts our moral motives. Yet he seems to require that our motives be pure. Santa is thus a moral torturer: he punishes those who are not good, and then imposes a system of incentives and encouragements that go a long way towards ensuring that everyone will fail at goodness.
To our moral argument there could be added a theological critique of Santa. The problem with Santa Claus from a religious perspective is that he is presented in the mythology as a kind of god. Like the gods of the familiar forms of monotheism, Santa is morally omniscient. He rewards the good and punishes the evil. Moreover, he performs yearly miracles of bounty that, at least by our lights, put Jesus’ miracle of the fishes and loaves to shame. In other words, Santa Claus can be no mere man; accordingly, the Santa mythology implies a Santa theology. And monotheists should be alarmed. We know that Yahweh is a jealous god, and encouraging children to propitiate Santa with their moral behavior sounds very much like the sort of thing that makes a jealous god very, very angry. Imagine Moses’ frustration with the Israelites were he to come back from the mountain to find them telling Santa stories instead of only worshipping a golden calf. It seems to us that taking the first commandments seriously (the ones about worshipping only the god of Moses) should be a source of moral concern about the Santa myth. Christian parents that embrace the Santa myth make idolaters of their children.
We, the authors, are atheists. We deny Santa’s existence, and Yahweh’s, too. The case we’re pressing against Santa here is analogous to the famous argument from evil. (We think the argument applies to Yahweh, too; but that’s a different story.) It works on Santa because he is a morally objectionable entity who is supposed to be intrinsically good, and intrinsically good yet bad entities do not exist. There is, of course, much more to say about the moral case against Santa. To repeat: he uses his miraculous production capacities to make toys instead of things that contribute to lasting welfare; he uses his monitoring capacities to keep track of the things people do, but does not see fit to prevent morally horrible things, assist the victims of crimes, or report criminals to the authorities. And so, not only does Santa Claus not exist, it’s a good thing, too. The questions that remain are why the myth of Santa persists, and why a major holiday is partially focused on such a despicable character.
Monday, October 17, 2011
On the Gods of Horses
The Presocratic philosopher-poet Xenophanes famously noted that if horses could draw, they would draw their gods as horses. The same, he holds, goes for lions and oxen. What is the intended critical edge of such observations? Suppose it’s true that horses would draw their gods as horses. So what?
The famous Xenophanes fragment runs as follows:
If horses or oxen or lions had hands
or if they could draw with their hands and
produce works like men,
horses would draw the figures of the gods as
similar to horses, and oxen as similar to oxen,
and they would make the bodies
of the sort which each of them had.
The Christian apologist Clement of Alexandria is our source. He portrays Xenophanes as a religious reformer, one committed to criticizing anthropomorphism in religion. To construct a god in your own image, he holds, is a form of idolatry. Clement also provides another Xenophanes fragment, one that he takes to provide parallel support for this interpretation:
Ethiopians say their gods are snub-nosed and black;
Thracians that theirs are blue-eyed and red-haired.
The same lesson is said to follow: Humans make their gods look like themselves. But the question remains: what is the critical edge? Both fragments serve a critical religious program, yet neither contains an overt argument. We hold that the observations function as a reductio ad ridiculum.
To see this, we must make explicit what’s funny about horses drawing horse gods. In doing so, we’ll ruin the joke, for sure, but that’s philosophy. So what’s funny about horses and horse gods?
Imagine a horse crafting a god in his own image, an ox attributing to the divine the best of what he can conceive. What’s funny is that these are self-indulgent depictions, limited by the depictor’s imagination, which is bounded by the kind of animal it is. An image may capture the comic intuition: In the process of drawing the gods, each animal’s body casts a shadow. The animal draws an outline of the god’s body using that shadow. That’s how each animal gets started conceiving of the divine.
The correlation observed is between the properties of the one depicting a god and the depiction of the god itself. Crucially, the humor, then, indicates that these depictions are, as one would expect, erroneous. God just doesn’t look like a horse, or an ox, or a lion, or....
But why would such images be in error? Imagine a committed polytheist, one who thinks that there are many, many gods. Polytheists of course disagree about the number of gods there are. So consider a Herculean polytheist. The Herculean polytheist believes in the maximal number of gods. He may say that Xenophanes’ joke relies on an underestimation of the number of gods there are. The Herculean polytheist may say in response to Xenophanes:
Exactly! The Thracians have red-haired gods, Ethiopians have dark-skinned gods. Greeks have Greek-looking gods. Same with oxen, horses, lions, and so on, all the way down to squid, moles, and worms. They’ve all got gods that fit with them. The variety of the representation of the gods is not a challenge, but rather evidence of the vast number of gods.
When there are philosophical bullets to bite, the Herculean polytheist makes a meal of them. The Herculean polytheist has surely devised a clever strategy: embracing the presumptively ridiculous consequences of the view. Herculean polytheism, however, invites two uncomfortable consequences. First, it posits an excessive population of divine entities. Greeks have the Olympians and their ancestors and progeny, which already seems bloated; now multiply those numbers by the number of species and races. That’s a whole lot of gods, many of whom are simply redundant. Does the horse’s sun god or the Greek’s sun god or the sparrow’s sun god move the sun across the sky? Was it a team effort when the Thracian, mole, and squid gods created the world?
Second, on the Herculean polytheist view, the duty to worship the gods now has more to do with the worshippers than the gods. One might think that the reason why one should worship a god is that the god is special; the Herculean polytheist holds that one worships a particular god because of who one is. It may seem correct to do so as a gesture of identity, but then worship is no longer about god, but about the worshipper’s identity. Hence we are back to Xenophanes’ critical concern (and Clement’s extension of it), namely, that religion looks to be more about the humans than the gods.
Consider an analogy. In freethinker and atheist circles, a version of the Xenophanes correlation is often invoked to capture the contingency of religious belief. The following is exemplary.
Freethinker: If you were born in the United States of America, you would most likely become a Christian. If you were born in Saudi Arabia, you would likely become a Muslim. If you had been born in Norway in the Viking age, you would have believed in Thor and Odin. If you had been born in Athens around 500 BC, you would have worshipped Zeus and Athena.
The correlation here is, roughly, that the surrounding cultural milieu determines how one conceives of the divine. We may call this the sociological theory of religion. As the dominant religion of the culture varies with time and geography, the conceptions of the divine held by individuals will also vary. So far, this is only a descriptive point, but it is often deployed as a criticism of religion. The presumption seems to be that one’s conception of the divine should not be determined by simple contingencies. And so the more one’s theology is the product of time and chance, the less confident one should be that it is correct. The determining factors are sociological and historical, not rational.
However, once we make this observation about our images of the divine, we can subject the whole of our theology to the same criticism. It’s not just conceiving god with a long beard that’s in trouble; conceiving of god as rational, loving, and good may be projections as well.
Interestingly, the question of existence never arises for Xenophanes. In fact, in other fragments, Xenophanes offers positive conceptions of the greatest of the gods. But once we see the critical trajectory of Xenophanes’ challenges, we are compelled to ask the question: Isn’t god’s existence, too, a projection, the product of mere contingency?
Monday, September 19, 2011
Book Review: ALL THINGS SHINING: READING THE WESTERN CLASSICS TO FIND MEANING IN A SECULAR AGE By Hubert Dreyfus and Sean Dorrance Kelly
by Wayne Ferrier
ALL THINGS SHINING is a book meant for a general readership, and I am approaching this review as a general reader rather than from within the academic consortia. I may not be the ideal person to review this book. First off, I don't feel like my life is worthless or lacking meaning, which the authors assume is the way most of us feel; secondly, reading Dreyfus and Kelly reminded me why I gave up on philosophy in favor of science; finally, if I had to choose, I'd choose monotheism over polytheism any day.
I do think that it can't hurt to peruse the classics and/or philosophy in search of meaning, but so much of it is long-winded and more often than not takes you on a journey into the incessant clamoring of the individual intellect, which itself often leads to depression. Each sentence, perhaps each paragraph, of ALL THINGS SHINING makes glorious sense, yet I could not make sense of what the authors are getting at. If I were to boil it down, I am left feeling that the thesis is an emperor without clothing. The authors hope that, after reading, we will wish to escape the supposed nihilism of our hopelessly lost modern dilemma. Calling upon a pantheon of Homeric gods is their way to bring back the sacred, to restore meaning. Man himself cannot do great things, nor should he be expected to; when man acts greatly, it's the doing of the gods. To fail to acknowledge this is to be ungrateful. We have lost touch by not honoring and respecting these gods, who can supply so much benevolence; gods which, from reading this book, I could not tell whether we are really supposed to believe in or not.
The monotheism offered in ALL THINGS SHINING, which we are advised to abandon, is taken from Dante's poetry, concepts from the Middle Ages rather than the monotheism found in scripture. What is ironic is that it's just this version of monotheism that is more Greek than Abrahamic; ideas such as the Inferno (Hell, Hades) were incorporated into early Christianity as it was being exported to the west.
The leap from many gods to one God did not come suddenly—it took time. Monotheism was a new paradigm in human thinking, evidence of what the human mind was becoming capable of doing. The ancients had been exposed to the so-called wisdom literature and came to believe in a common human heritage and universal thought. I see no reason to go back. If you're looking for a book to help you find the meaning of life, you may not find it here. On the other hand, if you're looking for a book to introduce you to a thread of philosophy running through several important classics, you might enjoy it.
Monday, June 27, 2011
Life on a pillar: environmental thought and the odor of sanctity
by Liam Heneghan
The saint on the pillar stands,/The pillar is alone,/He has stood so long/That he himself is stone. Louis MacNeice, Stylite, 1940 [i]
In Moby-Dick; or, The Whale, Melville’s anachronistically recognized ecological masterpiece, a calculation is presented that on a three or four year voyage a seaman manning one of the mast-heads of a whaleship would spend several entire months atop his pillar above the ship. A whaleship like the Pequod, Ishmael informs us, was not provided with a crow’s-nest as was the case with the Greenland ships – the mast-man on the southern whaler was exposed to the elements and to the mesmerizing crawl of the oceans far below him. Our narrator cautions the ship-owners of Nantucket to be especially wary of taking on philosophical lads given to “unseasonable meditativeness”. Whaling could be an asylum for romantic souls, youngsters who are “disgusted with the carking cares of earth”. The cost could be high. Such a youth can lose his identity in his ocean reverie and “[take] the mystic ocean at his feet for the visible image of that deep, blue, bottomless soul, pervading man and nature…” In such a meditation one misplaced step and “your identity comes back in horror” and perhaps “with one half-throttled shriek you drop through that transparent air into the summer sea, no more to rise for ever.” Ishmael concludes the observation thus: “Heed it well, ye Pantheists.” By which I take it that he is talking to dreamy youth and latterly to us environmentalists.
In chronological sequence Melville mirthfully compares the solitary, watchful, deprived life on the mast to that of other motionless dwellers, starting with Egyptians who climbed the pyramids to gaze at the stars and concluding with stone or metal men atop columns, figures unresponsive to the beseeching yells of those below them, that is, statues of Washington, Napoleon and Nelson. Included in this evolutionary sequence – for the land-locked lofty paved the way according to Melville to maritime mast-men – is Saint Stylites of whom he says “in him we have a remarkable instance of a dauntless stander-of-mast-heads…[he] literally died at his post.”
A helpful footnote in my copy of Moby-Dick declares Melville’s entertaining claim about pyramids as astronomical pillars implausible, and of course, statues, though they may remain impressively motionless for quite some time, have the benefit of being lifeless[ii]. In Melville’s roster, Saint Stylites stands out, so to speak, having spent almost forty years on his pillar.
About him I have a few things I’d like to say.
Just as Melville’s masterpiece can retrospectively be read as an ecological classic – a tale of resource consumption; a disquisition on our relationship with something upon which we both monomaniacally depend and that which will be the death of us: I speak here of nature – there are things we can learn from the asceticism of Simeon Stylites valuable to us as environmentalists. The magnetic force of an ascetic impulse that drew the Stylite up the pillar, and that skewed the balance of his life towards denial rather than affirmation also draws environmental writers to their proverbial mountain tops, and oftentimes swerves our environmental instincts towards chastisement rather than celebration. The cooler air on the pillar-top and on the piney mountain trail is languidly scented with the odor of sanctity. Saint Simeon’s life is so brutal, so macabre, that a close reflection is self-revelatory in the way that microscopy turned on the human body exposes within us both the teeming good and the pathologically bad.
Simeon Stylites installed himself on a pillar constructed on a site of his choosing near Antioch, Syria, and lived there for thirty-six years until his death in 459 AD. This can be regarded as one of the more terrifying historical examples of a modest ecological footprint. Simeon remains a revered saint, though it is clear that he shocked many of his contemporaries. Today he serves as an example of the bewildering nature of the early Christian ascetic impulse. Nevertheless, his self-renunciation was so extreme and his self-mortification so unsavory that most modern commentators disavow him. To suggest that the modern environmental movement shares this same ascetic impulse may seem gratuitous. I try to show that the comparison is useful, and do so not in a bid to scupper environmentalism (I am, in fact, a committed environmentalist) but rather to contribute to a more honest discernment of our environmental motives.
I start by recounting in modest detail the extraordinary and ghastly details of Simeon’s life.[iii]
Simeon was born in 388 AD in Sis near the northern border of Syria, in what is now modern Turkey. His early interest in Christianity was stimulated, some say, by hearing a talk on Jesus’ Sermon on the Mount. He entered into monastic life quite young, perhaps around the age of sixteen. Asceticism was especially prevalent in Syria in early Christian times, where eremitic monasticism (solitary anchorites) was more common than in Egypt, where coenobitic, that is, communal, forms of monasticism were favored. Accounts of Simeon's initial feats of austerity and the responses of his fellow monks remind us that he was extreme at a time when spiritual rigor was already quite pronounced. In addition to more conventional forms of asceticism – fasting, sleep deprivation, standing for lengthy periods and not washing – he invented a range of self-mortification techniques that put him in an ascetic class of his own. For instance, when others in the community finished their nocturns he would hang a heavy stone around his neck as penance while his brothers slept. One night he fell asleep with this apparatus about his neck and injured his head. To prevent this from happening again, he procured a “certain round piece of wood” which would roll from beneath him if he nodded off.[iv] In addition to the asperities already mentioned he also innovated by tying a rough fiber around his waist (in one account, it was the rope from the monastery’s well that he wore), which abraded the skin, produced noisome smells, and had him shedding worms into his bed.
Many of the stories told about Simeon can be classified as hagiographic nonsense. For instance, he was challenged by some of the monks to test his faith and trust in God by grasping a red-hot poker, which he did without harm to his hands. Perhaps the moral of the story is that what protected him from incinerating his hand was that “he despised them (i.e. his hands).” Even his abbot, to whom his chagrined and apparently jealous brothers complained, found his fervor disconcerting (though the community may have been irritated by his flouting of the monastic rule; indeed, more simply, it may have been the smell of putrefaction that so disconcerted them). When the abbot asked the youthful Simeon to account for the vigor of his practice the young monk replied, quoting scripture: "Behold, I was brought forth in iniquities, and in sins did my mother conceive me" (Ps. 50:7).
Ultimately Simeon was forced out of the monastic community and became a hermit, living for three years in a hut at Tell-Neschin. There he spent the whole of Lent without eating or drinking, a practice that became habitual for him. He broke his Lenten fast with the Eucharist host, which returned him to vigor. Another austerity from this period was standing in prayer for as long as his legs could hold him. He perfected this, and the claim is that he would stand in prayer for the duration of Lent. From the hut in Tell-Neschin he moved to a rocky platform near Antioch and spent five years standing there. After this he moved to his series of pillars. His first pillar was nine feet high, but it was replaced by a series of others, each taller than the last. Ultimately, the progressively ascending Simeon lived fifty feet or so from the ground and was visible throughout the region, attracting a large congregation of the faithful and the curious.
The list of his spiritual services performed from the rocky platform and from his successively more prodigious pillars is a long one: harlots were transformed into vessels of virtue, the blind saw the light, hunchbacks were straightened, heathens were converted to Christianity, lepers were healed, the exsanguinating possessed were relieved of their demons. All the while our hermit was strenuously attacked by satanic forces, which came in all forms, including that of a lustful camel!
One final nauseating story: as the “king of the Arabs” (more correctly, a Saracen) approached our saint’s pillar, a worm fell from a necrotizing tumor on Simeon’s thigh and the king picked it up. He touched it to his eyes and heart. The saint declared, appropriately enough, that it was “a stinking worm, fallen from stinking flesh” and in consternation asked why the king was soiling his hands. The king, however, regarded the worm as a blessing and on opening his hand found the worm transformed into a pearl. This allegory prompts us to ask how we might manufacture a pearl from the tortured life of Simeon. What is the meaning of all of this? What general principles can be deduced?
Ascetic deprivation is a price paid in flesh for metaphysical rewards
Simeon turned his back on this world so that he could gain access to that other world: a heavenly one with the angels. In his early monastic life Simeon submitted to the coenobitic rule of the house (though not without chafing at the rule, as we have seen), praying in common, celebrating the Eucharist together – the typical trade of earthly freedoms for heavenly reward. The pillar was something different. It is hard not to see in the pillar a more direct emulation of Christ’s passion. The pillar can be seen as representing the mountainous heights of Christ in the wilderness and the ultimate stasis of Christ on the cross – an emulation that one can term “the prophecy of behavior”, a term coined by Professor Susan Ashbrook Harvey of Brown University to illustrate the significance of Simeon’s actions as powerful in their symbolism.[v] Simeon on the pillar can be seen as an aggressively literal form of standing before God. In his introduction to the translation of the lives of Simeon, Robert Doran locates this practice within the exercises of Gnosticism[vi]. Gnostics, Doran reports, have been referred to as “the immovable race”. Standing before God results in what is termed “immovability”, achieved by means of a visionary ascent to the transcendent realm. For this removal to the heavenly realm Simeon acquitted his debt with ulcerated feet and maggoty flesh. The suggestion is not, I think, that Simeon was a Gnostic; it is just that in his ascetic ascent and his aggravated immobility, he reinvented gestures that hitched him to another world beyond the tears and tribulations of ordinary mortal cares. Asceticism is reproduced both by emulation and by the types of intuitive rediscovery found in the life of Simeon.
We know of Simeon through what was written about him by his contemporaries and those who came after him, but other than the few snatches of conversation reported by his biographers (often regarding his worms, it might seem) we do not have his direct account of what motivated him. A clue, though, comes from the Antonius biography: as a youth in church Simeon inquires of an old man about what is being read and learns that it concerns “the control of the soul”. Pressing his elder further, he is told to:
“reflect on these things in your heart, for you must hunger and thirst, you must be assaulted and buffeted and reproached, you must groan and weep and be oppressed and suffer ups and downs of fortune; you must renounce bodily health and desires, be humiliated and suffer much from men, for you will be comforted by angels.”[vii]
Asceticism relies upon the acquisition and application of expert knowledge
Ascetics are called to a special vocation – the life in the desert is not everyone’s cup of tea. Thomas Merton, a monk and occasional anchorite of more recent times, writes of the special nature of desert hermits’ lives in the early Christian centuries in the introduction to “The Wisdom of the Desert”, his slim but compelling volume of the sayings of the desert fathers.[viii] Those more loquacious fellows had more to say than Simeon about the application of spiritually expert knowledge towards the end of achieving closeness with God. A dramatic account of the purpose of ascetic knowledge is given by Abbot Joseph: when Abbot Lot asked him what he should do in addition to keeping the rule, and applying himself to prayer and contemplative silence, Abbot Joseph rose, his hands extended towards the heavens, and his fingers “became like ten lamps of fire.” He said: “Why not be totally changed into fire?”[ix]
Merton calls the wisdom of the desert “a very practical and unassuming wisdom that is at once primitive and timeless.”[x] This wisdom concerns self-discovery regarding the spiritual journey – discoveries that Merton describes as “more important than any journey to the moon.” The wisdom of the desert is simple in philosophy but quite voluminous: I will give just a few examples. Abbot Hyperichius instructs that it “is better to eat meat and drink wine, than by detraction to devour the flesh of your brother.”[xi] Less obscurely, Abbot Pastor said that “a life of ease drives out the fear of the Lord from man’s soul and takes away all his good work.”[xii] Again, Abbot Pastor: “[If] you want to have rest here in this life and also in the next, in every conflict with another say: Who am I? And judge no one.” Perhaps you had to be there.
A more technical account of ascetic wisdom can be found in the Philokalia, a collection of texts written from the 4th to 15th centuries, deemed especially important in Eastern Orthodoxy.[xiii] There, a more complex theological lexicon is employed. In order to achieve the end of “being comforted by angels”, or achieving a greater closeness with God, the desert father marshals the following skills: “discrimination”, the spiritual gift of discriminating between the types of thought entering the mind, with the purpose of achieving “discernment of the spirits” – which thoughts come from God and should be cleaved to, and which from the devil; “intimate communion”, the freedom of approach to God; “Watchfulness”, a state of attentiveness where one carefully watches over one’s inward thoughts and fantasies – the state is linked with purity of heart and the rigorous application of the virtues and results in stillness (hesychia) in which one listens to God and can open to Him.
Ascetic deprivation secures a measure of temporal power
The hagiographical exuberance of Simeon’s vitae, with their massive iteration of Simeon’s improbable miracles, becomes tedious in its pietistic adulation; nevertheless the examples testify to the intercessionary power of our saint, and provide a roster of critical community needs. Surrounding Simeon on his pillar was a fairly dense agricultural population, dependent on reliable irrigation systems. This was a community concerned about disease, drought, crop productivity, and the depredations of large predators. A saint should be able to regulate the elements and master nature.
The equations of ascetic algebra typically balance the significant intercessionary power of the holy man against the self-mortification of his body. Great power is equated with great corporeal contempt. One wins a spiritual war not by inflicting the most violence, but by sustaining the most damage. For Simeon to accumulate the reputation that he did, one should expect staggering penance of his flesh. And this, as we have already seen in part, is what we find.
Ironically, each incremental rise of Simeon on his pillars, motivated, according to some biographical authorities, by a desire to get away from the throngs and closer to an airy solitude, increased his visibility and attracted more onlookers. Nevertheless, Simeon served this community through his miracle-working, and his fame and influence spread throughout the Christian world.
The ascetic then is marked by i) a commitment to rewards in another realm, by ii) the deployment of an expert’s knowledge in achieving esoteric goals, and by iii) the achievement of certain temporal authority, despite the ascetic’s declared intent. My list is illustrative rather than exhaustive. The problem to which asceticism is the proposed solution is addressed by a suite of regularly recurring behaviors that we should also note – an initial departure followed by a commitment to immobility in another place; a rejection of civilization, through a commitment to a new rule; a disdain of the city; physical austerity; a preference for raised ground, though ascetics often start their careers at lower altitudes (Simeon, for instance, lived down a well for a while after leaving the monastery).
If asceticism were simply a matter of self-mortification then we could claim that we have never lived in more ascetic times. We diet to shed those dozens and dozens of unsightly pounds; some voluntarily submit to a surgical ablating of the flesh for the purposes of fabricating the perfect nose; our star athletes allegedly undergo a period of sexual continence before the big game; some of you may even gallop on scorching days for distances in excess of twenty-six miles, for no better reason than to replicate the achievement of the first person to die from that feat. And in general terms the definition of the ascetic as a person who practices “rigorous self-discipline, severe abstinence, austerity” might tempt us to smuggle the more excessive of these modern deprivations under the definitional bar. However, the OED qualifies the definition by pointing out that ascetic aims are achieved “by seclusion or by abstinence from creature comforts”. Furthermore, the term derives etymologically from the Greek asketikos, meaning monk or hermit, and more generally the root term is ascesis – the practice of self-discipline, or exercise. If, in the final analysis, the contemporary mortifications listed above seem to fall short of being ascetic, why might we, in contrast, regard environmentalism as fundamentally so?
To use the life of Simeon Stylites as a point of comparison with environmental thought and practice may be a challenging place to start making the case that environmentalism is foundationally ascetic. Certainly there are more temperate ascetics, ones who, like St Antony of Egypt (231-356 AD), traveled to the wilds to meditatively dally but, after decades alone, returned to society, at least in the sense of taking many disciples under their care. In other words, there are ascetics whose practice might be more appropriately compared to Thoreau’s sojourn at Walden Pond. Perhaps one might compare tree-stylites like John Muir perched in a storm-tossed Douglas Fir or Julia Butterfly Hill residing in her California Redwood to the ascetic sadhus of India who, practicing what is called urdhamukhi, dangle out of trees. In the case of Hill, she lasted two years; as for Muir and the sadhus, the latter of whom dangle upside-down, their tree-dwelling lasted a matter of hours. And so on; one might look for a milder ascetic counterpart for Robinson Jeffers’ dyspepsia concerning his fellows, preferring, you’ll recall, to “sooner, except the penalties, kill a man than a hawk”; one for Ed Abbey’s hilarious but curmudgeonly defense of inaccessibility for Arches National Monument in Desert Solitaire; one for Paul Ehrlich’s discomfort in an ancient Indian taxi (“People visiting, arguing and screaming…. defecating and urinating”) prompting his writing of The Population Bomb; counterparts even for the simple living needed for ecological footprint reduction, for the belt-tightening required by sustainability, and for the meat-eschewing dicta of environmental vegetarianism. In all of these examples there is a whiff of asceticism but none requires the foot-ulcerating commitment of standing on a pillar for decades. So why Simeon?
As we have seen, most definitions of asceticism are vague to the point of admitting too many members into the ascetic fold – skipping a meal or two does not the ascetic life make. The vitae of Simeon Stylites, however, distill his life to the point where there is little to notice other than ascetic fervor. As discussed, the examination of his life allowed us to enunciate some principles, and to register the suite of dispositions associated with the ascetic. These included a commitment to rewards in another realm, a deployment of an expert’s knowledge in achieving esoteric goals, and the gaining of certain temporal authority, often despite the ascetic’s declared intent. The dispositions include a departure from “home”, followed by a commitment to immobility in another place; a rejection of civilization, typically accompanied by a disdain of the city; often physical austerity; and a preference for raised ground. The life of an ascetic is the life of critique. In this we not only see the odd particulars of our saint’s life, but also, I think, if one squints a little, the life of the environmental movement.
Space prohibits a full treatment here of how the ascetic drive underpinning environmental thought and action unfolded over the past century or so. Using the principles and dispositions just enunciated, some of this should be fairly obvious; other points are more obscure. Sustainability measures, fairly obviously, call (justly) for a deferment of pleasures right now, for an equitable world in the future; Paul Shepard and David Abram mourn the passing of the Pleistocene or indigenous worlds; nature-lovers almost everywhere incline towards inhospitable places; John Muir, Henry David Thoreau, Ed Abbey, Charles Darwin (even): all left, though some returned to tend their flock; the mountains beckon to Gary Snyder, David Brower, and to Arne Næss; Garrett Hardin, Paul Ehrlich, and Bill McKibben all demand reproductive self-limitation; Rachel Carson, Terry Tempest Williams, and Al Gore are outraged by what our times have wrought; eight biosphereans spent two years in the bubble of Biosphere 2 (like Simeon, they had their support "disciples"); Aldo Leopold and Martin Heidegger had a great fondness for the nostalgia of shack-dwelling. And those not in shacks prefer, like Melville's mast-men and Simeon, life en plein air – leave absolutely no one inside!
And I agree with them all, in many ways at least. My point, and it seems curiously feeble to me to say it, is not that the ascetic impulse is always wrong, though most contemporary writers disapprove of Simeon’s vigor, or that environmental thought is wrong when it tends towards asceticism (it certainly is not, but our priorities need to be refined). Rather, I am interested in a more straightforward accounting of the motivations and the behavioral reflexes of environmentalism – where it is ascetic let us call it so; and when our ascetic impulses lead us astray let us reconsider. At its worst the ascetic disposition of environmental thought has translated into calamitous action – for instance, inhumane population policies and the unjust removal of peoples from their traditional lands. Less tragically, but still detrimentally, the comfortableness of the environmentalists’ ascetic disposition coaxes the “eco-cete”, the everyday ecological monk, into an unbalanced preoccupation with conservation in wilderness areas, a neglect, until quite recently, of the city as a site for conservation, an often ruthless demarcation of the human from the wild, a nostalgia for worlds that have passed if they ever existed at all, a great nausea towards domesticated humanity (that is, most of us), an over-confidence in an expert knowledge of the natural world, a puzzling relationship with technology, and finally (for now) a snooty disdain of those who cannot articulate their environmental convictions in the professional lingo of the movement.
Now, an objection to my claim (one of many, no doubt) may be that there is, quite obviously, no direct link between the life of Simeon and other pillar saints and the mainstream of environmental thought. However, the ascetic impulse is an ineradicable component of who we are – the human without some ascetic impulse (even if it is expressed in a diminished key) has not been born. We do not simply copy ascetic gestures; we all seem capable of ascetic innovation. In some movements – religious, philosophical, environmental – it may simply express itself more blatantly. To illustrate the idea that ascetic gestures can converge, consider this. There is evidence of a phallus-worshiping cult in Northern Syria sometime before Simeon’s time, centered about 180 km east of Simeon’s pillar. According to the Greek author Lucian, men would climb the phalli two times a year for a period of a week to “commune with the gods” and bring good fortune to the community. Though the period aloft was not reckoned in years, the phallus dweller nevertheless remained awake for the duration; if he fell asleep, a scorpion would climb up the column and treat him, in Lucian’s word, “unpleasantly.” [xiv] So, long before Simeon’s time, worshipful clambering up phalli was commonplace. This has led some to suggest that his ascetic practice was merely an emulation of pagan practice. Several Simeonists are outraged and take pains to deny the connection. The issue is moot from my perspective. It seems that when a saint sees a phallus or a pillar he knows just what to do. That, my friends, is the ascetic impulse. And if environmentalists are up there with them, hoisted up their own proverbial pillars, at the very least the view should be clear; it may be time for us to clamber on down, and lead the community as many ascetics have also done.
[i] MacNeice, Louis (1940) Stylite, Poetry, Vol. 56, No. 2, p. 68
[ii] Melville, H. (2001) Moby-Dick, W. W. Norton & Company, Second Edition
[iii] There are three accounts of Simeon’s life available: one written by Theodoret, Bishop of Cyrrhus, a contemporary of our saint; one by his disciple Antonius; and the so-called Syriac Text, the longest account of his life. Translations are available in a convenient volume by Robert Doran (1989, The Lives of Simeon Stylites, Cistercian Publications). There are conflicts between the accounts, and not all of the stories are shared between all of them. Indeed, there are some accounts in the broad literature on Simeon that I draw on but which may not be canonical.
[iv] Lent, Frederick (1915) The Life of St. Simeon Stylites: A Translation of the Syriac Text in Bedjan's Acta Martyrum et Sanctorum, Vol. IV, Journal of the American Oriental Society, Vol. 35, pp. 103-198
[v] S. Ashbrook Harvey (1988) The Sense of a Stylite: Perspectives on Simeon the Elder Vigiliae Christianae Vol. 42, No. 4, pp. 376-394
[vi] Doran, 33.
[vii] Doran, 88.
[viii] Merton, Thomas (1960) The Wisdom of the Desert, New Directions
[ix] Merton, 50.
[x] Merton, 11.
[xi] Merton, 32.
[xii] Merton, 62.
[xiii] Palmer, G.E.H., Sherrard, Philip, Ware, Kallistos (translators) (1979) The Philokalia: The Complete Text (Vol. 1 - 4); Compiled by St. Nikodimos of the Holy Mountain and St. Makarios of Corinth. My paraphrasing of the definitions of technical terms relied upon the glossary from these volumes.
[xiv] Frankfurter, David T. M. (1990) Pillar Religions in Late Antique Syria. Vigiliae Christianae, Vol. 44, No. 2, pp. 168-198
Monday, May 23, 2011
Letter From Be’er Sheva
By Jenny White
I remain convinced, despite my anthropological training not to generalize, that every society has an aesthetic, a particular repetition of pattern, that informs its material manifestation. In contradiction to the anthropological view that you must delve under the surface to understand a place, I’m going to suggest that this aesthetic is most powerfully visible to the uninitiated. The observant tourist, for instance, who sees everything through a child’s indiscriminate and unfiltered gaze. Patterns pop out to the uninitiated. For locals, by contrast, patterns harbor familiarity, wholeness, comfort, rootedness. Patterns are woven into the everyday, felt, but no longer seen. On my first visit to Japan, I was struck by the layered rows of boxes I saw everywhere, in the arrangements of windows, proportions of houses, the way images were arrayed on fliers and ads, far beyond what I would expect by accident or convenience. I experienced the boxes as a powerful imprint on my surroundings wherever I went. Perhaps I was wrong. A friend who is a specialist on Japan doesn’t see it. Does the forest have a shape without its trees? Nonetheless, I will continue with my conceit, on the justification that I am also a writer and writers gleefully play with any patterns they see, even if an anthropologist would tell them that without context, there is no meaning. No writer believes that; her job is to create meaning, not analyze it.
I am now in Be’er Sheva in the Negev desert, teaching a three-week course at Ben Gurion University. A driver brought me from Tel Aviv airport to my residence in a ten-story building that towers over the neighborhood. The streets near the residence are little more than rows of cement rooms with walled-in tile forecourts. Behind them loom three- and four-story apartment buildings of unfinished cement without ornamentation or color. There is little attention to detail and the buildings are crumbling, festooned with wires and rusting grates. They remind me of bunkers with blank walls and slits for windows. That is the only pattern I see beyond the ubiquitous lack of ornament. But it is a pattern.
On the Sabbath I set off toward the high rises I could see in the distance. I had heard about a mall – the only one – that was open on Saturdays because the owner was Russian. I trudged for an hour and a half along wide utilitarian roads unrelieved by vegetation and then through a deserted industrial area until I reached the mall – a strip of boxy shops arrayed in a horseshoe around a parking lot and the Hillel Café, which to my great relief was open. After an invigorating pasta salad, I returned home by another route. Nothing I saw diminished my first impression. The buildings were higher and better built, but pale, bunker-like, with wide, slit-like windows, and without detail.
There is little color in Be’er Sheva, even in people’s clothing. I am told that many of the eastern Jews who live here are observant. Some of the women wear long skirts or tunics and little caps over their hair. Others dress in shorts and t-shirts. Observant or not, it's a low-key, cheap-jersey world. Sidewalks are cracked and littered. Things are jury-rigged. In some ways the city reminds me of Turkey in the 1970s, but without the colors. In those days, even the poorest Turkish house was whitewashed or painted blue against the Evil Eye, and flowers planted in recycled oil cans brightened balconies and doorways. In Israel it seems as if color might tempt the Evil Eye rather than ward it off.
The university is an oasis to which I gratefully return after each expedition. Its buildings are attractively modern, set like sculptures within artfully designed greenery through which snakes a gurgling stream. It is full of cafes, like a little Tel Aviv. The students on campus look like the students on any US campus. But the buildings also have something of the bunker about them: thick cement expanses, narrow windows, no color. I was told that if anything threatening happened, there were bunkers in the basement, but that simply stepping into an inner room would likely be safe enough. The common room in my residence also doubles as a bunker. There is a reason, of course, for bunkers. Not long ago, Grad rockets fired from Gaza reached Be’er Sheva and one landed on a university dormitory.
Jerusalem gave me the impression of monumental fortifications, by which I mean not so much the Ottoman city walls as the vast settlements, enormous cliffs of structures rising from hilltops and extending into the distance like impregnable walled forts. I also visited Sderot last year, one of the towns that bears the brunt of Hamas-inflicted rockets. Sderot is nothing but a bunker. Even the bus stops are bunkers, where riders wind themselves into cement walls to wait. The city representatives met us in their underground command bunker and told us about their traumatized populace.
Tel Aviv boasts celebrated modernist structures in the cubic Bauhaus style. But when I visited Tel Aviv for the first time last year, I saw a city of square, unadorned cement buildings with balconies, their cladding peeling like bad sunburn, shots of color provided by the occasional orange trumpet vine. The boardwalk along the sea was bleak cement. Friends who live there see an entirely different city; they see cafes, the sociable homes of friends and family, intellectual and economic exchange, a vivid life projected onto these otherwise unremarkable surfaces like hot-pink bougainvillea. I didn’t think “bunker” in Tel Aviv, but I did think “impermanence.” This is a clue, I think: structures built not from a tradition or for posterity (except in a defensive sense), but also not built as a tradition in the making. The statement seems to be a practical “get on with it”, set in a no-nonsense present that eschews investment in an uncertain future.
Yesterday I asked one of my university colleagues here why there was so little color or ornamentation. He explained that early Israeli architects brought with them the Bauhaus style and that modernism remained the building norm. In the 1950s and ‘60s Israel’s population increased so rapidly that buildings were thrown up as quickly as possible with “no time for nonsense.” The modernist style and the lack of ornament and color, he added, were also a rejection of the East. It was a way for Israelis to say, “We are Western.” Indeed, houses in Bedouin towns on the outskirts of Be’er Sheva and in Palestinian Ramallah, which I visited last year, were notably more ornate with large arched windows and rooftop crenellations that looked less like battlements than crowns.
In Ritual and its Consequences, Adam Seligman et al. suggest that repetition of pattern in the world we construct around us is closely related to ritual, that is, stylized repetitive interactions that relate the self to the world. These exist in both religious and secular domains. Ritual isn’t an external expression of the world as it actually is, nor is it simply the expression of an internal state. Instead, it supersedes the individual by providing an “as if” world that makes it possible for very different groups of people to share a social world. Rituals, I would add, are acts by which we calm ourselves, rehearse who we are and who we would like to be as a society, despite our differences. Israel’s national pattern might be called the blast wall, a defensive structural skin that protects the family within. It shows little concern for structural details or exposed public space, but places all its emphasis on what is inside. My otherwise bleak walk to the mall, for instance, was relieved by a colorful head-high billboard as long as a city block that showed a series of ordinary faces of men, women, children, couples. People are displayed in bright hues, whereas structures and actual people on the street are wrapped in protective coloring.
There are, of course, instructive exceptions. In suburbs inhabited by professionals, houses are shaded by lush gardens. Tel Aviv is said to be a city of cafes – individuals practicing public simultaneity and open display. Contradicting the Israeli pattern with such alternate rituals cracks open the “as if” world of Israeli public identity and reveals the different groups within. Last year, the Zionist head of an illegal West Bank settlement told my visiting group with some disgust that “those people in Tel Aviv who sit around in cafes and drink cappuccinos” had forgotten their Zionist heritage. They had become soft, he complained, and would no longer be able to defend the nation.
The lens through which Israelis see the world is that they are a people under existential threat (it is common to hear people say, “they hate us because we’re Jews”). Patterns of material culture evince that fear with a repetition of defensive walls and an unwillingness to draw attention through ornament, color or public display. Instead, Israel as a nation emphasizes Jewishness and the physical presence of Jews and their families, the fertile yolk within a hard, white shell. There is little “Israeli” material culture. You won’t find tourist artifacts that represent Jewish Israel (rather than the many other cultures that reside here). Other than religious artifacts and Dead Sea bath salts, there is nothing “Israeli” to buy in the areas tourists frequent or the airport shops. Jewishness and Jews are the portable treasures.
There are signs of change, the confidence to tempt the Evil Eye. The newest building on campus has a vulnerable glass skin under a cement awning. A new housing complex has Moorish arches over its windows. Delicate nods to the future from a haunted present.
Monday, May 02, 2011
Dishonest to Whom?
Mary Warnock’s Dishonest to God: On Keeping Religion Out of Politics (Continuum, 2010) is an ambitious book. In it, Warnock distinguishes religion from morality, demonstrates the dependence of religious reasoning on moral reasoning, and argues that religious perspectives are nevertheless crucial for social and political life. We have a review of the book forthcoming in The Philosopher’s Magazine. For the most part, we are in agreement with Warnock. But we do have some disagreements, and we want to focus here on one aspect of Warnock’s view that strikes us as especially troublesome, namely, Warnock’s conception of the value of religion in a secular society.
Warnock’s case in favor of religion is broadly consequentialist. She holds that religious institutions and practices should be sustained because, on balance, they are socially beneficial. Warnock contends that – unlike morality and the rule of law – religion is not necessary for civil society; yet she insists that “there is no possible argument for holding religion is outdated, or that it can be wholly replaced in society by science or by any other imaginative exercise” (159). Surely this is overstated. No possible argument for the social dispensability of religion? Really? Actual arguments for this conclusion are easy to find. Consider Hegel’s argument at the end of the Phenomenology that religion must give way to art and philosophy in public life. Or John Dewey’s argument in A Common Faith that the social and experiential benefits of religious life can be detached from religion and subsumed under a more substantive conception of democratic community, leaving religion to wither away.
It is likely that Warnock means to claim that there is no good argument for the dispensability of religion; that is, Warnock means to deny that there could be an argument for the dispensability of religion which gives religion its due.
Warnock affirms that religion can be morally good, and good for us. She holds that the stories of the New Testament “can teach morality as nothing else can, in vivid and memorable form” (159). Additionally, she holds that religion is a civilizing force. Emotionally profound episodes in life call for ritual and ceremony; death, birth, thanksgiving, marriage are made public, sharable, and civil given their intersection with religion. Finally, Warnock emphasizes the aesthetic dimension of religion; she holds that the breadth and depth of our imagination is increased by religious icons and stories. She claims that “to lose these things, though it would not be the end of society, would be its incomparable spiritual loss” (161). Hence there are moral, social, and aesthetic reasons to reject the view, common in some atheist circles, that sufficiently enlightened individuals see the elimination of religion as a worthy social goal, that in a properly civilized social order, religion would be at best a historical curiosity.
Warnock acknowledges that the question of the social value of religion comes down to the balance of goods and evils. Surely Warnock is correct to hold that, from this consequentialist perspective, religion can be a vital social good. However, there are familiar social consequences of religion that are not so salutary: religious bigotry and intolerance, mistreatment of women, opposition to science, general credulity, authoritarianism, and so on. But it is important to emphasize that, regardless of how the cost/benefit calculation runs, Warnock has hung her case for the social value of religion on entirely secular considerations. In proposing that the matter is to be decided on the character of the social consequences of religious belief, Warnock has asserted that the value of religion is wholly detached from the truth or rationality of the central theological claims of the major religions. On Warnock’s view, the content of religious beliefs, arguments, and commitments is irrelevant.
In fact, on Warnock’s view, religion can be highly socially valuable – and therefore worth sustaining – even if all distinctively religious claims are demonstrably false. Her defense of religion, then, has the same form as the defense we adults give of our practice of promoting among our children the belief in Santa Claus: it’s such a useful and comforting fiction that the belief ought to be promoted (or at least not denied), despite the fact that Santa does not exist. This is what Plato famously called a Noble Lie.
Given this, we must ask: Is Warnock’s defense a defense of religion? Or is it merely a defense of the idea that some people need the comforting fictions that religion provides in order to be good, responsible citizens?
Arguably a defense of religion along these latter lines evacuates religion of what does the inspiring and instructing, namely, the distinctively theological commitment to God. Put otherwise, a defense of religion which rests solely upon considerations regarding the social value of religious belief is ultimately no defense at all. If religious belief is to be defended, it must be understood in terms that religious believers can recognize. According to religious believers, their beliefs are not merely useful social instruments or efficient means for instilling good moral habits. They are rather commitments to very particular metaphysical, ontological, and epistemological views. These views provide the basis for the moral and communal practices among religious believers that Warnock finds socially valuable. But the social value of the practices provides no defense for the underlying views, all of which are, we contend, false. No discussion of the merits of religious practices and institutions should be permitted to evade the fundamental question of the truth of distinctively religious claims.
When we first read Warnock’s book, we puzzled over its title. Why would a book that sets out to defend the social value of religious belief be titled Dishonest to God? We wondered: Who is being dishonest? What could dishonesty to God be? In the light of subsequent reflection, though, we think we’ve come to understand the title perfectly.
Monday, April 18, 2011
Of Quislings and Science: Reflecting on Mark Vernon, The Templeton Prize and Richard Dawkins
by Tauriq Moosa
Recently, Sir Martin Rees was awarded the most lucrative science prize in the world, The Templeton Prize. Notice I said ‘lucrative’, not the most respected or prestigious, though some indeed do think it is. The prize, according to its official website, “honors a living person who has made an exceptional contribution to affirming life’s spiritual dimension, whether through insight, discovery, or practical works.” It is given to those “who have devoted their talents to expanding our vision of human purpose and ultimate reality” – a sentence worthy of a tacky Hallmark card.
Sir Martin is in the company of £1,000,000 sterling and Mother Theresa and Billy Graham. Indeed, I wonder if that amount is enough to sway anyone, so that he or she is mentioned in the same breath as these fanatics. The point is that there is little about the prize that is, by definition, about science. The Templeton Foundation and Prize are about promoting notions of the Divine, in whatever loose language you can fathom, using something vaguely non-Divine in approach. If you can anchor your pursuits in something that affects the world, dealing with sick people (not aiding them) like Mother Theresa, or probing the mysteries of the universe with an appreciation for its beauty or possible higher purpose, then you qualify. They’ve melted the solid idea of the theistic god down into liquid form, so it slips through any pretension, even when the person awarded the prize is not religious. Like Sir Martin Rees.
If Sir Martin donates it all to Oxfam, I would have little to quarrel with, I suppose – except that I think any scientist who doesn’t think there’s a conflict between faith and reason, or science and religion, is wrong. But that’s another discussion. What interests me about this whole episode was not the prize itself but the views that arose concerning the atheist culture wars. I’m interested particularly in ex-Anglican-priest-turned-“agnostic” Mark Vernon’s ever-banal criticisms of Richard Dawkins, as seen here (an ad hominem attack), here (how Dawkins is doing nothing new even though Vernon keeps writing about him), here (when Dawkins praises fellow writer, Christopher Hitchens, Dawkins is promoting hatred), here (Dawkins… groupthink… bus… bad), here (I don’t even know).
I rather enjoyed Dr Vernon’s books 42 and Plato’s Podcasts, so it is disappointing to see this usually clear, clever writer putting on the same performance each time Dawkins is mentioned in an online discussion or in the media. This is especially so when Vernon reflects on Sir Martin’s recent prize and… Richard Dawkins’ stridency. Yes. You obviously made that connection as quickly as I did. Vernon, expert bar none on how Dawkins should conduct himself publicly, has to write something… and it might as well be as Dawkins’ media nanny.
What is remarkable is Vernon’s ability to pull bones from the Rees story to create some fossil of an argument about Dawkins’ stridency. He does this by reflecting on Dawkins’ calling Rees, then head of the Royal Society, a “compliant Quisling” when it came to hosting the Templeton Foundation in the UK. As if uncovering a museum piece, Vernon unveils his newfound argument in the short space allotted him in the Guardian. But at the end, it’s as though we were expecting a new species (of argument) and were given merely two broken teeth from a creature we’ve all dealt with before.
What’s more disappointing is the teeth barely draw any blood.
Firstly, why is it necessary to raise Dawkins’ past comment, now nearly a year old? Dawkins’ comment was about Rees and Templeton, so, Vernon writes, it appears that “Rees has seemingly hit back”. This, however, makes no sense, since Sir Martin did not choose to give himself the prize. It was, um, awarded to him. Maybe we can say he “hit back” by accepting it. But not everyone’s life revolves around Richard Dawkins enough to accept a million-pound prize just to spite the brilliant biologist.
Secondly, Vernon claims that Dawkins was wrong to call Rees a “Quisling”. Such rhetoric (finger wave) will not do! “Quisling”, says Vernon, was “hurled” against fascist collaborators during the Second World War; what was Rees collaborating with by agreeing for the Royal Society to host the Templeton Foundation? Says Vernon: “The Royal Society lent its prestige to the Templeton Foundation by hosting events sponsored by the fund, which supports a variety of projects investigating the science of wellbeing and faith.”
Hm. The science of what? Wellbeing… sure. Kind of. I would just put that down to either health or morality. But faith? Should the Royal Society also sponsor the Aries Rising Druids Society to investigate “the science” of astrology? Or perhaps Darkmoon Bloodstar’s “science” of magic? The point is: it’s not science, so why involve the Royal Society? Remember, the award is not restricted to scientists – though it seems, politically, science is a powerful avenue for the Foundation to push into in order to gain some credence. After all, it is the major area fast destroying any pretensions toward knowing what you can’t possibly know.
Vernon, I think, is using “science” incorrectly when saying “science of… faith”. (I also doubt he means investigating religious belief from a psychological or neuroscientific perspective. If he did, he could’ve said so, as I have. If this is what he means, I apologise.) For someone who spends so much time talking about Dawkins, even wanting to subject the poor biologist to unfalsifiable assertions to do with human behaviour based on ancient mythology (no, not astrology – I’m talking about Freudian psychotherapy), Mark Vernon has not read enough Dawkins. Consider this irritating paragraph:
Dawkins and Rees differ markedly on the tone with which the debate between science and religion should be conducted. Dawkins devotes his talents and resources to challenging, questioning and mocking faith. Rees, on the other hand, though an atheist, values the legacy sustained by the church and other faith traditions. He confesses a liking for choral evensong in the chapel of Trinity College. It seems a modest indulgence. The ethereal voices of rehearsing choristers can literally be heard from his front door. But for Dawkins this makes the man a "fervent believer in belief". And that is a foul betrayal of science.
Notice it has nothing to do with whether god exists or not, whether there is a purpose to our lives, etc. It’s got to do with Dawkins and Rees’ public image and personal habits. We’ll forget that Dawkins regularly speaks on aesthetic appreciation with some other unfeeling, cold, nihilist atheistic scientists (well, according to Vernon’s judgement).
Dawkins spends much time in The God Delusion discussing his appreciation for beauty in the world, even in choral music. Indeed, there are several pages where he expresses his appreciation of the Bible as an important work of literature. Anyone can appreciate the beauty of Gothic architecture and the intricacy and brilliance of cathedrals. This does not make anyone “a believer in belief”. Again, as with his use of "science", Vernon does not understand the terms he is using.
The term is from Daniel Dennett’s Breaking the Spell. Writing in the Guardian (of all places!), Dennett explains what belief in belief is: “Sometimes the maintenance of a belief is deemed so important that impressive systems of propaganda are erected and vigorously defended by people who do not in fact share the belief that they think is so important for society to endorse.” Vernon does not himself believe in god but thinks it is important, like Rees, to appreciate that others believe in god. Call it belief squared. (I think “belief in belief” is a catchy but somewhat obscuring phrase. It doesn't quite capture what Dennett means – or, again, maybe I'm just slow.) Dennett continues:
Today one of the most insistent forces arrayed in opposition to us vocal atheists is the "I'm an atheist but" crowd, who publicly deplore our "hostility", our "rudeness" (which is actually just candour), while privately admitting that we're right. They don't themselves believe in God, but they certainly do believe in belief in God.
He appears to be telling us about Mark Vernon. But, with regard to appreciating that others believe in god, Sir Martin does fit this description. So, he is a believer in belief, but not because he appreciates choral music and evensong. Vernon has given us the right conclusion from the wrong premises. Rees qualifies as a “believer in belief” because he:
also claims to be an "unbelieving Anglican" who goes to church "out of loyalty to the tribe". He has criticised Stephen Hawking for arguing that we don't need God to explain the origin of the universe, and supports "peaceful co-existence between religion and science because they concern different domains".
This appreciation or respect for religion because others are religious, however, is exactly what Dawkins was correctly criticising as policy for the Royal Society. Similarly – and again – we must ask for consistency. Would we support those who believed in alchemy as a solution for cures? Would we be happy to say homeopathic parents should not be charged with manslaughter when they effectively kill their children? These beliefs, like religious faith, are unwarranted, unsound and have no evidence to support them – they, too, have plenty of believers. We do not expect physicians to have an appreciative attitude toward homeopaths when both are focused on the same area. Why then should a cosmologist have an appreciative attitude toward theologians, also focused on the same area?
No doubt many will claim they are separate spheres, but this is not true. If they were separate spheres, religious institutions wouldn’t be trying to get evolution out of schools, and we wouldn’t have hysteria over pushing back the date of the birth of the universe. If religion and science were different spheres, I don’t think I would be fighting tooth and nail in my thesis to have policies on the ethics of killing reconsidered. If these are separate domains, it is not the scientists who are up in arms but the faithful. Basically, this discussion wouldn’t need to happen. What can cosmology possibly gain from engaging with Leviticus or Paul’s Letters? What can the Quran contribute to our learning about biology?
Vernon however will have none of this. Respect is needed. Indeed, Vernon proudly proclaims himself as a benefactor of Templeton funding. He calls himself an accommodationist. “I often write about the relationship between science and religion, and have been a Templeton-Cambridge Journalism Fellow, the beneficiary of a first-rate seminar programme organised by Cambridge academics, funded by the Templeton Foundation. But then I love the big questions.”
Right. The big questions. I have read, and do read, some wonderful Christian writers on “the big questions”, like Alasdair MacIntyre, a contemporary moral philosopher who follows Thomas Aquinas very closely. I am currently compiling a bibliography for my thesis that includes many works, including analyses of Aquinas’ Summa and the positions of the Catholic Church regarding assisted death. My main reason is not respect for the God-Did-It formula (explain anything: science, morality and so on, by throwing god somewhere into your explanation); rather, my reason is an appreciation of the power the formula holds, especially with regard to swaying public policy and opinion on my research area, which is killing and death. I need to know it inside and out so that I can combat it effectively. This means I can engage sociably, amicably and vaguely intellectually (I still don’t understand most of philosophy, because I’m unfortunately not smart enough) without being smug, arrogant and so on. I can do this in my writing and I do it with my opponents.
Why mention “I love the big questions” by associating it with Templeton God-Did-It engagements? Why mention Dawkins' strident attitude and so on unless Vernon means his appreciation for these sorts of questions somehow makes him better able to engage? There is no evidence to support his attack, nor his view that Dawkins’ attitude is doing anything wrong. This is an annoying paragraph – much like the whole silly article – but it becomes clear why he ended it as he did: to compare himself to Rees. Not in the sense of being as brilliant, but in the sense that both can be appreciative of Templeton policy.
Rees pursues [the big questions], too, through cosmology, a subject that clearly fascinates many for similar reasons. Is there life like ours on other planets? What is the nature of our connectedness with the stars? It is partly for his insights on such matters that he has won the prize. But if he is modest about what can be achieved for religious belief by science, he insists that scientists should not stray into theological territory that they don't understand.
It is Rees' insights, not his evidence or his actual scientific research, that matter: his insights based on his own (very beautiful) writings and communications. It is not his research but Rees' somewhat open-hearted approach to the Divine and majesty within the Universe that won him the prize. Dawkins, remember, can’t appreciate beauty, can’t listen to any kind of music or love anything vaguely religious because he is just a smug, bad person with no social-media skills. If this is Vernon’s opinion, which this piece and those millions of others seem to indicate, then of course Vernon would think that Dawkins is not pursuing the big questions like Rees is.
Again, Dawkins himself has put it beautifully that he considers biology, or rather the study of evolution, to be the most important study of all: it answers “Where did we come from? Why are we here?” It even answers what the “purpose” of life is. These answers do not appear to satisfy many people; some want grand, mythological narratives in which they are the centre, caught between battling gods and swooping Hans Zimmer soundtracks. Rather, Dawkins would advocate that we consider a warm summer breeze and the sounds of a night garden, the Milky Way and pictures from the Hubble Telescope.
I don’t understand why anyone would want their lives to be out of a story written by Bronze-Age goat-herders. Certainly, because of science-writers like Dawkins and Steve Jones, I am considering studying biology. (Again, I don’t think I have the brains required, but I want to.) I agree with Dawkins’ assessment. So, if Dawkins is in the department directly involved in perhaps the most important questions, how does this distinguish him from Sir Martin? Dawkins’ science was through a microscope; Sir Martin’s is through a telescope. Both are wonderful areas, with their own intricacies and theories that display human genius. Both, too, are related.
Vernon goes on to describe some previous prize-winners, like Paul Davies (who actually I really enjoy as a science-writer). “Davies is not [theistic or religious], though he believes it is perfectly valid to pursue questions of meaning in the context of what is being discovered about the cosmos. After all, is it not remarkable that our universe has produced entities within it that ask such questions – namely ourselves?”
Did Dr Vernon, a philosopher, just make the Fine-Tuning Argument, if only rhetorically? I don’t think it needs any more debunking than is already in place. But let’s remember: Vernon is not a believer. It is doubtful he would be persuaded that there is anything more just because things look remarkably tuned that way. I don’t think it’s remarkable that the universe has produced entities like us, since it could not have done otherwise. Vernon is ascribing a free-willed intention to the Universe. We can say things like “If the amount of carbon was this or that degree off, we never would’ve existed”, but why is that useful? It didn’t happen. All those percentages are in place.
Consider: Imagine another universe, very similar to ours. Call it the Fire Universe. On one particular planet, in one particular solar-system, in one particular galaxy, there are conscious life-forms made of fire. They cannot exist on every part of their planet – some parts are too cold; others, ironically, are too hot – but nonetheless, there are certain parts of the planet that are Goldilocks good (i.e. just right). These Fire Beings would no doubt say “If carbon was just one degree off, we never would’ve existed!” But that’s the point! They exist. If the carbon was, say, a degree this way or that, Ice Beings would be there instead posing the same questions. This is not useful for discussions since, to repeat, it didn’t happen. (Notice the irony: Why make life-forms so potentially on the brink of non-existence? Why would a deity make it so that life perpetually exists on a precipice? Remember: if all these amazing complicated numbers were just a tiny degree bigger or smaller, we would not have existed. But it also means that if they change sometime today or tomorrow, we would cease to exist. I’m uncertain why this is an argument for god’s existence. It seems to be the opposite.)
We can therefore point out that Vernon is placating Templeton policy with such a statement. It is not remarkable, Dr Vernon. It simply is. You are posing not a big question but a banal if not useless one. Ironically, these are exactly the sorts of questions Templeton, being a not-very-well-disguised faith-based institution, would be asking: boring, circular and unhelpful for science. Finally, Vernon lobs his culture-war grenade into the mix. Out of nowhere comes this sentence, found in the second-to-last paragraph.
That such a highly regarded figure [as Sir Martin] has received its premier prize will make it that little bit harder for Dawkins to sustain respect amongst his peers for his crusade against religion.
He provides no evidence of this. This is something that that silly thing called science can verify. For example: Will Dawkins’ and other atheists’ book sales decrease? Will the rhetoric change into the rainbow-unicorn speak that explodes with glitter at the mention of the words “divine” and “remarkable” and “majestic”? Again: Vernon still does not understand that there is little linking science and Templeton save the scientists it has given money to. I’ve noted above that the sorts of questions Templeton tackles are not scientific; rather, it is this kind of hippy-ish, pantheistic wonderment at existence that qualifies one as a potential prize-winner. Templeton won’t give an award to a remarkable scientist who bashes god but, for example, cures cancer. It’s not about the science; it’s about the attitude. For this reason, it has nothing to do with science. Vernon does not appear to understand that. Therefore, it seems unlikely Dawkins will lose respect among his peers because his peers are, um, scientists.
Whether or not they hold views like Sir Martin’s is beside the point; that’s what will win them the Templeton Prize, but it’s not attitude that wins you a Nobel. Vernon’s piece is boring. Very boring. My column is longer than his article, obviously. But this all highlights something quite important that I think needs to be understood about science, attitude, snarkiness and so on. Vernon has made no argument and has contributed very little to the discussion. My aim is to try to get Vernon away from talking about Dawkins and back on to talking about Plato. (Frankly, I prefer Plato to Dawkins, but then I’m a follower of Socrates. Science is way above me.) That’s where Vernon shines, and the more he continues to write nonsense like this, the less I want to read him, no matter his subject matter.
Monday, April 04, 2011
Clifford and James on Evidence and Belief
William Kingdon Clifford’s “The Ethics of Belief” and William James’s “The Will to Believe” are yoked together in the story of philosophy. The two essays are taken as the classic starting points for reflection on the norms governing responsible belief. Clifford captures his view, evidentialism, with the stark pronouncement that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” Clifford, thus, stands as the paragon of intellectual honesty; he follows the arguments where they lead, and spurns comforting fictions. In contrast, James’s doctrine of the will-to-believe is summarized by his claim that “our passional nature not only lawfully may, but must, decide an option between propositions, whenever it is a genuine option that cannot by its nature be decided on intellectual grounds.” James offers a defense of the role of the sentiments in intellectual life; he stands as the Romantic resistance to the demands of cold-blooded reason, defending belief in the face of withering skepticism. Clifford and James are iconically opposed.
Clifford’s case is made primarily on the basis of a series of examples. The most powerful of them involves a shipowner who believes, contrary to his evidence, that his ship is seaworthy. The shipowner suppresses his doubts about his vessel, and sends it out to sea, full of emigrants bound for a new land; he then collects the insurance money when it sinks. Surely this man is blameworthy. But what if the ship had not gone down? What if the emigrants got to their destination safely? Would that bit of good luck diminish the guilt of the shipowner? Clifford answers, “Not one jot.” Why? Because the question concerning the propriety of the owner’s belief does not rest on whether the emigrants were harmed, but on whether he “had a right to believe on such evidence as was before him.” Clifford holds that “It is never lawful to stifle a doubt.”
William James acknowledges that this evidentialist rule is generally sound, but he holds that there are exceptions, specifically in matters of the heart. James considers the following scenario. A young man wants to ask a young woman out for a date. He is unsure that she will accept, as he does not have evidence that she likes him. What is he to do? James proposes one option: “if I stand aloof, and refuse to budge an inch until I have some objective evidence, until you have done something apt . . . ten to one your liking never comes.” Such an option is unacceptable, both to the young man and to the young woman. “How many women’s hearts are vanquished by the mere sanguine insistence of some man that they must love him!” James proposes another option, one that calls for an ungrounded commitment; so the young man’s “faith acts on the powers above him as a claim, and creates its own verification. . . [F]aith in the fact can help create the fact.”
Given this, it is easy to see why Clifford and James are treated as philosophical antagonists. One pressing question is how practicable the two views really are. On the one hand, it might seem that Clifford’s evidentialism is far too demanding. Clifford himself was aware of this concern, as he worries that his view flirts with an untenable skepticism. He argues that he is no skeptic, yet this protest seems flimsy; in any case, James certainly takes Clifford to be a skeptic.
On the other hand, James’s proposal raises practical difficulties of its own. One can ask whether it is wise to have confidence in beliefs when there is no evidence to support them. James sees the danger in rejecting evidentialism. He holds that when properly deployed, the will-to-believe is not self-confidence or wishful thinking run amok. The question then is what the conditions for the proper deployment of the will-to-believe are. As Clifford emphasizes, having an exaggerated degree of confidence in one’s beliefs is most often a vice, not a virtue.
The main question in the dispute, though, concerns the philosophical concern driving both views: religious belief. Both philosophers agree that the traditional arguments for God’s existence fail. Both agree that the evidence for God is weak, and certainly not sufficient to justify religious belief. They disagree on the question of whether religious belief, given the lack of evidence for it, is ever intellectually responsible.
Clifford’s case against religious belief proceeds along two lines. First, Clifford argues that because the evidence is not sufficient to show that belief in God is true, one should not believe. That’s just evidentialism. Second, Clifford argues that the evidence also shows that belief in God encourages other intellectual and moral failures. According to Clifford, religious belief is not an isolated phenomenon, a one-off case of epistemic irresponsibility. On the contrary, Clifford holds that religious belief brings with it a host of other intellectual vices of credulity.
James’s will-to-believe doctrine, by contrast, is committed to the proposition that religious belief may be responsibly held. Yet he does not give the religious believer carte blanche to believe at will whatever propositions they favor. Rather, James contends that religious belief of only a very specific kind is allowable. To be more specific, James argues that the most one is justified in adopting is what he calls the “religious hypothesis.”
James holds that because the arguments for the existence of the traditional God fail, the traditional conception of God fails as well. Accordingly, in James’s hands, religious belief is reconstructed. The religious hypothesis is less a view about God’s nature and existence, and more a view about the place of hope in our lives. That is, James’s strategy for defending religious belief is simply to transform it into something else, something less theological. And so, according to James, religious belief is not about God, Jesus, Heaven, Hell, angels, immortality, souls, or miracles. It rather is simply the belief that “the more eternal things are best.” This is the belief that the will-to-believe doctrine aspires to defend.
The question for traditional religious believers, of course, is whether James is really an ally at all. The Jamesian argument seems an overt bait-and-switch; he seems to have defended religious belief by distorting it into something else. Arguably, Jamesian reconstructed religious belief is not religious belief at all. Indeed, it seems that Jamesian religious belief is in the end no different from Cliffordian non-belief. And so the iconic opposition between Clifford and James admits of reconciliation.
Monday, February 28, 2011
Can Egypt Be Turkey?
by Jenny White
Turkey has been bandied about this past week as a model to be emulated by the new nations being born like small supernovas across the Middle East. Turkey was founded by a powerful military that doesn’t flinch from coups, but has also had a functioning and fair, if flawed, electoral democracy since 1950. The country currently appears to have found a place for Islamic piety within its political system without jamming any of its democratic wheels, although the process has been noisy and contentious. Its present elected government, under the Justice and Development Party (known by its Turkish acronym AKP), consists primarily of politicians who see themselves as pious individuals running a secular system. Some Turks believe that their intentions are secular, some don’t, but the democratic wheels keep turning. The AKP government has managed to make Turkey’s economy the fifteenth-biggest in the world by GDP, only lightly sideswiped by the global downturn. There’s another election coming up this June and AKP leader Recep Tayyip Erdogan has promised that if his party doesn’t win, he’ll leave politics. All indicators show that he has nothing to worry about, but the critical element of his promise is the assumption that his party could lose, and then he would leave. That’s the trick of democracy that “eternal leaders” in the Middle East haven’t come to terms with. You lose, you leave. What has to be in place for this simple equation to become as second-nature as it is in Turkey? I happen to be teaching a course on Turkey this semester, so I posed the question to my students: Would the “Turkey Model” work in the Middle East? Here are some of the variables they came up with.
Who chooses the system? Is it top-down or bottom-up?
Turkey’s democracy was an entirely top-down imposition by Ottoman officers and bureaucrats who had wrested back the territory that now makes up Turkey from the European powers that had conquered the Ottoman Empire in WWI. Their leader Mustafa Kemal Ataturk was celebrated as a war hero, the savior of Turkey, and became its first president. He initiated extreme reforms, among them replacing the Ottoman script with the Latin alphabet (imagine someone telling you that in six months’ time, we’ll only be using Arabic script) and requiring men and women to emulate western fashion; veiling was discouraged and outright banned for civil servants, teachers, students and doctors.
What will be the central elements of a new national identity? Islam? Ethnicity? Nationalism? Who gets left out?
Early in the Republic, the Turkish state took control of mosques and religious instruction in schools; imams became civil servants and their sermons were vetted. Ataturk outlawed Islamic brotherhoods, even the ones that had supported his revolution, because he considered religion – especially the organized kind – to be potentially divisive, just like ethnic identities. Kurds were free to be full members of society and even members of parliament, but only if they did so as Turks. Unity became a fetish, and people with other ethnic and religious affiliations and identities were demonized or worse. Between 1923 and 1950, only one party – Ataturk’s Republican People’s Party – was in charge.
You would think that this top-down reorganization of the most basic elements of society would incur some resistance. Indeed, there were rebellions led, for instance, by the Nakshibendi Islamic brotherhood, which objected to the Turkish Grand National Assembly's summary vote to eliminate the Khalifate, a formal position of leadership of the world’s Muslims. Kurds weren’t happy either (although the syncretistic minority Alevi Kurds tended to support the new regime), nor were non-Muslims, who suffered one pogrom and indignity after another. And most of the country still had no clue about what was going on. Three-quarters of the population were peasants in the countryside, where it might take days to ride a donkey from one village to another.
So the reforms primarily affected the few urban centers where the already westernized bureaucrats and officers lived. Last week, Egypt's Tahrir Square was populated by Muslim and Christian Egyptians, young secular techies and Muslim Brothers, men and women – their history of friction laid aside for the revolution. Egyptians will be weaving a new national unity from their own tattered skeins of identities.
Where are the women?
Turkish women were encouraged to attend university, enter the professions, vote, and populate the public sphere – women’s civilized modernity becoming the prime exhibit in the Kemalist revolution’s display window. Despite a highly visible contingent of educated women in the professions, though, today Turkey ranks near the bottom on international measurements of women’s status, due primarily to women’s extremely low and still declining labor participation rate (now 22%, down from 34% in 1988) and the low numbers of women in public life. When the Grand National Assembly was constituted, Ataturk appointed a few women MPs. Women’s representation in parliament has only recently increased – to 9 percent. Just this week, Turkey’s Supreme Board of Judges and Prosecutors elected 211 judges to the Supreme Court of Appeals and the Council of State. Only six were women.
A common pattern seems to be that during a revolution women are activists, heroines in the line of fire, and up-front emblems of the struggle for equality and liberty. But once the revolution is won and becomes consolidated as political work, that is the arena of men, and women are asked to please step into this comfy, honorable glassed-in space where you can be seen, but not heard. Where are the women from Egypt’s Tahrir Square in the negotiations with the army for a new constitution, a new national system?
Is there a charismatic leader? What is his intention?
Ataturk was an autocratic but beloved dictator, by all accounts a charismatic man, who planned to institute elections when he thought the country was ready – when they had imbibed national unity and become educated in the principles of the Kemalist plan. The upshot is that the founding myth of a military strongman who brings democracy (think, the Egyptian military at this moment) only works if the strongman has democratic intentions to begin with and is popular or charismatic enough to make the necessary changes in education, lifestyle, economy, and so on, that produce a productive, cohesive nation. Turkey isn’t there yet, even after sixty years of free elections. The Kurdish problem continues to bite; the army for a long time felt no compunction about taking over the government or pushing out politicians if they deviated from the Kemalist path of unity.
What is the army’s role?
The military early on appointed itself as guardian of Ataturk’s principles and of the unity and integrity of the state, stepping in at will to correct the "dangerous" populist tendencies of the elected governments. Last week my class discussed Turkey’s first multi-party election in 1950 and its aftermath a decade later -- a coup by a military that thought the popularly elected government liked power a bit too much and was pandering to Islam. The 1971 coup was followed by new elections and yet another coup in 1980 -- a coup every decade. We fast-forwarded from time to time to see where all this was headed – the more radical Islamist parties of the 1990s, replaced by their moderate offspring, the AKP, that these days keeps winning elections and recently has managed to put the army into a box. Here is another important characteristic of the Turkish model over most of the past sixty years: The army carries out a coup, rewrites the constitution, then steps back and allows elections. The Turkish army sees itself as the guardian of democracy.
Will Egypt’s army step back and allow the people to elect whomever they choose? Will the elected party be willing to step down if it is thrown out in the next round? In the region, the answers to these questions generally have been no. Turkey’s army (or its Constitutional Court) has removed from power governments it believes are too “Islamic” or ineffectual or authoritarian, but it has then stepped back to let its citizens try again. And again. (Inevitably in post-coup elections, the army’s favored candidate does not win.) Compare this to other countries in the region where the army dislikes the ideological or religious stand of the elected party, takes over and stays in power, not trusting the process. Or elections are rigged so the same party stays in power for decades, allowing the army, like other government supporters, to grow fat and rich.
Who in the region has any experience with giving up power willingly? Can that be learned as a principle, a rule, like telling time on your watch? Or does it have to be ingrained from a young age along with an appreciation of the satisfactions of democracy and the benefits of accommodating the will of the people? Does democracy have to be learned, and can an army learn it? Much depends on the personality and intention of the strongman.
Was there a colonial history? Does the nation want to be like Europe, or throw off Europe?
Another important difference between Turkey and any country in the Middle East is that Turkey was never colonized, as was Egypt by the British, Libya by the Italians, Tunisia by the French, and so on. The Ottoman Turks were the colonial power in the region for hundreds of years before the Europeans barged in. So if they selected Western lifestyle elements, clothing styles, literature and architecture, as they had been doing since the 19th century, that was a choice made from a position of strength. Egyptian nationalists initially emulated the West, but in the 1970s, spurred by the Islamist movement, Egyptians began to reject the lifestyle of the colonizer. To be Egyptian meant to find your own non-European identifying national characteristics. Muslim dress, for instance, became national dress.
What role does political Islam play?
Arab nationalism made some inroads across the region as a unifying political identity, especially for the middle class, but young men migrating from the countryside found the Muslim Brotherhood more welcoming; it provided a network to find jobs, brides, and connections in the heartless cities. This is also true in Turkey, where the Kemalist nationalist ideal was pounded into the heads of generations of children, many of whom still gravitated to Islamic movements like the relatively recent and now widespread Fethullah Gülen movement that provides exactly that allure – education, job training, national and global business connections, and the warmth of community. Arab nationalism in the 1970s brought peasants into the cities to be educated, but there they also saw first-hand the system’s corruption and many injustices, driving them to seek economic justice through socialism and then through Islam. Is this an opportunity for unity, justice, and upward mobility, or is it a threat to democracy? That depends on whether and how the system addresses aspirations for social mobility, justice, and the desire to live a pious lifestyle.
In Turkey, despite military interference, one elected politician after another has added brick by brick the elements that shore up democracy when it becomes more than just voting, but rather allows the fractious merging of opposing views and lifestyles. The first freely elected government in 1950 built roads and factories that allowed millions of peasants to flood the cities for work. That government was deposed by the army in 1960 for being too autocratic and religious, and the prime minister was hanged. But with geographic mobility, social mobility became possible. And with social mobility, new sectors of the population came to power and got to define society. Empty stomachs keep people – and their sometimes disturbingly conservative and/or radical ideas, lifestyles, and demands -- down.
How is the economy doing? Can young men get married?
Can Turkey afford to experiment with incorporating Islam into the political system because people’s stomachs are full, weddings are subsidized, cheap products are widely available, and people believe they can get ahead, even if they probably can’t? Despite Turkey’s booming economy, the unemployment rate has stagnated around 10 percent for more than a decade. And there are still pockets of great poverty, especially in Turkey’s eastern, mostly Kurdish provinces. That’s where the Kurdish rebellion took hold and where the Marxist-Leninist Kurdish PKK and the Turkish army are still locked in a death grip. In the mid-1980s, after half a century of a state-controlled economy, Turkey opened its doors to the global economy and bloomed. Fruit and vegetable exports from Turkey’s fertile lands boomed, as did exports of manufactured items like refrigerators and televisions, and construction know-how. The expertise had been there, but had had nowhere to go. Mid-sized enterprises that had been stifled have recently let loose entrepreneurial jet trails.
Many of these were owned not by the secular Kemalist elites, but by pious businessmen from the provinces. Their success – the press called them the Anatolian Tigers – fed the development of an Islamic bourgeoisie with big houses and couture veiling, pious gated communities and vacation resorts, a whole Islamic lifestyle based on commodities, overlapping closely with – and challenging – the tastes of the secular elite. It’s one thing if the woman cleaning your kitchen floor is wearing a headscarf, but it’s quite another when a veiled woman arrives in an SUV to shop at your favorite trendy boutique. The Islamic bourgeoisie – and the hope for upward mobility that they inspired – encouraged people to vote for parties like the AKP that seemed to embody that hope. A good proportion of AKP voters are not pious, but respond to that same hope. Some voters simply wanted their pious lifestyles to be respected, something they felt the secular and Kemalist parties did not do.
Like Turkey before the boom, Egyptian state industries provided redundant dead-end jobs for many people – a tea man for every floor. Egypt’s economy went global in the 1990s, but few of the profits trickled down. Wages have remained so low for decades that a tiny rise in the price of state-subsidized bread might mean that a man cannot marry. His family eats, say, ten loaves of bread a day in lieu of more expensive food, and saves a few pennies toward the apartment and dowry without which their son can’t marry. If the price of bread goes up a penny, there is nothing to save. The overthrow of Mubarak is much more than a bread riot, but surely one factor driving young men is that a small elite has hogged all the matrimony. If there is no satisfying economic transformation like Turkey’s that spreads the wealth and encourages entrepreneurship, or if the army sticks with sweatshop enterprises that they and their cronies control, then what would Egyptian voters look for down the road? Could there be a credible pious prosperity party like AKP? If not, what would parties have to offer their voters?
Is there mobilizing potential?
The talk is all about Twitter and Facebook as mobilizing engines of revolution. Overthrowing a dictator is a simple enough message, but will social networking work with more nuanced positions and issues? Who in Egypt’s hinterland would read Tweets from politicians about their stand on issues? Perhaps there could be virtual parties; Rachid Gannouchi, the Tunisian religious leader of the Nahda Party, is said to have 73,000 Facebook friends.
When the Turkish Republic was founded, there was simply no communication. The Kemalist revolution incubated in the cities for two decades, allowing the system to be set up, the lifestyle expectations to sink in, an educational system to be fine-tuned that would produce Kemalist citizens. That system was then systematically expanded throughout the nation as it became accessible through roads. The nation was brought online slowly, so to speak, and there was time to write the program and tweak it before submitting it to the information shock of a multiparty electoral system. What country in the Middle East has that kind of time – or patience? The availability of instant communication lures us into imagining mass agreement and believing that everyone can be brought on board simultaneously for the long haul.
Do you see what I see?
I showed my class the results of a recent (pre-revolution) survey carried out by a Turkish think-tank. The Turkish Economic and Social Studies Foundation (TESEV) report surveyed people in Egypt, Jordan, Lebanon, the Palestinian territories, Saudi Arabia, Syria, Iraq and Iran about Turkey’s role in the Middle East. More than 65 percent of respondents said they felt Turkey could be a model for the region; 18 percent disagreed. Those who agreed did so for the following reasons: Turkey was Muslim; its economy; its democratic government; and its support for Palestinians and Muslims, in that order. Those who didn’t think Turkey was a good model for the region cited its secular political system, its not being Muslim enough, the country’s relations with Western nations, and the view that there is “no need for a model”, in that order. Most Arabs see the AKP as a religious party that found acceptance not just in a secularist Turkey but in Europe as well.
In other words, people in the Middle East seem to see the Turkish Model primarily as Muslim (whether they are pro or con). Yet when Westerners speak about the Turkish Model, they assume it will be secular. And the Turks? They are first and foremost Turkish nationalists and tend to view their own system, their society and even their Islam as intrinsically Turkish and superior. They’re willing to guide, share the benefit of their hard-accumulated wisdom, but don’t expect Egypt to become Turkey. As a student in my class exclaimed in exasperation, “The Middle East looks at Turkey and sees the Muslim; the Europeans focus on the democratic part, and Turks are focused on Turkishness.” Tweet that!
Monday, February 07, 2011
Accommodationism and Atheism
by Scott Aikin and Robert B. Talisse
Our book Reasonable Atheism does not publish until April, yet we have already been charged with
accommodationism, the cardinal sin amongst so-called New Atheists. The charge derives mainly from the subtitle of our book, “a moral case for respectful disbelief.” Our offense consists in embracing the idea that atheists owe to religious believers anything like respect. The accusation runs roughly as follows: “Respect” is merely a euphemism for soft-pedaling one’s criticisms of religion; but religion is a force of great evil, and thus must be fought with unmitigated vigor. Atheist calls for respect in dealing with religion simply reflect a failure of nerve, and must be called out. Anything less than an intellectual total war on religion is capitulation to, and thus complicity with, irrationality.
In our case, the charge of accommodationism as a failure of critical nerve is misplaced; anyone who actually reads our book will find that we pull no punches. But we also think that, as it is commonly employed in atheist circles, the idea of accommodationism involves a conflation between two kinds of evaluation which should be kept distinct. Some clarification is in order.
When it is aimed at rational persuasion, argumentation has two closely related objectives. The first is the obvious aim of demonstrating that the view that one favors is true. We engage in argument in order to make explicit the grounds upon which we base our beliefs; in making them explicit, we simultaneously provide support for our beliefs. The second aim of argumentation is easily overlooked. When we argue, we also engage in a diagnostic project. We aim not only to demonstrate the truth of our own view; we additionally endeavor to understand how our opponent arrived at her view, how she conceives of the relation between her view and her evidence. Put another way, in argumentation, we aim to discover where our opponent has gone wrong. Being able to identify others’ errors is often a crucial part of persuading them to change their views. Furthermore, being able to diagnose our opponents’ mistakes is intimately related to fully grasping our own views. Knowing an issue means not only knowing the right answers, but also where the wrong turns are. As Mill observed, “He who knows only his own side of the case knows little of that.”
This dual-aspiration of argumentation maps on to the elementary distinction in epistemology between truth and justification. Consider: One can believe what is true and have good reasons for believing as one does; one can believe what is true on the basis of bad reasons; one can believe what is false, but on the basis of good reasons; and one can believe what is false for bad reasons. It is by means of the distinction between what is true and the quality of one’s reasons that we are able to distinguish between, say, knowing that Kennedy died in 1963 and correctly guessing that he did. This distinction also enables us to make sense of the claim that, despite getting nearly everything wrong, Aristotle was a great mind.
This distinction also enables us to recognize that there are two distinct kinds of epistemic evaluation: belief-evaluation and believer-evaluation. Evaluating beliefs is a matter of seeing what evidence there is for holding them. Evaluating believers is a matter of examining whether the evidence someone has indeed supports the belief he or she holds (and if so, to what extent). It makes perfect sense to say that Aristotle’s physics is wrong (a belief evaluation), even though he was a brilliant natural scientist (a believer evaluation). Given the evidence he had and the tools at his disposal for gathering evidence, Aristotle was highly justified in holding his (false) beliefs. He was entirely wrong, yet frighteningly smart.
These distinctions help us to see that the diagnostic ambition of argumentation involves the attempt to devise a responsible believer evaluation of one’s interlocutor. Part of what is involved in the attempt to rationally persuade someone is to try to disclose what evidence he has and how he sees his evidence as providing support for his view. In doing this, we may discover that he has an insufficient conception of what evidence there is; or maybe he has misinterpreted the evidence; or maybe he has simply drawn the wrong conclusion from a proper understanding of the evidence; and so on. This endeavor makes the difference between the project of rationally persuading an interlocutor and simply persuading them; it also makes the difference between rational persuasion and browbeating.
This much is elementary. Yet the distinction between being wrong and being stupid is essential to our cognitive lives. We affirm in Reasonable Atheism that we believe that distinctively religious beliefs are false, and that religious believers are therefore wrong. Yet having false beliefs does not make one stupid; it simply makes one wrong. The stupid person is one who believes against what he takes to be evidence. And, as it turns out, there are very few stupid people. Yet there is a lot of false believing going on; in fact, we hold that in matters of religion, there is a lot of belief in what is demonstrably and obviously false. What could explain this?
The answer is straightforward. Religious believers have an inflated sense of the strength of the evidence in support of their view and a correspondingly deflated estimation of the power of atheist arguments. It is worth noting that this kind of distortion is precisely what one should expect in a society that fits the description offered by the New Atheists. They say, correctly, that our society is infused with religiosity and superstition, that religion “poisons everything” and amounts to a collective delusion. Given this, it is no mystery why religious belief is so widespread and persistent. The social ubiquity of religiosity causes individuals to overestimate the strength of the case for religion.
It is important to note that so far, we are very much in agreement with the New Atheists. Most religious claims are demonstrably false, and religion’s cultural influences have distorting effects on how believers assess the evidence. The religiosity of the background culture explains the persistence of religious belief.
But once this kind of explanation of the persistence of religious belief is adopted, the charge of accommodationism, as it is typically wielded, is rendered facile. One can wholeheartedly and unequivocally deny the truth of the religious believer’s commitments without thereby impugning his integrity as a cognitive agent. The claim that religious believers deserve respect, therefore, need not entail any degree of positive regard for religious belief; the call for respect rather is a call to respect religious believers. And respecting religious believers qua believers involves adopting the working presumption that, though they are mistaken and perhaps obviously so, they are nonetheless not stupid; instead, they are mistaken about what evidence there is and what weight it has.
The proper response to this state of affairs is to address religious believers as fellow rational agents, to elicit from them their best arguments and their conception of what evidence there is, and to make a case for one’s own view. Correspondingly, it is foolish to begin with an effort to discredit the intellects of religious believers or to diagnose them as benighted, foolish, and intellectually cowardly. To be sure, there are plenty of religious believers who fit these descriptions. But there are plenty of atheists who do too. It is here we part ways with the New Atheists, as what makes one a fool is not what one believes, but rather how one’s beliefs are related to one’s evidence.
A further point follows. Part of what fuels the charge of accommodationism is the view that religious believers should be treated with contempt. The view has it that those who are contemptible are not worthy of respect. This seems true as far as it goes. But notice that to hold a person in contempt is to ascribe to him a capacity for responsibility. Accordingly, we do not hold the mentally deranged in contempt for their delusional beliefs; rather, we see their beliefs as symptoms of their illness. To see religious believers as proper objects of contempt, then, is to see them as people who should know better than to believe as they do. It is hence to see them as wrong but, importantly, not stupid. Thus it is a confusion to regard religious believers as both contemptible and cognitively beyond-the-pale. Atheists must decide whether to proceed as if religious belief is a kind of mental disability or rather an error. If we choose the former, it is a mistake to see religious belief as a failure of intellectual responsibility; if we choose the latter, we must engage with religious believers in a way which manifests a proper regard for their cognitive capacities, and accordingly seeks to hear and address their best reasons and arguments. In Reasonable Atheism, we take this latter path. If this amounts to accommodationism, then atheists should be accommodationists. We, at least, will gladly accept the term.
Monday, January 31, 2011
Pakistan predictions 2009 and now...
In 2009, I took a road trip across the Northeastern United States and asked friends at every stop for their opinion on what was likely to happen next in Pakistan. The predictions I heard were gathered into the following article, which was published on Wichaar.com in April 2009. I am reproducing that article below, followed by a few words about how things look to me now, two years later.
I recently went on a road trip across the North-Eastern United States and at every stop, the Pakistanis I met were talking about the situation in Pakistan. As is usually the case, everyone seemed to have their own pet theory, but for a change ALL theories shared at least two characteristics: they were all pessimistic in the short term and none of them believed the “official version” of events. Since there seems to be no consensus about the matter, a friend suggested that I should summarize the main theories I heard and circulate that document, asking for comments. I hope your comments will clarify things even if this document does not. So here, in no particular order, are the theories.
1. Things fall apart: This theory holds that all the various chickens have finally come home to roost. The elite has robbed the country blind and provided neither governance nor sustenance and now the revolution is upon us: the jihadis have a plan and the will to enforce it and the government has neither. The jihadis have already captured FATA and most of Malakand (a good 20% of NWFP) and are inevitably going to march onwards to Punjab and Sindh. The army is incapable of fighting these people (and parts of it are actively in cahoots with the jihadis) and no other armed force can match these people. The public has been mentally prepared for Islamic rule by 62 years of Pakistani education and those who do resist will be labeled heretics and apostates and ruthlessly killed. The majority will go along in the interest of peace and security. America will throw more good money after bad, but in the end the Viceroy and her staff will be climbing rope ladders onto helicopters and those members of the elite who are not smart enough to get out in time will be hanging from the end of the ladder as the last chopper pulls away from the embassy. Those left behind will brush up their kalimas and shorten their shalwars and life will go on. The Taliban will run the country and all institutions will be cleansed and remodeled in their image.
2. Jihadi Army: The army is the army of Pakistan. Pakistan is an Islamic state. They know what to do. They will collect what they can from the Americans because we need some infidel technologies that we don’t have in our own hands yet, but one glorious day, we will purge the apostate officers and switch to full jihadi colors. The country will be ruled with an iron hand by some jihadi general, not by some mullah from FATA. All corrupt people will be shot. Many non-corrupt people will also be shot. Allah’s law will prevail in Allah’s land. And then we will deal with Afghanistan (large scale massacre of all apostates to be held in the stadium), India, Iran and the rest of the world in that order.
3. Controlled burn: This theory holds that there is no chance of any collapse or jihadi takeover. What we are seeing are the advanced stages of a Jedi negotiation (or maybe a Sith negotiation would be a better term). The army wants more money and this is a controlled burn. They let the Taliban shoot up some schools and courts (all bloody useless civilian institutions anyway). Panic spreads across the land. People like John Kerry come to Islamabad and almost shit in their pants at the thought of Taliban “60 miles away from the capital”. Just as Zia played the drunken Charlie Wilson and the whole Reagan team for fools, the current high command is playing on.
4. The coming war on the Indian border: The border of India is on the Indus, not on the Radcliffe line. The Taliban will take over the mountains, but they will be resisted at the edge of the plains. The Americans will train the army to fight this new war. There will be setbacks and loads of violence, but in the end the center will hold. America will fight a new kind of drone war in the mountains and in time, the beards will be forced to negotiate. Along the way, many wedding parties will also get bombed but you cannot make an omelet without breaking eggs. The Indian part of Pakistan will make peace with India and India will help us fight the Northern invaders. The army high command is NOT jihadi. But they lack capacity and need time to build it up. They need to be supported and strengthened. America should pay them more money and pay more heed to their tactical advice.
5. Buffer state: a variant of the above theory holds that Punjab is the historic buffer of India. All sorts of invaders come in, fight over the Punjab and capture it. Then the peasants get to work. We might even convert to whatever barbaric ideology they have brought, but in time the peasants outbreed and outflank the invaders. In the end, the invaders become Indian and help us outbreed and outlast the next invading horde. We win by “assimilation and attrition”. I am not sure if this is an optimistic theory or a pessimistic one. In India, the two are practically the same anyway.
6. No one seemed to think that peace would break out soon. No one thought the “peace deal” is the end of the matter. Jihadi sympathizers regard it as a way for the Taliban to consolidate in Swat before the inevitable advance into new territory. Anti-jihadis regard it as a necessary break to buy time while the new FC is trained, or as a surrender, or as an army plot, but NOT as a peace deal that leads to any kind of stable peace by some direct route.
My personal opinion in 2009: The state is stronger than many people think. But it is grossly incompetent and the elite itself is split and infiltrated by jihadi sympathizers. It won’t collapse soon, but all problems will continue to get worse for the foreseeable future. A big drone offensive is coming and there will be much secondary fighting in Pakistan. But there is at least a 50-50 chance that Jihadistan will NOT be able to expand into the Punjab and Sindh (though much terrorism will surely happen). The army will be gradually purged of jihadis and will one day come around to being a serious anti-jihadi force, but it won’t be easy and it may not happen. If the army continues to have jihadi sympathies, then all bets are off and many horrendous scenarios are imaginable. The US embassy may know more than we do. On the other hand, their declassified documents make it clear that they are incredibly naïve and racist in their assumptions and tend to regard the people they have colonized as mildly retarded children; so there is a good chance they don’t know batshit about what is going on, but are able to present impressive looking PowerPoints about three cups of tea with Kiyani to their bosses back home.
So what would I change today? I think the general outline remains the same and my leftist friends remain convinced that the army has not changed its spots and is still maintaining its links with jihadists while playing a double game with the US. But I am going to go out on a limb and say that I think the army is now serious about making a deal with India over Kashmir (both countries keep their current borders but allow free movement and trade across the existing line of control) and has put its jihadi dreams into very deep cold storage. But while their priorities may have changed, their propaganda narrative remains stuck in the same old anti-Indian, Jewish conspiracy mode. If anything, the usual “international Jewish-Hindu conspiracy” theory has become more entrenched. Whether the army seriously believes the old narrative is still useful, or whether this propaganda is now mainly used as a smokescreen to protect GHQ’s commanding position in national discourse while changing course below the radar is not clear to me.
Meanwhile, the domestic political picture remains confused and governance and corruption have gone from bad to worse. The PPP and the PMLN have cooperated more than most people imagined possible and the political class as a whole has done better than their terrible media reputation would suggest, but they have not been able to raise their performance to any significantly improved level. Inflation and poor economic performance have made the lives of the poor even more painful and elite corruption is as bad as ever. So while the deep state may not be on its previous suicidal Jihadist path, they risk becoming irrelevant if they do not improve governance and economic management fairly quickly. It is conceivable that if some new economic disaster hits, then the ruling elite may face a very serious revolt. In addition, blasphemy and other such distractions remain potent tools in the hands of the religious right and it is possible that the army may lose control of the Islamists and the Islamist insurgency could spread deeper into Punjab. Still, if I had to make a guess one way or the other, I would say that the state will survive in more or less present format and while terrorism will continue, the existing system may still become reasonably stable. This is not saying much, but may be better than the alternatives.
Finally, I would add that this narrative is obviously politically incorrect and does not make too many allowances for liberal sensitivities. e.g. I do not write as if all evil is due to powerful White people and the innocent Brown folk will return to a state of nature once imperialism pulls out its oil-soaked fangs. That is not because I consider the imperialists to be necessarily good, but because I do not regard everyone else as lacking in agency. More on that next month, but if this narrative seems distant from the Imran Khan view of recent history, you can check out some of my reasons here.
Monday, January 10, 2011
Some thoughts about Poe's Law
The website LandoverBaptist.com has posted headlines that run from the goofy (“What Can Christians Do to Help Increase Global Warming?” and “New Evidence Suggests Noah’s Sons Rode Flying Dinosaurs”) to the chilling (“Satan Calls Another Pope to Hell” and “Trade Us Your Voter’s Registration Card for Free Fried Chicken from Popeye’s”). The site is designed to parody the racism, scientific illiteracy, and religious bigotry widely attributed to American fundamentalist and evangelical Christians. But, judging from the site’s posted mail, it seems that the general public does not recognize that the site is parodic. Most email responses begin by chastising the authors for not knowing the true meaning of Christianity, for having misinterpreted some quoted Bible passage, or for being hypocrites with respect to some point of contention. Very little of the posted mail actually confronts the owners and writers at Landover with what they are doing: presenting a grotesque, overblown, and bombastic parody of Christian religious life. LandoverBaptist.com’s mail bag has entries from its first days, and there has been a consistent failure on the part of the writing public to recognize that the site is a parody. What gives? Poe’s Law (Wiki).
Nathan Poe is widely credited for formulating the eponymous law. He first noted a particular difficulty in an entry on a Christianforums.com chat page regarding creationism:
Without a winking smiley or other blatant display of humor, it is utterly impossible to parody a Creationist in such a way that someone won’t mistake (it) for the genuine article.
This is to say that unless there are unmistakable and explicit cues that one is being ironic or sarcastic, many parodies are not only likely to be interpreted as earnest contributions, they will, in fact, be indistinguishable in content from sincere expressions of the parodied view. The law can be fleshed out in a few ways, but the following thought captures the core of Poe’s Law: For any webpage which parodies religious extremity, if the webpage has no overt cues of its status as parodic, no appeal to the page’s content can distinguish it from that of a webpage with sincerely expressed religiously extreme views. That a webpage is filled with Biblically-inspired scientific illiteracy, racism, or sexism doesn’t mean that the poster sincerely believes such things; the page might be a parody. Yet the problem is that this works in reverse as well. Blatant errors and blinding ignorance may mean that the poster is truly an immoral idiot. For every crazy thing on LandoverBaptist.com, there’s something just as (or maybe more) crazy on Godhatesfags.com. Looking just at the content, one cannot tell the difference between them.
Now, our objective here is not that of determining whether Poe’s Law is true. Our interest rather is in the effects of accepting it as true. What happens to interpersonal argument when disputants generally accept Poe’s Law? What are the effects of believing that a parodic expression of an extreme view is indistinguishable from a sincere expression of an extreme view?
To get a handle on the issue, consider first the straw man fallacy.
The straw man fallacy consists in distorting one’s opponent’s views and arguments so that they are feeble and indefensible, and then attacking the distorted versions of the views. When employing the straw man, one constructs a new, dumber opponent and engages with that flimsy construction instead of arguing with the real opponent. Importantly, straw man arguments not only do our dialectical opponents a disservice, they also disserve the audiences to whom they are addressed. Unless they are as knowledgeable as the speaker who is deploying the straw man argument, audiences rely on the speaker to accurately represent the dialectical situation that obtains between those who accept the speaker’s view and those who disagree. The whole point of the straw man is to distort the perception of the dialectical situation. Straw-manning, then, badly educates listeners on the difficulties of the issue and the state of deliberations on it.
Parody sites may seem vicious for roughly the same reason. They not only fail to engage the views and arguments actually endorsed by proponents of the other side, but they saturate the intellectual space surrounding an issue with imaginary buffoons. Lampooning one’s dialectical opponents with grotesque portrayals of their alleged unrepentant intellectual and moral vice may be deeply satisfying, but in doing so, one runs the risk of distorting one’s view of what one’s opponents actually believe. One comes to see oneself locked in battle with opponents who are beyond reason and unredeemable. This destroys the chances of rationally resolving real disagreements; in fact, it encourages the view that attempts at resolution by means of cooperative communication are futile.
Hence there is a term in popular parlance for the action of dismissing a purported interlocutor as a mere parody. When one “calls Poe” in a discussion, one claims that one or more disputants in an argument are simply playing at espousing the views they assert. Calling Poe is a way of bringing argument to a halt by asserting that there wasn’t an argument in the first place. Further, it is a way of canceling whatever points one’s interlocutor may have scored in the discussion; when an interlocutor is Poed, his or her views can no longer be taken seriously. Hence Poe’s Law often functions as a strategic maneuver in argument; it is a tool which enables one to simply dismiss one’s opponents.
There is reason, then, for thinking that accepting Poe’s Law has deleterious effects on argumentation. However, if Poe’s Law is true, parody sites do not distort the current state of argument, but rather reflect how things stand with respect to extremists. To clarify: As the parodies are indistinguishable in content from the real things, the parodies simply cannot be misrepresentations; rather, they are accurate portrayals of how dire the intellectual climate has become. Poe’s Law says that no matter how crazy or irrational an image the religious fundamentalist one constructs, there will always be an equally crazy and irrational but sincere defender of fundamentalism that one could have simply found in a Google search. In essence, LandoverBaptist.com does not really straw man the religious extremists with its parodies. In fact, it doesn’t directly refer to them at all; rather, it references them under pseudonym.
Hence a surprising result: If Poe’s Law is correct, straw manning is impossible. Every possible straw man has a real man correlate. And this realization encourages the tendency to regard the most extreme versions of the views one opposes as the standard or paradigmatic versions. That is, if one comes to regard, say, Christian fundamentalism as a family of views that is broad enough to embrace even the most intemperate, extreme, and ignorant versions possible, one will tend to see the most extreme versions of fundamentalism as typical of the view as such. Accordingly, one comes to see those who affirm more temperate fundamentalisms as insincere or disingenuous; the measured fundamentalisms are interpreted as strategically watered-down covers for the more extreme varieties. So why bother arguing with even moderate fundamentalists?
There are several unfortunate consequences of this attitude. Perhaps the most pressing is connected with the phenomenon, popularized by the philosopher and legal scholar Cass Sunstein, known as group polarization (Sunstein 2002). When groups hold each other in cognitive contempt and, as a consequence, refuse to cooperatively communicate, their views become more extreme. That is, when one talks with only like-minded people, one’s views actually shift to being more extreme versions of the originals. When there are actual impediments to discussion between groups, or even to reflection on intelligent exchange, the discussions within the competing groups tend to be enclaved and disconnected from those on their opponents’ side. In drawing group members’ beliefs towards the extremes, group polarization also expands the range of views that they regard as utterly ridiculous and unbelievable. Opposing views begin to seem not only mistaken but ludicrous and unintelligible. Accordingly, as groups polarize, they not only become less interested in engagement with those who oppose them, they become less able to do so.
Perhaps in the end it is pointless to try to argue with fools who hold extreme views. But notice that the question of who is a fool is different from the question of which views are extreme. No one is a fool simply in virtue of what he believes. Even those who believe absurd things may do so because they have an especially corrupt sense of what the evidence that bears on their belief indicates. The difference between the fool and the sage is a difference between the ways in which one’s beliefs reflect the evidence one has. The fool believes despite his evidence. Poe’s Law encourages us to draw firm conclusions about who is and is not a fool based on the content of the beliefs they espouse. This is, in the end, a dangerous cognitive policy. Moreover, given the group polarization phenomenon, it is a policy whose danger only increases.
Monday, January 03, 2011
One Thousand Year Writer's Block
William Burroughs famously remarked that Islam had hit a one thousand year writer’s block. Is this assessment justified? First things first: obviously we are not talking about all writing or all creative work. Thousands of talented writers have churned out countless works of literature, from the poems of Hafiz and Ghalib to the novels of Naguib Mahfooz and the fairy tales of innumerable anonymous (and amazing) talents. There is also no shortage of talent in other creative fields, e.g. I can just say “Nusrat Fateh Ali Khan” and be done with this discussion. But what about the sciences of religion and political thought, or the views of biology, history and human society to which these are connected? Is there a writer’s block in these dimensions?
The correct answer would be “it depends” or “compared to what”? After all, it’s not so much that everyone else in Eurasia stopped thinking 500 years ago, but rather that an explosion of knowledge occurred in Europe that rapidly outstripped other centers of civilization in Eurasia. And after a period of relative decline, the rest of the world is catching up. Culture matters, but cultures also evolve. For better and for worse, cultures in Japan and Taiwan are now full participants in the global knowledge exchange, both as consumers and as producers. Iran has been trying to move beyond previous (and obviously flawed) models of personal autocracy and hereditary rule interspersed with violent and devastating civil wars, for over a hundred years, and the Islamic republic, for all its problems, is not a brain-dead culture.
But what of the Sunni world? There is no one uniform pattern, but states like Turkey and Indonesia that are doing “better” (apologies for making a judgment, but how else to judge?) are running imported systems that may be fairly functional, but that the ruling elite cannot seem to defend on “Islamic” grounds. This means that they are forever exposed to ideological assault from the Islamists. Much of the population appears to prefer the “imported” arrangement to any “authentic” Islamist alternative, but neither the elite nor the wider population seems to have arguments that can systematically justify the acceptance and import of new ideas, particularly ideas labeled “un-Islamic”. And what prevents such a discourse from developing? The answer lies hidden in two concepts that the Islamicate world has not been able to shed: the twin notions of blasphemy and apostasy and a subsidiary idea that these laws can be enforced by free-lance enforcers where the state fails to take action. This leads to a limited and hypocritical public discourse in which everyone (in public) pays lip-service to a mythology of Islamic perfection, completeness and ahistorical permanence, while struggling to carry on with life above and beyond those formulaic pieties. Ideologically, this concedes the public space to the Islamists and it is only their intellectual and practical bankruptcy that prevents them from taking full advantage of this tremendous ideological monopoly.
So the problem (and there is a problem) is not a thousand-year writer’s block. And it’s certainly not a problem intrinsic to Islam as such (because there is no such thing as “Islam as such”). We do not have to buy into the notion that Sunni Islam is barbaric in essence. Salafist and Wahhabist interpretations (which are, let us assume, ipso facto barbaric) are not a return to some original essence; they are a new invention, and not a particularly good one. There is no essence and no pure past to which everything tends. Islam is a product of history, with roots in a past that goes far beyond 7th-century Arabia. The religion evolved and borrowed freely and eclectically to create a new and dynamic hybrid civilization. The Persian cultural renaissance civilized the Eastern half of the Arab empire (a renaissance that includes, incidentally, a stunning example of how much impact one person can have; that one person being the poet Ferdowsi), and Islamicate civilization in this region incorporated many literate and highly sophisticated elements. In India, the Islamic elite was Persianized (and, like all Muslim elites, partly Hellenized) while folk Islam was eclectic, assimilative and full of life, producing such poets and creative geniuses as Shah Hussein, Shah Latif and Waris Shah. Khwaja Farid, a completely orthodox Punjabi poet of the 19th century, had no problem with writing that Adam must have been Hindu, since Hinduism is the oldest religion of man. The poet Jaun Elia (a Marxist/anarchist not particularly friendly to organized religion) wrote that when he was growing up 70 years ago, the Mullahs and Ulama who taught him and his brothers would be unrecognizable to Pakistanis of today. They were cultured and sophisticated people, with fine manners and elaborate courtesies, whose Islam was self-confident and literate and was light years away from the ignorant and barbarous world of the Ulama, Jihadists and Generals of today.
But while there was no thousand-year writer’s block, something else did happen: the Americans found oil in Saudi Arabia and simultaneously discovered that hardline Islam was a bulwark against communism. Those two discoveries joined with wider worldwide trends and existing strains of fascist Islam to inject life into the most vicious and most barbaric strains of Islam, and have played a very big role in bringing us to today.
Again, it must be emphasized that it was not always thus. Tabari’s history is in no way limited by his status as a major exegete of orthodox Islam (albeit one whose school did not survive the competition). Muslim scholars struggled with every question and took almost every conceivable philosophic position in the time of the Abbasids and their successors. Though the so-called “golden age” was not always that golden, it was a time of great creative effort and intellectual exchange, in no way comparable to the brain-dead theology of modern-day Islamic militants. And this willingness to read and learn from many sources did not die a thousand years ago. It was dealt a body blow relatively recently, with the union of oil wealth, Wahhabi ideology and cold-war requirements. And this is not to deny the simpler political explanations of many current conflicts, but over and above the miasma generated by the usual problems of the world (land, occupation, injustice, etc.), there is an additional layer of insanity, and it has the potential to co-opt and devour many other struggles and arguments and push them over to barbarism.
Individually, Shia Muslims outside of Iran and South Lebanon have an advantage over us Sunnis. They do not have to deal with Shia theology or mythology as a concrete alternative to existing political arrangements, so one can be liberal, agnostic, fascist or anything else, and still maintain a reasonably healthy relationship with the annual Shia passion play and its associated historical myths. But for Sunnis in the Sunni-majority countries, the options appear to have shrunk. It was not always thus. Forty years ago, most leftists in Pakistan were protesting the Vietnam war and arguing whether Zulfiqar Ali Bhutto was a leftist or a feudal. No one regarded the Islamic parties as a serious alternative to anything, and voters agreed, wiping out most of those parties in elections. Today, we may still wish to ignore the jihadists and their insane projects (exemplified by this refusal by Hafiz Saeed to show his face on a TV interview because appearing on video is against Islam), but they won’t let us ignore them. Slowly but steadily the Islamists are forcing Sunnis to face the unpleasant fact that if they continue to ignore the Islamists, their own freedom to deal with issues without invoking an imaginary history of Islam will be severely restricted.
For completeness, it must be added that there are other options. For example, matters like the blasphemy law can be subsumed into an academic Western debate about postmodernism and cultural relativism if you are a Westernized postmodern thinker, safely ensconced in New York and thoroughly immersed in the categories and arguments of the Western academy. But this route is not available to most of the citizens of the Muslim world (and would be stunningly incongruous there outside of a few Universities, which, as islands of Western influence, will value such scholars). For everyone else, a rather dramatic opening up of the debate is urgently needed. What is needed is not just a new look at history, but an outbreak of cultural creativity, using literature, painting, music and movies to explore and re-create the entire history of Islam. Imagine how many movies can be made about the first and second civil wars alone! When I first read Tabari’s account of the last days of Uthman, where the aged caliph, abandoned by most of his comrades and besieged by rebellious soldiers, goes to the mosque and defends himself in an eloquent Friday sermon, I imagined many different ways such a scene could be played and many talented actors who could play it. For example, we could have Amitabh Bachan as the caliph, making his speech and softening hearts, but then a rebel played by Nana Patekar rises to condemn him and the mood of the crowd changes, ending with the aged caliph being pelted with stones. There must be ten movies in that scene alone, and Tabari has hundreds of other dramatic scenes to choose from. And imagine how many talented novelists would love to have a crack at these characters and their cosmic struggles. Unfortunately, instead of getting better, in some places, things have gone from bad to worse.
There is no thousand-year writer’s block, but there is indeed a very large damper that has been thrown relatively recently on Sunni Islam, and it’s time we returned to the freer time of Attar, Rumi, Hafez and Khusro, even if we cannot yet make all the movies we want.
Monday, November 22, 2010
Floods and Plagues: New Lessons From the Old Testament
The late spring/early summer of 2010 was much wetter than normal in West Central Illinois. The sewer backed up into my basement while I was out of town. I returned home to an unmistakable smell and dismissed it as a "freak event" while I cleaned it up. A couple weeks later, I was home during a particularly Biblical downpour. The sewer began to back up again and, despite my best efforts to staunch the flow with a plunger, sewage poured out of my basement toilet with a ferocity that was reminiscent of the elevator scene in Stanley Kubrick's "The Shining", except in sepia tone. When I called the city to remind them that I paid for sewage to be taken away from my house, not delivered to it, I was told that May and June of 2010 were unusually wet and that the city's old-school combined system could not handle it (newer systems have separate pipes for sewage and storm run-off). The voice on the phone told me that we had received 24" of rain in May and June. I checked the weather for 2010: in May we received 11.90" and in June 11.78". I checked the climate records: the long-term average for May was 4.27", and the previous record for the month, 11.29", was recorded in 1908. The long-term average for June was 4.26", and the previous monthly record of 13.97" had been set in 1902. In other words, in two consecutive months we had nearly equaled or exceeded all-time records, which were set over a century ago! This gave me something to think about as I squeegeed, shop-vacced, and Cloroxed my basement for the second time in as many weeks: How does a culture or civilization respond when all of its assumptions about the world (and the resulting necessary embodiment in infrastructure) no longer apply?
The instant flood and prospect of illness presented by the excrement got me thinking about two classic tales in the Old Testament: the Noahic Flood from Genesis and the Ten Plagues of Egypt from Exodus. As a biologist, I get some grief for being a scientist, on the grounds that science and religion are incompatible. On the one hand, science is not known for supporting supernatural explanations of any kind. On the other hand, naturalistic accounts could explain some phenomena that appeared to be supernatural to people of the Old Testament.
I was brought up by a completely lapsed Southern Baptist, thoroughly agnostic father and Bahá'í mother (who was herself the product of a non-practicing Jewish father and non-practicing Catholic mother). Not surprisingly, I decided at a pretty young age that everything in the Abrahamic tradition could be read metaphorically rather than strictly literally, so I was amazed when I began to realize there was a cottage industry of scientists who tried to explain things in the Bible using modern methods and methodologies. If for no other reason than that I could tell people that science supported some of the things in the Bible (and that therefore they were not completely opposed to each other), I began to save some articles and make some notes.
In the late 1990s a pair of geologists published a book that explained the Noahic flood as the flooding of land around the Black Sea, as the Mediterranean rose from melting glacial ice sheets and spilled over the Bosporus, and offered some compelling evidence to support their ideas. At about the same time, a pair of epidemiologists (Marr and Malloy, 1996) arrived at a plausible epidemiological explanation of the Ten Plagues of Egypt. I would like to explore both of these hypotheses a bit and put together my own synthesis.
As the Pleistocene ended about 12,000 years ago, the great ice sheets that covered much of the northern continents retreated and their run-off made the ocean rise hundreds of feet. That this happened globally probably explains why flood myths tend to be universal (Wilson, 2001). Some of these past floods were truly epic. Nearly 12,000 years ago, much of the Columbia River Gorge may have been carved out in about a week, when a glacial ice dam failed, allowing the 2,000-foot-deep Glacial Lake Missoula to empty at a flow rate of 9.46 cubic miles per hour, which is about 50 to 60 times the flow of the Amazon River (Gould 1980, Glacial Lake Missoula)! The scale of this event was so far beyond the pale that it was initially dismissed as being incompatible with the Uniformitarianism and Gradualism that are the bedrock (pun intended!) assumptions of modern Geology.
The Noahic flood may be more familiar to many people. Two geologists (Ryan and Pitman, 1998) argued that it resulted from the rising waters of the Mediterranean Sea overflowing the Bosporus and filling the freshwater Black Sea with saltwater. By some estimates, a day's flow over the giant falls at the Bosporus would have equalled up to a year's flow over Niagara! As the water rose at about a foot a day, the flooding of the low-lying area around the Black Sea led to a diaspora that spread agriculture, along with tales of a great flood, all over Eurasia.
We tend to present science as a monolithic enterprise in which only the scientific method is practiced. Double-blind experiments with treatments, controls, and replication are the "Gold Standard," and anything else is regarded as lesser or even suspect. Unfortunately, the real world is rarely so accommodating, and it is just not ethical to infect half of a population with something nasty while the other half gets a placebo. Epidemiologists have assembled their own set of tools for practicing science within their set of constraints.
Focusing on the sequence and timing of events, and the specificity of symptoms and their causes, Marr and Malloy (1996) present the following argument: The Egyptians at that time were a river and agricultural people. A freshwater "red tide" caused by the aptly named dinoflagellate Pfiesteria piscimorte (Plague 1: Blood) killed fish (a major source of dietary protein) and forced frogs onto the land (Plague 2: Frogs), where they died, and thus were no longer around to control insect populations. Instead, their carcasses provided plenty of food for insect larvae that transformed into the adults of Plague 3: Lice, and Plague 4: Flies. Marr and Malloy believe these insects were Culicoides biting midges ("no-see-ums") and Stomoxys stable flies, both efficient vectors for orbivirus infections that resulted in the death of livestock (Plague 5), and bacterial infections that caused Boils (Plague 6), both of which further reduced dietary protein and left fewer animals and people to practice agriculture with. Hail (Plague 7) killed people and animals and lodged grain, further reducing food stocks. Solitary locusts, responding to crowding and stress, morphed into migratory swarms and devoured much of what grain remained (Plague 8: Locusts). Sandstorms, likely khamsin (hot Saharan winds) or sobaa (severe, multi-day storms), caused the Darkness of Plague 9, and covered the wet grain with warm dust and locust feces, which promoted mold growth and mycotoxin production that led to the Death of the Firstborn: Plague 10 (or, using a different translation, the death of first fruits or shoots). Whether this final plague involved killing children or new crops is not so important as the idea that this powerful civilization suddenly had no future. The assumptions behind their relationships to the water and land no longer held.
A common theme that ties together The Great Flood and The Ten Plagues is Global Climate Change. As the earth began warming near the end of the Pleistocene, ice sheets melted and the sea level rose, inundating coastlines around the world and some inland areas like those around the Black Sea. Another major prediction of global warming is that extreme weather will become more severe and more frequent. A look at global temperature measurements shows a slight bump between 2000 BCE and 1000 BCE. The Ten Plagues are thought by many scholars to have occurred between 1500 and 1200 BCE. Warming could have triggered the Red Tide that then precipitated the next five plagues. Warming could also have increased the frequency of severe local weather like hailstorms and sandstorms (Plagues 7 & 9), which initiated the plagues of Locusts and the Death of the Firstborn. Starvation and disease amplify each other's effects, with the result that 1 + 1 = 5.
The end result of the Ten Plagues was that the Israelite slaves were freed by the pharaoh and subsequently fled Egypt. Slavery is not something that we tend to associate with our lives today, but in 21st-century North America we each rely upon 100 to 200 "energy slaves" in the form of fossil fuels for our daily activities. Like it or not, whether we voted for Palin or Obama, we are all pharaohs or plantation owners in that we rely on energy that is not from our own bodies for heating, cooking, manufacturing, and transport. Certainly, depending on black gold is preferable to exploiting black bodies, but is it in our best interests to be so inextricably entwined with fossil fuels? Peak oil and global warming suggest not.
Is oil our slave or are we its slave? That America has spent trillions of dollars over the last decades supporting a military that ensures safe passage of oil through the Persian Gulf suggests the latter. Just as the Egyptians depended on the flow of the Nile to water crops and slake the thirst of animals and human laborers, we depend on an ever increasing river of oil arriving at our shores from all over the planet to supply energy and chemical feedstocks for our civilization.
Of course, the real threat may be in the carbon dioxide that is released by burning fossil fuels. Just as Abraham Lincoln and Stephen A. Douglas debated the future of human slavery 152 years ago, a few blocks from where I am writing this, we need to seriously address "energy slavery" and its consequences today. Unfortunately, I don't yet see either the political will or insight among any of our leaders.
A new exhibit about human origins at the Smithsonian Institution in Washington, D.C. declares "Humans Evolved in Response to a Changing World," and seems by extension to imply that "we've done it before, we'll do it again," while never discussing the role fossil fuels play in our current situation, or the role mass mortality plays in making natural selection work. Responding to climate change may be one of the factors that drove human evolution, but our domination of the planet has arisen during a period of relative climatic stability that we are in the process of pushing ourselves out of. Even if the average climate stays the same, the extremes will become even more extreme.
Not surprisingly, David H. Koch, billionaire oil tycoon and climate change denier, underwrote the exhibit just as he underwrote the Mercatus Center at George Mason University, The Cato Institute, Americans for Prosperity, and Tea Partiers, among others (Mayer, 2010). It's his money and he can do what he wants with it, but it seems to me that one individual having so much influence undermines the concept of one man, one vote that underpins our democracy. One argument made by deniers is similar to that made by tobacco companies: We can't do a proper experiment, therefore we can never really be sure about the causal links between X and Y (substitute tobacco and lung cancer, or fossil fuel consumption and global climate change). Against that kind of money and argument, all I can do is point to the geological and historical records, and the epidemiology of Marr and Malloy, which suggest that we have already participated in some global warming experiments with severe results. The great flood chronicles the global rise in sea level that accompanied the end of the last Ice Age, from which refugees fled en masse. Exodus may in fact be describing the first epidemics and epizootics that would be expected when increasing population size mixes with a bit of climatic warming.
I am using science to attempt to confirm and explicate a literal reading of parts of The Old Testament, but one that implicates climate change as a causative agent for floods and plagues. Followers of the Abrahamic tradition could agree with the details of a literal reading but conclude that the causative agent was an angry God for the Great Flood, and one who later came down on the side of the enslaved Israelites by ensuring their emancipation and safe passage to freedom. Recognizing God's omnipotence affirms our smallness. With nearly seven billion people, some of whom have huge ecological footprints, that smallness is questionable today. Positing an all-powerful God may also have the effect of relieving us of any individual or collective responsibility for our actions. Many of the same people who believe in an all-powerful God also deny climate change, yet over the last century and a half, we have in fact achieved God-like power with our ability to change the earth's climate through our activities.
With great power comes great responsibility. Unfortunately, we have not fully owned-up to this responsibility. At the time of Exodus, an estimated 2.5 million people lived along the Nile and environs. Today a large fraction of the nearly 7 billion people live on or near coastlines or rivers. The rest will also be increasingly susceptible to floods and plagues that will be realized in a warming world. Recent events in Haiti and Pakistan may well be a preview of coming attractions. The effects on other species will likely be catastrophic as well. Minimizing the causes and effects of these changes will likely be the central challenge to humanity in the 21st Century.
As for my basement, the city informed me that a check valve installed on the line between the house and the sewer main would prevent future sewer back-ups. It will not be cheap, but it may be the first serious external cost of global change that I have to pay for directly. I hope it will be the last, but don't think it will be.
Stephen Jay Gould. 1980. The Great Scablands Debate. In The Panda's Thumb: More Essays in Natural History. Norton, New York.
John S. Marr and Curtis D. Malloy. 1996. An Epidemiologic Analysis of the Ten Plagues of Egypt. Caduceus 12(1): 7–24.
Jane Mayer. 2010. Covert Operations: The Billionaires Who Are Waging a War Against Obama. The New Yorker. August 30.
William Ryan and Walter Pitman. 1998. Noah’s Flood: The New Scientific Ideas About the Event that Changed History. Simon and Schuster, New York.
Ian Wilson. 2001. Before the Flood: The Biblical Flood as a Real Event and How It Changed the Course of Civilization. St. Martin's Press, New York.
Monday, November 15, 2010
Waging War on Christmas, to Save Thanksgiving
Weeks before Halloween, Christmas decorations started appearing around town. At the local department stores, mannequins of witches and zombies were crowded by Santa’s elves. The Christmas season has, it seems, overcome Halloween. Halloween is a charming holiday, so this is lamentable to some degree. But given the relatively stable interest children have in candy and play-acting, Halloween is not in danger of extinction. The constantly-expanding Christmas season does not threaten to undermine its spirit.
Sadly, the same cannot be said for Thanksgiving. When pitted against the aggressive encroachment of Christmas and the corresponding shopping season, Thanksgiving, our most humane and decent holiday, doesn’t stand a chance.
Unlike Halloween, Thanksgiving is a holiday of human significance. Though it is occasioned by the mythology of Pilgrims and Wampanoag Indians, the point of Thanksgiving is not that of rehearsing or commemorating that original event. In this respect, Thanksgiving differs crucially from other holidays. The Thanksgiving gathering is not a means to some other end, such as memorializing the signing of a document (July 4th), observing an ancient liberation (Passover), celebrating the birth of a god (Christmas), or honoring the bravery and sacrifice of soldiers in war (Veterans Day). The point of Thanksgiving is rather to gather with loved ones, to reaffirm social bonds, to enjoy company, and to appreciate the goods one has. To be sure, the Thanksgiving celebration is focused on a meal, typically involving large portions of turkey and cranberries. Still, the details of the meal are ultimately incidental. The aim of the Thanksgiving gathering is not to eat, but to be a gathering. The coming of people together is the point-- and the whole point-- of Thanksgiving.
Consequently, Thanksgiving is the least commercialized major holiday. There are no special items to purchase, no material obligations, and no gift-exchanging. Since the point is to come together with loved ones, there is no need for commercial items to mediate the relations between people. We gather on Thanksgiving in order to be in each other’s company.
Christmas is different. It is suffused with its two myths, one of the North Pole and the other of the North Star. Neither myth is particularly inspiring. Consider: Santa is a man of miraculous ability. He is morally omniscient, he produces a copious amount of toys, and he distributes them across the globe with astounding speed and accuracy. But, alas, Santa is not a good man. He delivers presents to children when his powers could be used, instead, for redressing injustice and suffering. Why doesn’t he deliver desperately needed supplies-- food, medicine, clean water, comfort-- to those most needy? He knows how to travel to all of the world’s households in a single night. Why won’t Santa share this technology? He claims to be concerned with rewarding those who are good and punishing those who are bad, and yet he spies on children, even as they sleep. How contemptible.
The North Star myth fares no better. Jesus’ birth occasions Herod’s slaughter of Bethlehem’s innocent firstborn boys under two years of age. Thanks to an angel’s warning, the Holy Family skips town. What of the other families who didn’t get the warning? Tough luck. So much for “love your neighbor.”
Jesus grows up to be a shoddy moral exemplar. He heals the blind, but offers no cure for blindness. He treats the sick, but he offers no preventative measures against sickness. He could have introduced the practice of hand-washing to human society, but didn’t bother. He gets angry at money changers in the temple, but it is for money-changing in the temple, not for dishonest business practices. All this while women are subjugated, men are enslaved, innocent people are starving, and children are abused. On top of this, if Mark is to be believed, Jesus promises that those who do not follow him will burn with “unquenchable fire.” Disagreeing with Jesus warrants unending torture. How utterly contemptible.
The Christmas myths are morally horrid. That’s not the worst of it, though. They are overwhelming, suffocating. The way in which Christmas is celebrated overpowers the genuine human contact the holiday might otherwise occasion. Presents are the focus of Christmas, and the days, weeks and, now, months leading up to Christmas are consumed with travails of procuring gifts. That is, Christmas is focused on want. People gather, but for the sake of exchanging gifts, providing material items to satisfy wants. Accordingly, we must make lists of the things we want others to buy for us. In fact, not to tell loved ones explicitly what one wants for Christmas is to place a heavy burden on them-- they must now try to figure out what to buy. To avoid the hassle, many elect simply to exchange gift certificates. In the end, we’re simply funding each other’s shopping; it’s all just money-changing.
Given its focus on acquisition, it should come as no surprise that the Christmas season is constantly, and aggressively, expanding. The Christmas shopping season now begins at 12:01 a.m. on the Friday following Thanksgiving. A long weekend which could be spent enjoying the company of family and friends is claimed for bustling and angry crowds, long lines in shopping malls, disputes over parking spaces, and unavoidable traffic jams. Within a few hours at most, one wants only to be alone, to get away from other people. The spirit of Thanksgiving is destroyed by Christmas.
There is an overabundance of opportunity throughout the year to hassle with strangers in shopping malls. We have plenty of opportunity in our lives to gain increased appreciation for the Sartrean dictum that “hell is other people.” And every day we are constantly bombarded with commercial reminders of the things that we want and of the ways in which what we have is not sufficient. Christmas heightens these phenomena, engendering discontent. Thanksgiving, by contrast, provides a weekend escape from all of this. It counsels us to sit back, relax, appreciate what we have, and spend time with the ones we love. On Thanksgiving, we appreciate what we have, and acknowledge the ways we are indebted to others. Our families and friends are imperfect, but they nonetheless are ours; they are unique, idiosyncratic, and irreplaceable. Unlike Christmas, which is fixated on the new and the disposable, Thanksgiving calls us to appreciate the durable and the familiar. In order to preserve this civilized oasis that is Thanksgiving, we must wage war on Christmas and all of its madness.
We recommend that the war should be waged on the following two fronts:
First, stay home on Thanksgiving weekend. Do not shop on “Black Friday.” Sleep in instead. Spend time with your family; relax, eat leftovers, have a drink, watch a movie, take a walk. The shopping malls will survive, the sales will continue, the shelves will remain stocked. You have plenty of time.
No doubt some will dispute that last claim. They will say that time is short, and that they need the long Thanksgiving weekend in order to make a dent in their Christmas shopping list. Hence the second front of our war on Christmas:
Second, rethink gift-giving. It is a simple and lamentable fact that the percentage of the Christmas gifts you receive that are useless to you is pretty high. Yes, it’s the thought that counts. But if it’s the thought that counts, then it is perfectly acceptable for people to exchange the kind of gift that cannot be purchased in a store, namely, the gift of time. Tell the adults on your Christmas list that this year you’re giving them the gift of free time; you are releasing them from the obligation to buy for you a gift, and you are encouraging them to spend in some other way the time they would otherwise spend at the mall purchasing a material gift for you. Offer to make time in January for a long and relaxed lunch date (and then make good on the offer). For friends with children, offer to babysit so that they may have time for themselves or for each other. For far-away friends and relatives, resolve to write letters; real letters, with details and thoughts just for them, with questions and occasions for beginning ongoing conversation.
These proposals are hardly militant, though widespread adoption of them would result in a decisive blow against the aggressiveness of Christmas, thereby saving Thanksgiving. Waging a war on Christmas in this way might also have an additional benefit, namely, that of saving Christmas from itself. A more humane, civilized, and sane version of Christmas, one more like Thanksgiving, might even be worth celebrating.
Monday, October 25, 2010
Statistics - Destroyer of Superstitious Pretension
In Philip Ball’s Critical Mass: How One Thing Leads to Another, he articulates something rather profound: statistics destroys superstition. The idea, once expressed, is simple, but that does not diminish its profundity. Incidents in small numbers sometimes become ‘miraculous’ only because they appear unique within a context that fuels such thinking. Ball’s own example is Uri Geller: in the 1970s, the self-proclaimed psychic stated he would stop the watches of several viewers. He, perhaps, twisted his face and furrowed his brow, and all over America watches stopped. America, no doubt, turned into an exclamation mark of incredulity. What takes the incident out of the sphere of the miraculous, however, is the consideration of statistics: with so many millions of people watching, what was the likelihood of at least some people’s watches stopping anyway? What about all those watches that did not stop?
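To put rough numbers on the Geller episode, here is a minimal sketch in Python. The figures are entirely hypothetical (say, a million watches among the viewers, and a one-in-ten-thousand chance that any given watch happens to stop during the broadcast on its own); the point is only that under any assumptions like these, some stopped watches are a near certainty:

```python
import math

def p_at_least_k_stops(n_watches, p_stop, k=1):
    """Probability that at least k of n independently running watches
    stop during the broadcast, if each has probability p_stop of
    stopping on its own (a simple binomial model)."""
    # P(at least k) = 1 - P(fewer than k), summed term by term
    p_fewer = sum(
        math.comb(n_watches, i) * p_stop**i * (1 - p_stop)**(n_watches - i)
        for i in range(k)
    )
    return 1 - p_fewer

# Hypothetical numbers: a million watches, 1-in-10,000 chance each
print(p_at_least_k_stops(1_000_000, 1e-4))  # ≈ 1.0: a stopped watch somewhere is near-certain
print(1_000_000 * 1e-4)                     # expected number of stops: 100.0
```

Under these made-up numbers, around a hundred watches would be expected to stop with no psychic intervention at all; what would be miraculous is if none did.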
Our psychological make-up seeks a chain in disparate events. Our mind is a bridge-builder across chasms of unrelated incidents; a credulity stone-hopper, crouching at each juncture awaiting the next link in a chain of causality. To paraphrase David Hume, we tend to see armies in the clouds, faces in trees, ghosts in shadows, and god in pizza-slices.
Many incidents that people refer to as miraculous, supernatural, and so on, become trivial when placed within their proper context. Consider the implications of this: Nicholas Leblanc, a French chemist, committed suicide in 1806; Ludwig Boltzmann, the physicist who explained the ‘arrow of time’ and gave us the Boltzmann Constant, committed suicide in 1906; his successor, Paul Ehrenfest, also committed suicide, in 1933; the American chemist Wallace Hume Carothers, credited with inventing Nylon, killed himself in 1937. This seems to ‘imply’ a strong link between suicide and science. Of course, as Ball indicates himself, we must look at the contexts: we must ask what the suicide rate among these different demographics was in general: among Americans, Europeans, males, and any other relevant demographic.
Ball shows that in the 19th- to 20th-century Austria of Boltzmann and Ehrenfest, suicides were quite common: ‘[Suicide in Austria] claimed the lives of three of [the philosopher] Wittgenstein’s brothers, [the composer] Gustav Mahler’s brother, and in 1889, the Crown Prince Rudolf of Austria.’ Seen in the ‘light of the relevant demographic statistics’, says Ball, Ludwig Boltzmann’s death does not indicate something special about suicide and science. Statistics made this incident banal by removing it from isolation; statistics returned these strange facts about the Austrian scientists and their suicides to a context that bridged the chasm where the miraculous or spectacular are birthed. Statistics helps us show that the echoes in this Chasm of Credulity harmonise with a larger context; it helps us weed out isolated incidents before they grow into the poisoned fruit of superstitious awe. Science seeks ways to bridge, if not narrow, this Chasm of Credulity.
Whether the incidents are psychic telephone calls or astrology charts, nearly all can be cut down to size, and thus emptied of their pretensions. Bloated anecdotes of precognitive abilities are drained when we think of their corollary: how many more times have you thought of someone and the telephone hasn’t rung? What are the chances of several hundred people’s watches stopping in a crowd of a few million? With the millions of combinations of baked dough, tree bark and mountain cliffs, perhaps it would be more remarkable if we never saw a face in these various phenomena. Statistics can aid us here, bringing us back down to earth, instead of drifting among the clouds of make-believe.
To make sense of this, consider the ‘birthday problem’: what are the chances that, in a small group of people, any two share a birthday? Let us assume a group of 30 people and a year of 365 days; for a match, two people must land on the same one of those 365 days. We first work out the total possible combinations of birthdays across the group: each of the 30 people can have any of 365 birthdays, giving 365 x 365 x 365 … thirty times over, that is 365^30, a massive number. This is the denominator. We can now calculate the number of combinations in which no two birthdays match, working our way backwards to figure out the probability.
Person #1 states his birthday. Person #2 then has 364 days to choose from, Person #3 has 363, and so on (remember, in this scenario no birthdays match). A useful image: Person #1 draws a red cross on a yearly calendar, Person #2 does the same in one of the remaining spaces, and so on, until all thirty people have done it. That is working your way down. So, in working out how many combinations contain no shared birthday, we multiply 365 x 364 x 363 … and so on, until we have done it 30 times. Thus, we write it as follows:
365!/(365 − N)!, where ‘N’ equals the number of participants and ‘!’ indicates a factorial, which works its way down as we indicated above (365 x 364 x … x 337 x 336). This is the numerator for our example.
Now, we simply combine our figures.
We are left with: [365!/335!]/365^30
According to the calculations, we get about 0.2937. Remember, this is the chance of no two people sharing a birthday. The chance that at least two people share a birthday is therefore the complement (1 − 0.2937): about 70%, in a group of 30.
Using careful calculation we reach a counter-intuitive conclusion: in a group of 30 people, the chance that two people share a birthday is roughly 70 percent. At face value, not many of us would put the chances anywhere near that high. There is thus nothing remarkable or special or spooky about two people sharing a birthday, considering that cold calculation shows the likelihood to be better than a coin-toss.
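The calculation above is easy to reproduce. As a sketch, the function below multiplies out (365 x 364 x …)/365^n step by step, which is the same quantity as [365!/335!]/365^30 when n = 30, and a small Monte Carlo simulation (my addition, not part of Ball’s example) cross-checks it:

```python
import random

def p_shared_birthday(n, days=365):
    """Exact probability that at least two of n people share a birthday.

    Computed as 1 minus the no-match probability, built up one person
    at a time: (days/days) * ((days-1)/days) * ((days-2)/days) * ...
    """
    p_no_match = 1.0
    for i in range(n):
        p_no_match *= (days - i) / days
    return 1 - p_no_match

def simulate(n, trials=200_000, days=365):
    """Monte Carlo cross-check: draw n random birthdays, count collisions."""
    hits = sum(
        len({random.randrange(days) for _ in range(n)}) < n
        for _ in range(trials)
    )
    return hits / trials

print(round(p_shared_birthday(30), 4))  # 0.7063
```

For 23 people the probability already crosses one half, which is the classic statement of the problem.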
How does this reflect on superstition? Using the horizon this little but wonderful example provides, we can eclipse all manner of abysmal superstitious exclamations: What were the chances that we would meet again? What was the likelihood that I should win the lottery, or win at Blackjack, after I wore my lucky jacket, prayed to my god, and so on? What were the chances of recovery from my cancer, after I went to a homeopath, a crystal-healer, a witch-doctor? All these are important questions, but they are asked with a rhetorical flourish meant to indicate that the chances ‘were slim’ or ‘highly unlikely’, and thus that it must be the magic man that heals, or your hidden psychic connection that prompted the meeting with your friend.
Consider the danger of ignoring proper calculations in medicine. People often tell us they go to a homeopath after going to a doctor: the doctor who is merely a puppet of ‘big pharma’, who treats ‘me like a machine’, and so on. The medicines ‘Western’ doctors supply ‘do not work’, so people turn to something more catchy, comforting and casual: the homeopath, the angel-healer, the witch-doctor. Strangely, one thing doctors can learn from these hucksters is the attention given to patients: the care, the pampering and the dignity conveyed. These all appear to play a part, though some, like the great Barbara Ehrenreich, destroyer of all positive thinking, remain sceptical of how much attitude really affects health. If for no other reason than to keep patients, doctors could learn from these practitioners (they may be ‘practitioners’, but they are not medical practitioners). However, in the most important engagements of medicine there is no time for pampering, or it is simply inappropriate: in an environment where, for example, the most important thing is to immunise a child.
Back to the patient. Firstly, what were the chances of your being cured of your ailment anyway? Secondly, are we talking about a cold or a cancer? Is it absolutely impossible for cancers to go into remission without medical intervention? Of course not; oncologists can relate many stories where this has happened suddenly. The irony, of course, is that people imagine medical treatment as a coin-toss: you flip a coin once, and the chance of getting tails is fifty-fifty; if you flip it again, the chance of getting tails remains fifty-fifty. The chances are ‘reset’ each time (this is different from asking how many tails I can get in a row, for example). But medical treatment does not ‘reset’ (compare Ian Hacking’s Inverse Gambler’s Fallacy). Medical effects carry over.
People forget that medicine takes time to have an effect. When the effect happens to coincide with your drinking glorified water or smelling pretty aromas, many will point to homeopathy or aromatherapy as the curing solution. But you might as well point to closing your car door or scratching your chin, since these might equally have coincided with your body responding to the medicine you took weeks or months ago. This false attribution gives alternative remedies undeserved recognition and detracts from what actually cured you: even if it was not the medicine, we can safely say it was just your body! Medicine, though incredibly advanced, still leaves much swathed in mystery, but that does not mean we should resort to made-up answers or whatever is convenient.
All these factors become apparent when we put an incident into its proper context, asking for calculations and chances. Statistics is also wonderful because numbers do not discriminate, though obviously people may use them to do so.
The only thing remarkable about the strange world of ‘alternative medicine’ is the extent to which we allow ourselves to be duped, paying billions of our currencies into industries that consistently prove only the power of the placebo. We are watching pretentious assertions squander our money. These fraudsters exploit the Chasm of Credulity, the gap of isolated incidents where the echoes of events removed from their context reside, leading to bad thinking and anecdotal justifications. It is the same chasm across which people take leaps of faith and jump to conclusions.
The main reason scientists do not automatically trust anecdotal evidence is that we need to put it into a context: test it, prod it, poke it. Anecdotes can be the first stirrings of something magnificent. But if the scientific eye turns toward a phenomenon and it shrivels up and dies under scrutiny, it probably was not worth pursuing anyway. Someone’s clouds of hot air dissipate when cold reason enters the room.
For example: simply saying I felt better after being ‘touched’ by a magic man tells us nothing. Even if millions of people testify to the abilities of holy men and women, as they do in India with certain gurus, we need to obtain a context, the likelihood of their feats occurring naturally (for example, did he really cure someone, or was the patient’s disease likely to disappear anyway? What are the chances the storm clouds had been gathering for days and were not summoned by a rain-dance?). Anecdotes are by definition after the fact, often not repeatable and, most important, divorced from their context. Remember: what makes an event miraculous or supernatural is, more often than not, ignorance of the statistics of its occurrence within a specific context, as we saw with science and suicides, and with birthdays shared between random people.
To give a further idea of this, consider a seemingly incredible find. Ben Goldacre relates a story from England in which ‘drinking the Queen’s Royal Deeside spring water improved arthritis symptoms in two-thirds of patients.’ Sounds remarkable, until we put it into context, as Goldacre does: ‘It was a study of 34 patients over three months and there was no control group.’ To truly engage us, it should have had many more patients, over a longer time, and a control group: that is, a group that serves as a foil to the experimental group, with similar characteristics, but given a placebo. To create a context for this remarkable find, we must use a control to see whether it was truly the Queen’s Royal Deeside spring water or something else (if the control group gets similar results, that does not mean the placebo was the cure, but that, more likely, it was neither the experimental cure nor the placebo). Goldacre quips: ‘It’s hard to imagine an experiment where it would have been easier to come up with a convincing placebo [for a control group]. Water.’ Remember the birthday problem: it sounds remarkable until we actually use statistics. Similarly, things seem remarkable when we are unaware of the likelihood of, for example, arthritis improving anyway through the body’s own resilience.
Michael Shermer, in Scientific American, wrote: ‘thinking anecdotally comes naturally, whereas thinking scientifically does not.’ This is because thinking scientifically is, most often, counter-intuitive to our ape minds: we are not computers or calculators. Would anyone guess that there was an above-50% chance that two people, in a random group of 30, share a birthday? Would anyone automatically think that tiny things called bacteria and viruses can cause untold misery and death, sometimes destroying entire civilisations?
No wonder that, for the latter, we invoked gods: it seemed there was no other explanation. The irony is that both gods and germs can ‘explain’ the death of crops; the question is which explanation has been more useful, which has helped with preventative measures, and so on.
We could say (1) sacrificing a virgin and letting her blood drain into the soil satisfied the gods, resulting in our crops being restored; or (2) we could point out that specific bacteria are infecting our plants and that getting rid of these leads to restored crops. We face enormous problems if we use the first, considering, for example, that not all virgins seem to pacify the gods. At the least, for simply practical, testable reasons (not to mention that crops have been restored despite no sacrifices over the years), the latter is more helpful, and indeed more people realise as much for this simple, pragmatic reason. Yet we cannot escape the fact that both explain the same phenomenon. To explain is not to justify, or even to reasonably justify; an explanation is simply a story we tell to narrate our target events. Gods or bacteria, both purport to account for the same thing. Statistics, and science generally, can help disentangle the two, showing that, while both are explanations, only one survives objective testing, so that even outsiders can ‘cure the appetites of the gods’.
I am reminded of Wittgenstein’s pertinent question: ‘Why did people think the sun went around the Earth?’ A reply given was: ‘Well, it just looks that way!’ Wittgenstein looked up at the sky and said: ‘But what does it look like when the Earth revolves around the Sun?’ Both heliocentrism and geocentrism arise from the same platform: looking up at the ‘movement’ of the Sun. We now know which is true (it’s heliocentrism in case you’re wondering).
Today, our societies face even worse submission before the altar of intuition, upon which bleeds all the evidence to the contrary.
The recent horror of anti-vaccination foolishness is a case in point, one that an awareness of statistics could have defused. Shermer relates that there were a number of ‘parents who noticed that shortly after having their children vaccinated autistic symptoms began to appear.’ It was the beginning of a furore that would claim children’s lives, all because people believed anecdotal dogma over scientific reasoning. This was then compounded by the fraudulent blathering of Andrew Wakefield. Indeed, Wakefield is an excellent case study in the power of statistics to arm us against charlatans like him.
Wakefield published an article in the prestigious Lancet journal in 1998. It was a speculative piece that did not warrant the media’s salacious transformation. In it, Wakefield reported 12 cases, the first stirrings of the supposed link between autism and vaccination. Depending on the phenomenon, 12 cases are either remarkable or statistically negligible: 12 people sprouting wings or extra limbs from touching a wall warrants attention. Goldacre says: ‘For things as common as MMR and autism, finding 12 people with both is entirely unspectacular.’ This plugs the case back into context, stripping it of anything remarkable. Johann Hari agrees that the pool of test subjects was too small: ‘It was based on a tiny pool of infants, most of whom were in the study because their parents believed in the link [between vaccines and autism] and wanted to sue for compensation.’
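Goldacre’s point about twelve cases being unspectacular can be illustrated with rough arithmetic. The figures below are assumptions of mine for illustration only (an assumed birth cohort, autism prevalence and MMR uptake), not data from the Lancet paper or from Goldacre:

```python
# Illustrative, assumed figures -- not data from Wakefield's paper:
birth_cohort = 700_000   # assumed annual UK births
autism_rate = 0.01       # assumed autism prevalence (~1 in 100)
mmr_uptake = 0.90        # assumed MMR coverage

# If MMR and autism were completely unrelated, the number of children
# per year we would still expect to have BOTH the jab and a diagnosis:
expected_overlap = birth_cohort * autism_rate * mmr_uptake
print(f"{expected_overlap:.0f} children")  # thousands -- finding 12 is nothing
```

When both things are common, finding a dozen children with both is exactly what chance alone predicts: the overlap runs into the thousands, not the tens.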
Wakefield then went on a media rampage, publishing wherever he could to poke and prod at current medical procedures (specifically saying that the vaccinations should be separated by perhaps a year). Hari and Goldacre correctly condemn the media as the main culprit in this saga of salaciousness, this epic of idiocy: giving an equal platform to health professionals and grieving parents, as if both had equal scientific standing. Here’s a clue: tears aren’t evidence. No doubt there is nothing worse than losing a child; but when your own anger and hatred will lead to the death and suffering of other children, you deserve no compassion. I am looking at you, Jenny McCarthy.
Parents were given a tangible culprit in the form of the ‘Western medical establishment’ to blame for their impaired child instead of facing the facts of an indifferent universe, with no cosmic balance or care for us. Using no scientific facts, except sometimes Wakefield’s now completely discredited authority, mothers could invoke their own intuition to decide whether ‘stabbing their child three times’ was a good thing; they were encouraged to consult something as unfounded as psychics: their gut-feelings.
If we want a possible half-definition of science, perhaps it is this: whatever is counter-intuitive, perhaps most upsetting to our axiomatic assumptions, that thoroughly and clearly and elegantly explains the phenomena we are encountering. It is not a perfect definition, but then, it is not meant to be. Consulting your gut feelings is precisely how not to do science in this sense: we are not so ‘made’ that consultation with our internal organs will lead to a proper explanation of the world; indeed, we would get the same results by consulting the innards of other animals, like cows or chickens. The point is that, time and time again, science has shown the world to be other than we expected it to be. (Of course, there is the opposite, too, but we are not relating that for now.) Mothers encouraged to consult innards, whether their own or a bovine’s, were being encouraged to act against evidence-based medicine: a long and hard-fought campaign against disease, our greatest achievement as a species, and one which continues to save millions of lives.
Here is how statistics killed Wakefield’s reputation. His findings in the Lancet, from his tiny, biased pool of patients, were overshadowed by a later investigation of 1.8 million randomly chosen children in Finland. It found nothing untoward when the children received the MMR vaccine. Hari tells us:
Even more startlingly, it was found that when MMR was suspended in Japan due to production problems, autism rates held steady - but 90 extra children died of measles. This evidence was waved away by much of the press as difficult and indigestible; they preferred to focus instead on brain-dead trivia… (italics added)
The statistics tell us, then, that there is no meaningful relation between MMR and autism. The sheer size of the study completely undermines Wakefield’s biased nonsense. Of course, these were not the only tests, but their scale alone indicates why we can be at the least highly suspicious and, at best, completely dismissive of Mr Wakefield.
Wakefield was, however, not the main problem. It was the media’s coverage, their dismissal of important statistics it took me merely seconds to find. If you want the actual culprits, all you need to do is investigate. My point is this: merely by putting Wakefield’s findings into a proper context, we can see whether he is worth taking seriously or is biased and mistaken, if not lying. As with the assertion that drinking from the Queen’s pond heals arthritis, we can increase the sample size, look wider and farther, investigate other explanations, or ponder whether there is an explanation worth pursuing at all. The MMR-autism link, for example, was better left unpursued, considering that even one child died from not being immunised. That is one child too many. However, as one powerful website has indicated, we can (at the time of writing) attribute 612 preventable deaths in the US to the furore and madness targeted at vaccines, along with 66,515 preventable illnesses.
If we need any more reasons, I can provide them. A problem close to home, in the metaphorical and literal sense, came about through the public poli(dio)cy of Thabo Mbeki, in his ‘denial’ of a link between HIV and AIDS. There is much speculation as to whether he really believed this, but he very fervently treated it as a colonial problem instead of a medical one: so much so that the distribution of anti-retrovirals was hampered because Mbeki denied ‘Western’ science’s diagnosis of HIV/AIDS. Here is the great Raymond Tallis, quoted in full, from his brilliant Hippocratic Oaths:
Of the 70,000 children born annually to HIV-positive mothers in South Africa, about half could have been protected from becoming HIV-positive themselves, and suffering a painful, protracted death, with a single dose of a cheap anti-retroviral drug. Mbeki did what he could to stop this happening. Many of the 800,000 non-infant deaths a year from Aids could also be prevented by making antiretroviral drugs available, but Mbeki’s ideological views did not permit it. According to a recent study (suppressed by the South African Government, which maintains that anti-HIV drugs are toxic and will primarily benefit pharmaceutical companies) immediate provision of such drugs could save up to 1.7 million people by 2010. As one of his former supporters, the Anglican Archbishop of Cape Town, the Most Rev Njongonkulu Ndungane, has said, Mbeki's Aids policies are as serious a crime as apartheid — and have already killed many more people.
Mbeki was and indeed is aware of the statistics, which highlights another problem: statistics can be ignored. But then, so can the preventable deaths of infants who die as a result of your bigoted delusions. Ignorance, like a flood, does not discriminate in what it sweeps away.
Empowering ourselves with numbers might seem strange, until we recall how statistics can destroy the pretensions of charlatans and miraculous happenings. Indifferent in itself, statistics displays information that anyone is welcome to assess. You would be hard-pressed to defend Wakefield’s tiny Lancet study of twelve children over the thorough Finnish one of 1.8 million. However, numbers are not the end: control groups, double-blind mechanisms and the sensitivity to scientific reasoning that comes with studying statistics are also necessary. In many instances of outrage, like the anti-vaccination uproar or Mbeki’s idiocy, we can reasonably assume that the assertions have no backing with regard to control groups, alternative hypotheses, and so on. It is invariably the ape-man bursting out of his lab coat to pound his chest, beating out the rhythm of his own bias and delusion.