The Matisse we never knew

Peter Schjeldahl reviews A Life of Henri Matisse by Hilary Spurling, in The New Yorker:

Henri Matisse, unlike the other greatest modern painter, Pablo Picasso, with whom he sits on a seesaw of esteem, hardly exists as a person in most people’s minds. One pictures a wary, bearded gent, owlish in glasses—perhaps with a touch of the pasha about him, from images of his last years in Vence, near Nice, in a house full of sumptuous fabrics, plants, freely flying birds, and comely young models. Many know that Matisse had something to do with the invention of Fauvism, and that he once declared, weirdly, that art should be like a good armchair. A few recall that, in 1908, he inspired the coinage of the term “cubism,” in disparagement of a movement that would eclipse his leading influence on the Parisian avant-garde, and that he relaxed by playing the violin. Beyond such bits and pieces, there is the art, whose glory was maintained and renewed in many phases until the artist’s death, in 1954: preternatural color, yielding line, boldness and subtlety, incessant surprise. Anyone who doesn’t love it must have a low opinion of joy. The short answer to the question of Matisse’s stubborn obscurity as a man is that he put everything interesting about himself into his work. The long answer, which is richly instructive, while ending in the same place, is given in Hilary Spurling’s zestful two-volume biography, “A Life of Henri Matisse.”

More here.

Can neuroscience provide a foundation for ethics?

Maura Pilotti reviews The Ethical Brain by Michael S. Gazzaniga, in Metapsychology:

In The Ethical Brain, Michael S. Gazzaniga teaches us something about making informed decisions in settings where our personal sense of right and wrong does not seem to provide an unequivocal answer. The guiding theme of his book is what Gazzaniga calls Neuroethics, the notion that knowledge of the brain’s functioning and organizational structure can ground our views of controversial issues as well as inform our decisions on the appropriate course of action. In defining Neuroethics, Gazzaniga presents readers with timely and important issues, explores the multifaceted claims that render them controversial, and applies his training in neuroscience to craft solutions based on scientific evidence and reason rather than dogma. If knowledge of neuroscience cannot assist him in formulating a reasonable answer, he draws attention to what he considers to be the limitations (either current or long-standing) of such knowledge. Even when he has an answer, Gazzaniga is always respectful of all points of view. In doing so, he highlights another interesting theme of the book: ethical matters are generally multi-layered, they have divisive ramifications, and often there are no universally satisfactory or pleasing answers to the dilemmas they pose.

More here.

Transition – To What?

I noticed that Newsweek calls its Obituary section “Transition.” Isn’t this a tad euphemistic? This sounds like business jargon to me, the softening of the edges of what can only be considered bad news, viz., death. Death is so awfully grim and dreary, let’s call it something else! It’s bad, but maybe in another sense it’s something good…a transition. To be fair, I think Newsweek also prints birth and marriage notices here, but the point still stands – should we mix and match when it comes to death? I mean, really, “transition” to what? Death? “Congratulations,” I imagine a voice intoning from the Great Beyond, “you have successfully transitioned from life to death!” Or: “I’m going through a period of transition. I’m between lives.” Is there a metaphysics implied here – the assertion of an afterlife, something, in other words, to which one may transition? I’m just asking.

(From an email I recently sent to Richard M. Smith, Chairman and Editor-in-Chief of Newsweek. I’ll update 3QD if any response is forthcoming.)

leiris on duchamp


October publishes a translation of an essay on Marcel Duchamp by Michel Leiris. In the passage below he’s talking about The Bride Stripped Bare By Her Bachelors, Even. The rest of the essay is available as a PDF.

A work such as this—a veritable Pandora’s box which one manipulates at one’s own peril—needs to be approached not from the classic point of view of form and substance, but rather, strictly speaking, from that of container and contained. Our critical task will therefore consist of making a rapid inventory of its contents and then of demonstrating, should the verdict prove positive, that there is a necessary relationship between container and contained. To begin with, one has to realize that Duchamp—initially one of the most talented of the so-called “Cubist” painters—has, like a number of other innovators of his period, set himself several problems having to do with the legitimacy of representation (the role of perspective, the discovery of methods that would be just as—or more—valid than perspective in order to move from the three dimensions of an object to its figuration on a surface, the role of colors, of light, etc.), but that instead of more or less academically resolving these problems, he has come up with his very own method, an “ironism of affirmation” that is quite different from the “negative…”

Digital fantasies


A new generation of young creators has emerged that operates not from a deep contemplation of Duchamp, Beuys, Foucault, or Baudrillard, but from Dungeons and Dragons figurines, ‘80s “mythic” heavy-metal album covers, cyberpunk paperback art, and the kind of paintings of Vikings and missile-breasted Amazons found on the side of VW vans. Niche.LA and Lounge 441’s “Digital World: Oz” features the gently dissolving images of the flabbergasting Charli Siebert, described in the press materials as a self-taught “23-year-old digital artist from Huntington Beach, CA.” Siebert’s independence from art-school cant is gratifying in itself, but her frosted, distressed, tactile-but-ethereal images are the real thing—Goth and sci-fi kitsch ossified into beaux-arts stateliness. Her porny, morbid figures hover in a state of being pitched somewhere between photorealism and Photoshop artifice, as if a family of Joel-Peter Witkin ghouls had invented their own video game to live in.

More about the show at Niche.LA and Lounge 441 here.

What Makes People Gay?

From The Boston Globe:

With crystal-blue eyes, wavy hair, and freshly scrubbed faces, the boys look as though they stepped out of a Pottery Barn Kids catalog. They are 7-year-old twins. I’ll call them Thomas and Patrick; their parents agreed to let me meet the boys as long as I didn’t use their real names.

Spend five seconds with them, and there can be no doubt that they are identical twins – so identical even they can’t tell each other apart in photographs. Spend five minutes with them, and their profound differences begin to emerge.

More here.

Can Extreme Poverty Be Eliminated?

From Scientific American:

Almost everyone who ever lived was wretchedly poor. Famine, death from childbirth, infectious disease and countless other hazards were the norm for most of history. Humanity’s sad plight started to change with the Industrial Revolution, beginning around 1750. New scientific insights and technological innovations enabled a growing proportion of the global population to break free of extreme poverty.

Two and a half centuries later more than five billion of the world’s 6.5 billion people can reliably meet their basic living needs and thus can be said to have escaped from the precarious conditions that once governed everyday life. One out of six inhabitants of this planet, however, still struggles daily to meet some or all of such critical requirements as adequate nutrition, uncontaminated drinking water, safe shelter and sanitation as well as access to basic health care. These people get by on $1 a day or less and are overlooked by public services for health, education and infrastructure. Every day more than 20,000 die of dire poverty, for want of food, safe drinking water, medicine or other essential needs.

More here.

Can you be a good scientist and believe in God?

Cornelia Dean in the New York Times:

At a recent scientific conference at City College of New York, a student in the audience rose to ask the panelists an unexpected question: “Can you be a good scientist and believe in God?”

Reaction from one of the panelists, all Nobel laureates, was quick and sharp. “No!” declared Herbert A. Hauptman, who shared the chemistry prize in 1985 for his work on the structure of crystals.

Belief in the supernatural, especially belief in God, is not only incompatible with good science, Dr. Hauptman declared, “this kind of belief is damaging to the well-being of the human race.”

But disdain for religion is far from universal among scientists. And today, as religious groups challenge scientists in arenas as various as evolution in the classroom, AIDS prevention and stem cell research, scientists who embrace religion are beginning to speak out about their faith.

More here.

Tuesday, August 23, 2005

Why Medical Studies Are Often Wrong

John Allen Paulos in a very interesting column at ABC News:

How many times have you heard people exclaim something like, “First they tell us this is good or bad for us, and then they tell us just the opposite”?

In case you need more confirmation of the “iffy-ness” of many health studies, Dr. John Ioannidis, a researcher at the University of Ioannina in Greece, writing in the Journal of the American Medical Association, recently analyzed 45 well-publicized studies from major journals appearing between 1990 and 2003. His conclusion: the results of approximately one third of these studies were flatly contradicted or significantly weakened by later work.

There’s the well-known story of hormone replacement therapy, which was supposed to protect against heart disease and other maladies, but apparently does not. A good part of the apparent effect may have been the result of attributing the well-being of upper middle class health-conscious women to the hormones.

Another bit of health folklore that “everybody knows” has turned out to be unfounded: vitamin E’s protective effect against cardiac problems. Not so, says a recent large study.

And how about red wine, tea, fruits and vegetables? Surely the anti-oxidant effect of these wondrous nutrients can’t be doubted. Even here, however, the effect appears to be more modest than pinot noir lovers, among others, had thought.

And certainly many lung patients who inhale nitric oxide and swear by its efficacy will be surprised to learn that a larger study does not show any beneficial effect…

More here.

Fetuses Likely Don’t Feel Pain Until Late in Pregnancy

Lindsey Tanner at ABC News:

A review of medical evidence has found that fetuses likely don’t feel pain until the final months of pregnancy, a powerful challenge to abortion opponents who hope that discussions about fetal pain will make women think twice about ending pregnancies.

Critics angrily disputed the findings and claimed the report is biased.

“They have literally stuck their hands into a hornet’s nest,” said Dr. Kanwaljeet Anand, a fetal pain researcher at the University of Arkansas for Medical Sciences, who believes fetuses as young as 20 weeks old feel pain. “This is going to inflame a lot of scientists who are very, very concerned and are far more knowledgeable in this area than the authors appear to be. This is not the last word, definitely not.”

The review by researchers at the University of California, San Francisco, comes as advocates are pushing for fetal pain laws aimed at curtailing abortion. Proposed federal legislation would require doctors to provide fetal pain information to women seeking abortions when fetuses are at least 20 weeks old, and to offer women fetal anesthesia at that stage of the pregnancy. A handful of states have enacted similar measures.

But the report, appearing in Wednesday’s Journal of the American Medical Association, says that offering fetal pain relief during abortions in the fifth or sixth months of pregnancy is misguided and might result in unacceptable health risks to women.

More here.

In Finland’s Footsteps: If We’re So Rich and Smart, Why Aren’t We More Like Them?

Robert G. Kaiser in the Washington Post:

Finland is a leading example of the northern European view that a successful, competitive society should provide basic social services to all its citizens at affordable prices or at no cost at all. This isn’t controversial in Finland; it is taken for granted. For a patriotic American like me, the Finns present a difficult challenge: If we Americans are so rich and so smart, why can’t we treat our citizens as well as the Finns do?

Finns have one of the world’s most generous systems of state-funded educational, medical and welfare services, from pregnancy to the end of life. They pay nothing for education at any level, including medical school or law school. Their medical care, which contributes to an infant mortality rate that is half of ours and a life expectancy greater than ours, costs relatively little. (Finns devote 7 percent of gross domestic product to health care; we spend 15 percent.) Finnish senior citizens are well cared for. Unemployment benefits are good and last, in one form or another, indefinitely.

On the other hand, Finns live in smaller homes than Americans and consume a lot less. They spend relatively little on national defense, though they still have universal male conscription, and it is popular. Their per capita national income is about 30 percent lower than ours. Private consumption of goods and services represents about 52 percent of Finland’s economy, and 71 percent of the United States’. Finns pay considerably higher taxes — nearly half their national income is taken in taxes, while Americans pay about 30 percent on average to federal, state and local governments.

Should we be learning from Finland?

More here.  And check out the photo galleries here.

Stem cells from a growing fetus can colonise the brains of mothers

Andy Coghlan in New Scientist:

Everyone knows that kids get their brains, or lack of them, from their parents. But it now seems that the reverse is also true. Stray stem cells from a growing fetus can colonise the brains of mothers during pregnancy, at least in mice.

If the finding is repeated in humans, the medical implications could be profound. Initial results suggest that the fetal cells are summoned to repair damage to the mother’s brain. If this is confirmed, it could open up new, safer avenues of treatment for brain damage caused by strokes and Alzheimer’s disease, for example.

This is a long way off, but there are good reasons for thinking that fetal stem cells could one day act as a bespoke brain repair kit. It is already well known that during pregnancy a small number of fetal stem cells stray across the placenta and into the mother’s bloodstream, a phenomenon called microchimerism. They can survive for decades in tissues such as skin, liver and spleen, where they have been shown to repair damage.

More here.

7 NASA panelists report program is still troubled

Traci Watson in USA Today:

The culture inside the space shuttle program remains arrogant, sloppy and schedule-driven, says a scathing statement published Wednesday by a faction on the panel that oversaw NASA’s efforts to return the shuttle to space.

The statement, which was not endorsed by the majority of the oversight panel, comes three weeks after NASA put shuttle flights on hold until it can keep debris from falling off the fuel tank. Such foam debris triggered the disintegration of shuttle Columbia in 2003 and plagued the flight of shuttle Discovery, which landed Aug. 9.

The main report says NASA fulfilled 10 of 13 safety goals the agency accepted after the accident, which were laid out by the accident investigators and included steps such as development of a technique to fix the ship in orbit. The main report does not comment on the shuttle program’s culture, which was not part of the panel’s official purview. The minority statement is included as an annex to the main report, as are statements from other panelists praising NASA.

More here.  [Thanks to Winfield J. Abbe.]


Ross Douthat in The New Republic:

The appeal of “intelligent design” to the American right is obvious. For religious conservatives, the theory promises to uncover God’s fingerprints on the building blocks of life. For conservative intellectuals in general, it offers hope that Darwinism will yet join Marxism and Freudianism in the dustbin of pseudoscience. And for politicians like George W. Bush, there’s little to be lost in expressing a skepticism about evolution that’s shared by millions.

In the long run, though, intelligent design will probably prove a political boon to liberals, and a poisoned chalice for conservatives. Like the evolution wars in the early part of the last century, the design debate offers liberals the opportunity to portray every scientific battle–today, stem-cell research, “therapeutic” cloning, and end-of-life issues; tomorrow, perhaps, large-scale genetic engineering–as a face-off between scientific rigor and religious fundamentalism. There’s already a public perception, nurtured by the media and by scientists themselves, that conservatives oppose the “scientific” position on most bioethical issues. Once intelligent design runs out of steam, leaving its conservative defenders marooned in a dinner-theater version of Inherit the Wind, this liberal advantage is likely to swell considerably.

And intelligent design will run out of steam…

More here.

issa touma


The triumphs and travails of Syrian photographer Issa Touma make for pretty gripping stories in themselves. But above and beyond that, he has taken some truly amazing and beautiful photographs. Touma’s account of his struggles with the Baath party in Syria while trying to run his gallery and an international photography exhibit can be found at Joshua Landis’ site here.

More information about Nazar: Photographs from the Arab World can be found here.

Some amazing pictures from Touma’s series, Sufi, can be found here.



By the time Kim Jong Il, the Dear Leader, took over from his father as the absolute ruler of North Korea, the country was a slave society, where only the most trusted caste of people were allowed to live in sullen obedience in Pyongyang, while vast numbers of potential class enemies were worked to death in mines and hard-labor camps. After Kim Il Sung’s death, in 1994, the regime suspended executions for a month, and throughout the following year it committed relatively few killings. Since this was at the height of a famine, largely brought on by disastrous agricultural policies, hundreds of thousands were already dying from hunger. Then word spread that Kim Jong Il wished to “hear the sound of gunshots again.” Starving people were shot for stealing a couple of eggs.

More from the admirable Ian Buruma in The New Yorker here.

The Other Brain Also Deals With Many Woes

From The New York Times:

Two brains are better than one. At least that is the rationale for the close – sometimes too close – relationship between the human body’s two brains, the one at the top of the spinal cord and the hidden but powerful brain in the gut known as the enteric nervous system.

For Dr. Michael D. Gershon, the author of “The Second Brain” and the chairman of the department of anatomy and cell biology at Columbia, the connection between the two can be unpleasantly clear. “Every time I call the National Institutes of Health to check on a grant proposal,” Dr. Gershon said, “I become painfully aware of the influence the brain has on the gut.” In fact, anyone who has ever felt butterflies in the stomach before giving a speech, a gut feeling that flies in the face of fact or a bout of intestinal urgency the night before an examination has experienced the actions of the dual nervous systems.

More here.

Monkey see, monkey go all-in: Primates prefer gamble over safe reward

From MSNBC:

When given a choice between steady rewards and the chance for more, monkeys will gamble, a new study found. And they’ll keep taking risks as the stakes rise and dry spells get longer. The research, in which scientists also pinpointed brain activity during the gambling, could provide insight into the human penchant for risk. In humans, it’s thought that low levels of the neurotransmitter serotonin might make one more risk-prone and impulsive. Perhaps, the scientists say, future work will shed light on the source of pathological gambling, obsessive-compulsive disorder and even depression.

More here.

Monday, August 22, 2005

Atelier: Real Sweat Shops, Virtual Gold

In December of 2004, a 22-year-old Australian gamer spent $26,500 of real money to buy a virtual island in the online game Project Entropia. Its game developers described the virtual island as containing “…beautiful beaches ripe for developing beachfront property, an old volcano with rumors of fierce creatures within, [an] outback… overrun with mutants, and an area with a high concentration of robotic miners guarded by heavily armed assault robots indicates interesting mining opportunities…” Though the sheer dollar value of the purchase may strike us as a fairly outlandish (pardon the pun) sum to be laying out for a castle in the sky, the virtual real estate market is booming; it’s quite possible that this young Australian gamer will even turn a profit on his virtual property once he has rented, leased, or sold plots of his island paradise to other online gamers. It is not only virtual real estate, however, that is being traded online.

As a result of the widespread popularity of MMORPGs – an acronym which stands for massive(ly) multiplayer online role-playing games – virtual objects of all kinds have begun to emerge as sources of potential profit. The online site No Sweat describes the trajectory of online trading as follows:

“In these games, as in other role playing and computer games, over time one acquires possessions, skills, rank and so on. Often, moving on in the game is a long, slow tedious process — and many computer gamers look for short-cuts to get beyond the lower levels of the game. In MMORPGs, those shortcuts might involve getting hold of objects (including virtual money) from other players. Those objects can be traded. Which means that outside of the virtual worlds, trading can also take place. Many players seem willing to part with their cash (real-world cash, that is) in order to buy virtual objects in the games.”

The end result of this virtual trading is staggering; as virtual world critic Julian Dibbell points out, the economic yield of virtual commerce is very real:

“[I]n an academic paper analyzing the circulation of goods in Sony Online’s 430,000-player EverQuest…an economist calculated a full set of macro- and microeconomic statistics for the game’s fantasy world, Norrath. Taking the prices fetched in the $5 million EverQuest auctions market as a reflection of in-game property values, professor Edward Castronova of Cal State Fullerton multiplied those dollar amounts by the rate at which players pile up imaginary inventory and came up with an average hourly income of $3.42. He calculated Norrath’s GNP at $135 million — or about the same, per capita, as Bulgaria’s. In other words, assuming roughly proportional numbers for other major online role-playing games… the workforce toiling away in these imaginary worlds generates more than $300 million in real wealth each year.”
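The arithmetic in Dibbell's summary can be sanity-checked in a few lines. The figures below come straight from the excerpt; the implied-playtime number at the end is my own back-of-envelope extrapolation, not something Castronova reports:

```python
# Sanity check of the quoted EverQuest figures. Inputs are taken
# directly from the excerpt above; per-capita GNP is just GNP divided
# by population.

norrath_gnp = 135_000_000   # Castronova's estimate of Norrath's GNP, in USD
players = 430_000           # EverQuest's player population at the time
hourly_income = 3.42        # average USD of value generated per player-hour

per_capita_gnp = norrath_gnp / players
print(f"Per-capita GNP of Norrath: ${per_capita_gnp:.2f}")  # about $314

# Playtime implied by the GNP estimate, averaged across the player base
# (an extrapolation for illustration, not a figure from the paper)
implied_hours = per_capita_gnp / hourly_income
print(f"Implied playtime per player: {implied_hours:.0f} hours/year")  # about 92
```

At roughly $314 per head, Norrath's per-capita output is of course far below Bulgaria's; the "about the same" in the quote presumably refers to Castronova's own, more involved accounting rather than this naive division.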

Even more alarming, however, is the fact that this virtual economy has begun to employ exploitative methods more commonly found in the “real world.” In a recent U.S. court case, a member of the online gaming world of EverQuest sued Sony Online for its newly enacted ban on virtual object trading. During the course of the case it came to light that the plaintiff had been running a series of Mexican sweatshops in which workers were paid to play these online role-playing games and to virtually farm, forage, and otherwise produce virtual objects that were then sold for real U.S. currency on eBay and other online trading houses. And this is hardly an isolated incident. According to an article by Tim Guest, in mainland China “people are employed to play the games [from] nine to five, scoring virtual booty which IGE [Internet Gaming Entertainment] can sell on at a profit to Western buyers.” And a California-based company was employing Romanians to play MMORPGs for ten hours a day, earning $5.40 a day, or the equivalent of $0.54 an hour.

Insofar as these virtual worlds are capable of producing objects which seamlessly enter into our real-world economy, items that are priceable, desirable, and scarce, if not exactly material or useful in the ways that we are used to, and insofar as these virtual objects have a real effect on the “real economy”, the distinction between the virtual and the real seems to have become disturbingly attenuated. The fact that these virtual objects are as exchangeable as any other material commodity seems to suggest that, at least from money’s lofty perch, it looks like dollars all the way down.

MMORPG programmers, in fact, have become quite adept at tweaking these online economies. The Economist reports that programmers

“routinely produce the virtual equivalent of an antiquities market, creating overwhelmingly high demand for certain virtual objects that have no other utility within the game, a demand based on nothing more than the sheer scarcity of a given item. They control the inflation rate of their online currency by having players sink huge amounts of virtual gold and platinum into exorbitantly expensive luxury items [according to an online report, neon-colored avatar hair dye has recently become the luxury item par excellence] that can only be bought from non-player merchant bots, effectively taking large sums of money out of general circulation.”
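The mechanism The Economist describes, a "money sink" that drains currency out of circulation through NPC-only luxury goods, can be sketched as a toy simulation. Every name and rate here is invented for illustration; no actual game's economy uses these numbers:

```python
# Toy model of a money sink. Gold enters the economy through loot drops
# and quest rewards (the "faucet") and leaves it only when players buy
# luxury items from NPC merchant bots (the "sink"); player-to-player
# trades merely move gold around. All rates are invented.

GOLD_FAUCET = 1_000_000  # gold created per day across the whole game world
SINK_RATE = 0.002        # fraction of the money supply spent at NPC sinks per day

def simulate(days, supply=0.0):
    """Return the circulating money supply after `days` days."""
    for _ in range(days):
        supply += GOLD_FAUCET          # new gold faucets in
        supply -= supply * SINK_RATE   # luxury purchases destroy gold
    return supply

# Without a sink the supply grows without bound; with one it levels off
# near GOLD_FAUCET / SINK_RATE, where creation and destruction balance.
print(f"After 1 year: {simulate(365):,.0f} gold")
print(f"Equilibrium:  {GOLD_FAUCET / SINK_RATE:,.0f} gold")
```

The design point is that the sink's removal rate scales with the money supply itself, so the economy self-corrects: the richer players collectively get, the more gold the luxury market burns each day.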

Given the sheer oddness of our increasingly digitized economy, how is it, then, that we still tend to view the world (if we, in fact, still do) as a relatively stable place? From what golden coffer do we pluck out that highly burnished but unfounded belief that our money will be worth as much tomorrow as it is today? As Nigel Thrift has pointed out, “…unlike previous times, there is remarkably little anxiety now about the apparent loss of traditional representations of money. Generally speaking, the system of money is trusted.”

That the value of these virtual items (including various forms of virtual gold which serve as currency in many of these online gaming worlds, and which seem to function and accrue value as any other “real-world” currency would) is predicated upon nothing more than the online gaming communities’ acknowledgement of their value is, in many respects, nothing new. A similar epistemological structure – one dependent upon our shared belief in the power of socially produced fictions – seems to underpin all of our monetary and financial instrumentation. This raises the question: Does this newly emergent virtual online gaming economy mark a threshold in how we have come to transmit, to produce, and to imagine value? Or is it merely the case that these gamers are a new type of virtual investor, one whose play happens to yield real monetary value and, consequently, produce real exploitative side-effects?

Monday Musing: Terrorism, Free Will and Methods of Comparison

For the last four years, since the attack on September 11, 2001, the political side of the blogosphere has tossed arguments back and forth about cause, free will, and responsibility. I first noticed it in a piece by Hitchens shortly after the attack. September 11th was also the 28th anniversary of Pinochet’s coup d’état against the Allende government. Hitchens’ invocation of the coup and his comparison of the Chilean left with al Qaeda had a simple point. The US had been instrumental in the overthrow of Allende and the massacre of leftists that followed. The Chilean left had a real and deep grievance against the US, yet we couldn’t possibly imagine Chilean socialists hijacking planes and flying them into the World Trade Center, killing thousands of people. The implication was clear: grievances fueled by the sins of the US just aren’t enough to justify the actions of al Qaeda terrorists.

Little debate immediately followed from Hitchens’ piece, even though he returned to the comparison a few times. But the question of the role of grievances (in the form of US foreign policy) in 9/11 soon picked up, and it keeps resurfacing. The debate was extended to discussions of terrorism by Hamas, Islamic Jihad, and the Al Aqsa Martyrs’ Brigade, and a brief but quickly curtailed discussion of the massacre of children at Beslan a year ago. By the time of the bombings in London, the debate had become clarified.

Few, if any, of those engaged in the back-and-forths were confused about explanation and responsibility. An action or event by victims can causally contribute to an act of terrorism, but what that means for responsibility was at the heart of the issue. In terms of the present war, it’s hard to argue that had the US not been involved in Middle East politics—if it did not support Israel, had it not had bases in Saudi Arabia, and had it not been behind placing sanctions on Iraq—the acts would’ve taken place anyway. That is a causal claim in the “without which not” (sine qua non) sense.

Very few responded to explanations of terrorist attacks that invoked US foreign policy with accusations of being apologias for terrorism—after all, no one thinks that a scholar of how the Holocaust happened is letting the Nazis off the hook. Moreover, the administration itself had implicitly admitted that US foreign policy (support for corrupt governments) had helped fuel extremist movements.

But the debate wasn’t about cause but about “root causes” and what “root causes” meant for responsibility. More sharply, it raised a question about when explanation shades into a justification or apology for terrorism. The issue led to a brief back and forth between Norm Geras (with Eve Garrard) and Chris Bertram. The former:

“One morning Elaine dresses in that particular way and she crosses Bob’s path in circumstances he judges not too risky. He rapes her. Elaine’s mode of dress is part of the causal chain which leads to her rape. But she is not at all to blame for being raped. The fact that something someone else does contributes causally to a crime or atrocity, doesn’t show that they, as well as the direct agent(s), are morally responsible for that crime or atrocity, if what they have contributed causally is not itself wrong and doesn’t serve to justify it. Furthermore, even when what someone else has contributed causally to the occurrence of the criminal or atrocious act is wrong, this won’t necessarily show they bear any of the blame for it. If Mabel borrows Zack’s bicycle without permission and Zack, being embittered about this, burns down Mabel’s house, Mabel doesn’t share the blame for her house being burned down. Though she may have behaved wrongly and her doing so is part of the causal chain leading to the conflagration, neither her act nor the wrongness of it justifies Zack in burning down her house. So simply by invoking prior causes, or putative prior causes, you do not make the case go through – the case, I mean, that someone else than the actual perpetrator of the wrongdoing is to blame. The ‘We told you so’ crowd all just somehow know that the Iraq war was an effective cause of the deaths in London last week.”

Bertram’s response was simple.

“One of their examples concerns rape. Of course rapists are responsible for what they do, but suppose a university campus with bad lighting has a history of attacks on women and the university authorities can, at minimal cost, greatly improve the night-time illumination but choose not to do so for penny-pinching reasons. Suppose the pattern of assaults continues in the darkened area: do Geras and Garrard really want to say that the university penny-pinchers should not be blamed for what happens subsequently? At all? I think not.”

These discussions were about clarifying intuitions about, and our understanding of, cause and responsibility (agency, free will). But that was a spike; the discussions continued to be peppered with comparisons to historical examples. Juan Cole, in a post, had pointed to Israeli occupation as the cause of, or reason for, Palestinian terrorism, a post that drew the following from Jeff Weintraub:

“[I]n 1922-1923 about a million and a half Greeks fled or were expelled from Anatolia (with several hundred thousand Turks and other Muslims ‘exchanged’ in the opposite direction). Most of these people lived in refugee camps for a while, in both Israel and Greece, but I am not aware that they generated terrorist groups with a policy of systematically murdering Arab or Turkish civilians. . . Did these expulsions ‘provoke significant terrorism on the part of the displaced’? Not that I can recall. . . [I]t is not inevitable, or even common, for large-scale transfers or expulsions of populations (which, unfortunately, have been all too frequent during the past century) to ‘provoke significant terrorism on the part of the displaced’.”

I raise this discussion about terrorism, its causes, and moral responsibility not to jump into it, but because it struck me how an everyday form of Mill’s method of comparison plays itself out in partisan debates. John Stuart Mill spelled out an inductive method of causal reasoning. If, across a set of instances of a phenomenon, we find a common circumstance or element, we infer that the common element causes the phenomenon. Similarly, if we face differing outcomes in cases where all elements were common save one, we infer that the one difference is causally relevant to the outcome. The two methods can be joined, and they can be applied in degrees, in the sense that the outcome covaries with the degree to which the common element is present. Get enough such causal understandings together (pairing up causes and outcomes, being sophisticated enough to account for interactions, etc.) and we can generate law-like propositions. While methods of uncovering laws have become much more sophisticated, this basic approach remains common in the social sciences, even though deductive approaches, such as those based on rationality, are also very prominent.
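Mill’s two methods can be sketched as a toy procedure. This is purely illustrative (the case data and function names below are hypothetical, not anything from the debate above): cases are represented as sets of circumstances paired with whether the outcome occurred.

```python
# A toy sketch of Mill's methods of agreement and difference.
# Cases are (circumstances, outcome) pairs, with circumstances as sets.

def method_of_agreement(cases):
    """Return the circumstances common to every case where the outcome occurred."""
    positives = [set(circ) for circ, outcome in cases if outcome]
    if not positives:
        return set()
    return set.intersection(*positives)

def method_of_difference(case_a, case_b):
    """If two cases differ in outcome and in exactly one circumstance,
    return that circumstance as the causally relevant one; else None."""
    (circ_a, out_a), (circ_b, out_b) = case_a, case_b
    diff = set(circ_a) ^ set(circ_b)  # symmetric difference
    if out_a != out_b and len(diff) == 1:
        return diff.pop()
    return None  # the inference doesn't go through

cases = [
    ({"A", "B", "C"}, True),
    ({"A", "D", "E"}, True),
    ({"B", "D"}, False),
]
print(method_of_agreement(cases))  # {'A'}
print(method_of_difference(({"A", "B"}, True), ({"B"}, False)))  # A
```

The sketch also makes Elster’s worry below concrete: the method of difference returns nothing unless the cases differ in exactly one element, a condition the social world rarely satisfies.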

Mill proposed this methodology largely to understand natural phenomena, and it remains a serious element of how we examine them. Statistical inference is a descendant of this technique. But the social world has been far, far less amenable to the objective at which the method was aimed: uncovering laws, or law-like regularities.

Some time ago, the philosopher Jon Elster argued that the social sciences confront a problem: the same (social) mechanism can operate in different directions depending on context, yet we cannot fully specify all the elements of that context. We face a complex interaction of several mechanisms in ways we haven’t fully specified. The social “sciences” don’t quite make the “science” cut for that reason.

The tendency in these discussions, especially political ones, has been to toss in free will, which is hardly unreasonable. But I’m not sure that comparison will get us there. My belief that the dispossessed have a choice over their response, and over the means of their response, doesn’t depend on the information that Anatolian Greeks didn’t blow up civilians. It depends instead on my not being able to see what mechanism would get me there in the narrow comparative case. Add a lot more elements (indoctrination, perhaps differing organizational capacity) and then maybe, which has indeed been the response. But if the debate has reached what feels like a dead end, that may say more about the kinds of arguments we appeal to.

Happy Monday.