Framing Morality

by Gerald Dworkin

It has been a well-recognized phenomenon for some time that how we frame our questions to others affects the answers they give. The best-known work on the topic is by Kahneman and Tversky, who give examples such as the following.

Subjects were asked to choose between two treatments for 600 people who had a fatal disease. Treatment A was predicted to result in 400 deaths.

Frame       Treatment A described as
Positive    “saves 200 lives”
Negative    “400 people will die”

Treatment A was chosen by 72% of participants when it was presented with the positive framing (“saves 200 lives”), but by only 22% when it was presented with the negative framing (“400 people will die”), even though the two descriptions are extensionally equivalent: saving 200 of the 600 just is letting the other 400 die.

Another example: 92% of Ph.D. students registered early when there was a penalty fee for late registration, but only 67% did so when the same charge was framed as a discount for early registration.

Those whose choices shift because of these framing effects display a cognitive bias, one that leads to choices that are less than fully rational.

More recently, psychologists and philosophers in the so-called experimental philosophy (x-phi) movement have turned their attention to whether such framing effects also affect judgments about what to do in various well-known moral dilemmas, such as the notorious trolley problems. Those of you lucky enough to have escaped the latter will be exposed to them anon.

Here are some examples of framing effects in people's responses to the following hypothetical cases, used by philosophers to elicit what are called “moral intuitions.” These are the judgments that people make about what is right or wrong, and about what they would choose to do, in such cases.

1) Standard trolley case. A runaway trolley is heading for a track on which five people are trapped. You are standing by a switch that can divert the trolley onto a track where there is only one person trapped. Should you or should you not divert the trolley?

2) Heavy Man (Trap Door). There is a heavy man standing on a bridge over the tracks. He is standing on a trap door that you can release by pulling a lever. If he drops onto the track ahead of the five people, his body will stop the trolley before it reaches them. Should you or should you not pull the lever?

3) Heavy Man (Push). Same as above, but you are standing on the bridge beside him and have to push him over. Should you or should you not push him?

Many people believe that while you ought to divert the trolley in the standard case, you ought not to sacrifice the man in the Heavy Man cases. It does not matter, for our purposes, whether you agree. What you probably do not believe, and should not believe, is that the order in which the cases are presented should affect your judgments about what to do. That is, whether we present case 1 and then case 3, or case 3 and then case 1, your judgments about what one ought to do in the two cases should be the same.

In fact, the percentage of people who thought it morally acceptable to divert the trolley in the standard case was lower when case 3 was presented before case 1 than when case 1 was presented before case 3. When case 1 came first, 94% would divert the trolley; when case 3 came first, only 74% would. Interestingly, the order of presentation did not affect judgments about whether it was permissible to push the Heavy Man: roughly 50% of the subjects believed it acceptable to do so in both conditions.

Another framing-effect experiment asked whether the moral judgment of a lie, as opposed to a failure to inform, would depend on the order in which two cases were presented. In the first case, to greatly abbreviate the scenario, A is selling a car to B. It is a 1984 Mazda, a year in which many of the engines fell apart. B is interested in the car but says, “I seem to recall that the 1984s had a lot of troubles.” A replies, “No, that was the 1983s. They had fixed the problem by 1984.”

In the second scenario B says the same thing but then goes on to say, “Oh, no. I remember now, it was the 1983s that had the problem.” A does not correct her.

The subjects' task was to rate how good a person A was when he (1) lied or (2) omitted to correct B's mistake. The cases were again presented in both orders: the lying case first, or the omission case first. When the act case was presented first, 50% rated A as worse for lying than for omitting to tell the truth. When the order was reversed, 80% rated A worse for lying.

II

What significance, if any, do these results have (and by now there is a large experimental literature that shows similar things) for the question of whether we can have reliable moral beliefs?

Or, somewhat stronger, for the question of whether we can know that some moral claim is true or false?

It might seem that there is a straightforward argument with a sceptical conclusion about our moral beliefs. Some of our moral beliefs must be non-inferential, i.e., not supported by argument. For if we use an argument, it must have premises, and those premises require justification. But then we have an infinite regress unless we arrive at a premise which is known without further argument.

Call a moral belief fundamental if it is not justified by further argument. An example of a fundamental moral belief might be “Lying is worse than a failure to tell the truth.” But, as we have seen, this proposition is subject to framing effects, and framing effects are irrational: the order in which questions are posed cannot rationally affect the correctness of the judgments we make.

Since we know that these judgments are subject to framing effects, we cannot be justified in accepting them. They may be true, but we do not have adequate reason to believe them.

III

What kinds of replies are open to those who wish to argue that we can have reliable knowledge of fundamental moral beliefs? I shall just list them and leave the reader to think about whether any of them is sufficient to overcome moral scepticism. If none of them is, perhaps readers will have other suggestions of their own.

1) The experimental results are shaky. Perhaps the case scenarios are not detailed enough; if they were, the framing effects would disappear. Or perhaps what happens in the laboratory is not what happens in real life.

2) Experimental results must be interpreted. The data might be sound, but what they show is open to controversy.

3) All the experiments show is that some moral beliefs are subject to framing effects. Perhaps some are not.

4) The experiments are fine, but the wrong judgments are being tested. Instead of propositions such as “Lying is worse than failing to tell the truth,” we should consider what are called defeasible principles, such as “Other things being equal, lying is worse than failing to tell the truth.” These generalisations are not shown to be wrong by particular cases in which failing to tell the truth is worse than lying; in those cases, other things were not equal. The experiments were all about particular cases, but defeasible principles are not about particular cases.

5) If moral scepticism based on the experimental results were justified, then all our knowledge would be at risk. Framing effects hold for factual judgments as well, such as “Treatment A is more likely to cure you than Treatment B”; as our first example showed, that judgment too was subject to framing effects.

6) At most, what the experimental evidence supports is the claim that there are no fundamental moral judgments, i.e., judgments which are accepted without further argument to support them. The alternative is that all justified moral belief is inferential in nature: no argument rests on beliefs which just have to be “seen” or “accepted” as true. Moral theory is “top down,” not “bottom up.”

* * *

For those wishing more details, see W. Sinnott-Armstrong, “Framing Moral Intuitions,” in Moral Psychology, Volume 2: The Cognitive Science of Morality, ed. W. Sinnott-Armstrong (Cambridge, MA: MIT Press, 2008), pp. 47–76.