Is applied ethics applicable enough? Acting and hedging under moral uncertainty

by Grace Boey


A runaway train trolley is racing towards five men who are tied to the track. By pulling a lever, you could divert the train's path to an alternative track, which has only one man on it …

If you're gearing up to respond with what you'd do and why, don't bother. It doesn't matter whether you'd pull the lever: it's too late. The five were run over almost fifty years ago, because philosophers couldn't decide what to do. They have been – pun most certainly intended – debated to death.

Formulated by the late Philippa Foot in 1967, the famous “trolley problem” has since been endlessly picked apart and argued over by moral philosophers. It's even been reformulated – apart from “classic”, the trolley problem also comes in “fat man”, “loop”, “transplant” and “hammock” varieties. Yet, in spite of all the fascinating analysis, there still isn't any real consensus on what the right thing to do is. Not only do philosophers disagree over what to do; a significant number of them just aren't sure. In a 2009 survey of mostly professional philosophers, 34.8% of the respondents indicated some degree of uncertainty over the right answer [1].

Philosopher or not, if you're in the habit of being intellectually honest, then there's a good chance you aren't completely certain about all your moral beliefs. Looking to the ethics textbooks doesn't help – you'd be lucky not to come away with more doubts than before. If the philosophical field of ethics is supposed to resolve our moral dilemmas, then on some level it has obviously failed. Debates over moral issues like abortion, animal rights and euthanasia rage on, between opposing parties and also within the minds of individuals. These uncertainties won't go away any time soon. Once we recognize this, the following question naturally arises: what's the best way to act under moral uncertainty?

Ethicists, strangely, have mostly overlooked this question. But in relatively recent years, a small group of philosophers have begun rigorous attempts at addressing the problem. In particular, attempts are being made to adapt probability and expected utility theory to decision-making under moral uncertainty.

The nature of moral uncertainty

Before diving into such theories, it's useful to distinguish moral uncertainty from non-moral uncertainty.

Non-philosophers often get impatient with philosophical thought experiments, because they depict hypothetical situations that don't do justice to the complexities of everyday life. One real-life feature that thought experiments often omit is uncertainty over facts about the situation. The trolley problem, for example, stipulates that five people on the original track will definitely die if you don't pull the lever, and that one person on the alternative track will definitely die if you do. It also stipulates that all six lives at stake are equally valuable. But how often are we so certain about the outcomes of our actions? Perhaps there's a chance that the five people might escape, or that the one person might do the same. And are all six individuals equally virtuous? Are any of them terminally ill? Naturally, such possibilities would impact our attitude towards pulling the lever or not. Such concerns are often brought up by first-time respondents to the problem, and must be clarified before the question can be answered properly.

Lots has been written about moral decision-making under factual uncertainty. Michael Zimmerman, for example, has written an excellent book on how such ignorance impacts morality. The point of most ethical thought experiments, though, is to eliminate precisely this sort of uncertainty. Ethicists are interested in finding out things like whether, once we know all the facts of the situation, and all other things being equal, it's okay to engage in certain actions. If we're still not sure of the rightness or wrongness of such actions, or of underlying moral theories themselves, then we experience moral uncertainty. As the 2009 survey indicates, many professional philosophers still face such fundamental indecision. The trolley problem – especially the fat man variant – is used to test our fundamental moral commitment to deontology or consequentialism. I'm pretty sure I'd never push a fat bystander off a bridge onto a train track in order to save five people, but what if a million people and my mother were at stake? Should I torture an innocent person for one hour if doing so would save the population of China? Even though I'd like to think of myself as pretty committed to human rights, the truth is that I simply don't know.

Moral hedging and its problems

So, what's the best thing to do when we're faced with moral uncertainty? Unless one thinks that anything goes once uncertainty enters the picture, doing nothing by default is not a good strategy. As the trolley case demonstrates, inaction often has major consequences. Failure to act also comes with moral ramifications: Peter Singer famously argued that inaction is clearly immoral in many circumstances, such as refusing to save a child drowning in a shallow pond. It's also not plausible to deliberate until we are completely morally certain – by the time we're done deliberating, it's often too late. Suppose I'm faced with the choice between saving one baby on a quickly sinking raft and saving an elderly couple on a quickly sinking canoe. If I take too long to convince myself of the right decision, all three will drown.

In relatively recent years, some philosophers have proposed a 'moral hedging' strategy that borrows from expected utility theory. Ted Lockhart, professor of philosophy at Michigan Technological University, arguably kicked off the conversation in 2000 with his book Moral Uncertainty and Its Consequences. Lockhart considers the following scenario:

Gary must choose between two alternatives, x and y. If Gary were certain that T1 is the true moral theory, then he would be 70% sure that x would be morally right in that situation and y would be morally wrong and 30% that the opposite is true. If Gary were sure that T2 is the true theory, then he would be 100% certain that y would be morally right for him to do and that x would be morally wrong. However, Gary believes there is a .6 probability that T1 is the true theory and a .4 probability that T2 is the true theory. (p.42)

There are at least two ways that Gary could make his decision. First, Gary might pick the theory he has the most credence in. Following such an approach, Gary should stick to T1, and choose to do x. But Lockhart thinks that this 'my-favourite-theory' approach is mistaken. Instead, Lockhart argues that it is more rational to maximize the probability of being morally right. Following this, the probability that x would be morally right is .42 and the probability that y would be morally right is .58. Under this approach, Gary should choose y.
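
For readers who want to see where these two figures come from, here's a minimal sketch of the arithmetic behind Lockhart's example (the variable names below are mine, not his):

```python
# Gary's credences in the two moral theories, from Lockhart's scenario.
credence_T1 = 0.6
credence_T2 = 0.4

# Probability that each option is morally right, conditional on each theory.
prob_x_right_given_T1, prob_x_right_given_T2 = 0.7, 0.0
prob_y_right_given_T1, prob_y_right_given_T2 = 0.3, 1.0

# Total probability that each option is morally right.
prob_x_right = credence_T1 * prob_x_right_given_T1 + credence_T2 * prob_x_right_given_T2
prob_y_right = credence_T1 * prob_y_right_given_T1 + credence_T2 * prob_y_right_given_T2

print(f"P(x is right) = {prob_x_right:.2f}")  # 0.42
print(f"P(y is right) = {prob_y_right:.2f}")  # 0.58
```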

This seems reasonable so far, but it isn't the end of the story. Consider the following scenario described by Andrew Sepielli (professor of philosophy at the University of Toronto, who has written extensively about moral uncertainty and hedging over the past few years):

Suppose my credence is .51 that, once we tote up all the moral, prudential, and other reasons, it is better to kill animals for food than not, and .49 that it is better not to kill animals for food. But suppose I believe that, if killing animals is better, it is only slightly better; I also believe that, if killing animals is worse, it is substantially worse – tantamount to murder, even. Then it seems … that I have most subjective reason not to kill animals for food. The small gains to be realized if the first hypothesis is right do not compensate for the significant chance that, if you kill animals for food, you are doing something normatively equivalent to murder.

Both Lockhart and Sepielli agree that it isn't enough for us to maximize the probability of being morally right. The value of outcomes under each theory should be factored into our decision-making process as well. We should aim to maximize some kind of 'expected value': for each option, weight the value it would have under each moral possibility by the probability that that possibility is true, then add up the results. Lockhart's specific strategy is to maximize the 'expected degree of moral rightness', but the broad umbrella of strategies that follow this approach can be called 'moral hedging'.
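
To see how this plays out in Sepielli's animal example, here's a minimal sketch of the hedging calculation. The two credences come from his description; the numerical values attached to the two possibilities are placeholders of my own, since Sepielli only says that one outcome would be 'slightly better' and the other 'tantamount to murder'.

```python
# A rough sketch of moral hedging applied to Sepielli's animal example.
# The credences are his; the two 'value' numbers are illustrative placeholders.
credence_killing_better = 0.51   # killing animals for food is slightly better
credence_killing_worse = 0.49    # killing animals for food is much worse

value_if_better = 1      # "only slightly better"
value_if_worse = -100    # "tantamount to murder"

# Expected value of killing animals for food, relative to not doing so.
expected_value = (credence_killing_better * value_if_better
                  + credence_killing_worse * value_if_worse)

print(f"{expected_value:.2f}")  # -48.49
```

On these made-up numbers, the option that is more probably permissible still comes out badly overall, which is exactly the point of Sepielli's example.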

Moral hedging seems like a promising strategy, but it's plagued by some substantial problems. The biggest issue is what Sepielli calls the 'Problem of Intertheoretic Comparisons' (PIC). How are we supposed to compare values across moral theories that disagree with each other? What perspective should we adopt while viewing these 'moral scales' side by side? The idea of intertheoretic comparison is at least intuitively intelligible, but on closer inspection, values from different moral theories seem fundamentally incommensurable. Given that different theories with different values are involved, how could it be otherwise?

Lockhart proposes what he calls the 'Principle of Equity among Moral Theories' (PEMT), which states that maximum and minimum degrees of rightness should be fixed at the same level across theories, at least for decision-making purposes. But Sepielli points out that PEMT seems, amongst other things, arbitrary and ad hoc. Instead, he proposes that we use existing beliefs about the 'cardinal rankings' of values to make the comparison. However, this method is open to its own objections, and also depends heavily on facts about practical psychology, which are themselves messy and have yet to be worked out. Whatever the case, there isn't any consensus on how to solve the problem of intertheoretic comparisons. PIC has serious consequences – if the problem turns out to be insurmountable, moral hedging will be impossible.
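
One way to picture what PEMT is trying to do (this is my own rough reconstruction of the principle's spirit, not Lockhart's formulation) is as a rescaling step: before computing expected values, map each theory's evaluations onto a shared scale whose endpoints are the same for every theory.

```python
def rescale(value, theory_min, theory_max):
    """Map a theory's native evaluation onto a shared 0-to-1 scale, so that each
    theory's minimum degree of rightness sits at 0 and its maximum at 1.
    An illustrative reconstruction of the spirit of PEMT, not Lockhart's own formula.
    """
    return (value - theory_min) / (theory_max - theory_min)

# Two theories scoring options on very different native scales:
print(rescale(7, theory_min=0, theory_max=10))        # theory A's score: 0.7
print(rescale(350, theory_min=-500, theory_max=500))  # theory B's score: 0.85
```

Whether such a rescaling is principled or merely convenient is, of course, just what Sepielli's objection presses on.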

This lack of consensus about intertheoretic comparisons points to another problem for moral hedging, and indeed for dealing with moral uncertainty in general. In addition to being uncertain about morality, we can also be uncertain about the best way to resolve moral uncertainty. Following that, we can be uncertain about the best way to resolve being uncertain about the best way to resolve moral uncertainty … and so on. How should we resolve this seemingly infinite regress of moral uncertainty?

One last and related question is whether, practically speaking, calculated moral hedging is a plausible strategy for the average person. Human beings, or at least most of them, aren't able to pinpoint the precise degree to which they believe that things are true. Perhaps they are uncertain about their own probabilistic beliefs (or perhaps it doesn't even make sense to say something like “I have a .51 credence that killing animals for food is wrong”). Additionally, surely the average person can't be expected to perform complex mathematical calculations every time she's faced with uncertainty. If moral hedging is too onerous, it loses its edge over simply deliberating over the moral theories themselves. Moral hedging must accommodate human limits if it is to be applicable.

Moving ahead with moral uncertainty

It's clear that there's much more work to be done on theories of moral hedging, but the idea seems promising. Hopefully, a good method for intertheoretic value comparison will be developed in time. It's also good that these philosophers have brought the worthwhile question of moral uncertainty to life, and perhaps strong alternatives to moral hedging will subsequently emerge. Maybe philosophers will never agree, and textbooks on moral uncertainty will join the inconclusive ranks of textbooks on normative ethics. But we can hardly call such textbooks useless: they show us the basis (or non-basis) of our initial intuitions, and at least give us the resources to make more considered judgments about our moral decisions. Over the years, humanity has benefitted enormously from the rigorous thinking done by ethicists. In the same way, we stand to benefit from thinking about what to do under moral uncertainty.

It's also clear that theories have to accommodate, and be realistic about, human psychology and our mental capacities to apply such strategies. Any theory that doesn't do this would be missing the point. If these theories aren't practically applicable, then we may as well not have formulated them. The practical circumstances under which we encounter moral uncertainty also differ – in some situations, we're under more psychological stress and time constraints than others. This raises the interesting question of whether optimal strategies might differ for agents under different conditions.

Something else to consider is this: are we in any way required to cope with moral uncertainty through strategies such as moral hedging? Theorists like Lockhart and Sepielli argue for the most 'rational' thing to do under uncertain circumstances, but what if one doesn't care for being rational? Might we still be somehow morally obliged to adopt some strategy? Perhaps there is some kind of moral duty to act rationally, or 'responsibly', under moral uncertainty. It does seem, in a somewhat tautological sense, that we're morally required to be moral. Perhaps this involves being morally required to 'try our best' to be moral, and not to engage in overly 'morally risky' behaviour.

Besides Lockhart and Sepielli, philosophers who have engaged with the issue of moral uncertainty – at least tangentially – include James Hudson (who rejected moral hedging), Graham Oddie, Jacob Ross, Alexander Guerrero, Toby Ord and Nick Bostrom. Hopefully, in time, more philosophers (and non-philosophers too) will join in this fledgling debate. The discussion and its findings will be a tribute to all the victims of ethical thought experiments everywhere – may we someday stop killing hypothetical people with analysis paralysis.



[1] 23.1% “leaned toward” (as opposed to “accepted”) switching, 6.4% were undecided, and 4.8% leaned toward staying.