Saintly Simulation

by Evan Selinger

My colleague Thomas Seager and I recently co-wrote “Digital Jiminy Crickets,” an article that proposed a provocative thought experiment: imagine an app that could give you perfect moral advice on demand. Should you use it? Or would outsourcing morality diminish our humanity? Our think piece merely raised the question, leaving the answer up to the reader. However, Noûs, a prestigious philosophy journal, has published an article by Robert J. Howell that advances a strong position on the topic, “Google Morals, Virtue, and the Asymmetry of Deference.” To save you the trouble of getting a Ph.D. to read this fantastic but highly technical piece, I’ll summarize the main points here.

It isn’t easy to be a good person. When facing a genuine moral dilemma, it can be hard to know how to proceed. One friend tells us that the right thing to do is stay, while another tells us to go. Both sides offer compelling reasons—perhaps reasons guided by conflicting but internally consistent moral theories, like utilitarianism and deontology. Overwhelmed by the seeming plausibility of each side, we end up unsure how to solve the riddle of The Clash.

Now, Howell isn’t a cyber-utopian, and he certainly doesn’t claim technology will solve this problem any time soon, if ever. Nor does he say much about the debates over moral realism; based on this article alone, we don’t know whether he believes all moral dilemmas can be solved according to objective criteria. To determine whether, as a matter of principle, deferring to a morally wise computer would enhance or diminish our humanity, he asks us to imagine an app called Google Morals: “When faced with a moral quandary or deep ethical question we can type a query and the answer comes forthwith. Next time I am weighing the value of a tasty steak against the disvalue of animal suffering, I’ll know what to do. Never again will I be paralyzed by the prospect of pushing that fat man onto the trolley tracks to prevent five innocents from being killed. I’ll just Google it.”

Let’s imagine Google Morals is infallible, always truthful, and 100% hacker-proof. The government can’t mess with it to brainwash you. Friends can’t tamper with it to pull a prank. Rivals can’t adjust it to gain a competitive advantage. Advertisers can’t tweak it to lull you into buying their products. Under these conditions, Google Morals is more trustworthy than the best rabbi or priest. Even so, Howell contends, depending on it is a bad idea.
