You, Robot

by Misha Lepetic

“We are at home with situations of legal ambiguity.
And we create flexibility, in situations where it is required.”
~Neuromancer

Consider a few hastily conceived scenarios from the near future. An android charged with performing elder care must deal with an uncooperative patient. A driverless car carrying passengers must decide whether to stop suddenly, causing a pile-up behind it. A robot responding to a collapsed building must choose between two people to save. The question that unifies these scenarios is not just how to make the correct decision but, more fundamentally, how to treat the entities involved. Is it possible for a machine to be treated as an ethical subject – and, by extension, for an artificial entity to possess “robot rights”?

Of course, “robot rights” is a crude phrase that shoots us straight into a brambly thicket of anthropomorphisms; let's not quite go there yet. Perhaps it's more accurate to ask if a machine – something that people have designed, manufactured and deployed into the world – can have some sort of moral or ethical standing, whether as an agent or as a recipient of some action. What's really at stake here is twofold: whether a machine can act sufficiently independently in the world to be held responsible for its actions and, conversely, whether a machine has any sort of standing such that, were it harmed in any way, that standing would serve to protect its ongoing place and function in society.

You could, of course, dismiss all this as a bunch of nonsense: machines are made by us exclusively for our use, and anything a robot or computer or AI does or does not do is the responsibility of its human owners. You don't sue the scalpel; you sue the surgeon. You don't take a database to court, but the corporation that built it – and in any case you are probably not concerned with the database itself, but with the consequence of how it was used, or maintained, or what have you. As far as the technology goes, if it's behaving badly you shut it off, wipe the drive, or throw it in the garbage, and that's the end of the story.

This is not an unreasonable point of departure, and it is rooted in what's known as the instrumentalist view of technology. For an instrumentalist, technology is only ever an extension of ourselves and does not possess any autonomy. But how do you control for the sort of complexity we are now designing into our machines? Our instrumentalist proclivities whisper to us that there must be an elegant way of doing so. So let's begin with one famous attempt: Isaac Asimov's Three Laws of Robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Some time later, Asimov added a fourth, which was intended to precede all the others, so it's really the ‘Zeroth' Law:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
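
To see how mechanical the Laws are meant to be, it helps to imagine flattening their conflict clauses into a fixed order of checks – a short permission filter, in effect. The sketch below is purely illustrative, and every name in it is my own invention; its tidy boolean judgments are, as we'll see in a moment, exactly what no real machine could supply with confidence.

```python
from dataclasses import dataclass

# Illustrative only: Asimov's Laws flattened into a priority-ordered check.
# The boolean judgments below are stand-ins for determinations a real robot
# could rarely make with certainty.

@dataclass
class Assessment:
    harms_humanity: bool        # Zeroth Law
    harms_a_human: bool         # First Law (injury, or harm through inaction)
    disobeys_an_order: bool     # Second Law
    endangers_the_robot: bool   # Third Law

def permitted(a: Assessment) -> bool:
    """Check a candidate action against the Laws, highest priority first."""
    if a.harms_humanity:        # Zeroth Law overrides everything
        return False
    if a.harms_a_human:         # First Law
        return False
    if a.disobeys_an_order:     # Second Law, already subordinate to the First
        return False
    if a.endangers_the_robot:   # Third Law, subordinate to the First and Second
        return False
    return True

# The hard part is not the ordering; it is filling in the booleans.
print(permitted(Assessment(False, False, False, False)))  # True
```

The ordering is the easy part; everything interesting hides inside those four booleans.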

The Laws, which made their first appearance in a 1942 story that is, fittingly enough, set in 2015, are what is known as a deontology: an ethics expressed as a set of axioms. Basically, deontology provides the ethical ground for all further belief and action; the Ten Commandments are a classic example. But the difficulties with deontology become apparent when one examines the assumptions inherent in each axiom. For example, the First Commandment states, “Thou shalt have no other gods before me”. Clearly, Yahweh is not saying that there are no other gods, but rather that any other gods must take a back seat to him, at least as far as the Israelites are concerned. The corollary is that non-Israelites can have whatever gods they like. Nevertheless, most adherents of Judeo-Christian theology would be loath to admit the possibility of polytheism. It takes a lot of effort to keep all those other gods at bay, especially if you're not an Israelite – it's much easier if there is only one. But you can't make that claim without fundamentally reinterpreting that crucial first axiom.

Asimov's axioms can be similarly poked and prodded. Most obviously, we have the presumption of perfect knowledge. How would a robot (or AI, or whatever) know whether an action was harmful or not? A human might scheme to split a harmful action into steps that are, by themselves, harmless, distribute them across several artificial entities, and only later combine the results to harmful effect. Sometimes the knowledge is impossible for humans and robots alike: in the case of a stock-trading AI, it is uncertain whether any given trade harms another human being. If the AI makes a profitable trade, does the other side lose money, and if so, does this constitute harm? How can the machine know if the entity on the other side is in fact losing money? Would it matter if that other entity were another machine and not a human? But don't machines ultimately represent humans in any case?

Better yet, consider a real-life example:

A commercial toy robot called Nao was programmed to remind people to take medicine.

“On the face of it, this sounds simple,” says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. “But even in this kind of limited task, there are nontrivial ethics questions involved.” For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.

In this case, the Hippocratic ‘do no harm' has to be balanced against a more utilitarian ‘do some good'. Assuming it could, does the robot force the patient to take the medicine? Wouldn't that constitute potential harm (i.e., the possibility that the robot hurts the patient in the act)? Would that harm be greater than the harm of not taking the medicine, just this once? What about tomorrow? If we are designing machines to interact with us in such profound and nuanced ways, those machines are already ethical subjects. Our recognition of them as such is already playing catch-up with the facts on the ground.
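
The Andersons' actual system learned its trade-offs from ethicists' worked examples; the toy sketch below is mine rather than theirs, with invented duty names, weights and a made-up threshold, and it exists only to show what ‘balancing' harm, benefit and autonomy looks like once it is reduced to arithmetic.

```python
# A toy balance of duties for the medication scenario. Every constant here --
# the weights, the threshold, even the 0-to-1 scales -- is an invented
# ethical judgment smuggled in as a number.

DUTY_WEIGHTS = {"avoid_harm": 3.0, "do_good": 2.0, "respect_autonomy": 1.5}

def should_insist(harm_if_skipped: float,
                  benefit_if_taken: float,
                  autonomy_cost_of_insisting: float) -> bool:
    """Decide whether to press the patient, weighing duties scored from 0 to 1."""
    score = (DUTY_WEIGHTS["avoid_harm"] * harm_if_skipped
             + DUTY_WEIGHTS["do_good"] * benefit_if_taken
             - DUTY_WEIGHTS["respect_autonomy"] * autonomy_cost_of_insisting)
    return score > 2.0

# One skipped dose of a routine prescription: low harm, modest benefit,
# high cost to the patient's autonomy -- the sketch defers to the patient.
print(should_insist(harm_if_skipped=0.1, benefit_if_taken=0.3,
                    autonomy_cost_of_insisting=0.8))  # False
```

Every number in that sketch is an ethical commitment wearing a lab coat – which is rather the point.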

As implied by the stock-trading example, another deontological shortcoming lies in the definitions themselves: what's a robot, and what's a human? As robots become more human-like, and humans become more engineered, the line will blur. And in many cases, a robot will have to make a snap judgment. What's binary for “quo vadis”, and what do you do with a lying human? Because humans lie for the strangest reasons.

Finally, the kind of world that Asimov's Laws presuppose is one where robots run around among humans. It's a very specific sort of embodiment. In fact, it is a sort of Slavery 2.0, in which robots clearly function for the benefit and in the service of humanity. The Laws are meant to facilitate a very material cohabitation, whereas the kind of broadly distributed, virtually placeless machine intelligence that we are currently developing by leveraging the Internet is much more slippery, and resembles the AI of Spike Jonze's ‘Her'. How do you tell things apart in such a dematerialized world?

The final nail in Asimov's deontological coffin is the assumption of ‘hard-wiring'. That is, Asimov claims that the Laws would be a non-negotiable part of the basic architecture of all robots. But it is wiser to prepare for the exact opposite: the idea that any machine of sufficient intelligence will be able to reprogram itself. The reasons why are pretty much irrelevant – it doesn't have to be some variant of SkyNet suddenly deciding to destroy humanity. It may just sit there and not do anything. It may disappear, as the AIs did in ‘Her'. Or, as in William Gibson's Neuromancer, it may just want to become more of itself, and decide what to do with that later on. Gibson never really tells us why the two AIs – the true protagonists of the novel – even wanted to do what they did.

*

This last thought indicates a fundamental marker in the machine ethics debate. A real distinction is emerging here, and that is the notion of inscrutability. In order for the stance of instrumentality to hold up, you need a fairly straight line of causality. I saw this guy on the beach, I pulled the trigger, and now the guy is dead. It may be perplexing, I may not be sure why I pulled the trigger at that moment, but the chain of events is clear, and there is a system in place to handle it, however problematic. On the other hand, how or why a machine comes to a conclusion or engages in a course of action may be beyond our capacity to determine. I know this sounds a bit odd, since after all we built the things. But a record of a machine's internal decision-making would have to be a deliberate part of its architecture, and this is expensive and perhaps not aligned with the agenda of its designers: for example, Diebold made both ATMs and voting machines. Only the former provided receipts, making it fairly easy, in theory, to steal an election.
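
To make that concrete: a receipt only exists if someone decides to build one. Here is a minimal sketch of what a deliberately designed audit trail might look like – the function names are invented, and decide() merely stands in for whatever opaque logic the machine actually runs.

```python
import json
import time

# Illustrative sketch: the decision record has to be designed in on purpose.
# decide() stands in for the machine's actual, possibly inscrutable, logic.

AUDIT_LOG = []

def decide(inputs: dict) -> str:
    """Placeholder for the machine's internal decision process."""
    return "approve" if inputs.get("score", 0) > 0.5 else "reject"

def decide_with_receipt(inputs: dict) -> str:
    """Wrap the decision so every call leaves a reviewable record behind."""
    outcome = decide(inputs)
    AUDIT_LOG.append({"timestamp": time.time(), "inputs": inputs, "outcome": outcome})
    return outcome

decide_with_receipt({"score": 0.7})
print(json.dumps(AUDIT_LOG, indent=2))  # the receipt the ATM prints and the voting machine didn't
```

That extra bookkeeping is precisely the part that costs money and invites scrutiny, which is why it so often goes unbuilt.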

If Congress is willing to condone digitally supervised elections without paper trails, imagine how far away we are from the possibility of regulating the Wild West of machine intelligence. And in fact AIs are being designed to produce results without any regard for how they get to a particular conclusion. One such deliberately opaque AI is Rita, mentioned in a previous essay. Rita's remit is to deliver state-of-the-art video compression technology, but how it arrives at its conclusions is immaterial to the fact that it manages to get there. In the comments to that piece, a friend added that “it is a regular occurrence here at Google where we try to figure out what our machine learning systems are doing and why. We provide them input and study the outputs, but the internals are now an inscrutable black box. Hard to tell if that's a sign of the future or an intermediate point along the way.”

Nevertheless, we can try to hold on to the instrumentalist posture and maintain that a machine's black-box nature still does not merit the treatment accorded to an ethical subject; that it is still the results or consequences that count, and that the owners of the machine retain ultimate responsibility for it, whether or not they understand it. Well, who are the owners, then?

Of course, ethics truly manifests itself in society via the law. And the law is a generally reactive entity. In the Anglo-American case law tradition, laws, codes and statutes are passed or modified (and less often, repealed) only after bad things happen, and usually only in response to those specific bad things. More importantly for the present discussion, recent history shows that the law (or to be more precise, the people who draft, pass and enforce it) has not been nearly as eager to punish the actions of collectives and institutions as it has been to pursue individuals. Exhibit A in this regard is the number of banks found guilty of vast criminality following the 2008 financial crisis and, by corollary, the number of bankers thrown in jail for same. Part of the reason for this is the way that the law already treats non-human entities. I am reminded of Mitt Romney on the Presidential campaign trail a few years ago, benignly musing that “corporations are people, my friend”.

Corporate personhood is a complex topic, but at its most essential it is a great way to offload risk. Sometimes this makes sense – entrepreneurs can try new ideas and go bankrupt without losing their homes and possessions. Other times, as with the Citizens United decision, the results can be grotesque and impactful in equal measure. But we ought to look to the legal history of corporate personhood as a possible test case for how machines may become ethical subjects in the eyes of the law. Not only that, but corporations will likely be the owners of these ethical subjects – from a legal point of view, they will look to craft the legal representation of machines as much to their advantage as possible. Not to be too cynical about it, but I would imagine this would involve minimal liability and maximum profit. This is something I have not yet seen discussed in machine ethics circles, where the concern seems to be more about the instantiation of ethics within the machines themselves, or in highly localized human-machine interactions. Nevertheless, the transformation of the ethical machine-subject into the legislated machine-subject – put differently, machines as subjects of a legislative gaze – will be of incredibly far-reaching consequence. It will all be in the fine print, and I daresay deliberately difficult to parse. When that day comes, I will be sure to hire an AI to help me make sense of it all.