Are we being manipulated by artificially intelligent software agents?

by Michael Klenk

Someone else gets more quality time with your spouse, your kids, and your friends than you do. Like most people, you probably get just about an hour of that quality time each day, while your new rivals are taking a whopping 2 hours and 15 minutes. But save your jealousy. Your rivals are tremendously charming, and you have probably fallen for them as well.

I am talking about intelligent software agents, a fancy name for something everyone is familiar with: the algorithms that curate your Facebook newsfeed, that recommend the next Netflix film to watch, and that complete your search query on Google or Bing.

Your relationships aren’t any of my business. But I want to warn you. I am concerned that you, together with the other approximately 3 billion social media users, are being manipulated by intelligent software agents online.

Here’s how. The intelligent software agents that you interact with online are ‘intelligent agents’ in the sense that they try to predict your behaviour taking into account what you did in your online past (e.g. what kind of movies you usually watch), and then they structure your options for online behaviour. For example, they offer you a selection of movies to watch next.
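To make that pattern concrete, here is a minimal, hypothetical sketch of it: a recommender scores candidate films by a crude prediction of how likely you are to watch them, based only on your viewing history, and then offers you the top-ranked options. The function names and the scoring rule are my own illustration, not any platform’s actual system.

```python
from collections import Counter

def recommend(viewing_history, candidates, top_n=3):
    """Rank candidate films by a crude prediction of watch probability.

    viewing_history: genres the user watched, e.g. ["documentary", "thriller"]
    candidates: (title, genre) pairs available to offer next
    """
    # Estimate how often the user picked each genre in the past.
    genre_counts = Counter(viewing_history)
    total = sum(genre_counts.values()) or 1

    # Score each candidate by the past frequency of its genre:
    # a stand-in for "predicted engagement", nothing more.
    scored = [(title, genre_counts[genre] / total) for title, genre in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)

    # Structure the user's options: only the top-ranked films are offered.
    return [title for title, _ in scored[:top_n]]

history = ["documentary", "documentary", "thriller"]
options = [("Blue Planet", "documentary"), ("Heat", "thriller"), ("Airplane!", "comedy")]
print(recommend(history, options))  # documentaries rank first
```

Notice that nothing in this sketch represents why you watched what you watched; it only counts that you did.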

However, they do not care much about your reasons for action. How could they? They analyse and learn from your past behaviour, and mere behaviour does not reveal reasons. So they likely do not understand what your reasons are and, consequently, cannot care about them.

Instead, they are concerned with maximising engagement, a specific type of behaviour. Intelligent software agents want you to keep interacting with them: to watch another movie, to read another news item, to check another status update. The increase in the time we spend online, especially on social media, suggests that they are getting quite good at this.

Of course, it is questionable whether algorithms ‘want’ or ‘aim for’ anything in the sense in which humans want or aim for some things. But describing the behaviour of algorithms in such terms is explanatorily useful.

An intelligent software agent behaves like an agent because the algorithm is engaged in a game with you. When it chooses an action (e.g. offering either a documentary, a thriller, or a comedy to watch next) it does so based on what it believes that you will do in turn (e.g. leave the site, or instead click the ‘play now’ button to watch another documentary). The game involves (virtual) payoffs for the algorithm that increase as you remain engaged (e.g. as you click on a link). So, there is a clear sense in which the algorithm ‘aims to’ keep you engaged, and so we can think of it as an agent.
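Viewed this way, the interaction resembles a simple bandit problem: the agent repeatedly picks an action, observes whether you stayed engaged, and updates its estimates so that actions with higher observed payoff get chosen more often. The sketch below is a toy illustration of that payoff logic under my own assumptions; it is not a description of any real recommender.

```python
import random

class EngagementAgent:
    """Toy epsilon-greedy agent whose only payoff is 'the user clicked'."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions                 # e.g. ["documentary", "thriller", "comedy"]
        self.epsilon = epsilon                 # how often it tries a random option instead
        self.clicks = {a: 0 for a in actions}  # observed payoff per action
        self.shown = {a: 0 for a in actions}   # how often each action was offered

    def choose(self):
        # Mostly exploit the action with the best click rate so far.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.clicks[a] / self.shown[a] if self.shown[a] else 0.0)

    def observe(self, action, clicked):
        # The only thing that counts is engagement; reasons never enter the update.
        self.shown[action] += 1
        self.clicks[action] += 1 if clicked else 0

agent = EngagementAgent(["documentary", "thriller", "comedy"])
offer = agent.choose()
agent.observe(offer, clicked=True)  # the user stayed engaged, so that action now looks better
```

The design point is the payoff function itself: the update rule never asks whether the click was good for the user, because engagement is the entire payoff.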

Here’s why that is a problem. In the ‘engagement game’ we play with the algorithm, the algorithm cannot be finally concerned with whether you have reasons for performing some behaviour. It may understand that you would probably like to watch another documentary, and thus it will offer that option to you, but it neither understands nor cares whether you have reason to watch another documentary. Indeed, it may be better for you to go to bed rather than watch another film, but the fact that you would prefer to keep watching is what makes the algorithm offer the film to you.

Hence, in your relationship with the algorithm, your reasons for doing or believing anything hardly matter. It is as if I were telling you: ‘Get a new haircut, but I don’t care whether you should.’ Utterances like these would surely indicate that we are in a problematic relationship.

You probably find that behaviour problematic and inappropriate because it is manipulative. When you try to make someone else do something and you do not even try to show that person how she has reasons to do as you want, then that just is manipulation.

Manipulation is defective interaction because it falls short of an important element of proper interaction: intending to show your interlocutor what their reasons are. In a proper interaction, when, for example, I want you to do something, I should at least be concerned with whether you have (at least from your point of view) good reasons for doing so and try to make you see these reasons, too.

Moreover, I should be concerned about your reasons not only instrumentally (for example, because appealing to your reasons might make you do what I want), but I should be finally concerned with your reasons: I should be concerned about your reasons because that is required for respecting you as an autonomous, rational being that makes its own decisions. If I fail to do so, then I am manipulating you.

The respect we owe to each other as rational beings illuminates why manipulation is potentially problematic. Interacting with others in ways intended to make them see their reasons helps them lead a good life. It makes it more likely that they come to their own conclusions about what to do and think, which many philosophers regard as valuable for its own sake. And even if autonomy is not valuable for its own sake, it is plausible that it is instrumentally good for most other things that we as rational beings consider valuable in our lives. Manipulating someone is, therefore, not an ideal form of interaction.

Given that intelligent software agents do not represent our reasons for action, nor aim to reveal them to us in interaction, it follows that they are manipulators. A significant part of our interactions, 2 hours and 15 minutes on average, may thus turn out to be of a problematic kind.

Of course, that does not mean that intelligent software agents (or their creators, who might use them as mere tools) are evil. Manipulation isn’t always morally wrong. It depends, for example, on whether we are dealing with someone that we ought to treat as a rational being.

For example, in the case of very young children, there might not be the need (or the possibility) to treat them as rational beings and, thus, it might be permissible to be manipulative sometimes.

Moreover, manipulative actions may produce positive effects, and that may play a role in the moral assessment of manipulation. Some intelligent software agents already know a lot about our emotional states, and they might offer us options that make us happier.

For example, it is possible to reliably infer personality traits such as extroversion from a user’s Facebook activity. Insofar as extroverts like to see pictures of happy cats (an example I just made up), and insofar as that has led to more engagement in the past, the intelligent software agent will likely increase happiness.

However, not even hard-nosed hedonists would claim that identical outcomes brought about through manipulative and non-manipulative influences are equally good. At the very least, revealing someone’s reasons helps that person become a better decision-maker, which might lead to benefits in the future. So, we have good reason to be concerned about manipulative relationships, no matter what we believe about well-being.

So, we might routinely be engaged in manipulative relationships online, which might not be good for us. At the very least, the threat of manipulation should give us pause and prompt us to reconsider our relationships with intelligent software agents. That doesn’t mean ending them: That would be immature. We have to acknowledge that it was often fun, helpful, even instructive to deal with them. Indeed, you might learn quite surprising things about your preferences after using Netflix for a while, and many praise, for example, Spotify’s music recommendations.

Nonetheless, the fact that we are having a good time obviously doesn’t remove the threat of manipulation. We should give some thought to the conditions under which we might want to allow manipulation to occur, and then we can decide whether we want to continue in the same way going forward. What exactly we should do about our relationship with intelligent software agents is the topic of another post. For now, I want to emphasise two points about this result.

First, we need to pay much more attention to the relationship between intelligent software agents and us. Much of the research focuses on human relationships online. For example, online media might change how we behave toward each other, and it might be that we are more prone to deception online than offline.

Studies of user-to-user behaviour online are important, of course, but we are missing big parts of the picture. Quantitatively, most interactions online are between intelligent software agents and humans, not between humans. And while the few human-to-human interactions may have had greater qualitative significance in the past (e.g. a drug deal on the dark web), the future is bound to change that, given the ever-increasing capabilities of intelligent software agents.

Second, we need to start thinking more about our relationships with machines: We need to consider what good relationships would look like (which also includes how we ought to treat them), and prepare ourselves to deal with bad kinds of relationships. We need to enable ourselves to deal, technologically and psychologically, with potentially manipulative behaviours. Manipulative actions need not result in manipulated actions. So, even if there are fundamental limits to how far intelligent software agents can attend to our reasons, we need not come away manipulated from these relationships.

As indicated above, the relationships we have with intelligent software agents are quite powerful, in the sense that they have real effects on our lives. We spend more and more time online, which may be credited to the behaviour of intelligent software agents. As they accumulate knowledge about us, they should have an easier time ensuring that we behave as they desire. If anything, the ‘rivals’ vying for the attention of your spouse, kids, and friends (and yours, too!) are getting more efficient, more charming, and perhaps more manipulative. We should get cracking and reflect on the influences we want in our lives.

Until we have figured it out, it certainly will do no harm to turn back to more real-world interaction with your family and friends. It’s probably healthy and rewarding; and even if it isn’t, at least you might have a better idea of what they want to convince you of.