When is My Choice My Own? A Reflection on the Impact of Persuasion and Big Data

by Robyn Repko Waller

Whether a data-driven nudge diminishes my agency turns on more than just its algorithmic origin.


With the US Presidential Election and other national contests mere weeks away, voter persuasion efforts of all stripes are at a peak. While traditional methods of pressing the flesh (but not too literally these days — COVID and all) and handwritten postcard appeals abound, bespoke data-driven means of reaching voters have surged. And although some platforms have banned political advertising since the Cambridge Analytica scandal, not all have.

The targeted ads aren’t restricted to politics, of course. Our social media feeds are curated to show the best balance and order of posts for us as individuals, with our connections’ posts peppered with well-placed content to pique our clicking interests and keep us scrolling, all to increase platform profits. Meanwhile, your watch has reminded you to stand up. But it’s not all bad, you say. Sure, I may have wasted a regrettable amount of time checking out house renovation reveals. But I also found those cute burgundy Oxford shoes, and that algorithm-promoted post by a friend on algorithmic bias was deliciously, if ironically, useful.
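
To make that kind of curation concrete, here is a deliberately toy sketch in Python of engagement-driven ranking. Every field, weight, and number in it is invented for illustration; real feed-ranking systems are vastly more elaborate. The point is only that the ordering optimizes predicted engagement (and paid placement), not anything the reader has chosen or endorsed.

```python
# A toy, invented sketch of engagement-driven feed ranking. Real platforms use far
# richer models; the point is only that the ordering optimizes predicted engagement
# (and paid placement), not the reader's own goals.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    is_promoted: bool
    predicted_click_prob: float  # output of some hypothetical engagement model
    predicted_watch_time: float  # seconds the model expects you to linger


def feed_score(post: Post) -> float:
    # Weight predicted clicks and dwell time; give paid placements a boost.
    score = 0.7 * post.predicted_click_prob + 0.3 * (post.predicted_watch_time / 60)
    return score * (1.5 if post.is_promoted else 1.0)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Whatever you were hoping to see, you see it in engagement order.
    return sorted(posts, key=feed_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("friend_on_algorithmic_bias", False, 0.10, 20),
        Post("burgundy_oxford_retailer", True, 0.08, 15),
        Post("home_reno_reveals", False, 0.30, 90),
    ])
    for post in feed:
        print(post.author, round(feed_score(post), 3))
```

Run it and the home renovation reveals come out on top, the promoted shoes next, the friend’s post last: not because you asked for that order, but because that order is what the (hypothetical) engagement model predicts will keep you there.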

So when does influence undermine my choice? What makes some choices my own and other choices problematically of outside origination? 

Notice I do mean choice here. In these engineered choice contexts, the consumer has the option to not buy or click or believe the content. It’s just a tap on the shoulder — you are ever so slightly more likely to engage with the content. Your freedom of choice is retained. No one made me buy those amazing shoes, after all. But then no one made me click that spiraling series of home makeover links either. I’ll take responsibility for both, despite the nudge. But the placement of the latter feels more intrusive than the former. And certain targeted material, like that age-related fat-burning ad for women and that train-yourself-to-become-a-not-terrible-parent ad, seems downright creepy. I didn’t click on those.

But if a person does click to watch that conspiracy video, who owns that choice? After all, it’s not like you’d just bump into a nichey conspiracy video in the street like you could a nice pair of shoes. Worse yet, what if that conspiracy video and the rabbit hole of bizarre videos that follow dramatically change your personality?

In many ways this is a familiar question that admits of familiar examination. Social media machine learning algorithms — or perhaps the data scientists who engineer them — seek to exert intentional influence over consumers. In all cases of intentional influence, one person or entity wants another person, the agent, to do something (or be some way) but does not want to overtly force the agent to do it (or be that way). Even before our machine-learning-saturated times, good old-fashioned intentional influence was everywhere, from the politician attempting to convince you to vote for her in the town hall debate to the nudge that a young kid gets from a cable TV toy ad to beg his parents to buy “the coolest toy ever” (hence the Ren and Stimpy Log ad joke). Some of these motivating influences are with your blessing — you attend or watch the town hall event, open to being convinced. Others are more of a gray area.

Further, all intentional influence can go with or against the grain. That is, the agent may already prefer to act the way that she is being influenced to act, or she may not already so prefer. Moreover, some people are prone to adaptive preference formation — once the influence to act in that way is imposed, they embrace the (sometimes only) option: For instance, finding oneself subject to a national COVID-19 lockdown or mandatory quarantine, perhaps one comes to find the new situation satisfactory. (Not the experience of everyone, I’m sure.)

Issues of consent and prior preference matter for your ownership of choices that result from a nudge. If I knowingly and willingly watch hours of interviews and debates with all of the primary candidates for my party before voting for one candidate, I can own my choice to vote as I do. If I intend to find a new job, I may sign up for ZipRecruiter or seek out algorithm-sorted lists of narrowly defined employment opportunities in accordance with my preferred job parameters. When I choose to apply, I am nudged, but only in a direction I was inclined toward in the first place. I don’t feel undermined by the algorithms but aided in my own choice.

In these cases I sought out the influencing content and/or had an inclination toward the selected option in the first place. And, if we remind ourselves that we had options all along — nothing in the presentation of the influencing information dictated that I must choose as I did — all of this nudging can be benign or even conducive to my own agency. This is, in part, the Thaler and Sunstein (2009) view of libertarian paternalism — the influencing intervention did not force me to choose as I did, and I ended up better off, as I myself would judge. Maybe that’s why, in part, the get-in-shape and turn-your-awful-parenting-around ads seem inappropriate. I neither welcome nor endorse the implications of those targeted ads.

But that can’t be the whole story of why some algorithmic nudges are agency-undermining. Algorithmic nudges have new and heightened troublesome features compared with good old-fashioned nudges. For instance, traditional nudges usually have a “face” to the influencer: a parent, a teacher, a peer, a politician, a salesperson. Some cases are a bit trickier or institutional in nature, such as paid spokespersons or the government.

But when it comes to algorithmic nudges, the nudger is ambiguous. One might propose it isn’t: The company or organization that contracted the data scientist to code the algorithm is the influencer. The company is the entity that seeks increased revenue from the platforms. But those folks don’t directly influence. Perhaps, then, it’s the data scientist who engineers the algorithms?

What about the algorithms themselves? Can algorithms intentionally influence in and of themselves? Cases of flash crashes on Wall Street, caused by systems of interconnected trading algorithms influencing each other, suggest it’s plausible. The data scientists don’t aim to cause flash crashes with the trading algorithms, but crashes occur as a result of the algorithms’ influence. And notice that social media algorithms themselves are the ones doing the influencing of the human consumers, ranking content with you specifically “in mind.”
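
To see how influence can run away from anyone’s intentions, consider a toy simulation (invented numbers, not a model of any real trading system) in which two momentum-chasing strategies react to each other’s selling and turn a tiny dip into a collapse:

```python
# A toy simulation, invented for illustration, of two momentum-chasing trading
# rules amplifying each other's selling. No single programmer intends the crash;
# it emerges from the feedback loop between the two rules.
def run_toy_flash_crash(start_price: float = 100.0, steps: int = 15) -> list[float]:
    price = start_price
    last_price = start_price
    prices = [price]
    for _ in range(steps):
        move = price - last_price
        # Each rule sells harder the faster the price is already falling.
        rule_a_sell = max(0.0, -move) * 8.0 + 0.5  # small baseline selling
        rule_b_sell = max(0.0, -move) * 6.0
        last_price = price
        price = max(0.0, price - 0.1 * (rule_a_sell + rule_b_sell))
        prices.append(round(price, 2))
    return prices


if __name__ == "__main__":
    # Starts as a barely visible dip, ends far below where either rule "wanted" it.
    print(run_toy_flash_crash())
```

Each rule behaves “sensibly” on its own terms, yet the crash belongs to no one in particular, which is exactly the puzzle about who the influencer is.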

If the algorithms themselves, run amok at times, are the intentional influencers of human choices, should this undermine the claim that those choices were the agent’s own? I want to suggest here that, yes, this can absolutely be a threat to a choice being mine. The problem of machine learning algorithms producing outcomes that their owners or programmers never intended is even more critical than I have so far let on: For instance, a great deal of attention recently has been paid to issues of bias within machine learning and artificial intelligence. Machines can replicate our own societal biases, a problem termed ‘algorithmic injustice.’ It is well documented that machine-learning hiring tools run the risk of filtering job candidates according to their similarity to previous successful applicants, hence replicating earlier bias in the hiring process.
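
A bare-bones illustration, with invented features and data, shows how “score by similarity to past successful applicants” imports whatever bias shaped those past decisions:

```python
# A minimal, hypothetical illustration of how screening by "similarity to past
# successful applicants" reproduces whatever bias shaped those past decisions.
# The features and data are invented for illustration only.

PAST_HIRES = [
    # (attended_school_X, years_experience) for previously hired candidates,
    # where school X attendance reflects an old, biased pipeline rather than merit.
    (1, 5), (1, 3), (1, 6), (1, 4),
]


def similarity_score(candidate: tuple[int, int]) -> float:
    # Average similarity to past hires: school match plus closeness in experience.
    total = 0.0
    for school, years in PAST_HIRES:
        school_match = 1.0 if candidate[0] == school else 0.0
        exp_closeness = 1.0 / (1.0 + abs(candidate[1] - years))
        total += 0.7 * school_match + 0.3 * exp_closeness
    return total / len(PAST_HIRES)


if __name__ == "__main__":
    insider = (1, 4)   # attended school X
    outsider = (0, 8)  # more experience, "wrong" school
    print("insider :", round(similarity_score(insider), 2))
    print("outsider:", round(similarity_score(outsider), 2))
    # The more experienced outsider scores lower purely because past hires
    # happened to come from school X: the old bias now lives in the filter.
```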

This machine bias extends to the domain of individual consumer decisions. For example, Facebook recently settled a lawsuit over advertisers’ ability to exclude certain demographics of users from viewing content, such as excluding women from seeing certain job ads. The company’s policies were modified accordingly. Still, even with more intentionally inclusive targeting, it has been reported that the advertising algorithms led to cases of gender-stereotyped and race-stereotyped distribution of ads.

Moreover, the use of machine learning algorithms without a proper understanding of them can introduce, however unintentionally on the part of the programmer, categorically new biases into machine-learning models and the artificial intelligences they produce. An apparent reason for this is the inherent difficulty we, as people, have in recognizing and understanding what such biases might be. A major driver behind the employment of artificial intelligence is its ability to find structure within data oftentimes so vast, and so unstructured, that we ourselves are not able to do so (e.g., IBM Watson Health). So the problem of recognizing any bias being introduced becomes exceedingly difficult.
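
Part of the trouble is that an audit can only look along dimensions someone thought to name. A hypothetical sketch of such an audit (toy data, invented groups) makes the limitation visible: it compares selection rates across groups we already suspect, and says nothing about a genuinely new bias hiding in structure only the model “sees.”

```python
# A hypothetical fairness audit: compare how often a model's output (here, whether
# an ad was shown) favors each known group. Toy data, invented for illustration.
# The limitation the essay points to: an audit like this only checks dimensions we
# already thought to name; a categorically new bias never appears in this table.
from collections import defaultdict


def selection_rates(shown: list[int], groups: list[str]) -> dict[str, float]:
    counts, totals = defaultdict(int), defaultdict(int)
    for was_shown, group in zip(shown, groups):
        totals[group] += 1
        counts[group] += was_shown
    return {group: counts[group] / totals[group] for group in totals}


if __name__ == "__main__":
    # 1 = the user was shown the ad, 0 = not shown.
    shown = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(selection_rates(shown, groups))  # {'A': 0.75, 'B': 0.25}
```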

Worse still, the artificial intelligence itself is of very little help to us here, as it is unable to recognize and delineate its own bias. Imagine if such undetected new biases were operative in the algorithms determining social media ad and video distribution. Real individuals’ choices would then be inappropriately targeted and biased in ways that both take control from their hands and, worse yet, grow societal bias, all in ways that were neither originally intended nor known. How could these resultant choices flow from the consumer?

What we should take from this, then, is that some Big Data nudges may indeed fall under the umbrella of libertarian paternalism, provided the agent has alternatives to her choice and seeks similar ends. In some cases, the agent’s decision may indeed be ‘her own.’ However, not all Big Data nudges are so innocuous. To the extent that the agent has not consented to be nudged or does not endorse the outcome, the resultant choice is less her own. To the extent that the algorithm is infested with bias, regardless of the creator’s intention, the factors influencing the agent’s choice are beyond her and others’ knowledge, and so the choice cannot be fully hers.

Like the Big Data driving our choices, owning your Big Data actions is a high-dimensional affair.