Some Are Born To Sweet Delight

by Misha Lepetic

“Except for a wig of algorithms, and tears and automation.”
~Noah Raford,
Silicon Howl

Last month I attempted to set up two conflicting frames. On the one hand, there is the advance of technology in its myriad forms, e.g. social media, artificial intelligence, robotics. This may seem like an arbitrary selection. For example, why exclude fields of medicine, or energy production, or infrastructure? Of course, all technologies are intrinsically social, especially given the complexities required to design, develop, disseminate and maintain them on a global scale. But my concern here is with those technologies that are explicitly social in nature: those inventions, whether hardware or software, that intervene in our lives to enable or enhance communications or experiences, or that provide services along such lines.

On the other hand, these technologies are laid over a long-established matrix of social differentiations. Categories that have traditionally motivated the investigations of social scientists, such as class, race, culture, religion, education, gender and age, form the inescapable substrate upon which technology is seeded and elaborates itself, or withers and dies. As I showed, and contrary to most writing about technology in the mainstream media, these boundaries are not magically dissolved by technology; in many cases they may even be further exacerbated. They are certainly not elided, which seems to be the most common attitude. Instead, those occupying the more privileged ends of these spectra of difference benefit most from each advance, and the underprivileged are further shunted to the side. It is the technological equivalent of income inequality, except it is subtler, since we lack the pithiness of a single number, such as the Gini coefficient, to use as a signpost. (Incidentally, even this metric has of late become less and less useful as global inequality ascends to hyperbolic levels.)

Thus the object of our scrutiny should really be the ways in which technology further complicates a landscape that is already extremely difficult to parse. In this sense, these two frames are not really in conflict, but at least from a critical point of view, are rather insufficiently engaged with one another. Furthermore, and perhaps even more importantly, the inquiry should not have as its final destination any hope that technology will ultimately dissolve these differences. This is where efforts to bridge the so-called “digital divide” fall short for me: the idea of a level playing field has always been a fiction. Why should we aspire to it? Isn't it more compelling to understand what difference a difference makes? Conversely, if technology really does succeed in eroding all these categories of difference, we will have to scramble for another definition of what it means to be human. Given the difficulty we have with the current state of the definition, I somehow doubt that a tabula rasa approach would be at all helpful.

*

Nevertheless, the advent of the broad trifecta of social media, AI and robots seems to be engaging in a subtle subversion of precisely this definition. For instance, something I brought up in my previous essay was the phenomenon of people interacting with software and not really comprehending that fact. And while the example (of a Twitter bot) was trivial and amusing, there are others that strike a deeper chord.

Consider “I Love Alaska”, a short film made in 2008 by Sander Plug and Lernert Engelberts. The film is broken up into thirteen shorts, and frankly isn't much to look at: it is mostly footage of Alaskan wilderness, and not necessarily the very pretty bits, either. However, it's the script that counts; as the filmmakers describe the project:

August 4, 2006, the personal search queries of 650,000 AOL (America Online) users accidentally ended up on the Internet, for all to see. These search queries were entered in AOL's search engine over a three-month period. After three days AOL realized their blunder and removed the data from their site, but the sensitive private data had already leaked to several other sites.

“I Love Alaska” tells the story of one of those AOL users. We get to know a religious middle-aged woman from Houston, Texas, who spends her days at home behind her TV and computer. Her unique style of phrasing combined with her putting her ideas, convictions and obsessions into AOL's search engine, turn her personal story into a disconcerting novel of sorts.

Plug and Engelberts basically have taken the concept of found poetry and cast it into the digital age, and very effectively at that. Throughout the film, a voiceover delivers the search queries in a finely tuned deadpan, as they were entered into AOL's search engine. User #711391 doesn't really use keywords. The first phrase we hear is “Cannot sleep with snoring husband.” More of an entreaty than a query, it is followed by “How to sleep with snoring husband” (it's unclear if a question mark ends this). Obviously the first query did not yield the desired result, so we have an example of how we are forced to bend language towards the machine. But the behavior here is delightfully obtuse, for she doesn't allow herself to be reduced to using keywords, which is the customary practice when using search engines.

In fact, sometimes it's unclear what she is actually trying to find out. Having (possibly) satisfied her curiosity about dealing with snoring spouses and annoying birds, we then get “Online friendships can be very special.” As an elementary school teacher might say, “Are you asking me, or are you telling me?” But there is a very private communion that is happening here. In fact, the AOL search log dump was an absolute gold mine for academic researchers, who were starved for real-life data on how people used search engines. Nevertheless, there is something deeply affecting about bearing witness to the way in which user #711391 comes to regard the AOL search engine not as an anonymous reference gateway but more as a kind of interlocutor, and how her queries eventually lead her to take some substantially consequential actions. It replaces the concept of a diary with a one-sided transcription of a fragmentary telephone conversation; we are left to extrapolate many of the details of what otherwise seems to be a perfectly ordinary, if lonely, life.

*

“I Love Alaska” points to a critical discursive element in the way that internet technologies are read. On the one hand, we get a (somewhat aestheticized) view of how one person engages with a technology that can, to a certain extent, accommodate a fair amount of natural language input. Perhaps her mode of engagement is substantially different from the way ‘the rest of us' use search engines. Or is it? Although AOL was a significant force in bringing people to the Internet in the 1990s, its subscribers were generally not known to be savvy, and Google was already eating AOL's lunch by 2006. Nevertheless, in that year AOL still had about 15 million subscribers. So when we say ‘the rest of us' we are discounting a large population. In fact, consider if you are at all familiar with how your friends or family use search engines – there's really no reason why you would be. There is no ‘rest of us'.

This matters because, on the other hand, the people who know all about this are the ones who created the platforms, of which search engines are but one typology. From their perspective, they are equally concerned with how a middle-aged Houston housewife uses their service as they are with anyone else. And just as the AOL search log leak demonstrates that people will use search engines with the idea that no one is looking, the developers of that software will strive to make results for such queries as relevant as possible (User #311045: “how to get revenge on a ex girlfriend”). None of this works, however, if people do not engage the platform. In fact, the more richly they engage the platform, the more data is available for it to evolve. And what is needed is empathy.

How far the arbiters of our brave new world will go to solicit empathy was exposed recently in a post on Medium concerning Facebook's much-vaunted venture into the AI-driven virtual personal assistant market space. The initiative, known as M, flips the usual assumptions on their head. Whereas most AIs would like to convince you they are human, M wants you to know it is an AI, albeit a modest one: it cheerfully chirps “I'm AI but humans help train me!” when asked about its ontological status. Arik Sosman, the author of the Medium post, became increasingly suspicious of M's ability to seemingly navigate queries well beyond any other state-of-the-art AI and undertook the task of snookering the poor thing.

What ensues is a fascinating forensic exercise into investigating a technology that is intended to replace the search engine itself. But in order to do so, Facebook must train its technology to a much higher standard. And M cannot do that without people. Eventually Sosman is able to ascertain that there is so much human activity going on behind M that the AI is actually more of a veneer than anything else – a sort of “pay no attention to the man behind the curtain” moment. Still, I think of Sosman's dissatisfaction as stemming not from that fact – after all, Facebook never tried to hide the fact that M would have some undisclosed number of human ‘handlers' to assist it. Rather, he was upset that M dissembled in its presentation of itself, pretending to be an AI more than it actually was.

*

I seem to have strayed from the argument I promised you, though. What happened to class, gender and the rest of the categories that ought to be shaping technology? We shouldn't let the rich ironies of Sosman's anecdote distract us from what is really at stake. As Wired wrote on the occasion of M's launch:

Facebook is, by design, rolling out its new assistant in a community in which the users are demographically similar to the M trainers who will be thinking up gifts for their spouses and fun vacation destinations for them… Will M be as good at helping users in the Bronx access food stamps? How about coming to the aid of the single mother in Oklahoma who has a last-minute childcare issue?

Thus the end game for M is clear: you start with what you know, and from there you eventually digest the rest of the world. M needs the data so that it can reach everyone else: identifying who they are, their needs and preferences, and consequently what kinds of ads and other services they might be most inclined to consume. I don't think anyone knows how much more is needed, but one thing that has become clear in AI research is that it's not how clever your algorithms are, but how much data you have to throw at them. So it would be reasonable to posit that the amount of data required is infinite, or at least indeterminate.

Will M actually achieve such reach? It's impossible to say at the moment, but in the meantime the people who benefit from M are those who are most similar, in terms of socio-economic signifiers, to its creators (indeed, Sosman himself is exactly one of those people, recalling the adage that it takes a thief to catch a thief). But even if M successfully reached all 700 million users currently on Facebook's Messenger app, that would still be less than 10% of the global population. An optimist might say that this just demonstrates how much more room there is to grow, but, given the rate of technological failure, it would be just as realistic to bet that M will only ever remain useful to those users in its initial demographic.

Despite the uncertainty of its success, M's brief is wide and the resources behind it are vast. Since it aspires to be all things to all people (or at least those people who are on Messenger), M doesn't really shed very much light on the selective application of technology to various social segments. It's more instructive to look at the various niches that robots are beginning to fill in this regard. And since robots have come up, I have to perform the obligatory turn towards Japan. (I apologize for such a hackneyed gesture, and I hope that at some point someone will disabuse me of the need for such a cliché.)

What makes robots useful in this discussion is the fact that, unlike a search engine or a virtual personal assistant, they must be designed for a fairly specific purpose. As embodied technologies, they will stick around and keep their shape until they break or are rendered obsolete. And as embodied technologies, they traffic much more explicitly in our concepts of empathy; the designed intention is to both invoke empathy, and to materialize empathy in return. This is what makes them affective objects. The drawback is that you have to either keep making them, or at least keep fixing them. Still, at some point the rope runs out. Thus Sony stopped making, and eventually fixing, its Aibo robot dog. A victim of insufficient sales and corporate restructuring, Aibo left hundreds of Japanese bereft of robot dog companionship, which is no small deal (see this video, documenting Shinto ceremonies to help Aibos transition to wherever Aibos go when they die).

But what's more important is that many of those left without their Aibo were senior citizens. In Japan's gradually unfolding demographic decline, there are fewer young(er) people to function as caregivers; by 2011, 22% of the population was already 65 or older. So an integral part of the Japanese narrative is not just that they are smart and gadget-obsessed; it's also that they have fewer people around to fulfill the complete assortment of jobs that a well-functioning modern society requires. Hence robots, and if a robot dog is no longer around then perhaps a robot seal will be an adequate substitute.

Similarly, robots are targeting other Japanese demographics. Witness this odd video that was just uploaded to YouTube a few days ago, in which a lonely young woman finds companionship with her robot pal. There is bike riding (the robot sits in a basket with its arms raised), dance parties and burger-eating. There are even disagreements, fights and tears, although nothing that can't be reconciled in the end. And finally the young woman goes on a date, and meets a nice boy, and gets a ‘good job' wink from her robot companion, who is benevolently lurking in the background while the couple dances. At the end of the video the robot fades into silhouette, and its LED eyes glow with an ominous sort of friendliness. The fading words are “You were me, I was you.” I should add that, for whatever inscrutable reason, interspersed between these scenes are lines from William Blake's Auguries Of Innocence.

Aside from being supremely creepy, the video, a promotion for the SOTA line of robots, really delivers the argument. Even if it is marketing, the implication is that machines can help people go on, even in the absence of human contact. Whether we are talking about senior citizens or insecure youth, the point of insertion is the same: machines can help you feel less lonely, at least until you either meet someone new, or you die. Extending this principle further leads us to a very strange vision of society, which is this: software and hardware are cheap, and humans are messy, unpredictable and expensive. Therefore it is not unreasonable to postulate that only wealthy people, or people at the privileged ends of the various social spectra, will be able to afford the services of other humans. Since this essay has gone on long enough already, I will flesh out what this kind of a world might look like next time.
