Inconceivable!

by Misha Lepetic

“People for them were just sand, the fertilizer of history.”
~ Chernobyl interviewee VM Ivanov

For a few years, if you were on Twitter and you used the word “inconceivable” in a tweet, you would almost immediately receive an odd, unsolicited response. Hailing from the account of someone named @iaminigomontoya, it would announce “You keep using that word. I do not think it means what you think it means.” Whether you were just musing to the world in general, or engaging in the vague dissatisfaction of what passes for conversation on Twitter, this Inigo Montoya fellow would be summoned, like some digital djinn, merely by invoking this one word.

Now, those of us who possessed the correct slice of pop culture knowledge immediately recognized Inigo Montoya as one of the characters of the film “The Princess Bride”. Splendidly played by Mandy Patinkin, Montoya was a swashbuckling Spaniard, an expert swordsman and a drunk. Allied to the criminal mastermind Vizzini, played by Wallace Shawn, Montoya had to listen to Vizzini mumble “inconceivable” every time events in the film turned against him. Montoya was eventually exasperated enough to respond with the above phrase. Like many other quotes from the 1987 film, it is a bit of a staple, and has since been promoted to the hallowed status of meme for the Internet age.

Of course, it's fairly obvious that no human being could be so vigilant (let alone interested) as to monitor Twitter for every instance of “inconceivable” as it arises. What we have here is a bot: a few lines of code that sift through some subset of Twitter messages, on the lookout for some pattern or other. Once the word is picked up, @iaminigomontoya does its thing. Now, and through absolutely no fault of their own, there will always be a substantial number of people not in on the joke. These unfortunates, assuming that they have just been trolled by some unreasonable fellow human being, will engage further, such as the guy who responded “Do you always begin conversations this way?”
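
For the curious, the mechanics are roughly what you would expect. Below is a minimal sketch of how such a bot might be wired up in Python, using the older tweepy streaming API; the credentials are placeholders, and the matching and reply logic are my own guesses rather than a reconstruction of the actual @iaminigomontoya code.

```python
# A minimal sketch of an "inconceivable" reply bot, assuming the classic
# tweepy (v3.x) streaming API. Credentials are placeholders, and the logic
# below is a guess at how such a bot might work, not the bot's actual code.
import tweepy

CONSUMER_KEY = "..."      # placeholder credentials
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_SECRET = "..."

REPLY = "You keep using that word. I do not think it means what you think it means."

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)

class InconceivableListener(tweepy.StreamListener):
    def on_status(self, status):
        # Skip retweets and the bot's own tweets to avoid reply loops
        if hasattr(status, "retweeted_status") or status.user.screen_name == "iaminigomontoya":
            return
        if "inconceivable" in status.text.lower():
            api.update_status(
                status="@{} {}".format(status.user.screen_name, REPLY),
                in_reply_to_status_id=status.id,
            )

# Watch the public stream for tweets containing the magic word
stream = tweepy.Stream(auth=api.auth, listener=InconceivableListener())
stream.filter(track=["inconceivable"])
```

The filter does the heavy lifting: Twitter pushes every matching tweet to the listener, which fires back the quote and then goes back to waiting.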

So here we have an interesting example of contemporary digital life. In the (fairly) transparent world of Twitter, we can witness people talking to software in the belief that it is in fact other people, while the more informed among us already understand that this is not the case. Ironically, it is only thanks to the lumpy and arbitrary distribution of pop culture knowledge that we have any chance at all of telling the difference, at least without finding ourselves involuntarily engaged in a somewhat embarrassing mini-Turing Test. But these days, we pick up our street smarts where we can.

*

Except we rarely pay attention to the lumpy, arbitrary nature of technology, and nowhere less so than in its latest, apotheotic form: social media. This idea of technology as the great leveler is perhaps the principal myth that we are relentlessly fed, as if we were geese on a foie gras farm. And like those geese, we never seem to get tired of the feeding. Nor is there any shortage of those queueing up to do the feeding. Just this weekend I attended a fairly abysmal conference sponsored by the Guggenheim Museum, and had to listen to what I thought were otherwise discerning minds discuss how, for example, the ability of people to participate in a real-time discussion on Twitter about the Ferguson riots made true the claim that it was no longer possible to be ‘outside' of events – or rather, that the only people who were on the ‘outside' were those who were on the receiving end of the obsolete ‘broadcast media', i.e. television and radio.

This idea – that people who are passive receivers of information constitute a lesser class of citizenry than those who seek to ‘actively participate' in media – is not just problematic. In fact, let's just call it out for what it is: a barely disguised elitism. Consider the hurdles that you have to overcome to access this allegedly level landscape. You have to know what the Internet is and be able to access it; you have to know what Twitter is and be willing to use it, which is itself no mean feat; and you have to care enough about all of these things, as well as the specific phenomenon of the Ferguson riots, in order to ‘participate' in it. Only at that point are you ready to suffer the slings and arrows of your fellow discussants. Thus the resulting population that jumps through all these hoops is a deeply self-selected one. Not only are the necessary cultural and technological proficiencies required to even get to this conversation substantial, but they are inevitably accompanied by – if not simply born of – all the attendant structural inequalities that constitute the context of society in the first place. How many people who are subject to discriminatory policing are not online, simply because they are poor, or uneducated, or most likely, just unconnected? In order to reach a putative place of ‘no outside', one must have all the tacit and consequential social, financial and cultural resources to be able to navigate quite a few layers of ‘inside'.

On the other hand, those belonging to the group of ‘passive consumers' may be more varied than one suspects. To stay with the example of Ferguson, if I watched the riots on cable news, but did so with friends and family, or with strangers in a bar or an airport lounge, and then had a meaningful discussion, well, it's almost as if this didn't happen, since my participation can't be measured in terms of tweets or likes or what-have-yous. It's just conversation, or private contemplation, as has been the case for quite some time. But if it can't be data-mined, then of what use is it? At the same time, it bears mentioning that the ‘conversation' that happens on Twitter or anywhere else in social media is by no means guaranteed to be meaningful, simply because that's where it happens. The technorati merely encourage this sort of magical thinking in order to nudge us into a form of participation that occurs much more on their platforms' terms than we might think. When was the last time you went online seeking to have your opinion changed by someone, whether it was a friend or family member – let alone a complete stranger?

Why is this the case? There is the old (at least by Internet standards) chestnut that, in real life, no one is as happy as they pretend to be on Facebook, nor as angry as they pretend to be on Twitter. So when self-selecting populations opt into participating on a specific platform, the subtle but influential effects on the participants' behavior result in a discourse that is deeply mediated. This occurs not only as a result of the platform itself (i.e., the way graphic and textual elements are constructed and arranged on screen, and how users are allowed and incentivized to participate), but also thanks to how people expect their performance to be received by others, and who those others are.

We attempt to shape our online presences to be reflections of who we think we are in the first place. To think that this will suddenly give rise to some unprecedented sort of diversity – that we will step outside of ourselves to embrace new and uncomfortable truths – is naïve. I am not talking about pleasure-seeking or hedonistic pursuits (although, given the way GamerGate continues to problematize the seemingly innocuous pastime of video gaming, it's increasingly difficult to say that social media is capable of treating anything as a mere hobby). Rather, I mean to counter the Pollyanna-ish stance held by many techno-pundits that somehow the arc of social media bends towards justice. It may, or it may not. Perhaps the safest thing that can be said is that it will only make us more of who we are already, for better and for worse.

This is what I mean when I claim that the qualities and consequences of technology are lumpy and arbitrary. In reality, the idea that the world is flat has only ever held true for those people with the financial and social resources to make it so. Theirs is a frictionless world. The rest of us must make do with a pale imitation of this: the world seems flat to us only because we successfully ignore vast swathes of it, and social media is an excellent tool for creating the illusion that we are not ignoring anything really important, and that in fact we are paying more attention than ever before. Who can point fingers and say you're not concerned about social injustice when you've clearly been expressing your outrage by liking, sharing and hashtagging all over the damn place? Which is to say, to your friends and friends of friends and perhaps a few other random passers-by who, by definition, must be on the same platform as you. It is this lumpiness and arbitrariness that is really worth our attention.

*

On the face of it, an innocuous Twitter bot like @iaminigomontoya doesn't seem to have anything in common with the grand hypothesis that social media, as it is currently constituted, may not be doing us any great favors. But it will indeed take us to the next stage of the argument. I claimed above that social media is the apotheotic form of technology. Aside from being awfully pretentious, this claim is almost certainly already false, in the sense that social media is being augmented and perhaps gradually supplanted by the emergence of artificial intelligence; agents of varying autonomy, veracity and interactivity; and robots of many stripes. But since every stage of technological evolution builds upon already existing infrastructure, social media is where much of this change is manifesting itself.

More importantly, this is happening not just because all this stuff is new and clever, but because we want to talk to anything we possibly can, and we fervently desire for those things to talk back to us. This has already been amply proven by our proclivities to talk to dogs, cats and houseplants. But talking to technology is going to bring matters to a completely different level, because what is unique to technology is its ability to create massive, long-lived feedback loops that are initiated and sustained by our talk.

Here are a few examples of the things that we are building that are designed to talk to us. In addition to @iaminigomontoya, there are many such bots on Twitter, which, due to its restrictive 140-character format, is fertile ground for such experimentation. There are bots that, like our friend, will blithely reply to tweets or insert themselves into conversations, but do so in order to correct your grammatical and homophonic misdemeanors (“your” vs “you're”; “sneak peek” vs “sneak peak”). There are more aspirational creations as well. One of my favorites is @pentametron, which appropriates tweets that, usually quite unintentionally, happen to have been written in perfect iambic pentameter. @pentametron goes the extra mile, though, and re-assembles the tweets into Shakespearean sonnet form, the results of which can be savored here.
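
For a sense of what that extra mile involves, here is a rough sketch of the kind of scansion check a bot like @pentametron presumably performs, using the CMU Pronouncing Dictionary via the Python pronouncing package. The details (how real tweets get cleaned up, how ambiguous stresses are resolved, how couplets are rhymed into sonnets) are my assumptions, not the bot's actual code.

```python
# A rough sketch of detecting accidental iambic pentameter, assuming the
# `pronouncing` package (a wrapper around the CMU Pronouncing Dictionary).
# This is illustrative; @pentametron's real pipeline is more involved.
import re
import pronouncing

def stress_pattern(text):
    """Return a string of stress marks ('0', '1', or 'x' for flexible
    monosyllables) for the words in text, or None for unknown words."""
    pattern = ""
    for word in re.findall(r"[a-z']+", text.lower()):
        phones = pronouncing.phones_for_word(word)
        if not phones:
            return None                       # word not in the dictionary
        stresses = pronouncing.stresses(phones[0])
        if len(stresses) == 1:
            pattern += "x"                    # one-syllable words can go either way
        else:
            pattern += stresses.replace("2", "1")  # treat secondary stress as stressed
    return pattern

def is_iambic_pentameter(text):
    """Ten syllables alternating unstressed/stressed: da-DUM, five times over."""
    pattern = stress_pattern(text)
    if pattern is None or len(pattern) != 10:
        return False
    return all(p in ("x", e) for p, e in zip(pattern, "0101010101"))

print(is_iambic_pentameter("I wonder if my cat remembers me"))   # True
print(is_iambic_pentameter("Inconceivable, yet here we are"))    # False
```

From there, candidate lines would be paired off by rhyme (the same package exposes rhyming parts) until fourteen of them make a sonnet, though that assembly step is left as an exercise.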

Of course, it's reasonable to argue that these bots are really no different than a wind-up toy. Even if you don't know precisely how it works, you know how to set it in motion, and once you've done so you get your hit of childlike wonder and then you put it down and go on with the rest of your day. But however simple, charming and/or irritating they may be on their own, when taken as a phenomenon, these bots point to a shift that has already been under way for some time. People are, to one degree or another, not just content to interact with machines in a purposive way; they expect to do so, and their expectations are increasingly open-ended. Sometimes they know the terms of the conversation – that is, that they are conversing with a constructed or artificial subject. And sometimes they do not. The truth is, software doesn't even have to pretend to be human for people to seek out human-like interactions with it. It turns out that willing suspension of disbelief is not just a literary device. As Coleridge defined it, “human interest and a semblance of truth” are all that is required to bring it about.

So what happens when we take our credulous nature and jam it into the lumpy and arbitrary distribution and consequences of technology in general, and social media in particular? In next month's post, I will propose that thinking about the intersection of these two tendencies can give us the opportunity to better envision scenarios of likely technological and social futures. It helps us to avoid the sensationalistic fallacy of a Terminator- or Matrix-style dystopia, where strong AIs destroy our way of life, if not the entire planet. Rather, it is about coming to terms with what is already among us, and of how we are already deeply entangled with it. It may even suggest how we might best adapt ourselves to a world that is perhaps already aswarm with artificial subjects that are inscrutable if not nearly invisible, so accustomed have we become to their presence.

“Inconceivable!” I hear you protest. Of course, Inigo Montoya is all too happy to ask if you know what that word really means.