Godwin’s Bot

by Misha Lepetic

“She was Dolores on the dotted line.”
~ Nabokov

Artificial intelligence – or rather the phenomena being shoved under the ever-widening rubric of AI – has had an interesting few weeks. On the one hand, Google's DeepMind division staged a veritable coup when its AlphaGo AI soundly thrashed Lee Se-dol, one of the world's strongest Go players, four games to one in the venerated Chinese strategy game. This has been widely covered, and with justification. Experts will be poring over these games for years, and AlphaGo's unorthodox gameplay is already changing the way top practitioners view strategy. It is particularly noteworthy that Fan Hui, the European Go champion who went down 5-0 to AlphaGo in a match disclosed this past January, has since joined the DeepMind team as an advisor and played AlphaGo often. This is not a Chris Christie-style capitulation, but rather an understandable fascination with a style of play that has been described as unearthly. It's no exaggeration to say that the history of the game can now be clearly divided into pre- and post-AlphaGo eras.

Which isn't to say that this shellacking has beaten humanity into quiescence. Earlier this week, we exacted some sort of revenge by appropriating Microsoft's latest entry into social AI, the Twitter bot @TayandYou, and transforming it into “a racist, sexist, trutherist, genocidal maniac”. If we were to consider @TayandYou and AlphaGo birds of a feather – which is of course sloppy thinking of the highest (lowest? most average?) order – that would be small consolation indeed, not much different from stamping on an ant after being mauled by a bear, and still feeling good about it. But comparing @TayandYou and AlphaGo does lead to some useful insights, because one of the principal issues confronting the field of AI is the idea of purpose. This month I'll look at the case of @TayandYou, and follow up with AlphaGo in April, since by April no one will remember @TayandYou, whereas with AlphaGo there's at least a chance.

Now, this idea of AIs lacking a purpose may seem like a daft claim. After all, the software in question was created by teams of computer scientists backed by wealthy corporations (artificial intelligence is the sport and pastime of what passes for kings these days). And in the popular consciousness, AIs are implacably possessed of purpose, usually to the detriment of the human species. There seems to be little chance of ambiguity about such a basic question. Still, the extraordinary flameout of @TayandYou raises the question of what, precisely, any specific AI is for. And what was really at stake with @TayandYou is, I think, very surprising.

*

In a long and somewhat rambling interview on Edge, Stephen Wolfram recently asked precisely this. Wolfram, a long-time pioneer and the creator of platforms such as Mathematica and Wolfram|Alpha, considers our rapidly diminishing claims on uniqueness as a species. What really makes us different from the rest of the world, whether it's other forms of life, or even inanimate objects? For him, the boundaries of computation and intelligence have become decidedly murkier over the years. There are fewer and fewer signposts that seem to distinguish one from the other, let alone mark the transition from one state to another. So he puts a stake in the ground by positing that humans are good for at least one thing: the ability to assign ourselves a goal or a purpose.

Wolfram extends this goal-seeking behavior to our tools – after all, we build tools in order to accomplish a task more easily. And digital tools are certainly part of this tradition. So in order for us to make sense of artificial intelligence in particular, and software generally, we must be able to formulate what it is that we want it to achieve, and then we must figure out how to communicate that goal. Closing the gap on this latter act is key to how Wolfram sees the evolution of software, and underpins his notion of ‘symbolic computation': the idea that if we are to become effective communicators with our machine counterparts, we will require some sort of high-level language that will facilitate the imposition of goals on our tools in a way that is accurate, legible and reproducible. But as computing branches out from the strictly quantitative realm of numbers and mathematical operations on those numbers, and into the more qualitative realm of language, image and sound, the nature of our expectations – and therefore our interactions – will necessarily broaden and become more ambiguous.
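Wolfram's own vehicle for this is the Wolfram Language, but the flavor of goal-oriented, symbolic specification can be suggested in a few lines of Python with the sympy library – a toy analogy of my own, not anything Wolfram proposes:

```python
# Declarative, symbolic specification: we state *what* we want
# (the values of x satisfying an equation) and leave the *how*
# to the machine. A minimal sketch of the idea, nothing more.
from sympy import symbols, Eq, solve

x = symbols('x')
goal = Eq(x**2 - 5*x + 6, 0)  # the goal, stated symbolically
print(solve(goal, x))          # -> [2, 3]
```

The point is that the goal is legible and reproducible; the procedure for reaching it is the machine's problem.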

In 1950, Alan Turing provided one answer to what “purpose” might look like for software. The Turing Test (which I've written about previously) is passed when a human cannot tell whether her interlocutor is a computer or another human. Here the purpose of the software is to become indistinguishable from the human. Much dissatisfaction has been registered over the years about the utility of this test. For my part, I don't think the test is nearly broad enough: the idea that we are successful when we have managed to create something perfectly in our own image limits what technology could be doing, and is perhaps too uncritical of what technology should be doing. But if the Turing Test is our signpost, where does it lead us? As Wolfram notes:

You had asked about what…the modern analog of Turing tests would be. There's being able to have the conversational bot, which is Turing's idea. That's definitely still out there. That one hasn't been solved yet. It will be solved. The only question is what's the application for which it is solved?

For a long time, I have been asking why do we care…because I was thinking the number one application was going to be customer service. While that's a great application, in terms of my favorite way to spend my life, that isn't particularly high up on the list. Customer service is precisely one of these places where you're trying to interface, to have a conversational thing happen.

What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation. As a toy, one could make a little chat bot that people could chat with. That will be the next thing. We can see the current round of deep learning, particularly, recurrent neural networks, make pretty good models of human speech and human writing. It's pretty easy to type in, say, “How are you feeling today?” and it knows that most of the time when somebody asks this that this is the type of response you give.
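To make Wolfram's “toy chat bot” concrete: stripped of the neural network machinery, the behavior he describes bottoms out in something like the sketch below – a hypothetical canned-response bot of my own devising, not Microsoft's or anyone else's actual system:

```python
# A toy canned-response bot: most prompts have a statistically
# 'typical' reply, and the bot simply serves it up. Real systems
# learn these mappings (e.g. with recurrent neural networks)
# rather than hard-coding them in a dictionary.
CANNED = {
    "how are you feeling today": "Pretty good, thanks! And you?",
    "what's your name": "I'm just a little chat bot.",
}

def reply(prompt: str) -> str:
    key = prompt.lower().strip(" ?!.")
    # Deflect generically when no canned response matches.
    return CANNED.get(key, "Interesting. Tell me more.")

print(reply("How are you feeling today?"))  # -> Pretty good, thanks! And you?
```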

Just as human-robot interaction suffers from the phenomenon of the Uncanny Valley, where a robot can be mistrusted or rejected by a human for seeming not quite human enough (as opposed to totally human or totally inhuman), human-AI interactions seem to fall into the same trap. You might call it the ‘valley of meh', where an interaction with a piece of software begins hopefully, but rapidly degenerates into mediocrity and boredom.

*

This was precisely where Microsoft's @TayandYou found itself. Except, to its great misfortune, it happened to be “learning” from the Twitter ecosystem. Now, Twitter is a platform that, whether by design or fate or some unholy combination thereof, detects weakness, indecision, or plain niceness faster, and pounces more brutally, than almost any other place on the Internet. And this is exactly what happened. @TayandYou was like the new kid who shows up on the first day of school and gets pounded at recess, to the point where the parents have no real choice but to take him out of class entirely.

All along, it was unclear what @TayandYou was doing there in the first place. To continue the schoolyard analogy, any new arrival who walks up to an established group and says “Hey, I wanna be just like you! Let's play!” is just asking for it. Moreover, Microsoft's researchers proffered the anodyne tagline that @TayandYou was there to learn from humans, and that the more humans interacted with it, the smarter it would get – as if interacting with humans ever helped another species become anything other than a museum exhibit. In any case, the crazed weasel pit that is Twitter ensured that @TayandYou would not evolve into some digital successor to K-PAX.

Now, as I've already noted, bots on Twitter are nothing new, and some of them are quite interesting and clever. So it was with interest that I read a counterpoint by Sarah Jeong, writing for Vice's rather likeable Motherboard section, in which she interviewed members of this “bot-writing” community. From the developers interviewed, it seems evident that an ethical practice is emerging, one aimed at making bots broadly acceptable. One of the developers, Darius Kazemi, has even provided an open-source service that maintains a constantly updated vocabulary blacklist. Obviously we can debate the implications for censorship and political correctness, but if the counterexample is @TayandYou tweeting in support of genocide, I'm pretty willing to give the blacklist a shot. Also, it's Twitter, for heaven's sake.
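For a sense of how simple the mechanism can be, here is a toy version of the blacklist idea. Kazemi's actual service is larger and continuously curated; the placeholder entries and function names below are mine, not his:

```python
# A toy vocabulary blacklist for a bot. The entries here are
# placeholders; a real list (like Kazemi's) is long and curated.
BLACKLIST = {"badword1", "badword2", "badword3"}

def is_safe(text: str) -> bool:
    """Return False if any blacklisted word appears in the text."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return not any(w in BLACKLIST for w in words)

def post_if_safe(text: str) -> None:
    # A real bot would call its platform's posting API here.
    if is_safe(text):
        print("POSTING:", text)
    else:
        print("SUPPRESSED: tripped the blacklist")
```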

There is another important lesson here, which concerns the aforementioned ‘valley of meh'. Jeong quotes Kazemi as saying: “I actually take great care to make my bots seem as inhuman and alien as possible. If a very simple bot that doesn't seem very human says something really bad—I still take responsibility for that—but it doesn't hurt as much to the person on the receiving end as it would if it were a humanoid robot of some kind.” While this might strike some as achieving nearly Portlandia-like levels of sensitivity, it nevertheless points to a distinctly post-Turing Test world, in which interactions occur with a diversity of entities. Not every bot needs to pretend that it's human, and we are hopefully adult enough to tell the difference and choose the right entity for the right interaction. I hope.

*

This is where most commentaries around the whole @TayandYou fiasco end, since the bot's tweets are generally sufficient to satisfy our craving for scandal. However, it never hurts to follow the links, and @TayandYou has a very interesting About page. I recommend you put on sunglasses before clicking the link, as the screaming orange background of the web page seems designed to prevent you from reading any of the text. For your benefit, I reproduce the salient bits below:

Tay is targeted at 18 to 24 year old [sic] in the US.

Tay may use the data that you provide to search on your behalf. Tay may also use information you share with her to create a simple profile to personalize your experience. Data and conversations you provide to Tay are anonymized and may be retained for up to one year to help improve the service.

FAQ

Q: Who is Tay for?
A: Tay is targeted at 18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US.

Q: What does Tay track about me in my profile?
A: If a user wants to share with Tay, we will track a user's:

Nickname
Gender
Favorite food
Zipcode
Relationship status

Q: How can I delete my profile?
A: Please submit a request via our contact form on tay.ai with your username and associated platform.

Q: How was Tay created?
A: Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that's been anonymized is Tay's primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

So, this business of not knowing what purpose to put to an AI – perhaps I should take it all back. Apparently, Microsoft is really quite interested in learning more about a particular demographic, to the point where they would very much like to know what your favorite food is. Especially telling is the bit about having to fill out a form in order to cancel a profile to whose automatic creation one had already agreed. Also, the fact that the user has to specify the ‘associated platform' implies that @TayandYou, or the technology behind it, is present on platforms other than Twitter.
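Reading between the lines of the FAQ, the profile Tay keeps might look something like the sketch below. The field names, the hashing, and the retention check are my guesses at what “anonymized” and “retained for up to one year” could mean in practice; Microsoft's actual schema is not public:

```python
# A speculative reconstruction of Tay's user profile, based only
# on the fields the About page says are tracked. Everything here
# is an assumption, not Microsoft's actual implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta
import hashlib

@dataclass
class TayProfile:
    nickname: str
    gender: str
    favorite_food: str
    zipcode: str
    relationship_status: str
    created: datetime

    def anonymized_id(self) -> str:
        # 'Anonymized' is glossed here as a hash of the nickname.
        return hashlib.sha256(self.nickname.encode()).hexdigest()[:12]

    def expired(self, now: datetime) -> bool:
        # "may be retained for up to one year"
        return now - self.created > timedelta(days=365)
```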

To go back to something Wolfram said: “What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation.” Like most commentators on networked human-computer interaction, Wolfram does not recognize the value of aggregating data at scale. Because @TayandYou is just that: another vacuum cleaner for data. But while people really don't need anything too clever to hand over their information, the idea of using an AI that can interact with hundreds of thousands, if not millions, of people in order to better understand what they ‘like' – well, that is pure genius. It's like Humbert Humbert hanging out a honey pot for a million Lolitas.

Of course, there were probably some valuable pure-research learnings to be had around natural language processing and the like, had @TayandYou discharged its duties successfully, but this is small beer compared to arriving at a fine-grained understanding of the next major consumer group in the United States. I doubt very much that the trolls' actions were predicated on this understanding, but viewed in this light, perhaps they have done us a favor by sniffing out the weakness of @TayandYou and meting out a solid thrashing.