Monday, January 02, 2017
The Hit Aesthetic
"Wonder was the grace of the country."
~ George W.S. Trow
At a recent cocktail party, the conversation turned to conspiracy theorists and how to engage them. I offered a strategy that has served me fairly well in the past: I like to ask my interlocutor what information they would need to be exposed to in order to change their minds about their initial suspicion. To be clear, I think of this more as a litmus test for understanding whether a person has the capacity to change their minds on a given position, rather than an opening gambit leading to further argument and persuasion. Climate change is a good example: What fact or observation might lead a person to consider that global warming is happening, and that human economic activity is responsible for it? It is actually quite surprising how often people don't really have a standard of truth by which they might independently weigh the validity of their argument. Of course, in today's ‘post-truth' world, I suspect that it is just as likely that I might be told that nothing can change a person's mind, since everything is lies and propaganda anyway.
I was pleased that another person at the party made an even better suggestion. She said that she would ask not only what would change a conspiracy theorist's mind, but from whom they would need to hear it. This vaults the act of interrogation from a context grounded purely in individualism and individuals' appeals to authority, to something distinctly more social. It also specifies the importance of not just facts, but from where those facts emanate. Because as much as we would like to believe ourselves independently reasoning beings, that we come to our conclusions through a rigorous and sacrosanct process of discernment, we are still very subject to having our opinions shaped by others. This may seem somewhat obvious, but in these times, when new ways of sensemaking are in high demand, I believe this provides an important opening.
Interestingly, this cocktail chatter echoed a much more deeply elaborated mode of thinking, developed by the French theorist René Girard. If much of what drives us is desire, Girard postulated that desire was something that we learned from each other (and not to be confused with needs: consider the distinction of needing to eat, versus desiring one food over another). Desiring is therefore an intrinsically social experience. And we learned not just to desire from one another, but what to desire. We may be born free, but we don't know what to want of the world until we look around and see what others are wanting for themselves. Girard called this ‘mimetic desire'. This is desire as imitation, and as contagion. The corollary, of course, is that it doesn't really matter if we are born free or not; we only become fully human when we enter into this web of desiring what others desire, and having others learn to desire what it is we ourselves covet.
One manifestation is in that old American saying about ‘keeping up with the Joneses': a social vector that is extremely well-suited to commerce, with the proviso that money is to be made from leveraging desire most efficiently when coupled with manufactured scarcity. Consider, for example, the multi-day lines that form in anticipation of a new release of Nike's Air Jordan sneakers: it is an act of collective taste-making where the goal is to obtain exactly the same object that everyone else in line is waiting for. The same may be said of stock market bubbles (and the underlying ‘greater fool' theory of investing), neighborhood competitions around Christmas decorations, or any other phenomenon that somehow expands from the socially acceptable to the irrational and perhaps even systemically dangerous.
But Girard's theory has an explanatory power that goes beyond the material aspect; it encompasses matters of opinion as well. How do I settle on knowing what I know about the world? For Girard, this is also a mimetic process. Although he did not address technology very much in his writings, here is an interesting thought experiment: what if mimetic desire, instead of being captured in the physical form of goods, could be reproduced endlessly, with little to no friction preventing its amplification? What if it were, for all intents and purposes, free?
The roles that so-called ‘fake news' and social media have played in this election cycle will be discussed for years to come. In a world of bespoke filter bubbles, it is easier than ever for us to only desire the things that already resonate with our existing worldview. In addition to seeking out the opinions of politicians, journalists and commentators with whose positions we already agree (and want more of), social media has inserted a crucial (inter)mediating step: we access these professionals through the good offices of our friends, or people we would like to be our friends.
This may seem banal, but keep it in mind when looking at the numbers: according to a recent, widely cited Pew Research poll, 62% of Americans get their news from social media, with 18% ‘doing so very often'. Additionally, Facebook was the most widely used source, with Twitter and YouTube a relatively distant second and third. Importantly, despite all the discussion around the algorithms that serve up the information we consume on these platforms, it is our relationships with the people we trust that constitute the ‘last mile' of service delivery by which this information reaches our eyeballs. This is further abetted by the structural incentives of the social media platforms themselves. As Mike Caulfield writes,
…conspiracy clickbait sites appeared as a reaction to a Facebook interface that resisted external linking. And this is why fake news does better on Facebook than real news. By setting up this dynamic, Facebook simultaneously set up the perfect conspiracy replication machine and incentivized the creation of a new breed of conspiracy clickbait sites.
Here we return to the notion of conspiracy. It allows us to ask what role conspiracy thinking plays within a mimetic context. Obviously, it's one thing to want the same sneakers that the cool kids on the block are sporting. It's entirely another to jump on the bandwagon of a worldview that has produced everything from Trutherism to Birtherism to PizzaGate. If one accepts mimetic desire as a motivating force for the generation, dissemination and adoption of opinion, then fake news - and social media itself, which is the agar upon which fake news feeds - is merely symptomatic. There is another aspect to Girard's theory, that of the scapegoat, that takes us further.
For Girard, the bubble factory of mimetic desire isn't just how culture is created. With too many people chasing too few goods, mates or other social signifiers, the rivalries produced over and over again by mimetic desire eventually precipitate a crisis that threatens to reduce society to a Hobbesian war of ‘all against all'. There must be a mechanism by which society can hold itself together in the face of such forces, and for Girard it was the notion of the scapegoat:
When violence is at the point of threatening the existence of the community, very frequently a bizarre psychosocial mechanism arises: communal violence is all of the sudden projected upon a single individual. Thus, people that were formerly struggling, now unite efforts against someone chosen as a scapegoat. Former enemies now become friends, as they communally participate in the execution of violence against a specified enemy.
History bears witness to a number of practices where we can see this ‘scapegoat mechanism' at work. More often than not, these practices are so culturally important that they are regularly repeated, and in fact may very well be ritually encoded. Written in 1922, JG Frazer's still-magisterial ‘The Golden Bough' devotes several chapters to its function. A single example will suffice to illustrate the unifying power of the scapegoat:
In civilised Greece the custom of the scapegoat took darker forms than the innocent rite over which the amiable and pious Plutarch presided. Whenever Marseilles, one of the busiest and most brilliant of Greek colonies, was ravaged by a plague, a man of the poorer classes used to offer himself as a scapegoat. For a whole year he was maintained at the public expense, being fed on choice and pure food. At the expiry of the year he was dressed in sacred garments, decked with holy branches, and led through the whole city, while prayers were uttered that all the evils of the people might fall on his head. He was then cast out of the city or stoned to death by the people outside of the walls.
As Frazer demonstrates, the phenomenon of the scapegoat - whether human or animal - manifests not just in Greek and Roman culture but throughout the world. It is a catalyst by which society reaches a consensus with itself that, whatever its internal differences and disagreements (the ‘rivals' of Girard's mimetic process), there is a larger, more important threat to be overcome. Obviously, there is an open line to divinity here, as the scapegoat's sacrifice to the gods creates the expectation that relief will be provided, or a pathway to salvation opened (as in the case of Jesus Christ).
Crucially for Girard, the process only works when it is conducted unconsciously. That is, everyone must believe that the scapegoat is actually guilty of the transgressions. For example, even in the ancient Greek case cited above, the full weight of belief transforms the blameless poor man into a vehicle for gathering up all the plague within the city's walls, and, with his death outside those walls, its dissolution. Conspiracy thinking functions in a very similar fashion: applied to the recent election, Hillary Clinton has never not been guilty, and Donald Trump has never not been a fascist thug. What is lacking is a ritually encoded means by which this malevolent presence can be expunged, so that society might move on. One could contend that, for at least the former scenario, Trump could have indeed put Clinton in jail for her sins, which are of course the sins of her husband as well. But the fact that Trump blithely put this possibility out of mind almost immediately following his victory implies that Girard's requirement of belief (or at least, suspended disbelief) in the scapegoat is not fulfilled. What we then have is a fully functioning scapegoat mechanism that is nevertheless denied its consummation.
There is a more important point to be made about Girard's requirement of belief. All of the above would be of passable interest as far as analytical approaches go (in fact, I'm certainly not the first person to bring this up, having been inspired by this piece in The New Inquiry). The extraordinary additional wrinkle in this story, as The New Inquiry and others have pointed out, has been Peter Thiel's role. In a nutshell, Thiel is a libertarian Silicon Valley billionaire who embodies Randian ideals to an almost caricaturish extent. He was one of the first outside investors in Facebook. More recently, he acquired notoriety as the man behind the lawsuit that bankrupted Gawker. For our purposes, however, it's more appropriate to note that he was one of René Girard's students at Stanford.
Girard's influence on Thiel is quite clear. The notion of the scapegoat is explicit in Thiel's own writing, specifically in Zero To One, his book on innovation and entrepreneurship. As noted by The New Inquiry, Thiel writes:
The famous and infamous have always served as vessels for public sentiment: they're praised amid prosperity and blamed for misfortune… [It is] beneficial for the society to place the entire blame on a single person, someone everybody could agree on: a scapegoat. Who makes an effective scapegoat? Like founders, scapegoats are extreme and contradictory figures.
For Thiel, it is thanks to this Girardian process that society progresses at all. The problem is that, more often than not, it's people like him - the wealthy, the founders, the leaders - that wind up becoming scapegoats. The difference is that Thiel, thanks to his position and resources, is now actually able to intervene in this very process. This was the case with Gawker: spurred on by his personal beef, Thiel identified the site as a factory for the manufacturing of scapegoats, and bided his time until the perfect case presented itself, which he then used to destroy Gawker.
But other Girardian mechanisms are worth keeping around. For the reasons described by Mike Caulfield above, Facebook is a streamlined machine for reproducing mimetic desires, for creating rivals in desire and therefore for fomenting social tension. The difference with a platform like Facebook is that it is a thoroughly quantified domain. Suddenly, there is an opportunity to guide and channel these passions. Scapegoats will continue to be generated, but if the process can be influenced, however subtly, then we have effectively replaced the prior, ritually encoded consummation of the practice of scapegoating with one that is micromanaged by algorithm. More importantly, at least according to Thiel's worldview, we will avoid scapegoating the ‘wrong people'.
This theorization points to a hard truth not just for public opinion in general, but for journalism in particular. Writing recently in The Guardian, Caitlin Moran struck a hopeful tone:
I think things are going to get worse for newspapers before they get better. We're living in a post-truth age and people don't seem to care, because we're drunk on the internet; and I think things will have to get a bit messier before we start wanting to have facts again. The tone of politics right now is one of shouting and trolling, and that tone has absolutely been set by social media. At some point, probably when society and the economy have got much worse than they are now, we'll reinvent the idea of having a creditable, trustworthy press.
Unfortunately, I am extremely skeptical that a return to a dignified public discourse is imminent, or even possible. If we buy not only into the Girardian scenario, but one which is moreover actively guided by those in the position to do so, then it is difficult to conceive of the kind of event or trend that will provide a turning point and return us to a prelapsarian idea of ‘truth' or ‘journalism' or even ‘media'. More broadly, as George WS Trow wrote in the New Yorker almost 40 years ago, "To a person growing up in the power of demography, it was clear that history had to do not with the powerful actions of certain men but with the processes of choice and preference." It seems sensible to assert that structures of power that can exploit these processes will maintain a steady upper hand, compared to those that seek to disrupt them. If we take Girard at his word, mimesis may well be sufficient unto itself, as it has been for a long time already.
Monday, December 19, 2016
Data Science and 2016 Presidential Elections
by Muhammad Aurangzeb Ahmad
Much has already been written about the failure of data science in predicting the outcome of the 2016 US election, but it is always good to revisit cautionary tales. The overwhelming majority of the folks who work in election prediction, including big names like the New York Times' Upshot, Nate Silver's FiveThirtyEight and the Princeton Election Consortium, put Clinton's chance of winning at more than 70 percent. This is of course not what happened, and Donald Trump is the president-elect. And so on the night of November 9th people started asking if there was something wrong with data science itself. The Republican strategist Mike Murphy went as far as to state, "Tonight, data died." My brush with election analytics came in late 2015, when I was looking for a new job and talked to folks on both the Republican and the Democratic data science teams about prospective roles, but decided to pursue a different career path. This experience, however, forced me to think about the role of data-driven decision making in campaigning and politics. While data is certainly not dead, Mike Murphy's observation does lay bare the fact that those interpreting the data are all too human. The overwhelming majority of the modelers and pollsters had implicit biases regarding the likelihood of a Trump victory. One does not even have to torture the data to make it confess; one can simply ask the data the wrong questions to make it answer what one wants to hear.
We should treat the outcome of the 2016 US presidential election, and the modeling approaches behind it, as a learning experience for data science, and acknowledge data science as a very human enterprise. In addition to understanding what led to the selective choice of data, and why the models did not do as well as they should have, it would help to unpack some of the assumptions that go into creating these models in the first place. The first thing that comes to mind is systematic error and sampling bias, one of the factors that results in incorrect predictions, and a lesson that pollsters should have learned after the Dewey vs. Truman fiasco. That said, there were indeed some discussions about the unreliability of the pollster data in the run-up to the election, although the dissenting voices rarely made it into the mainstream. Obtaining representative samples of the population can be extremely hard.
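To make the point about sampling bias concrete, here is a minimal, purely illustrative Python sketch. The population split, response rates, and sample size are all invented numbers, not estimates of anything from 2016; the only point is that a small, systematic difference in who answers the phone manufactures a phantom lead.

```python
import random

random.seed(42)

# Hypothetical population: an exact 50/50 split between candidates A and B.
population = ["A"] * 100_000 + ["B"] * 100_000

# Invented assumption: supporters of B are slightly less likely to respond.
response_rate = {"A": 0.10, "B": 0.08}

responses = [v for v in population if random.random() < response_rate[v]]

true_share = population.count("A") / len(population)
polled_share = responses.count("A") / len(responses)

print(f"True support for A:   {true_share:.1%}")    # 50.0%
print(f"Polled support for A: {polled_share:.1%}")  # roughly 55-56%
```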
It is notoriously difficult to predict which registered voters are actually going to vote. Fewer registered Democrats went to the polls to vote for Hillary Clinton than had voted for the Democratic nominee in the last few elections. It has been widely argued that Clinton would have won comfortably had that not been the case. The opposite is also true: many people on the alt-right who normally do not engage with the electoral process voted for Donald Trump. There are many factors that determine how one obtains a representative sample of the population. Investor's Business Daily (IBD) correctly predicted the outcome of the election, and in their own words they were able to do so because most other pollsters collected most of their data by calling smartphones, whereas the polls IBD conducted drew a sample that was representative even with respect to the types of phones being used. It may be that IBD simply got lucky, because even their approach, as far as we know, does not take voter apathy into account.
The real story about data science and the elections may be that even in the age of Big Data we have precious little data with which to make robust predictions about the electorate, even though we may pretend otherwise. Just because a simple model predicted that Trump would win the presidency doesn't mean the model is correct; there are just too few data points to make predictions with reasonable confidence. Many folks in the data science community observed that the Republicans were far behind the Democrats in building a strong data science operation and might lose the election for that reason. Of course, they were dead wrong. Cambridge Analytica is the British analytics company that led the data science efforts of the Trump campaign. After the fact, it has been touted by many outlets as the engine behind Trump's success, while others have dismissed most of this as post-victory myth-making. One of Cambridge Analytica's claims to fame is that they use psychographic data to make predictions about electoral choice. Many outsiders observe that even a sample size of a few million is not enough to generalize over a population of more than 300 million. The PR folks at Cambridge Analytica have played up the media's fascination with the idea of a data science team winning the election. What is left out of these accounts, however, is that before election day Cambridge Analytica put its candidate's chance of winning at 20 percent, which it upgraded to 30 percent as voting began. This does not exactly sound like a prediction of victory made in advance, or actionable insight for strategizing. Thus, many journalists have stated that the claims of data science winning the election are vast exaggerations, and that there is no secret sauce to their data science approach.
If we are to take a critical eye to Cambridge Analytica, then it is only fair that we apply the same critical eye retroactively to the previous elections and the success of the Nate Silvers of the world. It may well be that the success of those earlier predictions was a fluke, but there are important lessons one can learn from flukes. One of the most insightful comments came from Pradeep Mutalik: that aggregating poll results accurately and assigning a probability estimate to a win are completely different problems. The former is relatively straightforward, while the latter involves a host of assumptions that are not always made explicit and are often more art than science. Lastly, there is the issue of how the populace and the media interpret the probability of winning or losing an election; neither is particularly sophisticated at it. Those with some knowledge of probability would be surprised to learn how many people think that a 60 percent probability of winning implies an almost certain win. Pradeep Mutalik of Yale has rightly pointed out that probabilistic forecasts should either be done away with or, if we are to use them, be accompanied by margin-of-error disclaimers. Perhaps our predictive technology is not as good as we think. It is about as good, or as bad, as the way ad targeting works, which is another way of saying not that well. One cannot really blame the data when the data we select already has our desired conclusions built into it. Alternatively, we should stop worrying so much about predicting the weather. Perhaps the outcomes don't matter as much as we like to think; certainly Nassim Nicholas Taleb thinks so.
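Mutalik's distinction can be made concrete with a toy calculation. Assume, purely for illustration, a polling average showing a 3-point lead, and treat the true margin as normally distributed around that average; the resulting ‘win probability' then depends almost entirely on the error you are willing to assume, which is exactly the part that is more art than science.

```python
from statistics import NormalDist

lead = 3.0  # assumed polling lead, in percentage points (illustrative only)

# sigma bundles sampling error plus possible systematic polling error;
# its value is an assumption, and the headline probability hinges on it.
for sigma in (2.0, 4.0, 6.0):
    p_win = 1 - NormalDist(mu=lead, sigma=sigma).cdf(0.0)
    print(f"assumed error sigma = {sigma:.0f} pts -> win probability = {p_win:.0%}")

# sigma = 2 -> ~93%, sigma = 4 -> ~77%, sigma = 6 -> ~69%
```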
Monday, November 07, 2016
"On the way from mythology to logistics…
machinery disables men even as it nurtures them."
~ Adorno & Horkheimer
A few years ago I heard the Seattle Symphony play Carnegie Hall here in New York. There were three pieces on the program. The first two - Claude Debussy's La Mer and John Luther Adams's Become Ocean - are clearly of a type. They share the subject matter of the sea and its sonic representation. More importantly, Become Ocean is a clear stylistic descendant of Debussy's seminal, impressionistic work. Written a hundred years apart, both pieces nevertheless explore shimmering textures and slowly shifting planes of sound. The emphasis is not on seafaring - a human activity - but rather on the elemental qualities of the ocean. So far, so good.
The third selection, however, was Edgar Varèse's Déserts. As the title implies, Déserts is possessed of its own vastness, but this is an expanse that is jagged and abrasive. Written in the early 1950s, that is, about halfway between La Mer and Become Ocean, its exploration of timbre is arid and dissonant, and it is an early example of a score that calls for interweaving the ensemble's playing with pre-recorded electronic music. Some listeners may be reminded of avant-garde movie music where the scene calls for danger and uncertainty; one YouTube commenter wrote that "parts of this remind me of the music on Star Trek, when Kirk is facing some Alien on a barren world, kind of thing".
Varèse has always been a favorite of mine when it comes to the canon of twentieth-century "new" music. Prickly and uncompromising, he was a passionate and broad-ranging thinker. After meeting him for a possible collaboration, Henry Miller mused that "Some men, and Varèse is one of them, are like dynamite." Indeed, Varèse envisioned Déserts to be accompanied by a film montage - what we would casually characterize today as a multimedia experience. While the pitch to Walt Disney never went anywhere, the music is still with us today. But be that as it may, what is Déserts doing, sharing the stage with the marine masterpieces of Debussy and Adams?
As a counterfactual, had an algorithm been curating the evening, we would certainly not have had this juxtaposition. (Perhaps, in keeping with the evening's theme, we would have been subjected to Handel's Water Music instead). You may contend that it's absurd to think of an algorithm holding such sway over the well-heeled patrons of Carnegie Hall, but consider how much of our lives have been pervaded by exactly this sort of machine-driven ‘curation'.
So here is a seemingly uncontroversial claim: one of the great triumphs of modern software is the recommendation engine. From Amazon's ‘customers who bought this item also bought' to Netflix's ‘other movies you might enjoy', recommendation engines are ubiquitous and always ready to help, especially when they are wrapped up in the soothing tones of a Siri or an Alexa. Recommendation engines also occupy an interesting niche in our information ecosystem. In a world of infinite content, they are the osmotic membrane that regulates the exchange of data and preference. And they thrive on scale: the more data is thrown at them, and the larger the network of users, the better they function. This is true for both the raw inputs (what's available to be consumed) and the raw outputs (what is consumed). The design masterstroke of this paradigm is that the outputs are converted into new inputs. By simultaneously taming and leveraging the deluge of data, recommendation engines aspire to make a cornucopia of choice legible to us.
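As a deliberately tiny sketch of this loop (the purchase histories and item names below are invented, and real systems are vastly more elaborate), consider item co-occurrence recommendation: whatever is recommended gets consumed, and whatever is consumed is fed straight back in as fresh evidence.

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories (the "raw inputs").
histories = {
    "ana":   {"drill", "drill bits", "salsa album"},
    "ben":   {"drill", "drill bits"},
    "chris": {"drill", "work gloves"},
}

def recommend(user, histories):
    """Suggest items most often co-purchased with what the user already owns."""
    co_counts = Counter()
    for items in histories.values():
        for a, b in combinations(items, 2):
            co_counts[frozenset((a, b))] += 1
    owned = histories[user]
    scores = Counter()
    for pair, n in co_counts.items():
        a, b = tuple(pair)
        if a in owned and b not in owned:
            scores[b] += n
        elif b in owned and a not in owned:
            scores[a] += n
    return [item for item, _ in scores.most_common(3)]

# The output becomes a new input: accepting the recommendation updates the
# history, which further reinforces the same co-occurrence pattern.
suggestion = recommend("chris", histories)[0]
histories["chris"].add(suggestion)
print(suggestion)  # prints 'drill bits': more of what everyone like you already has
```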
But this design masterstroke is also its fatal flaw. Recommendation engines are really good at homogeneity. If you like salsa music, start a Pandora station and keep giving the thumbs-up until you've locked in your sound. You can be sure there won't be any heavy metal songs popping up in your stream. Need more than one cordless drill? Amazon has got you covered. Have a look at some drill bits while you're around. And let's not even get started on the abundance of potential partners that online dating apps dangle before us. It brings to mind MTV's original catchphrase, "Too Much Is Never Enough."
They watch over us, these engines of loving grace. Obviously, sometimes you just need a cordless drill, in which case these platforms can be of great utility. But more broadly, what they are poorly equipped to do is help us understand or create the idea of difference. Or to be more specific, the hedgehog-like nature of this worldview gently deflects us away from the idea that there are styles, stances and yes, objects, that are not contiguous to our own narrowly formed desires and preconceptions. How do we find these, or rather, how do we acquire the critical apparatus by which we can find and judge these?
This question is worth elaborating because recommendation engines are growing beyond the simply utilitarian, and making a bid to occupy a more nuanced space. Here is an example from the world of design. In a recent article about the curious ubiquity of the so-called mid-century modern style, Kelsey Campbell-Dollaghan notes that
It may also be the product of a great averaging: as algorithms track our preferences and shape our online lives accordingly, we're all becoming more and more similar. Siri and Alexa, for example, are killing off regional accents. Facebook crafts our news feeds so they match up to what it knows we already love and hate. Companies like Airbnb and WeWork are popularizing the same generic spaces across the globe; it even has a name, recently christened by Kyle Chayka: airspace. Midcentury modern design, it seems, is another form of technological averaging—the cream, gray, and wood-paneled amalgam of all user tastes.
To be fair, the decline of regional accents in the United States was a process that began with the advent of first radio and then television; current technology has merely hastened it. But mid-century modern design is an excellent example of how software-driven recommendation is flattening our preferences: Modsy, a startup playing in this space, uses a quiz to help homeowners plan their design moves, hopefully obviating the curatorial presence of a human interior designer. Overwhelmingly, its clients end up favoring this specific, anodyne style.
Now, one could make the argument that this is a self-selecting population and as such may have a bias towards this design paradigm. But the subtler point is that the individual's process of questioning, discovery, learning and discrimination is undercut substantially when the heuristic of a recommendation engine is employed. It is the opposite of engaging in the serendipitous act of browsing the jumble of an antiques shop, where one goes to explicitly find difference and therefore implicitly invoke the faculty of taste. The benefit of feeling the texture of a swatch of cloth, the heft of an object, the smell of a book? These holistic sensory judgments are forfeited in favor of a quicksilver virtuality, where only the eye is privileged. And of all the senses, the eye is the most easily deceived.
I'm not offering a romantic sentiment of days gone by. This way of going about being in the world is work. It is not by any stretch ‘efficient' - to invoke, with as much contempt as possible, one of Silicon Valley's favorite words. It cannot be. Recommendation engines putatively do this work for you, but the result, as evidenced by Modsy's output, is bereft of identity. I would not have much of an issue with this except that I am certain that, having made an interior design choice, these homeowners would be unable to describe why they made it, except in the most superficial terms (e.g., "I like clean, simple lines"). If you're such a fan of midcentury modern design, you should be able to distinguish between a chair designed by Arne Jacobsen and one designed by Hans Wegner.
Nor am I being unduly snobbish. For there is a difference between snobbery, which is at its heart a power play, and connoisseurship, which is a desire for knowledge and therefore an understanding of the importance of difference - the difference that a difference makes, if you will. A snob is someone who will tell you that the wine you are drinking is good because Robert Parker gave it 96 points and it cost $200 for the bottle (and aren't you grateful). A connoisseur will tell you that this wine is good because of how it is made, or what it tastes like, or why it tastes the way it does and how it is different from any other bottle.
Furthermore, our connoisseur will be able to describe what food goes well with this wine, and at what point in the meal it's most appropriate to drink it. A connoisseur ultimately understands each experience and object as existing in relationship to other experiences and objects. It is the interaction of these entities that ultimately creates meaning and value. As hokey as the phrase may be, this is the judgment that is required to find that one object "that really ties the room together". Recommendation engines, by their very nature, are incapable of providing an holistic experience.
To be clear, the kind of connoisseurship I am positing is not one of absolutes, either. Values are never fixed; rather they are always being negotiated. This is another failing of recommendation systems. The flatness of fully quantified consumption behavior sets the stage for feedback loops that gradually become divorced from other criteria. Something that is popular becomes more popular simply due to its increasing popularity (in this sense, political parties and stock market bubbles share significant characteristics with recommendation engines). Other choices may experience a decline in popularity, but the system may not be able to ascertain why. Platforms may attempt to remedy this flatness and market opacity by creating multiple tiers to privilege "influencers" but this is misplaced, since it continues to impose no effort of learning on the mass of people accessing the engine. Simply put, we still cannot answer the question ‘why' in any meaningful sense.
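A toy simulation (all numbers invented) makes that feedback loop visible: if each new choice is made purely in proportion to current popularity, identical items end up with wildly different fortunes, and the system can offer no answer to ‘why' beyond prior popularity itself.

```python
import random

random.seed(7)

# Five interchangeable items, each starting with a single "purchase".
popularity = {f"item_{i}": 1 for i in range(5)}

# Each new user picks an item purely in proportion to its current popularity,
# and the pick is immediately fed back in as more popularity.
for _ in range(10_000):
    items = list(popularity)
    weights = [popularity[i] for i in items]
    choice = random.choices(items, weights=weights)[0]
    popularity[choice] += 1

for item, count in sorted(popularity.items(), key=lambda kv: -kv[1]):
    print(item, count)
# The final counts are typically very uneven, even though the items are
# identical: the only difference between them is a random early head start.
```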
Not surprisingly, the abdication of decisionmaking in favor of a recommendation engine has another consequence. In a distributed scenario populated by many actors - customers, consultants, designers, manufacturers, marketers - information is continually being exchanged. It is lumpy and uneven, but it is vital and dynamic. Various actors acquire different degrees of knowledge which they then use to modify their tastes and behaviors going forwards. But in a recommendation engine scenario, knowledge tends to flow overwhelmingly inwards - into the network. As Michael Tyka, a biophysicist and programmer at Google, notes, "The problem is that the knowledge gets baked into the network, rather than into us. Have we really understood anything? Not really - the network has." Well, except for the people who own the network; they might have access to the sum of this knowledge, with which they dispense as they please.
More troubling is the suspicion that recommendation engines, socially speaking, function as a sort of gateway drug for generating consent for the much broader and more pervasive rubric of what has become known as algorithmic judgment. Amazon recommendations and Netflix suggestions are all well and good if this sort of impersonal guidance remains an elective activity. However, once we begin using algorithms to determine the likelihood that someone is a good credit risk, or is likely to commit a crime, then we have raised the stakes substantially. I'll take a closer look at this wave of technologies next month.
In the meantime, we have strayed rather far from that performance at Carnegie Hall. So why did Varèse's Déserts join Debussy and Adams? As George Grella elegantly stated in his review of that evening, "Two is a trend, three is an argument. The stated connection through landscape and ecology was window dressing for abstract music about form, structure and time." A good critical assessment always provides, in addition to analysis and interpretation, an invitation into further conversation and deliberation. Grella intuits the intention behind the programming, and generates a narrative that also invites our own engagement. We build stories on top of stories. Can we do the same in a society excessively informed by recommendation engines? You walk into a stranger's home; in a friendly attempt to strike up conversation, you take note of the décor, but the conversation begins and ends with "oh, the computer did all of that for us".
Do the Right Thing and leave Judgment to Algorithms
by Muhammad Aurangzeb Ahmad
In Islamic theology it is stated that for each human being God has appointed two angels (Kiraman Katibin) who record the good and the bad deeds that a person commits over the course of a lifetime. Regardless of one's belief or disbelief in this theology, a world where our deeds are recorded is in our near future. Instead of angels, there will be algorithms processing our deeds, and it won't be God doing the judging but rather corporations and governments. Welcome to the strange world of scoring citizens. This phenomenon is not something out of a science fiction dystopia; some governments have already laid the groundwork to make it a reality, the most ambitious among them being China. The Chinese government has already instituted a plan whereby data from a person's credit history, publicly available information and, most importantly, their online activities will be aggregated to form the basis of a social scoring system.
Credit scoring systems like FICO, VantageScore and CE Score have been around for a while. Such systems were initially meant as just another aid in helping companies make financial decisions about their customers. However, these credit scores have evolved into definitive authorities on a person's creditworthiness, to the extent that human involvement in decision making has become minimal. The same fate may befall social scoring systems, the difference being that anything you post on social networks like Facebook or microblogging sites like Twitter, and your search and browsing behavior on Google (or their Chinese equivalents RenRen, Sina Weibo and Baidu, respectively), is being recorded and can potentially be fed into a social scoring model. As an example of how things can go wrong, let's consider the most populous country in the world: China. There, the government has decreed that a social scoring system will become mandatory by 2020. The Chinese government has also blocked access to non-Chinese social networks, which leaves just two companies, Alibaba and Tencent, running practically all the social networks in the country. This makes it all the more intriguing that the Social Credit Scoring system in China is being built with the help of these two companies. To this end the Chinese government has given the green light to eight companies to run their own pilot citizen scoring systems.
Compared to other scoring systems, a social scoring system may take into account not only a person's activities but also those of their friends: if your friend has a negative score or is considered a troublemaker by the government, then your own social score is negatively impacted as well. The Chinese social scoring system does not take this into account right now, but some have started ringing alarm bells that this is where it will end up. And just like a credit score, a social score is a number that you can only ignore at your own peril. A low social score may result in your not getting that promotion, not getting hired for that government job, or even for that private sector job, as private companies may not want to look bad by hiring too many employees with bad social scores. The government may even take things up a notch by scoring companies based on their workforce and hiring practices. Once the system is set up, the authorities may not even need to do much policing, because the algorithms will be doing the policing for them. This state of affairs has rightly been called gamified authoritarianism: the government can push society in a particular direction through soft coercion. Consider the following quote from Li Yingyun, director of Alibaba's pilot social scoring system, as reported by the BBC: "Someone who plays video games for 10 hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility." It is not hard to imagine where such a system could go next.
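As a thought experiment only, the guilt-by-association mechanic is trivially easy to encode. Every weight, activity and category in the sketch below is invented, and nothing here is claimed about how any real system, Chinese or otherwise, actually computes its scores; the point is simply how little code it takes to make your friends' behavior part of your own number.

```python
# A toy sketch of guilt-by-association scoring. All weights are invented.

def own_score(activities):
    """Score a person's own behavior against an arbitrary, invented rubric."""
    weights = {"bought_diapers": 5, "played_games_10h": -5, "attended_protest": -20}
    return 50 + sum(weights.get(a, 0) for a in activities)

def social_score(person, activities, friends_scores, friend_weight=0.3):
    """Blend a person's own score with the average score of their friends."""
    base = own_score(activities[person])
    friend_avg = sum(friends_scores) / len(friends_scores) if friends_scores else base
    return round((1 - friend_weight) * base + friend_weight * friend_avg)

activities = {"wei": ["bought_diapers"], "li": ["played_games_10h", "attended_protest"]}
li_score = social_score("li", activities, friends_scores=[])
wei_score = social_score("wei", activities, friends_scores=[li_score])
print(li_score, wei_score)  # wei's score drops simply for being li's friend
```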
Jay Stanley of the ACLU has rightly observed that something like the Chinese social scoring system is unlikely to come to fruition in the US. That said, governments are not the only entities one should be worried about. If the government does not create such a system, then large corporations may be tempted to replicate the Chinese social scoring systems; they have not only the resources but also the motivation to do so. This scenario is not as far-fetched as it may seem at first, as many organizations already have internal scoring systems that score their customers not just on their purchasing activity but also on their browsing behavior and engagement with social media.
Retail and shopping are not the only things that will be affected by social scoring systems. It is not inconceivable that matchmaking services like Match.com, eHarmony and Tinder will start using online social cues for marriage and dating. This is another area where China is ahead of the curve: Chinese dating apps are already incorporating social scores. After all, would you really want to date someone who could negatively impact your social score? Imagine a version of Tinder where you can filter people by their social scores. Just as you would not want to hang out with the wrong people who could lower your score, you also would not want to date someone who comes from a lower stratum of social scores. A social scoring system may, after all, penalize a person for choosing to spend their life with the ‘wrong' kind of person. The long-term effects of such a system could be self-policing as well as stratification along social scoring lines. If, at this point, you are wondering whether this is starting to look like the contours of a digital caste system solidifying, you are not imagining things.
One disclaimer I should add here is that the version of the social scoring system described above is the more extreme version of what the Western media has claimed the Chinese government has in store. Details coming out of China about this system keep changing, and hence any assessment has to take the various versions into account. The version currently being rolled out is limited to a credit scoring system based on online and offline financial activities. However, some analysts observe that the Chinese government may install a broader system in the near future, and that this is just a warm-up. There has already been some backlash against aspects of the system, and the Chinese government has scaled back its plans to some extent.
Then there is the dreaded correlation-implies-causation fallacy that people can easily fall prey to. While one may argue that a low social score does not mean that someone is a bad person, this is not how large swaths of the population are going to look at the matter. It does not help that the majority of the populace is not very adept at even basic statistics. Thus, if you have a low social score, people may start thinking that something is wrong with you, even though the low score may simply mean that you are the victim of adverse circumstances. The problem of visibility is even more severe for social scores than for financial scores. If anyone can simply query your social score, then you may end up making and breaking friendships based on an invisible social marker. In the near future, imagine wearing a VR device and walking down the street, able to see everyone's social scores. Consequently, not only would your interactions with the government and impersonal organizations change, but your day-to-day interactions with other people may be affected as well.
With the proliferation of social scoring systems used by governments and large corporations, a new profession may also emerge in the near future: reputation management for social credit scores. Organizations may crawl the internet, analyze one's social media usage, scrutinize one's social circle and so on, in order to make recommendations about what actions to take and which friendships to maintain or even break, based on how these might help a person improve their social score. Especially if querying social scores becomes as easy as glancing through a VR device, gold digging may no longer be limited to financial gain. We should also think about the psychological toll it might take on people if they have to force themselves to create the illusion of a perfect life in order to get that perfect social score. We have already seen glimpses of this phenomenon on Facebook and Instagram; it is not that hard, it seems, to create a fake life on social networks.
As with many autonomous systems run by algorithms, the proponents of social scoring systems may claim that since the human element is taken out, these systems are likely to be unbiased. Researchers like Frank Pasquale have observed that "there is nothing unbiased about scoring systems." Another important issue is the collection and the interpretation of the data. The former does not take into account the circumstances that lead people to make certain life decisions, and the latter may run into the problem of applying the same cultural yardstick across different cultures and ethnicities. We already have computing systems that recommend longer sentences for African Americans than for their Caucasian counterparts for the same crime. The problem is not that the programmers who designed these systems are consciously biased or racist, but rather that unconscious bias and the selectivity of the data get incorporated into the system without its creators realizing it.
If the use of social credit scores can get people to keep one another in line without direct government coercion, the result would be the perfect form of the Panopticon: not only are guards not needed in this version, but all the prisoners have to put on a smiley face as well. It is the stuff of dreams for authoritarian regimes. As with many other technologies of control, theocracies may jump at the thought of keeping tabs on what their citizens do. Just imagine what a technologically advanced North Korea or Saudi Arabia would look like with a social scoring system in place. Imagine a North Korea with listening devices everywhere, recording and analyzing everything that one says in real time. It would be impossible for dissidents to verbalize their opposition to the regime to one another, let alone mobilize and get their voices heard by the rest of the world; this would truly be, as the late Christopher Hitchens put it, a cosmic North Korea. Imagine a theocratic regime that keeps tabs on the religiosity of its populace and scores them based on their compliance with a particular creed and set of behaviors.
Given the technological and consequent social developments of our age, social scoring systems may be inevitable. If that is the case, then we should try to steer them towards greater transparency, privacy and fairness. Unlike with financial scoring metrics, one should be able to ask the system why one has a low social score. If the government or some company is giving you a negative score because of your participation in a demonstration, aren't they infringing on your constitutional rights by penalizing you? If so, you should have a say in rectifying your score. This may of course result in an unusually high number of lawsuits against social scoring systems, which may be reason enough not to have such systems in the first place. On the flip side, advocates of such systems could use this premise to argue that the systems should not be transparent. As is the case with other large-scale big data systems that collect personal data and offer services, the question comes down to finding the right balance between privacy and transparency. It may be that no such balance exists for social scoring systems.
This makes the era of Big Data quite different from previous information revolutions. One does not have to wait until the end of the world for judgment to be pronounced. The algorithms of our own making are judging us, and since they are created in our own image, they are likely biased. This may not be exactly what Jesus had in mind when he said "Judge not, lest ye be judged," but algorithms that judge us are already here, and they will increasingly be part of the social fabric.
Monday, September 12, 2016
No Can Go
The Spectacle is not a collection of images,
but a social relation among people, mediated by images.
~ Guy Debord
Now that Pokémon Go has had a few weeks to work its way through our collective psychosocial digestive tract, we can begin considering the effects of this latest, and by far most successful, manifestation of augmented reality (AR). Because it has been so successful, it's worth asking the big questions. Does Pokémon Go really make us more social? Does it make us better as individuals, or as a society? What gets amplified, and what gets obscured? (Here is a brief overview of how Pokémon Go works.)
It's worth mentioning that augmented reality broke into the national consciousness in the form of a game. Educational tools have a limited audience and their effectiveness is difficult to measure. Workplace applications are either niche or still undercooked - for example, if we're to go by this recent video by AR darling Magic Leap, work seems to entail checking the weather and stock prices, at least until you're interrupted by your kid sharing his school report on Mt. Everest. After buying some spiffy orange boat shoes, there's not much left to do but look up and zone out to the jellyfish languidly passing across the ceiling. Clearly, this is a job that is safe from automation.
Games, on the other hand, are the perfect vessel for distributing a technology such as AR. Software is a contained system; it is built according to specifications and anticipates a gamut of interactions. There are rules - visible or invisible - that tell you what the system may or may not do. And engagement with the system is based on the fact that identity and progress can be established and measured, with performance compared and contrasted with other players.
All of this makes software ideal as the substrate for the gamification of, well, everything. If you've ever used Uber, you can see the available cars trundling along the streets in your vicinity. Once you complete your ride, you rate your driver. A rather lesser-known fact is that your driver also rates you. Silicon Valley abhors a data vacuum, and a great way to get people to provide data about anything is to make a game out of it. The genius of this is that, consequently, people are really convinced that it's just a game.
So is Pokémon Go just a game? To be sure, there was much ridicule as gamers emerged from their darkened rooms, like refugees from Plato's cave, stumbling into the blinding sunlight in order to catch their little monsters. And any activity that seeks to weave the real world into its purview is bound to have odd consequences. To be sure, there are the heartwarming anecdotes of autistic teenagers gaining newfound social confidence. Or consider the (very dubious) account of a player who ran into "two sketchy black guys" in a park at 3am who - as it turns out! - were also catching Pokemon. When the cops come by to see what's up, they're persuaded to start doing the same. It's like that old Mr. Microphone commercial: Everyone wants a piece of the fun!
At the same time, there is a decidedly darker side to the proceedings. In Wiltshire, England, four teenagers had to be rescued from a cave complex by three fire engine units and two rope crews (how they had reception down in the caverns unfortunately went unexplained). A guy in New York got caught cheating on his girlfriend as a result of the traces the game left on his phone. Players have been "asked to refrain" from chasing virtual creatures through Arlington National Cemetery; nor have they been shy to play the game at funerals or at the 9/11 Memorial in downtown Manhattan. I mean, you're either going to catch them all, or you're not.
More gruesomely, in searching for virtual monsters, players have stumbled across real dead people. In Wyoming, a teenager found a corpse under a bridge. A player in Odense, Denmark found another one in a drainage canal. And a group of players made a similar discovery by a creek bed in a San Diego park. (I suspect the chyron from ABC's TV coverage of the event - "3 Women Find Dead Body Playing Pokémon Go" - is just crying out for a copy editor's cold, clammy hand.) This is just a casual survey, however, and I am sure there are other, similar cases. It's reasonable to conclude that cash-strapped local law enforcement might wish to consider previously uncontemplated virtues of crowdsourcing. Although the cops in Smithfield, Virginia, went one better and used the game to lure a player with an outstanding warrant into the police station itself, where she was promptly arrested. As Columbo used to say, "Sometimes the smartest thing to do is act stupid."
But truly tragic events have occurred as well, while others were only narrowly avoided. A couple of guys fell off a cliff while playing the game in Encinitas, California; despite this particular augmentation of their reality, they survived. People have been mugged, since anyone with the game can spot other players who may happen to be playing in out-of-the-way places. Even worse, in North Carolina, a teenager was shot to death by a 67-year-old widow after attempting to break into her house to claim a particularly rare Pokemon. Another was gunned down in an apparently random slaying while playing in San Francisco, and another along some railroad tracks in a small town in Guatemala. This too is a list compiled only through casual browsing and is by no means intended to be complete.
In addition to this awful catalogue, there are more accidents just waiting to happen. An NGO in Bosnia has warned players to avoid "areas littered with unexploded mines left over from the 1990s conflict" (as opposed to any other time, when avoidance would seem obvious). And one of the first posts I saw surface about Pokémon Go mused on the hazards of what might be called "playing Pokémon Go while black". Coming not long after the police shootings of Alton Sterling and Philando Castile, Omari Akil describes his epiphany while wandering around in a semi-oblivious play-state within the context of potential police violence: "When my brain started combining the complexity of being Black in America with the real world proposal of wandering and exploration that is designed into the gameplay of Pokémon Go, there was only one conclusion. I might die if I keep playing."
This almost came to pass in Iowa City, when a student at the University of Iowa was mistaken for a bank robber. Faith Ekakitie is a big guy - 6'3" and 290lbs - and plays as a defensive end for the school football team. He was also playing Pokémon Go in a park located a few minutes from the robbery that had just occurred, and his description somewhat matched the robber's. Thanks to the distraction of the game, plus the headphones that he was wearing, he didn't hear the police accost him, which led to four guns trained on him while he was stopped and searched. The fact that he emerged unscathed from this encounter is somewhat miraculous. It's my fervent hope that a game that memorializes the location of Tamir Rice's death doesn't eventually see an ironic consummation.
The memorialization of Rice's death at the hands of Ohio police leads to an interesting insight into how Pokémon Go constructs its world, which, in turn, is the world that its players see. Why are certain locations privileged over others? Why would you include places like minefields and national cemeteries? And simply from a logistical point of view, how do you launch an augmented reality game that is truly global in its scope?
In reality, Pokémon Go is a collaboration between two corporations. Nintendo is the owner of the Pokémon concept, which has been around in one form or another since 1995. But the technological enhancement - you might say the ‘Go' in Pokémon Go - was provided by Niantic, a former subsidiary of Google that was spun off in 2015. In 2012, Niantic released Ingress, a massively multiplayer online game.
The competition in Ingress is primarily between the two opposing factions (teams) rather than between individual players, and players never interact directly in the game or suffer any kind of damage other than temporarily running out of XM (the power that fuels all actions except movement and communication). The gameplay consists of capturing "portals" at places of cultural significance, such as public art, landmarks, monuments, etc., and linking them to create virtual triangular "control fields" over geographical areas.
This is pretty much Pokémon Go, without the branding. What's fascinating is how the "portals" came about: they were patched together from a number of different sources, including, perhaps most significantly, user-provided locations. Niantic started by mining public databases as well as Google Maps for popular locations. Once Ingress took off, players were asked to "submit places they thought were worthy of being portals. There have been about 15 million submissions, and [Niantic] approved in the order of 5 million of these locations worldwide".
So we can immediately appreciate the notion that there is some arbitrariness at work here. Wherever there are more people, and the wealthier and more connected those people are, these are the places that become privileged, because these are the voices that get amplified and heard within cyberspace. All the usual lumpiness applies.
This is made especially resonant in a fantastic Medium essay published by Rob Walker, about catching Pokémon in his local neighborhood, which happens to be New Orleans' Lower Ninth Ward, the same place that was devastated by Hurricane Katrina in 2005 and essentially left for dead. Walker compares the area's desolation with its further desolation in the realm of augmented reality: there really aren't that many places that are marked for play in the game. Instead, what Walker sees is a lost opportunity to experience the lived and broken but real environment of a post-hurricane neighborhood.
Instead of focusing on landmarks as we understand them, he prefers "the idea of a kind of Bizarro-world version of Pokémon Go, leading players not to their geography's most laudable features, but rather to the ones they'd prefer to ignore, or avoid." For him, an abandoned house is an object worthy of attention. To catch a Pokémon behind a car that hasn't been moved for over a year requires us to acknowledge that this car is here, even though we may have passed by it a hundred times before, perhaps only half-noticing its ongoing deterioration. It renders the invisible visible; it generates acknowledgment. For Walker, this is real exploration. Indeed, it is a genuine flânerie.
It is this act of making the invisible visible that makes the Tamir Rice landmark so extraordinary. The site, known as the Cudell Gazebo, bears no official designation of the event. However, if one approaches the gazebo with Pokémon Go in hand, the description reads "Community memorial for Tamir Rice, shot and killed by CPD officers who shot him in under 2s after breaking department policy regarding escalation of force." It doesn't get much more explicit than that. Moreover, this is in contrast to the official version of events, in which the police responsible were exonerated by the county prosecutor, who agreed that they had acted in fear of their lives.
But how did this virtual memorialization come about? There is only one comment on the local article about the Cudell Gazebo that I just cited. Someone by the name of Jamie wrote:
Well, this was a bit surreal. I wrote that not long after Tamir died, and never expected many people to read it…. Memorials are built in the hope people will remember. The events that ended Tamir Rice's life are something that I worry will be forgotten. It was difficult to see the gazebo pictured without context, and I added a bit without expecting it to be noticed by anyone else.
Nicholas Carr, in a recent Aeon essay, writes that "What I want from technology is not a new world. What I want from technology are tools for exploring…the world that comes to us thick with ‘things counter, original, spare, strange', as Gerard Manley Hopkins once described it." Even if those tools take the form of a transient video game - or perhaps especially if they take that form - somehow, in ways that are both lucky and lucid, these tools may yet lie within our power.
(All images from the fabulous web comic Apocamon: The Book Of Revelation)
Monday, August 22, 2016
Modeling Artificial and Real Societies
by Muhammad Aurangzeb Ahmad
Science fiction literature is full of historical what-ifs that speculate on how the world would have looked if certain events had gone a different way: if the Confederates had won the American Civil War, if the Western Roman Empire had not fallen, if Islam had made inroads into the imperial household in China, and so on. At best these are speculations that we can entertain to shed light on our own world, but imagine if there were a way to gauge how societies react under certain environmental constraints, social structures and stresses. Simulation is often described as the Third Paradigm in science, and the field of Social Simulation seeks to model social phenomena that cannot otherwise be studied because of practical and ethical constraints. Isaac Asimov envisioned a science of predicting the future with psychohistory in his Foundation series of science fiction novels.
The history of social simulation can be traced back to the idea of cellular automata developed by Stanislaw Ulam and John von Neumann. A cellular automaton is a system of cells that interact with their neighbors according to a set of rules. The most famous example is Conway’s Game of Life, a very simple simulation that generates self-organizing patterns which one could not really have predicted just by knowing the rules. To illustrate the concept of social simulation, consider Schelling’s model of how racial segregation happens. Consider a two-dimensional grid where each cell represents an individual. The cells are divided into two groups represented by different colors. Initially the cells are randomly seeded in the grid, representing an integrated neighborhood. The cells, however, have a preference regarding what percentage of their neighbors should belong to the same group (color). The simulation is run for a large number of steps. At each step, if the number of same-group neighbors a person (cell) has is less than a pre-defined threshold, the person moves by a single cell; if the number of such neighbors meets the threshold, the person stays put. Even with such a simple setup we observe that the integrated neighborhood slowly becomes segregated, so that after some iterations the neighborhood is completely segregated. The evolution of the simulation can be observed in Figure 1. The main lesson is that even without overt racism, merely having a preference about one’s neighbors can lead to a segregated neighborhood.
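To make the mechanism concrete, here is a minimal Schelling-style sketch in Python. The grid size, empty fraction, threshold and step count are arbitrary illustrative choices, and unsatisfied agents here jump to a random empty cell (a common simplification) rather than shifting by a single cell as described above.

```python
import random

SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 20, 0.1, 0.5, 50

def neighbors(grid, r, c):
    """Yield the group labels of the occupied cells surrounding (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                rr, cc = (r + dr) % SIZE, (c + dc) % SIZE
                if grid[rr][cc] is not None:
                    yield grid[rr][cc]

def satisfied(grid, r, c):
    """An agent is satisfied if enough of its occupied neighbors share its color."""
    occupied = list(neighbors(grid, r, c))
    same = [n for n in occupied if n == grid[r][c]]
    return not occupied or len(same) / len(occupied) >= THRESHOLD

# Randomly seed an "integrated" neighborhood of two groups plus some empty cells.
labels = ['A', 'B', None]
weights = [(1 - EMPTY_FRAC) / 2, (1 - EMPTY_FRAC) / 2, EMPTY_FRAC]
grid = [[random.choices(labels, weights)[0] for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and not satisfied(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None  # relocate the unsatisfied agent
        empties.append((r, c))
```

Running this and printing the grid before and after typically shows the randomly mixed neighborhood drifting into solid same-color blocks, even with the modest 50% preference assumed here.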
Schelling’s model of segregation is almost half a century old, and much more sophisticated models to simulate social and economic phenomena have been created since then. One such pioneering simulation that came out in the early 1990s was Sugarscape, which simulated things like how populations respond to changes in the environment and the availability of resources. Using data about rainfall, soil fertility and Native American settlements, scientists from the then newly formed Santa Fe Institute tried to simulate the patterns of settlement of the Anasazi Indians in the American Southwest over the course of centuries. By changing the parameters of the simulations, anthropologists were for the first time able to simulate how a group of people could have responded to their changing environment given certain constraints. Thus the field of simulated archeology was born. There was, however, one problem, which to this day has not really been solved: one could vary the simulation’s parameters, but only within a very narrow range of those parameters did the output match what the historical data showed. Thus the question arises: are we really simulating the behavior of people in this case, or are we forcibly fitting a mathematical model to a certain set of data? The answer is not clear, and researchers have taken strong positions on either side of the debate.
The virtual Anasazi of the simulations had a number of real-world characteristics like procreation, food consumption, resource exploitation, migration, etc., but there is only so much data that one can collect about the past, especially about a civilization that existed a thousand years ago. As described previously, the virtual Anasazi simulation is too sensitive to its parameters. Another failure of the model was that even though it approximated the rise and fall of the Anasazi fairly well, it predicted that the Anasazi should have retained a substantial population at a time when, in fact, they abandoned their dwellings for good. An interesting question arises here: could one do better with access to more detailed data about the Anasazi, or about any other group of people? This question is no longer hypothetical, given that we now routinely collect data about hundreds of millions of people. Could such data be used to better model modern societies? Some may scoff at the idea and claim that, given the complexity of human societies, such an endeavor is impossible. That said, it is possible to predict aggregate behaviors even while recognizing that individuals are unique. Large masses of people do exhibit behaviors that can be described by statistical properties. It is the interaction of people, and not just their individual behaviors, that one has to get right. Recent advances in Big Data and the science of simulation move us one step closer to such a possibility. One might not be able to predict the behavior of all of the people all of the time, but one might predict most of the people most of the time. This might be sufficient for something approximating Asimov’s psychohistory in the real world.
It may be that such a project is infeasible because, in order to make it work, one would have to severely violate people’s privacy to collect sufficiently rich data. In the end there may be a trade-off to be made between collecting data for sufficiently rich simulations and preserving people’s privacy. On the one hand, one wants to keep simulations simple so that one can study the effect of a particular phenomenon, e.g., racial or national preference in the case of neighborhoods. On the other hand, this runs the risk of oversimplifying a human phenomenon in which multiple factors may be at play. After all, humans are complex creatures with multiple, often contradictory, proclivities that can yield unexpected results.
A word of caution should be added here: even more than half a century after its inception, simulation does not enjoy anything near universal acceptance in the social sciences. Historical data collected on urbanization patterns, climate change, land usage, conflicts, etc. could also be used to get a better understanding of the local history of different civilizations; imagine, for example, what one could learn about the history of Europe and the Middle East by applying such modeling techniques to the Black Plague. We are living at the beginning of the age of Big Data. Marrying Big Data and simulation can do a great deal of social good or evil, depending on how one uses this technology. Predictability and algorithmic control may become facts of life for our descendants.
Monday, July 18, 2016
Faster, Pokémon! Kill! Kill!
"The scent, the scent alone is enough for our beasts."
There's that old saying that goes "When the going gets weird, the weird turn pro". Certainly, weird times such as these demand weird explanations. Old explanatory frameworks that have been dying long, slow deaths continue to have nails pounded into their coffins. Consider how the post-Cold War triumph of neoliberalism, as promoted by Francis Fukuyama's The End Of History, has had the crap beaten out of it first by 9/11, then by the global financial meltdown, and now by Brexit (the best tweet I saw concerning Brexit was all of three words: "Francis Fukuyama lol").
And no one, least of all Fukuyama, could have predicted the circus slated to begin in Cleveland, with the most unlikely candidate in recent political history about to receive the nomination of the Republican Party for President. Actually, I should amend that: perhaps Upton Sinclair did, 80 years ago. But Sinclair had the dubious benefit of witnessing firsthand the rise of fascism; few people are alive today who remember how wide the Overton Window actually used to be. We need to get much, much weirder.
But it's not just that things are getting weirder. Even more germane is that things are getting weirder, faster. This is nowhere more evident than in the ways in which technologies are insinuating themselves into the social fabric. As I've argued before, each technological development creates the substrate upon which a further, faster and even more unpredictable set of technologies and their circumstances manifests. Perhaps I'm biased, since I've been observing these phenomena for a while, but consider a few recent developments.
Exhibit A: Racially inflected police brutality is an old story. But awareness of it has skyrocketed in the past few years with the prevalence of video cameras. However, this prevalence was only made possible when video recording was bundled into the larger rubric of the smart phone. If video cameras as objects were sufficient unto themselves, we would have seen a very different trajectory following the 1991 Holliday videotape of the Rodney King beating. But it took nearly a full generation for the creation of not only the means of cheap and easy recording, but also its equally cheap and easy distribution. And until recently, even this latter infrastructure was fairly staid: YouTube and perhaps a few other platforms.
More recently we've seen the rise of live streaming of video. First popularized by LiveStream and Ustream (both founded in 2007), these services were still missing what turned out to be a key component: integration into social media. This was remedied in 2015, when Periscope was bought by Twitter before the service had even launched. Not one to let a competitive threat go unaddressed, Facebook developed Facebook Live, its own native videostreaming service. It was in fact Facebook Live that was used by Diamond Reynolds ten days ago to document the remainder of Philando Castile's life as he lay in the driver's seat of his car, bleeding to death. And thanks to the tight integration with social media, we can go back to Reynolds' page, not just to relive the footage, but also to bear witness to the comments as they started rolling in: "Don't stop recording" and "We are watching you cop. What's your name?".
It hasn't escaped notice that Reynolds had the remarkable presence of mind to livestream this "event", as opposed to merely videotaping it, which itself would have been noteworthy (and one can only imagine that this preparedness was inculcated by the constant threat of police harassment, which is itself such a thoroughly damning thought). But consider the risks of simple videotaping: the possibility that the police would find a reason to confiscate the footage, or the phone's memory card, or the phone itself, which might then meet with an "unfortunate accident", thereby eliminating a pesky piece of evidence that would run contrary to police testimony. This is why the ACLU has been rolling out its Mobile Justice app - once installed on a smart phone, it is essentially a one-touch recording device that sends video directly to ACLU servers. It's not the only app for this, either, which is a good thing, since this kind of recording must be able to withstand multiple points of failure: just a few hours after it was streamed on Facebook, the Castile video was temporarily removed, due to a "technical glitch", whatever that might mean. No doubt a helpful algorithm was trying to shield Facebook's users from something awfully violent.
However, things get weirder.
Exhibit B: As a direct result of the above, massive nation-wide demonstrations were mobilized against police brutality. And as we know, the demonstrations in Dallas ended with five police being shot by a sniper. Compounding this unprecedented escalation was how the shooter, once cornered, was brought to heel. A robot, usually used for bomb disposal, was guided via remote control to the part of the parking garage where the suspect was cornered. Jury-rigged with a pound of C4 plastic explosive, it was detonated, decisively ending the standoff.
It was the first known instance of a robot being used by police to kill a suspect. And yet it conforms with the larger trend of the militarization of police, itself a consequence of the demobilization of vast amounts of matériel freshly returned from our most recent Middle Eastern adventures and in need of a good home. But what has mystified me about this incident is the fact that the police went straight to the use of lethal force. In the ensuing coverage, no one has thought to raise the possibility of a non-lethal option, for example strapping a tear gas canister to the robot. Peter Singer, who has written extensively about the use of drones and similar machines within a military context, noted that "the closest parallel I am aware of was a case in 2011, when police in Tennessee strapped tear gas grenades to a robot that then accidentally started a fire in a mobile home. This doesn't seem a great parallel, as it does not reflect a decision deliberately to use the robot to kill." Indeed.
Unsurprisingly, the things that we thought we should most fear turn out to still be mirages that may or may not manifest themselves in the future. That is, the prospect of the evocatively named LAWS (Lethal Autonomous Weapons Systems) is still hazy and indistinct. But it's much easier to focus one's anxiety on a hypothetical machine gun-wielding robot that independently identifies and then executes its prey. There is something sufficiently self-contained about such an object. It's by virtue of its succinctness that thinking about it seems even possible, whereas the systems that are currently in place are more vague and distributed. As reprehensible as the overuse of drone strikes may be, there is still the lukewarm comfort that there is a human being - or a chain of command that consists of human beings - who ultimately identifies the target and pulls the trigger. Except that a closer look at target selection demonstrates that we are even less in control of that than we thought. So the future reaches into the present, playfully pawing at us in the form of a jury-rigged robot arm and ‘machine-suggested' militant targets.
(This is not the first time that we have committed such a cognitive fallacy. We spend so much time worrying about the sudden appearance of a malevolent or inscrutable super-intelligent AI that we forgo the much greater - and already present - concerns of whether artificial intelligence and algorithmic judgment are being used to gather and act on information that is beyond our power to even notice, let alone seek redress).
The precedent that is set by the actions of the Dallas police is troubling for exactly this reason: it is a precedent. When technology (and its ad hoc deployment) moves as quickly as this, there is no hope for policy, let alone legislation, to keep up. For heaven's sake, we still can't properly legislate copyright law in the digital age, and this has been a fairly clearly delimited issue for the last 20 years. If the courts extend the well-established principle that a police officer may use lethal force if he or she feels threatened to include the idea that a kamikaze version of WALL*E can be used to alleviate such a threat, then we can expect to see a normalization of the use of such force vectors. In turn, manufacturers will all too gladly step up so that the police don't have to go through the ordeal of duct-taping a packet of C4 to a retractable arm. And in short order an industry springs up, with interests and lobbyists to represent those interests: good luck legislating anything in the face of that. I just wonder if a camera livestreaming the proceedings will be part of the basic package, or if that will cost extra.
So we have a situation here where the convergence of video streaming and social media platforms led to protests that in turn led to the targeting of police officers by a shooter who was killed by a robot carrying an improvised explosive device. Can things get any weirder? Let's try.
Exhibit C: With all the weirdness going around, it was almost a relief that the week's news ended on something that people of my generation can understand: a good old-fashioned coup d'état. Except that the attempt in Turkey fell into chaos within a matter of hours; it seems that in the current news cycle not even a mutiny by the military has that much time to prove itself. Furthermore, one would certainly expect the Turkish army, which has been staging coups with some regularity since 1960, to have acquired solid experience in the matter.
All flippancy aside, though, there is still much that is unknown about why the military made its move when it did. One generally waits for the Prime Minister to be out of the country, whereas Erdogan was vacationing in Marmaris, a Turkish coastal town. Be that as it may, the coup began with the requisite deference for tradition: the declaration of martial law, the imposition of curfew and the rapid appearance of tanks on the streets, military jets buzzing Ankara and Istanbul, and all that. In addition, one of the immediate targets of any coup is the TV station, and indeed the plotters fulfilled their mission of getting the national TV to sign off.
But things also began to go very wrong, very quickly. Here is something we do know: very soon after it became clear that a coup was underway, Erdogan phoned into CNN's Turkey bureau, still on-air, and conducted an interview via FaceTime, on the news anchor's iPhone. You can see a bit of the astonishing video here, complete with the moment when the anchor, who is interviewing him by holding up the phone to the camera, has to decline an incoming call from someone else (a treasonous army general, perhaps? Wouldn't that have been the most fantastic use of three-way calling?). Now, Erdogan is savvy enough when it comes to technology - in fact I think it's reasonable to state that populist tendencies positively correlate with mastery of social media such as Twitter, as well as an equivalent distaste for anyone using the same platforms to proffer a different message. So he used his time to appeal to his supporters to take to the streets and "protect our democracy".
To its credit, the army planned well enough in advance to block Facebook, Twitter and YouTube, which was likely not difficult, since Turkey has always been keen on regulating its citizens' access to the Internet. However, smaller platforms such as Instagram and Vimeo were still functioning at the time of the coup. More crucially, it seems like Facebook Live and Periscope - the same applications involved in documenting police brutality I cited above - were also functioning. So the plotters found themselves in a position where protestors against the coup hit the streets of Ankara and Istanbul, "swarming tanks and soldiers…and even reportedly performing citizen's arrests. Many of the protests were streamed on Periscope and Facebook Live." To watch a bunch of guys in street clothes swarm a tank like carpenter ants, stripping the soldiers of their weapons and throwing them bodily out of their vehicles, all in defense of an authoritarian regime, has to be one of the more surreal things I have seen recently.
This attitude towards technology as an organizing force is quite an ironic reversal, considering that, during a 2014 meeting with the Committee to Protect Journalists, Erdogan actually said, "I am increasingly against the Internet every day". And it is still premature to maintain that the organizing power of social media played a decisive role, as we are still considering its effects on the Arab Spring of 2011. But one thing that is certain is that the AKP emerges from the coup stronger than ever. As it rounds up its enemies and rivals - at last count already more than 6,000 have been detained - it's reasonable to assume that press and internet freedoms will re-join those ranks, having served their purpose in the "protection of our democracy."
There are no easy patterns to be drawn from the above three cases. If anything, we may have to fall back on the cliché that people will take whatever tools they have at their disposal and bend them to the circumstances. I don't find this satisfying as an explanation, but in a world where total surveillance is being used to hunt terrorists who nevertheless cause tremendous damage by simply driving a truck into a crowd, I'm not sure if any theory can make sense for long enough before the next event comes along and proceeds to make a hash of everything. But this is the nature of an ever-accelerating weirdness. And apologies to anyone who thought this post would be about Pokémon Go. There's only so much weirdness anyone can take.
Monday, June 20, 2016
The Mesh of Civilizations in Cyberspace
by Jalees Rehman
"The great divisions among humankind and the dominating source of conflict will be cultural. Nation states will remain the most powerful actors in world affairs, but the principal conflicts of global politics will occur between nations and groups of different civilizations. The clash of civilizations will dominate global politics."
—Samuel P. Huntington (1927-2008) "The Clash of Civilizations"
In 1993, the Harvard political scientist Samuel Huntington published his now infamous paper The Clash of Civilizations in the journal Foreign Affairs. Huntington hypothesized that conflicts in the post-Cold War era would occur between civilizations or cultures and not between ideologies. He divided the world into eight key civilizations which reflected common cultural and religious heritages: Western, Confucian (also referred to as "Sinic"), Japanese, Islamic, Hindu, Slavic-Orthodox, Latin-American and African. In his subsequent book "The Clash of Civilizations and the Remaking of the World Order", which presented a more detailed account of his ideas and how these divisions would fuel future conflicts, Huntington also included the Buddhist civilization as an additional entity. Huntington's idea of grouping the world in civilizational blocs has been heavily criticized for being overly simplistic and ignoring the diversity that exists within each "civilization". For example, the countries of Western Europe, the United States, Canada and Australia were all grouped together under "Western Civilization" whereas Turkey, Iran, Pakistan, Bangladesh and the Gulf states were all grouped as "Islamic Civilization" despite the fact that the member countries within these civilizations exhibited profound differences in terms of their cultures, languages, social structures and political systems. On the other hand, China's emergence as a world power that will likely challenge the economic dominance of Western Europe and the United States, lends credence to a looming economic and political clash between the "Western" and "Confucian" civilizations. The Afghanistan war and the Iraq war between military coalitions from the "Western Civilization" and nations ascribed to the "Islamic Civilization" both occurred long after Huntington's predictions were made and are used by some as examples of the hypothesized clash of civilizations.
It is difficult to assess the validity of Huntington's ideas because they refer to abstract notions of cultural and civilizational identities of nations and societies without providing any clear evidence on the individual level. Do political and economic treaties between the governments of countries – such as the European Union – mean that individuals in these countries share a common cultural identity?
Also, the concept of civilizational blocs was developed before the dramatic increase in the usage of the internet and social media which now facilitate unprecedented opportunities for individuals belonging to distinct "civilizations" to interact with each other. One could therefore surmise that civilizational blocs might have become relics of the past in a new culture of global connectivity. A team of researchers from Stanford University, Cornell University and Yahoo recently decided to evaluate the "connectedness" of the hypothesized Huntington civilizations in cyberspace and published their results in the article "The Mesh of Civilizations in the Global Network of Digital Communication".
The researchers examined Twitter users and the exchange of emails between Yahoo-Mail users in 90 countries with a minimum population of five million. In total, they analyzed "hundreds of millions of anonymized email and Twitter communications among tens of millions of worldwide users to map global patterns of transnational interpersonal communication". Twitter data is public and freely available for researchers to analyze, whereas the emails had to be de-identified for the analysis. The researchers did not have any access to the content of the emails; they only analyzed whether users in any given country were emailing users in other countries. The researchers focused on bi-directional ties. This means that ties between Twitter users A and B were only counted as a "bi-directional" tie or link if A followed B and B followed A on Twitter. Similarly, for the email analysis, the researchers only considered ties in which user X emailed user Y and there was at least one email showing that user Y had also emailed user X. This requirement for bi-directionality was necessary to exclude spam tweets or emails in which one user may send out large numbers of messages to thousands of users without there being any true "tie" or "link" between the users that would suggest an active dialogue or communication.
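As a purely illustrative sketch (not the researchers' actual pipeline), the bi-directionality requirement can be expressed in a few lines of Python, assuming hypothetical inputs: a set of directed (sender, recipient) pairs and a mapping from users to country codes.

```python
from collections import defaultdict

def country_ties(directed_pairs, country):
    """Count reciprocated, transnational ties between countries.

    directed_pairs: set of (a, b) meaning "a follows/emails b".
    country: dict mapping each user to a country code.
    """
    ties = defaultdict(int)
    counted = set()
    for a, b in directed_pairs:
        pair = frozenset((a, b))
        if (b, a) in directed_pairs and pair not in counted:
            counted.add(pair)  # count each reciprocated pair only once
            ca, cb = country[a], country[b]
            if ca != cb:  # keep only ties that cross national borders
                ties[tuple(sorted((ca, cb)))] += 1
    return dict(ties)

# Toy example: only u1 and u2 form a bi-directional, transnational tie.
pairs = {("u1", "u2"), ("u2", "u1"), ("u3", "u1")}
users = {"u1": "US", "u2": "PK", "u3": "TR"}
print(country_ties(pairs, users))  # {('PK', 'US'): 1}
```

The one-directional pair ("u3", "u1") is discarded, which is exactly the kind of unreciprocated, possibly spam-like link the researchers wanted to exclude.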
The researchers then created a cluster graph, which is shown in the accompanying figure. Each circle represents a country, and the 1000 strongest ties between countries are shown. The closer a circle is to another circle, the more email and Twitter links exist between individuals residing in the two countries. To keep the mathematical analysis unbiased, the researchers did not assign any countries to "civilizations" in advance, but they did observe key clusters of countries emerge which were very close to each other in the graph. They then colored the circles to reflect the civilization categories as defined by Huntington, gave ties within a civilization the same color, and kept ties between countries of two distinct civilization categories in gray.
At first glance, these data may appear to be a strong validation of the Huntington hypothesis, because the circles of any given color (i.e. a Huntington civilization category) are on average far closer to each other than circles of a different color. For example, countries belonging to the "Latin American Civilization" (pink) cluster strongly together, and some countries such as Chile (CL) and Peru (PE) have nearly exclusive intra-civilizational ties (pink). Some of the "Slavic-Orthodox Civilization" (brown) countries show strong intra-civilizational ties, but Greece (GR), Bulgaria (BG) and Romania (RO) are much closer to Western European countries than to other Slavic-Orthodox countries, likely because these three countries are part of the European Union and have shared a significant cultural heritage with what Huntington considers the "Western Civilization". "Islamic Civilization" (green) countries also cluster together, but they are far more spread out. Pakistan (PK) and Bangladesh (BD) are far closer to each other and to India (IN), which belongs to the "Hindu Civilization" (purple), than to Tunisia (TN) and Yemen (YE), which Huntington also assigned to the "Islamic Civilization".
One obvious explanation for there being increased email and Twitter exchanges between individuals belonging to the same civilization is the presence of a shared language. The researchers therefore analyzed the data by correcting for language and found that even though language did contribute to Twitter and email ties, the clustering according to civilization was present even when taking language into account. Interestingly, of the various factors that could account for the connectedness between users, it appeared that religion (as defined by the World Religion Database) was one of the major factors, consistent with Huntington's focus on religion as a defining characteristic of a civilization. The researchers conclude that "contrary to the borderless portrayal of cyberspace, online social interactions do not appear to have erased the fault lines Huntington proposed over a decade before the emergence of social media." But they disagree with Huntington in that closeness of countries belonging to a civilization does not necessarily imply that it will lead to conflicts or clashes with other civilizations.
It is important not to over-interpret one study of Twitter and email links and make inferences about broader cultural or civilizational identities just because individuals in two countries follow each other on Twitter or write each other emails. The study did not investigate identities, and some of the emails could have been exchanged as part of online purchases without indicating any other personal ties. However, the data presented by the researchers do reveal some fascinating new insights about digital connectivity that are not discussed in much depth by the researchers. China (CN) and Great Britain (GB) emerge as some of the most highly connected countries at the center of the connectivity map, with strong extra-civilizational ties, including to countries in Africa and to India. Whether this connectivity reflects the economic growth and increasing global relevance of China, or a digital footprint of the British Empire even decades after its demise, would be a worthy topic of investigation. The public availability of Twitter data makes it a perfect tool for analyzing the content of Twitter communications and thus defining how social media is used to engage in dialogue between individuals across cultural, religious and political boundaries.
Huntington, S. P. (1993). The Clash of Civilizations. Foreign Affairs, 72(3), 22-49.
State, B., Park, P., Weber, I., & Macy, M. (2015). The mesh of civilizations in the global network of digital communication. PLoS ONE, 10(5), e0122543.
Monday, May 23, 2016
Kind Of Like A Metaphor
"I got my own pure little bangtail mind and
the confines of its binding please me yet."
~ Neal Cassady, letter to Jack Kerouac
One of the curious phenomena that computing in general, and artificial intelligence in particular, has emphasized is our inevitable commitment to metaphor as a way of understanding the world. Actually, it is even more ingrained than that: one could argue that metaphor, quite literally, is our way of being in the world. A mountain may or may not be a mountain before we name it - it may not even be a mountain until we name it (for example, at what point, either temporally or spatially, does it become, or cease to be, a mountain?). But it will inhabit its ‘mountain-ness' whether or not we choose to name it as such. The same goes for microbes, or the mating dance of a bird of paradise. In this sense, the material world existed, in some way or other, prior to our linguistic entrance, and these same things will continue to exist following our exit.
But what of the things that we make? Wouldn't these things somehow be more amenable to a more purely literal description? After all, we made them, so we should be able to say exactly what these things are or do, without having to resort to some external referents. Except we can't. And even more troubling (perhaps) is the fact that the more complex and representative these systems become, the more irrevocably entangled in metaphor do we find ourselves.
In a recent Aeon essay, Robert Epstein briefly guides us through a history of metaphors for how our brains allegedly work. The various models are rather diverse, ranging from hydraulics to mechanics to electricity to "information processing", whatever that is. However, there is a common theme, which I'll state with nearly the force and certainty of a theorem: the brain is really complicated, so take the most complicated thing that we can imagine, whether it is a product of our own ingenuity or not, and make that the model by which we explain the brain. For Epstein - and he is merely recording a fact here - this is why we have been laboring under the metaphor of brain-as-a-computer for the past half-century.
But there is a difference between using a metaphor as a shorthand description, and its broader, more pervasive use as a guide for understanding and action. In a 2013 talk, Hamid Ekbia of Indiana University gives the example of the term ‘fatigue' used in relation to materials. Strictly speaking, ‘fatigue' is "the weakening of a material caused by repeatedly applied loads. It is the progressive and localised structural damage that occurs when a material is subjected to cyclic loading." (I generally don't like linking to Wikipedia but in this instance the banality of the choice serves to underline the point). Now, for materials scientists and structural engineers, the term is an explicit, well-bounded shorthand. One doesn't have pity for the material in question; perhaps a poet would describe an old bridge's girders as ‘weary' but to an engineer those girders are either fatigued, or they are not. Once they are fatigued, no amount of beauty rest will assist them in recuperating their former, sturdy (let alone ‘well-rested' or ‘healthy') state.
The term ‘fatigue' is further instructive because it illustrates the process by which metaphor spills out into the world. If a group of engineers are having a discussion around an instance of ‘fatigue', their use of the term in conversation is precise and understood. This is a consequence of the consistency of their training just as much as of its relevance to the physical phenomenon. After all, it's easier to say "the material is fatigued" than "the material has been weakened by the repeated application of loads, etc." But the integrity of a one-to-one relationship between a word and its explanation comes under pressure (so to speak) when this same group of experts presents its findings to a group of non-experts, such as politicians or citizens. Of course, taken by itself, the transition of a phrase such as ‘fatigue' does not have overly dramatic implications. What it does do, however, is invite the dissemination of other, adjacent metaphors into the conversation. Soon enough ‘fatigue', however rigorously defined, accumulates into declarations of the ‘exhausted' state of our nation's ‘ailing' infrastructure. There are no technical equivalents to these terms, which call us to action by insinuating that objects like roads and tunnels may be feeling pain, whereas at best we are the recipients of said suffering.
Intriguingly, the complexity of this semiotic opportunism ramps up quickly and considerably. Roads and bridges may be things that we have built, but they still exist in the world, and will continue to exist whether we fix them or not. They may remind us of our success or inadequacy, but their intended purpose is almost never unclear. On the other hand, there are other things that we have built, things that exist in a much more precarious sense - it may even be a stretch to call them objects - and whose success qua objects is also much more variable. This is where we find computation, software and artificial intelligence.
The purpose of computation, broadly speaking, is to perform an action - some kind of service, or analysis, that may or may not be regular (in the sense that it can be anticipated) and is rarely, if ever, regulated. In the world of infrastructure, you either make it across the bridge or you don't, and there are regulations meant to ensure a positive outcome. As Yoda advises, "Do or do not. There is no try." But computation is different. I am not talking about something linear, like programming a computer to add two numbers. With a search engine, for example, you may find the information or not; or what you find may be good enough, or you may think it's good enough but it's really not, and you'll never know. The service, or rather the experience of the service, becomes the object; the code, which is perhaps the true object, is obscured from your view. And we tend to be poor at processing this kind of ambiguity, and when faced with ambiguity we reach for metaphor as a sense-making bulwark against the messiness of the unknown.
As we expect more of our computing technologies, the ensuing purposes also shift temporally. Our software models the world around us, and the way in which we inhabit the world. As such, its utility is displaced into the future: we value it for its predictive nature. We want it to anticipate not simply what we need right now (let alone what we needed yesterday) but what we might want tomorrow, or six months from now. At this point we find ourselves squarely in a place of mind. That is, we expect our inventions to become extensions of ourselves, because we cannot seem to make the leap that something non-human can have any chance of assisting us at being better humans. Software (and specifically AI) is singularly pure in this regard, although traces already exist in previous technologies. So while we don't worry about making our bridges anything more than functional and, somewhat secondarily, aesthetically pleasing, we tend to additionally attribute human-like traits to ships, perhaps because we perceive our lives as much more committed to the latter's successful functioning. But while we may ascribe personality to ships, we go a step further and come to expect intelligence of the software that we make: witness the proliferation of chatbots and personal assistants, to the point that we can now consult articles about why chatbot etiquette may be important.
In the meantime, these technologies themselves are being generated via metaphor. After all, these are exceedingly complex pieces of software, designed, implemented and refined by hundreds of software engineers and other staff. It is inevitable that there should be philosophies that guide these efforts. According to Ekbia, every one of the ‘approaches' is fundamentally metaphorical in nature. That is, if you decide you're going to write software that will appear intelligent to its users, you have to put a stake in the ground as to what intelligence is, or at least how it is come by. And since we haven't really figured out how intelligence arises within ourselves to begin with, we wind up with a series of investments in a mutually exclusive array of metaphors.
Is intelligence symbolic, and therefore symbolically computable? People like Stephen Wolfram would say yes. Or perhaps intelligence arises if you have enough facts and ways to relate those facts; in which case Cyc and other expert systems are your ticket. Another approach to modeling intelligence has been getting the most press lately: the idea of reinforcement learning of neural networks. (Of course, this last one models how neurons work together within our own brains, so it is a double metaphor.)
The point is that all of these ‘approaches' are metaphorical in substance. We still have not been able to resolve the mind-body problem, or how consciousness somehow arises from the mass of neurons that are discrete, physical entities beholden to well-documented laws of nature. And even though lots of theories of mind have been disproven, the fact that we cannot agree on the nature of intelligence for ourselves implies that any idea of what a constructed intelligence may be is, by definition, a metaphor for something else. Science can avail itself of the luxury of not-knowing, of being able to say, "We are fairly certain that we know this much but no more, and these theories may or may not help us to push farther, but they also may fall apart and we'll have to start over". Technology, on the other hand, must deliver a solution - something that works from end to end. In the case of AI, where models must be robust, predictive and productive, the designers of a constructed intelligence cannot say, "Well, we know this much and the rest happens without us understanding it." Your respect for the truth results in no product, and a lot of angry investors. So metaphor in this sense is not a philosophical luxury, it's how you're able to ship any code at all.
Where things get really interesting in this kind of a world is when the metaphors start getting good at producing results. So now we find ourselves in a very weird situation. There are competing metaphors out there in the computational wild: symbolic, expert, neural network systems, as well as others. Increasingly, hybrid systems are also appearing. What if some or even all of these approaches succeed in functioning 'intelligently'? I have to put the word in quotes here, because it's pretty clear that, without a mutually agreed-upon anchoring definition, we have ventured into some very murky waters. These waters are made all the more turbulent because technology's need to solve problems for us (or perhaps to also create them) will continue to push what we consider as viably or usefully 'intelligent'.
The fact is that no AI outfit or its investors will sit around waiting for the scientific community to settle on a model for cognition and then proceed to build products consistent with that model. The truth is nice, but there are (market) demands that need to be met now. If science can supply industry with signposts on how to build better technology, great. At the same time, if the product solves the clients' or users' problems then who cares if it's really intelligent or not? Recall the old adage: Nothing succeeds like success. The tricky bit is that, with enough such success, our very definition of what is intelligent may be on the verge of shifting. Next month I'll look at the implications of living in a world awash in these kinds of feedback loops.
Monday, April 25, 2016
Here is Waldo: Anonymity in the Age of Big Data
by Muhammad Aurangzeb Ahmad
The television series Person of Interest posits the existence of a machine that can monitor every person’s daily activities and then use this information to predict crimes before they happen. While such a system may be way off in the future, a system that can at least identify any person may not be that far off. Anonymity used to be a private affair: if one wished to remain anonymous, then all one had to do was lie low and limit one’s interactions with outsiders. It was easier to adopt pseudo-identities, and the nature of the internet even facilitated this to a greater extent. I should know this because I have been blogging as a Chinese Muslim for almost 10 years now. New waves of technologies aided by Big Data, however, are changing the nature of anonymity, with ever greater levels of sophistication needed to be truly anonymous.
Even in the ideal case where John Doe disengages from the digital world - does not own a smartphone, only carries cash, does not use any online services, etc. - others can still leak information about him: pictures that his friends put up on social media platforms, posts about him on Facebook, geo-tags, and so on. Locating a person, or determining their likes and dislikes, really depends upon how much information their family and friends are leaking about them. In short, you are only as anonymous as your most chatty friend.
In cases where we think that we are not giving away any explicit information about ourselves, much can be inferred from the digital traces that we leave. The manner in which we shop online, respond to messages, play video games, etc. can reveal a lot about us even when we do not want to reveal anything. In our previous work we have observed that it is possible to predict a person’s gender, age, personality, marital status and even political affiliation just by studying how they play video games. This is just the tip of the iceberg; a case in point is Target’s data analytics inferring that a girl was pregnant even though she had hidden this from her parents.
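As a hedged illustration of the kind of inference involved (not the actual models or features from that work), one can imagine training an off-the-shelf classifier on behavioral features extracted from gameplay logs to predict a demographic attribute. The features, labels and data below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for gameplay-derived features (e.g. session length,
# accuracy, chat frequency) and a binary demographic label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # well above chance on this toy data
```

The point of the sketch is simply that once behavioral traces are turned into feature vectors, predicting attributes the user never disclosed becomes an ordinary supervised-learning exercise.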
The main takeaway is that we always reveal something about ourselves even when we think that we are role-playing. In our current (unpublished) work we have even observed that it is possible to predict family relationships (parent, sibling, spouse, offspring, etc.) with a high degree of accuracy just by studying texting patterns, with no access to the content of the text messages.
Alternatively, let us consider the massive amounts of data that large corporations and major retailers like Walmart and Target are collecting about their customers. It is now quite easy to cheaply buy data about people from third-party sources, so that not only does one know what items a person is buying but also where they live, their age, gender and household structure. While some organizations have policies in place that restrict them from collecting and using certain types of data without our consent, this self-imposed restriction is not true for every organization. It is also true that most people do not have time to read through 100 pages of a EULA. Combine this with algorithms that can predict missing information about a person, and one has a recipe for a system that can figure out what you are going to do next (within a particular domain) with a high level of accuracy.
But what does this mean for us as individuals and for society as a whole? It will become increasingly easy to answer the question: Where is Waldo? Not only that, but one could even say, here is Waldo, along with the list of places he has been in the last 3 years, his eating habits and his likely future purchases. Before we start chanting alarmist slogans about a dystopian post-privacy era, we should also look at the centrifugal forces in the privacy debates. Large corporations have incentives not to violate their customers’ privacy in order to maintain a certain level of trust with those customers. Apple’s stance of non-cooperation with the government on issues related to customer privacy is a case in point.
While one should be vigilant, one should not be alarmist: there was an uproar many years ago when Google announced that it would be adding a search feature to Gmail, and it turned out that all the privacy doomsday predictions were unfounded. Some amount of data collection is necessary to offer services like recommendations, whether for music, movies, food, etc. Algorithms can only be as good as the data that is fed to them. Thus, one should not rush to the conclusion that anonymity is over.
The flipside of patterns extracted from Big Data is that these patterns also give one a ready-made recipe for behaving in a certain way while remaining anonymous. Big Data also makes it easier to fake certain personality traits. Even with very crude profile stuffing, Ashley Madison was able to lure thousands of men into buying memberships. This leads us to consider another type of risk to anonymity – data breaches. As the fallout from the Ashley Madison leak suggests, one’s indiscretions on the Internet have a way of following one into the offline world with a single torrent dump. More recently, a service has emerged which uses Tinder’s API to notify its paying customers if their partner is cheating on them. These cases should not be shocking or surprising – after all, information in the wild can rarely be tamed.
If today’s de-anonymization algorithms look impressive, then the future is even more fascinating. Google’s deep learning system can already identify the location of almost any picture with a very high level of accuracy, Facebook’s facial recognition system can already beat humans, gait identification algorithms can identify a person by the way they walk, recovering what was typed from the sound of typing is already an old technology, and the list goes on. Each of these technologies is impressive in its own regard, but taken together they have the hallmarks of a system that can deanonymize almost any person on the planet. If we think it is bad enough that governments and large corporations have access to these types of technologies, wait until such systems become open source and accessible in the palm of your hand. It is certainly not the stuff of Singularity Sky, but it does open up vistas for a brave new world that most of us may not have the time to get ready for.
Welcome To Alphaville
"The secret of my influence has always been
that it remained secret."
~ Salvador Dalí
Last month I looked at the short and ignominious career of @TayandYou, Microsoft's attempt to introduce an artificial intelligence agent to the spider's parlor otherwise known as Twitter. Hovering over this event is the larger question of how best to think about human-computer interaction. Drawing on the suggestion of computer scientist and entrepreneur Stephen Wolfram, I put forward the concept of 'purpose' as such a framework. So what was Tay's purpose? Ostensibly, it was to 'learn from humans'. But releasing an AI into the wild leads to unexpected consequences. In Tay's case, interacting with humans was so debilitating that not only could it not achieve its stated purpose, but neither could it achieve its real, unstated goal, which was to create a massive database of marketing preferences of the 18-24 demographic. (As a brief update, Microsoft relaunched Tay and it promptly went into a tailspin of spamming everyone, replying to itself, and other spasmodic behaviors more appropriate to a less-interesting version of Max Headroom).
People have been releasing programs into the digital wild for decades now. The most famous example from the earlier, pre-World Wide Web internet was the so-called Morris worm. In 1988, Robert Tappan Morris, then a graduate student at Cornell University, was trying to estimate the size of the Internet (it's more likely that he was bored). Morris's program would write itself into the operating system of a target computer using known vulnerabilities. It didn't do anything malicious, but it did take up valuable memory and processing power. Morris's code also included instructions for replication: even when a machine reported that it was already infected, one time in seven the worm would install another copy anyway. More importantly, there was no command-and-control system in place. Once launched, the worm was completely autonomous, with no way to change its behavior. Within hours, the fledgling network of about 100,000 machines had nearly crashed, and it took several days of work for the affected institutions – mostly universities and research institutes – to figure out how to expunge the worm and undo the damage.
This is a good example of how the frictionless nature of information technology serves to amplify both purpose and consequence. And the consequences of Morris's worm went far beyond slowing down the Internet for a few days. As Timothy Lee noted in the Washington Post on the occasion of the worm's 25th anniversary:
Before Morris unleashed his worm, the Internet was like a small town where people thought little of leaving their doors unlocked. Internet security was seen as a mostly theoretical problem, and software vendors treated security flaws as a low priority. The Morris worm destroyed that complacency.
This narrative of innocence lost has remained relevant to our experience of technology. Granted, the Internet was small and chummy back in 1988 – after all, the invention of the web browser was still about five years away – but the fact that 99 lines of code could launch an entire industry is worth contemplating. That is, until you realize that if it hadn't been Morris's 99 lines, it would have been someone else's. Now the internet is many orders of magnitude larger and more essential to our society, but I contend that the same dynamic of purpose and consequence remains at work. There is a clear lineage that can be drawn from Morris to Microsoft's Tay. We expect one thing to happen, and while that thing may indeed come to pass, a whole lot of other things also come into play.
This brings me to another recent development in AI that's somewhat more serious than Tay, namely the emergence of AlphaGo, an artificial intelligence schooled in the ancient Chinese strategy game Go. As has been widely reported, AlphaGo beat Lee Se-dol, one of the world's strongest players, by a decisive margin of four games to one in South Korea. AlphaGo accomplished this through an extensive training regimen that included playing another version of itself several million times (The Verge extensively covered the series here).
In the case of AlphaGo, the purpose seems to be clear: win at Go – which it did, and handily. But we don't get the deeper context, or, in the parlance of clickbait titles, the "You won't believe what happens next". This is partly the fault of the way the mainstream media constructs its reporting today: another opportunity to crow about how machines will soon overtake us, and then on to the next shiny object that commands the news cycle's attention. In fact, AlphaGo is but a step in a long, iterative process begun decades ago by DeepMind's founder and CEO, Demis Hassabis. Indeed, he lays it all out quite clearly in this lecture at the British Museum.
The larger purpose of this process, of which AlphaGo is merely a symptom, is, in Hassabis's own words, "to solve intelligence, and then use that to solve everything else". Obviously we could spend quite a bit of time unpacking what he means by any of the key terms in that mission statement: What is intelligence? How do you know when you've solved it? What is everything else, and who gets to decide that? Seen within this larger context, the idea of an AI winning at Go goes from one of the holy grails to a digital cairn, marking an event on the way to something much greater, and more ambiguous.
As an example consider Watson, IBM's Jeopardy-winning juggernaut. Perhaps because Jeopardy is a game that seems intrinsically more human, the impact on our popular consciousness was more substantial than AlphaGo's feat. But what is Watson doing today? Is it, to borrow a classic dig, "currently residing in the ‘where are they now' file"? Not at all. Watson is an active revenue stream for IBM, although exactly how much is unknown, since the actual numbers are, for the time being, rolled up into the company's larger Cognitive Solutions division. Watson's involvement is remarkably eclectic, including "helping doctors improve cancer treatment at Memorial Sloan Kettering and employers analyze workplace injury reports." Also, Watson is looking forward to providing insight into case law. And this is all in addition to applying its talents to the kitchen.
What else is Watson up to? Going back to Stephen Wolfram's discussion of AI that I referenced last month, I was struck by his apparent lack of interest in certain applications. For example, he says:
I was thinking the number one application was going to be customer service. While that's a great application, in terms of my favorite way to spend my life, that isn't particularly high up on the list. Customer service is precisely one of these places where you're trying to interface, to have a conversational thing happen. What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation. As a toy, one could make a little chat bot that people could chat with.
This is, in fact, exactly one of the businesses that Watson is in. Any sufficiently open-minded entrepreneur could rattle off a dozen opportunities where he or she could really use a conversant machine intelligence. And the larger the scale, the greater the opportunity. Just as Tay could talk to millions of millennials, Watson can talk to millions of customers. Meet IBM Watson Engagement Advisor, which is replacing entire call centers as we speak.
Moreover, Watson is not just a disembodied voice on the other end of a phone line. One of the great lines of technological convergence we have already begun to witness is the unification of AI with robotics. And this crosses AI over into embodiment, which is another ball game entirely. Witness this exchange between a Pepper robot, plugged into Watson, and a bank customer. (Obviously, this is a promotional video, but I am slightly disoriented by the fact that IBM is hip enough to be using words like ‘bummer' when describing the risks of an adjustable-rate mortgage.) It is not difficult to imagine thousands of these robots, with their aww-shucks attitude, all connected to a central AI that is constantly learning and refining itself based on inputs provided by humans. In fact, this is not some Alpha-60-style speculation; this is already happening.
These examples illustrate the big takeaway concerning how Watson is being deployed. Watson is no sacred cow. IBM views it as a utility that other aspects of its business can and should leverage, hence the fact that Watson is being used not only in its Cognitive Solutions division, but also in the much larger Global Business Services division. The general application of AI is exactly that: general, and the more general the better. IBM's managers and executives would much rather have a tool, or suite of tools, that they can apply promiscuously to any market opportunity that presents itself.
There is no reason why AlphaGo's owner, Google, should approach its further development any differently. This is especially true if we are to take CEO Demis Hassabis's words seriously: "to solve intelligence, and then use that to solve everything else". But as the ongoing integration of Watson into a business context shows us, ‘everything else' is really a proxy phrase for ‘everything where the money is'. I'll hasten to add that there is nothing inherently objectionable about this, but the fact is that there is no guaranteed nobility in the future of these technologies, either. They will be used to chase profits wherever they may be found. This is the dilution, the ambiguation of purpose. In a very definite sense, we approach what Foucault was trying to teach us about power: its diffuse nature, its functioning at a remove.
Finally, an argument has been made in some quarters that all this AI stuff is really going to be fine, since what we are really after is not artificial intelligence per se, but augmented intelligence. On the surface, the difference is promising, since it perpetuates the idea that machines will continue to be our servants, helping us see the world in new and different ways, enriching our experience of the things that motivate us in the first place. But the question that I have for these optimists is simple: Who gets to be the person whose intelligence is augmented?
For example, Garry Kasparov, the chess champion whose 1997 defeat at the hands of IBM's Deep Blue heralded the beginning of the current era of man versus machine, proceeded to incorporate play against chess engines as an essential part of his training regimen. In fact, it was this additional training that helped him maintain his dominance of the chess world for many years.
Likewise, Fan Hui, the European Go champion who was defeated by AlphaGo in the run-up to the matches against Lee Se-dol, joined the AlphaGo team as an advisor, once again lending resonance to the old saw "if you can't beat 'em, join 'em". As a recent Wired article noted:
As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn't see before. And that makes him happy. "So beautiful," he says. "So beautiful."
Kasparov and Fan are rare birds, however, with the expertise and fame that provided them with the opportunity to attach themselves, lamprey-like, to the fast-swimming phenomenon that machine intelligence is becoming. But what about ordinary people – perhaps someone who recently lost their job to automation instigated by the same AI? Will they really have the opportunity to engage it in a didactic or even pleasurable capacity? Or will they be too busy job hunting to care? To quote Godard's all-powerful computer in 'Alphaville', "All is linked, all is consequence".
Monday, March 28, 2016
"She was Dolores on the dotted line."
Artificial intelligence – or rather the phenomena that are being shoved under the ever-widening rubric of AI – has had an interesting few weeks. On the one hand, Google's DeepMind division staged a veritable coup when its AlphaGo AI soundly thrashed the world #1 Go player Lee Se-dol in the venerated Chinese strategy game, four games to one. This has been widely covered, and with justification. Experts will be poring over these games for years, and AlphaGo's unorthodox gameplay is already changing the way top practitioners of the game view strategy. It is particularly noteworthy that Fan Hui, the European Go champion who went down 5-0 to AlphaGo in January, has since then joined the DeepMind team as an advisor and played AlphaGo often. This is not a Chris Christie-style capitulation, but rather an understandable fascination with a style of play that has been described as unearthly. It's no exaggeration to say that the history of the game can now be clearly divided into pre- and post-AlphaGo eras.
Which isn't to say that this shellacking has beaten humanity into quiescence. Earlier this week, we exacted some sort of revenge by appropriating Microsoft's latest entry into social AI, the Twitter bot @TayandYou, and transforming it into "a racist, sexist, trutherist, genocidal maniac". If we were to consider @TayandYou and AlphaGo to be birds of a feather, which is of course sloppy thinking of the highest (lowest? most average?) order, that would be a small consolation indeed, and not much different from stamping on an ant after you just got mauled by a bear, and still feeling good about it. But comparing @TayandYou and AlphaGo does lead to some useful insights, because one of the principal issues confronting the field of AI is the idea of purpose. This month, I'll look at the case of @TayandYou, and follow up with AlphaGo in April, since come April no one will remember @TayandYou, whereas with AlphaGo there's at least a chance.
Now, this idea of AIs lacking a purpose may seem like a daft claim. After all, the software in question was created by teams of computer scientists backed by wealthy corporations (artificial intelligence is the sport and pastime of what passes for kings these days). And in the popular consciousness AIs are implacably possessed of purpose, usually to the detriment of the human species. There seems to be little chance that there could be any ambiguity about such a basic question. Still, the extraordinary flameout of @TayandYou raises the question of what, precisely, any specific AI is for. Because what was really at stake with @TayandYou is, I think, very surprising.
In a long and somewhat rambling interview on Edge, Stephen Wolfram recently asked precisely this. Wolfram, a long-time pioneer and creator of platforms such as Mathematica and Alpha, considers our rapidly diminishing claims on uniqueness as a species. What really makes us different from the rest of the world, whether it's other forms of life, or even inanimate objects? For him, the boundaries of computation and intelligence have become decidedly murkier over the years. There are fewer and fewer signposts that seem to distinguish one from the other, let alone mark the transition from one state to another. So he puts a stake in the ground by positing that humans are good for at least one thing: the ability to assign ourselves a goal or a purpose.
Wolfram extends this goal-seeking behavior to our tools – after all, we build tools in order to accomplish a task more easily. And digital tools are certainly part of this tradition. So in order for us to make sense of artificial intelligence in particular, and software generally, we must be able to formulate what it is that we want it to achieve, and then we must figure out how to communicate that goal. Closing the gap on this latter act is key to how Wolfram sees the evolution of software, and underpins his notion of ‘symbolic computation': the idea that if we are to become effective communicators with our machine counterparts, we will require some sort of high-level language that will facilitate the imposition of goals on our tools in a way that is accurate, legible and reproducible. But as computing branches out from the strictly quantitative realm of numbers and mathematical operations on those numbers, and into the more qualitative realm of language, image and sound, the nature of our expectations – and therefore our interactions – will necessarily broaden and become more ambiguous.
In 1950, Alan Turing provided one answer to what "purpose" might look like for software. The Turing Test (which I've written about previously) is passed when a human cannot tell whether her interlocutor is a computer or another human. Here the purpose of the software is to become indistinguishable from the human. Much dissatisfaction has been registered over the years about the utility of this test. For my part, I don't think the test is nearly broad enough: the idea that we are successful when we have managed to create something so perfectly in our own image is limiting to what technology could be doing, and perhaps too uncritical of what technology should be doing. But if the Turing Test is our signpost, where does that lead us? As Wolfram notes:
You had asked about what…the modern analog of Turing tests would be. There's being able to have the conversational bot, which is Turing's idea. That's definitely still out there. That one hasn't been solved yet. It will be solved. The only question is what's the application for which it is solved?
For a long time, I have been asking why do we care…because I was thinking the number one application was going to be customer service. While that's a great application, in terms of my favorite way to spend my life, that isn't particularly high up on the list. Customer service is precisely one of these places where you're trying to interface, to have a conversational thing happen.
What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation. As a toy, one could make a little chat bot that people could chat with. That will be the next thing. We can see the current round of deep learning, particularly, recurrent neural networks, make pretty good models of human speech and human writing. It's pretty easy to type in, say, "How are you feeling today?" and it knows that most of the time when somebody asks this that this is the type of response you give.
Just as human-robot interaction suffers from the phenomenon of the Uncanny Valley, where a robot can be mistrusted or rejected by a human for seeming just not human enough (as opposed to totally human, or totally inhuman), human-AI interactions seem to fall into the same trap. You might call it the ‘valley of meh', where an interaction with a piece of software begins hopefully, but rapidly degenerates into mediocrity and boredom.
This was precisely where Microsoft's @TayandYou found itself. Except, to its great misfortune, it happened to be "learning" from the Twitter ecosystem. Now, Twitter is a platform that, whether due to design or fate or some unholy combination thereof, detects weakness, indecision, or just plain niceness faster and pounces more brutally than almost any other place on the Internet. And this was exactly what happened. @TayandYou was like the new kid who shows up on the first day of school and just gets pounded at recess, to the point where the parents have no real choice other than to take him out of class entirely.
All along, it was unclear what @TayandYou was doing there in the first place. To continue with the schoolyard analogy, any new arrival who comes up to an established group and says "Hey, I wanna be just like you! Let's play!" is just asking for it. Moreover, Microsoft's researchers proffered some anodyne tagline that @TayandYou is here to learn from humans, and that the more humans interact with it the smarter it gets, as if interacting with humans ever helped another species to become anything other than a museum exhibit. In any case, the crazed weasel pit that is Twitter ensured that @TayandYou would not evolve into some digital successor to K-Pax.
Now, as I've already noted, bots on Twitter are nothing new, and some of them are quite interesting and clever. So it was with interest that I read a counterpoint by Sarah Jeong, writing for Vice's rather likeable Motherboard section, when she interviewed members of this "bot-writing" community. Among the developers interviewed, it seems evident that there is an emerging ethical practice aimed at making the bots broadly acceptable. One of the developers, Darius Kazemi, has even provided an open source, constantly updated vocabulary blacklist. Obviously we can debate about the implications for censorship and political correctness, but if the counterexample is @TayandYou's tweet supporting genocide, etc, I'm pretty willing to give the blacklist a shot. Also, it's Twitter, for heaven's sake.
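To make the mechanism concrete, here is a minimal sketch of how such a blacklist might sit between a bot and its publish button. To be clear, this is my own illustration and not Kazemi's actual wordfilter library; the word list, function names and print statements are hypothetical stand-ins.

    import re

    # Hypothetical blacklist; a real one is far longer and is maintained as a
    # shared, continually updated open source list.
    BLACKLIST = {"slur1", "slur2", "genocide"}

    def is_acceptable(candidate: str) -> bool:
        """Return True only if the candidate tweet contains no blacklisted word."""
        words = re.findall(r"[a-z']+", candidate.lower())
        return not any(word in BLACKLIST for word in words)

    def post_if_clean(candidate: str) -> None:
        # The bot discards anything that trips the filter rather than trying
        # to repair it; crude, but it keeps the worst output offline.
        if is_acceptable(candidate):
            print("POST:", candidate)
        else:
            print("SUPPRESSED:", candidate)

The crudeness is the point: rather than trying to teach a bot judgment, the developer simply refuses to let certain vocabulary through at all.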
There is another important lesson here, which concerns the aforementioned ‘valley of meh'. Jeong quotes Kazemi as saying that "I actually take great care to make my bots seem as inhuman and alien as possible. If a very simple bot that doesn't seem very human says something really bad—I still take responsibility for that—but it doesn't hurt as much to the person on the receiving end as it would if it were a humanoid robot of some kind." While this might strike some as achieving nearly Portlandia-like levels of sensitivity, it nevertheless points to a distinctly post-Turing Test world, where interactions occur with a diversity of entities. Not every bot needs to pretend like it's human, and we are hopefully adult enough that we can tell the difference, and choose the right entity for the right interaction. I hope.
This is where most commentaries around the whole @TayandYou fiasco end, since the bot's tweets are generally sufficient to satisfy our craving for scandal. However, it never hurts to follow the links, and @TayandYou has a very interesting About page. I recommend you put on sunglasses before clicking the link, as the screaming orange background of the web page seems designed to prevent you from reading any of the text. For your benefit, I reproduce the salient bits below:
Tay is targeted at 18 to 24 year old [sic] in the US.
Tay may use the data that you provide to search on your behalf. Tay may also use information you share with her to create a simple profile to personalize your experience. Data and conversations you provide to Tay are anonymized and may be retained for up to one year to help improve the service.
Q: Who is Tay for?
A: Tay is targeted at 18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US.
Q: What does Tay track about me in my profile?
A: If a user wants to share with Tay, we will track a user's:
Q: How can I delete my profile?
A: Please submit a request via our contact form on tay.ai with your username and associated platform.
Q: How was Tay created?
A: Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that's been anonymized is Tay's primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
So, this business of not knowing what purpose to put to an AI – perhaps I should take it all back. Apparently, Microsoft is really quite interested in learning more about a particular demographic, to the point where they would very much like to know what your favorite food is. Especially telling is the bit about having to fill out a form in order to cancel a profile to whose automatic creation one had already agreed. Also, the fact that the user has to specify the ‘associated platform' implies that @TayandYou, or the technology behind it, is present on platforms other than Twitter.
To go back to something Wolfram said: "What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation." Like most commentators when it comes to networked human-computer interaction, Wolfram does not recognize the value in aggregating data at scale. Because @TayandYou is just that: another vacuum cleaner for data. But while people really don't need anything too clever to hand over their information, the idea of using an AI that can interact with hundreds of thousands, if not millions of people, to come to better understand what they ‘like' – well, that is pure genius. It's like Humbert Humbert hanging out a honey pot for a million Lolitas.
Of course, there were probably some valuable pure learnings to be had around natural language processing, etc etc, had @TayandYou discharged its duties successfully, but this is small beer compared to arriving at a fine-grained understanding of the next major consumer group in the United States. I doubt very much that their actions were predicated on this understanding, but viewed in this light, perhaps the Twitter trolls have done us a favor by sniffing out the weakness of @TayandYou and meting out a solid thrashing.
Monday, February 29, 2016
The Penal Colony
“Facts all come with points of view/
Facts don't do what I want them to.”
~ Talking Heads
What is it with Silicon Valley and the “disruption” of education? Is it just another sector of public life that is moribund and therefore in need of a serious intervention, as if it were ‘that friend’ who used to be fun and successful but is now just depressed and drinking too much? Or do Silicon Valley types have a chip on their shoulder – perhaps they were forced to sit through one too many pointless lectures on Kant or Amazonian tribes or feminist critiques of Florentine art, and now that they’re calling the shots they’re going to fix this giant mess that’s called higher education once and for all? (Trigger warning: the only people mentioned in this post are venture capitalists).
In any case, into the ever-narrowing sweepstakes of who can make the absolutely dumbest assertions about the value of education steps Vinod Khosla, elder statesman and patron saint of tech bros in Silicon Valley and beyond. Khosla, a fabulously successful venture capitalist, has waded into the education wars with a broadside so breathtaking in its myopia that you would be forgiven for thinking that it was lifted from the satirical pages of The Onion. But before getting into Khosla’s piece, let’s set the stage with a look at a fellow-disruptor’s contribution to the debate.
Libertarian investor Peter Thiel, also fabulously successful, has put forward $100,000 fellowships for “young people who want to build new things instead of sitting in a classroom”. Thiel’s mission is to pluck potential John Galts out of the stream of college-bound lemmings and give them the latitude to realize their entrepreneurial potential. He believes that college, as it is currently constituted, leads to stagnant thinking and a narrowing of one’s horizons and potential. Which is odd, considering that most people go to college to have exactly the opposite experience. Be that as it may, anyone under the age of 22 is welcome to apply, which is a fairly dramatic, late-capitalist re-write of the countercultural edict to “not trust anyone over 30.”
I actually don’t have much of a problem with this, because Thiel is not trying to rewire the university system. He is providing more options for a vanishingly small group of people (104 so far since the fellowship’s 2010 inception), and I’ve always been convinced that college – or more specifically, a liberal arts education – is not for everyone. It never has been, and it never will be. That’s not to say that it shouldn’t be available for anyone who wants it. But it is a prime example of overreach when the system screws into people’s heads that “everyone needs a college degree” and people subsequently waste their money getting a BA in communications, whatever that is. There are certainly people who don’t need to go to college, and I like the fact that Thiel is providing more options, not fewer.
Compare this fairly surgical intervention with the opening klaxon of Khosla’s essay: “If luck favors the prepared mind, as Louis Pasteur is credited with saying, we’re in danger of becoming a very unlucky nation. Little of the material taught in Liberal Arts programs today is relevant to the future.” If there’s one thing I like about Silicon Valley types, it’s that they never leave you to wonder what they’re thinking. Unfortunately, further reading may give rise to doubts about whether they are thinking at all.
Now I could be pedantic and, in a classically vindictive fashion that we liberal arts types allegedly enjoy, just grab an editor’s red pen and start marking up his essay, eg: ‘Doesn’t luck just happen, regardless of whether you are prepared? So how does a lack of preparation make one less lucky? Pasteur was referring to “the fields of observation” in his quote. How does that change the quote’s meaning? Also, passive voice’. But I will leave such pedantry aside. It’s clear that Khosla’s beef is with the system itself, which is in need of some serious re-jiggering. So let’s move past the opening gambit and go to the second sentence – “Little of the material taught in Liberal Arts programs today is relevant to the future”.
Like what? Literature and history, for example. History especially is for chumps:
Furthermore, certain humanities disciplines such as literature and history should become optional subjects, in much the same way that physics is today (and, of course, I advocate mandatory basic physics study along with the other sciences). And one needs the ability to think through many, if not most, of the social issues we face (which the softer liberal arts subjects ill-prepare one for in my view)…I’d like to teach people how to understand history but not to spend time getting the knowledge of history, which can be done after graduation.
Now, I’m not going to meet Khosla’s arguments head on. I’m sure more qualified, more eloquent people have already done so. What I’m more interested in looking at are the consequences of this kind of thinking, or of what emerges when there is a collective bubble of this kind of thinking going on.
A pretty good example of the fruits of an ahistorical worldview happened right about the time Khosla’s essay bubbled up to the surface. Marc Andreessen, inventor of the first truly successful web browser and once-scrappy underdog who fought Microsoft (and lost, forever enshrining his scrappiness), has since also become a very successful tech investor. In fact, as an investor in and board member of Facebook, he’s really no longer much of an underdog at all. So when Free Basics, Facebook’s initiative to bring free Internet access to India, was blocked, Andreessen tweeted in frustration, "Anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?"
Oh, dear. Despite Andreessen deleting the tweet and issuing an apology, and despite a rebuke from Mark Zuckerberg himself, the Internet went nuts. It wasn’t hard to spin out an analysis positing how what Facebook was doing in India with Free Basics was textbook colonialism. I think there is a fair amount of justification here, and no critic in his or her right mind would fail to take advantage of such a gorgeous faux pas as the one Andreessen served up. But let’s keep things simple.
It’s all well and good to look at Andreessen’s quote as emblematic or symptomatic of a larger system of power or encroachment – after all, that’s what good liberal arts thinking does (cough). What leads a person to write that in the first place? I mean, how do you – and I am being generous here – confuse ‘colonialism’ with ‘anti-colonialism’? And even if you were to substitute one for the other, the comment still doesn’t make sense, except in some uber-sarcastic manner. Maybe he meant ‘capitalism’, as in: “Anti-capitalism has been economically catastrophic for the Indian people for decades. Why stop now?” This would demonstrate some familiarity with Indian history, at least during a few decades of the 20th century. But it still displays a fairly shocking ignorance of the country that India is today, and has been for a while.
Part of the elegance of any analysis is knowing when to stop, and the older I get the more I favor brevity. So I will say this: Andreessen wrote what he did because he is ignorant. He is ignorant of the world around him, and we can go find the root of this steadfast ignorance in Khosla’s exhortation that history is something to learn on your own time. Except when your temper tantrum exposes your ignorance of history, and for a brief moment we all get to wonder, “Who the hell is this guy, and how did he get to such a powerful place in society?” And, fortunately or unfortunately, that’s all there is to it.
But the rot goes deeper still. Here’s a much better example.
A few years ago I had the opportunity to judge a few business plan competitions. This is actually more interesting than it sounds. Business plans, after all, are a form of literature, or at least a form of text. And like any text, one learns to read the genre for the hopes and fears of its authors. The hopes are writ large: products and services that promise to transform markets and better the lives of millions. The fears are smaller and require a bit more experience to ferret out, as they usually take the form of the financial assumptions that constitute an essential part of any business plan. But what one gets exceptionally sensitized to is the way a plan defines a problem space. Because the way one thinks about the problem has great bearing on the proposed solution. In fact, most business plans fail – both as real plans and as closely reasoned arguments – because the authors failed to think deeply enough about the problem.
I was reminded of these business plans when a friend forwarded me an article on the disruption of prisons (in response to my most recent 3QD piece, on how technology will come to service various sectors of society that we’d rather not spend time on). Much like Khosla’s piece, this article at first seems like a parody. Encouragingly entitled “How Soylent and Oculus Could Fix The Prison System”, it is nothing less than the reductio ad absurdum of “solving the problem” of prison. For example, prison violence is solved by virtual reality:
By equipping every inmate with an Oculus Rift headset in his or her own cell, you could isolate prisoners from violence without isolating them from people. Put all the prisoners inside Second Life, Prison Edition, give them all a headset, and let them build virtual characters. You could design an awesome [sic] system for rehabilitation, give access to e-learning tools, Kindle books, Minecraft and other digital tools for creativity (prison is boring), psychologist sessions (the psychologist could log in remotely from anywhere in the world), and even handle all correspondence and prison visits from relatives and friends electronically.
As the author enthuses, “What this eliminates: prison yards, prison libraries, packages and letters secretly containing drugs or shanks.” By using a carceral version of Second Life, gamification would teach them to be better citizens (think: badges!). Helpfully, “a huge benefit is we could track everything that prisoners do.” Once you’ve made your way through the whole post – which is written with the utmost sincerity, as it includes cost breakdowns for everything – you’ll consider Khosla to be a thinker of profound subtlety.
Because when you leave prison, the years or decades spent in a virtual reality simulation will equip you just fine for living in the real world. The author’s concern is actually with creating a smooth, hassle-free and economical prison stay. People fight? Ok, don’t let them interact. Food is expensive? Feed them Soylent. Problem solved. It’s almost as if the airlines hit upon their ultimate solution for air travel – just put everyone under general anesthesia from check-in until baggage claim (actually I have been hoping for this for some time). There is really no concern with what people actually do, whether it’s in prison or outside it. And understanding why people wind up in prison, well that would require history. In business plan parlance, this would be dismissed as “out of scope”.
Now, if this had been a business plan submitted to me in competition, the first question for the author would have been, “What’s the real problem here? Is it that prison is expensive, or is it that people keep returning to prison?” Understanding the problem determines the contours of the solution. And if we agree that the purpose of doing prison differently is to lessen recidivism rates, then we have to ask ourselves, how do we prepare people to not come back into the system? I somehow doubt that teaching them to be really good at some dumbed-down version of Second Life is going to help them there.
I suspect the answer is closer to providing some kind of socialization and support structure that is radically different from the structures that landed the inmates there in the first place. Interestingly enough, and just to prove that I’m not some monomaniacally judgmental person, Chris Redlitz, another Bay Area venture capitalist, has been taking the opposite tack: five years ago he founded The Last Mile, which started as a business and entrepreneurship program taught within the confines of San Quentin State Prison, and has since diversified into teaching inmates computer programming skills as well. It is the first program in the nation to do so, and so far none of its graduates have been reincarcerated.
Now, just as not everyone should go out and get a liberal arts degree, I’m sure that not every inmate who goes through the program is cut out to be an entrepreneur or a coder. But that is not really the point. The point is to offer the inmates a different social structure, a viable way of being in the world that was likely not open to them before. And this requires hard work, teaching, and human contact. It creates risk and uncertainty, which is something that the previous, ‘virtual reality’ model seeks to eliminate entirely. In fact, it's kind of like the process of getting a liberal arts education. Huh!
So I am curious: if these two ideas were to be presented to Khosla as competing business plans, which one would he fund? Because while Khosla might maintain that “it’s not that history or Kafka are not important…” I would say that the mettle it takes to arrive at an understanding of the problem, and any possible solution, comes only from having read history, and especially from having read Kafka. Otherwise, we create a society where Soylent and Oculus VR will be good enough, and probably not just for prisoners, either.
Monday, February 01, 2016
"No sooner does man discover intelligence
than he tries to involve it in his own stupidity."
~ Jacques Yves Cousteau
Over the course of my last few posts I have been groping towards some kind of meeting point between, on the one hand, the current wave of information technologies, as represented by artificial intelligence (AI), social media and robotics; and on the other, what might be termed, for the sake of brevity, the social condition. The thought experiment is hardly virtual, and is in fact unfolding before us in real time, but as I have been considering the issues at stake, there are significant blind spots that will demand elaboration by many commentators in the years and decades to come. Assuming that, as Marc Andreessen put it, software (and the physical objects in which it is increasingly becoming embodied) will continue to "eat the world", how can we expect these technological goods to be distributed across society?
It's actually kind of difficult to envision this as even being a problem in the first place. It's true that, up until the first years of this century, there was some discussion of the so-called ‘digital divide', where certain segments of the population would not be able to get onto the ‘Internet superhighway' (another term that has fallen into disuse, perhaps because it feels like we never get out of our cars anymore). These were the segments of society that were already disadvantaged in some respect, where circumstances of poverty and/or geography prevented the delivery of physical and therefore digital services. To a lesser extent, those on the wrong side of the divide may also have landed there because of language proficiency or age.
The digital divide hasn't really gone away, it's just been smoothed over by the fact that access has increased dramatically over the last 15 years. But according to the most recent Pew Research Center survey, the disparities still exist, and in exactly the places you would expect: only 30% of Americans 65 or older have a smartphone; 58.2% of Native American households use the Internet; 68% of those who didn't graduate from high school are online; and less than half of households making less than $25,000/year are accessing the Internet. In contrast, the top two or three segments in each of these metrics have adoption rates somewhere in the mid- to upper-90-percent range.
Still, it's worth noting that in recent years, the main battles around Internet access have not been fought over primary access, but rather the notion of ‘network neutrality', or the idea that the delivery of no one type of content should be privileged over that of any other. Regardless of who is on what side, it's clear that the people with skin in this game are already wired up. Even more interestingly, following the Edward Snowden NSA leaks, the other main battle has been around the curtailing of government-sanctioned surveillance, which implies the idea that there is perhaps just a little too much connection going on. (It's true that the digital divide conversation is still quite vibrant in the developing world, but even as Internet and mobile penetration increase everywhere, I'll venture that the same sort of lumpiness will abide.)
Consider for a moment the population characteristics used by the Pew survey: education, income, age, ethnicity, geography. (Curiously, gender is not discussed.) These are time-honored sociological categories that have been used by policy-makers and scholars to come to a more finely grained understanding of what our society looks like. The whole point of the US Census asking these sorts of questions is to help the government figure out how to spread around hundreds of billions of dollars of development money. But something interesting has happened as the years have advanced and ‘digital divide' has fallen out of usage: the categories themselves are disappearing from the discourse.
Instead, what is being talked about is ‘users'. There is no one other than the user: anyone who secures access to the Internet is reincarnated into one monolithic and anodyne group. And if there is only one group, there are in fact no groups at all. We are all fish in the same water. To be fair, this usage was always hard-wired into software development, it's just that software development has had the misfortune to find itself with such enormous purchase on our lives. But as a professor of mine was fond of remarking in graduate school, there are only two professions that call their clients ‘users': drug dealers and software engineers. I mean, even madams refer to their interested parties as ‘clients'.
This gap only becomes more apparent when you start paying attention to how we are talked to about technology. The basic Silicon Valley line is something like this: Each user (or group of users) has a problem, usually with an old industry that's in need of disruption. As a result, said user is just primed for some service or product, usually in the form of an app, that will unlock the value of a currently moribund market, or establish an entirely new one. If I were genuinely careful, I would corral every noun in the preceding sentence with quotation marks, since there are enough assumptions keeping this sentence duct-taped together that I almost want to stop writing and go take a shower. But what is relevant to our current discussion is that the ‘user' is what makes Silicon Valley pay attention, whether these are people who pay in hard currency, or in the currency of their own information. On the Internet, no one cares if you're a dog, as long as you're a dog with a profile that could be of use to some marketer. And if you're a rural Native American over the age of 65 with less than a high school education, then you're not on anyone's radar to begin with.
In a sense, we shouldn't be at all surprised that this has taken place. It's merely the latest extension of our post-Enlightenment condition. Whereas the categories I mention above take it as a given that we are dealing with aspects of the social, the Enlightenment, or at least as it has been handed down to us, is about the individual. The user is merely the next logical manifestation of this, the individual. Furthermore, the ersatz grouping of users into markets accomplishes nothing whatsoever in helping us understand the social, since markets are fickle, transaction-bounded entities, which individuals enter and exit with few obligations, let alone knowledge of one another.
This suits the creators of technology just fine. I don't mean this in a malicious sense. This isn't about persuading a group of voters that they have no common cause, or breaking the institutions that were responsible for collective bargaining for much of the last century. It's a much subtler set-up. Once the discourse is revised downwards to only accommodate descriptions of individuals and markets, the conversations that describe the social conditions upon which technology comes to rest also become scarce. Soon enough, our very capacity to discuss these phenomena is diminished, and what we cannot talk about we must pass over in silence.
Actually, those categories are still with us in two senses, but in both cases they are submerged. The first is on the side of the technologies themselves: thanks to massive databases of user information and the algorithmic tools that parse them, they can slice and dice users of their services and products into ever finer and more accurate groups. In this unregulated twilight zone there is an entire industry dedicated to being always right in these matters. Thus the aspects of the social take on the narrowed importance of a means to an end. Of course, the other aspect in which these categories still abide is reality itself. As much as it congratulates itself on being the great leveler, technology is just as adept in accentuating and exacerbating difference.
Let's take one of the more obvious differentiators: wealth. The wealthy are the early adopters – they are the ones who can afford the technologies as they first ascend into prominence, whether we are talking about iPhones or bicycles. There is a period of ascendancy, as the use of a technology seeps into an already extant network, and further network effects allow that social group to internally reinforce its bonds or perhaps further enrich itself. The technology becomes vital for the overt use of a group's members, as well as a sign by which the group differentiates itself from those outside it – that is, those people who lack such access, for whatever reason.
Facebook went from an exclusive social network to something as general and inclusive as a telephone. This of course does not mean that everyone has access to Facebook, just as not everyone has access to a telephone. For its part, Facebook has had to contend with the consequences of its ubiquity, as teens and young adults flock to other platforms, such as Instagram and SnapChat, where they feel like they can preserve some of the integrity of their groups. For their part, the rich have been setting up their own social networks since at least 2007. Of course, this being Silicon Valley, even the wealthy are constantly at risk of getting disrupted. Relationship Science has built its business model on facilitating connections to the wealthy, celebrities and various and sundry movers and shakers, assuming you can fork over the $3,000 annual fee. As journalist Greg Lindsay dubs it, Rel-Sci is a LinkedIn for the 1%.
However, there is a tipping point at which a technology ceases to provide a sizable return on investment, or exclusivity. Consider what wealthy people seek out when it comes to services; that would be other people. A very specific sort of other people, who are well-trained and discreet. The doorman of a Park Avenue co-op, the hotel concierge or the maître d' of a favorite restaurant are just as capable of receiving packages and making recommendations as they are turning a blind eye when it's so desired. Drivers, cooks, au pairs – you could populate a Richard Scarry children's book with all the people who help the wealthy live their lives as frictionlessly as possible.
I think that this tendency points out one of the great misconceptions concerning the progression of software and robotics. As the cost of these innovations declines and their presence spreads, we are better off asking: who is the most likely to be enwoven into these technologies? And by ‘who' I mean ‘what groups'.
Much attention has been paid to the effects of automation on employment, and rightly so. Partly because this is something tangible – we can measure jobs lost – and partly because it speaks to our grandiose fears of apocalypse-by-automation (the current specter is the loss of 3.5 million trucking jobs to driverless cars). But there is also a flip-side. Once innovative products and services are adopted by and assimilated into the lifestyles of the wealthy, or educated, or urban, those technologies will continue to spread. After all, capitalism dictates that a firm must continue growing and capturing market share.
It's not like privileged groups have grown out of using phones. But as an example, consider what we expect when we use our phones. Voice recognition technology has progressed to the point where it's not unusual to conduct entire transactions with a software system. This is especially well suited to instances where outcomes and exceptions are rigorously definable, such as banking and airline reservations. Sometimes it is the only choice, as call center staff have been cut in favor of these automated systems. On the other hand, those in a position of privilege have this privilege reified by the fact that they can speak to a personal banker or airline agent – similar to the above examples of concierge and doorman, a well-trained human that is discreet and effective. This is what I mean by the future already seeping its way throughout our present.
So a good way to start thinking about this is to embrace those categories of the social that we already have. Which groups are the most likely to become the subjects of a particular technology, and why? This is not to say that they will simply be ignored. Rather, we should instead think about the ways in which these groups will eventually be served by technology that may keep things running smoothly, but is ultimately dehumanizing and fragmenting, à la Neill Blomkamp's 2013 dystopia Elysium. Obviously, there is a long leap between an automated phone system and the hellish endgame described in Elysium, but it's a much straighter line if everyone is treated only as an individual – or a user – while actually being targeted as a member of a social group.
So who are the vulnerable? A few groups come to mind. The elderly, who are already being assigned robot nurses, because who has time or money to care for the elderly. Children, who are expensive to educate and a pain in the ass to constantly watch over, are already being stimulated (I simply cannot bring myself to write ‘educated') via toys that have a direct line to IBM's Watson AI. The mentally ill, who need to be sequestered, drugged and monitored. Other institutionalized populations, such as convicts – how great would a fully automated prison be? That way any blame could be laid at the feet of the inmates. And finally, the poor, with whom no one wants to interact anyway. These groups will be the greatest ‘beneficiaries' of technology that is only just beginning to manifest itself. You get the idea of who is left – and what a perfect reproduction of privilege it will be.
As a final thought, consider what is lost as we move deeper into a future in which we are ever more deeply entangled with technology: our collective cultural memory. As William Gibson noted in a 2011 interview in the Paris Review,
It's harder to imagine the past that went away than it is to imagine the future. What we were prior to our latest batch of technology is, in a way, unknowable. It would be harder to accurately imagine what New York City was like the day before the advent of broadcast television than to imagine what it will be like after life-size broadcast holography comes online. But actually the New York without the television is more mysterious, because we've already been there and nobody paid any attention. That world is gone.
In a very real sense, we are co-creating our own ongoing forgetting. I consider myself fortunate to have grown up in a pre-Internet era. And anyone who has witnessed a child attempt to swipe or pinch a magazine page, in the mistaken belief that it is as interactive as an iPad screen, cannot help but feel discomfort at the way in which new generations expect reality to behave around them. Or perhaps they see it as a business opportunity. Difference cannot but persist. What is really at stake is what we choose to do about it.
Monday, December 07, 2015
Some Are Born To Sweet Delight
"Except for a wig of algorithms, and tears and automation."
~Noah Raford, Silicon Howl
Last month I attempted to set up two conflicting frames. On the one hand, there is the advance of technology in its myriad forms, eg: social media, artificial intelligence, robotics. This may seem like an arbitrary selection. For example, why exclude fields of medicine, or energy production, or infrastructure? Of course, all technologies are intrinsically social, especially given the complexities required to design, develop, disseminate and maintain them on a global scale. But my concern here is with those technologies that are explicitly social in nature: those inventions, whether hardware or software, that intervene in our lives to enable or enhance communications, experiences, or that provide services along such lines.
On the other hand, these technologies are laid over a long-established matrix of social differentiations. Categories that have traditionally motivated the investigations of social scientists, such as class, race, culture, religion, education, gender and age, form the inescapable substrate upon which technology is seeded and elaborates itself, or withers and dies. As I showed, and contrary to most writing about technology in the mainstream media, these boundaries are not magically dissolved by technology, and in many cases they may be further exacerbated. They are certainly not elided, which seems to be the most common attitude. Instead, those occupying the more privileged ends of these spectra of difference benefit more greatly from each advance, and the underprivileged are further shunted to the side. It is the technological equivalent of income inequality, except it is subtler, since we lack the pithiness of a single number, such as the Gini coefficient, to use as a signpost. (Incidentally, even this metric has of late become increasingly less useful as global inequality ascends to hyperbolic levels.)
Thus the object of our scrutiny should really be the ways in which technology further complicates a landscape that is already extremely difficult to parse. In this sense, these two frames are not really in conflict, but at least from a critical point of view, are rather insufficiently engaged with one another. Furthermore, and perhaps even more importantly, the inquiry should not have as its final destination any hope that technology will ultimately dissolve these differences. This is where efforts to bridge the so-called "digital divide" fall short for me: the idea of a level playing field has always been a fiction. Why should we aspire to it? Isn't it more compelling to understand what difference a difference makes? Conversely, if technology really does succeed in eroding all these categories of difference, we will have to scramble for another definition of what it means to be human. Given the difficulty we have with the current state of the definition, I somehow doubt that a tabula rasa approach would be at all helpful.
Nevertheless, the advent of the broad trifecta of social media, AI and robots seems to be engaging in a subtle subversion of precisely this definition. For instance, something I brought up in my previous essay was the phenomenon of people interacting with software and not really comprehending that fact. And while the example (of a Twitter bot) was trivial and amusing, there are others that strike a deeper chord.
Consider "I Love Alaska", a short film made in 2008 by Sander Plug and Lernert Engelberts. The film is broken up into thirteen shorts, and frankly isn't much to look at: it is mostly footage of Alaskan wilderness, and not necessarily the very pretty bits, either. However, it's the script that counts; as the filmmakers describe the project:
August 4, 2006, the personal search queries of 650,000 AOL (America Online) users accidentally ended up on the Internet, for all to see. These search queries were entered in AOL's search engine over a three-month period. After three days AOL realized their blunder and removed the data from their site, but the sensitive private data had already leaked to several other sites.
"I Love Alaska" tells the story of one of those AOL users. We get to know a religious middle-aged woman from Houston, Texas, who spends her days at home behind her TV and computer. Her unique style of phrasing combined with her putting her ideas, convictions and obsessions into AOL's search engine, turn her personal story into a disconcerting novel of sorts.
Plug and Engelberts basically have taken the concept of found poetry and cast it into the digital age, and very effectively at that. Throughout the film, a voiceover delivers the search queries in a finely tuned deadpan, as they were entered into AOL's search engine. User #711391 doesn't really use keywords. The first phrase we hear is "Cannot sleep with snoring husband." More of an entreaty than a query, it is followed by "How to sleep with snoring husband" (it's unclear if a question mark ends this). Obviously the first query did not yield the desired result, so we have an example of how we are forced to bend language towards the machine. But the behavior here is delightfully obtuse, for she doesn't allow herself to be reduced to using keywords, which is the customary practice when using search engines.
In fact, sometimes it's unclear what she is actually trying to find out. Having (possibly) satisfied her curiosity about dealing with snoring spouses and annoying birds, we then get "Online friendships can be very special." As an elementary school teacher might say, "Are you asking me, or are you telling me?" But there is a very private communion that is happening here. In fact, the AOL search log dump was an absolute gold mine for academic researchers, who were starved for real-life data on how people used search engines. Nevertheless, there is something deeply affecting about bearing witness to the way in which user #711391 comes to regard the AOL search engine not as an anonymous reference gateway but more as a kind of interlocutor, and how her queries eventually lead her to take some substantially consequential actions. It replaces the concept of a diary with a one-sided transcription of a fragmentary telephone conversation; we are left to extrapolate many of the details of what seems otherwise to be a perfectly ordinary, if lonely, life.
"I Love Alaska" points to a critical discursive element in the way that internet technologies are read. On the one hand, we get a (somewhat aestheticized) view of how one person engages with a technology that can, to a certain extent, accommodate a fair amount of natural language input. Perhaps her mode of engagement is substantially different from the way ‘the rest of us' use search engines. Or is it? Although AOL was a significant force in bringing people to the Internet in the 1990s, its subscribers were generally not known to be savvy, and Google was already eating AOL's lunch by 2006. Nevertheless, in that year AOL still had about 15 million subscribers. So when we say ‘the rest of us' we are discounting a large population. In fact, consider if you are at all familiar with how your friends or family use search engines – there's really no reason why you would be. There is no ‘rest of us'.
This matters because, on the other hand, the people who know all about this are the ones who created the platforms, of which search engines are but one typology. From their perspective, they are just as concerned with how a middle-aged Houston housewife uses their service as they are with anyone else. And just as the AOL search log leak demonstrates that people will use search engines with the idea that no one is looking, the developers of that software will strive to make results for such queries as relevant as possible (User #311045: "how to get revenge on a ex girlfriend"). None of this works, however, if people do not engage the platform. In fact, the more richly they engage the platform, the more data is available for it to evolve. And what is needed is empathy.
How far the arbiters of our brave new world will go to solicit empathy was exposed recently in a post on Medium concerning Facebook's much-vaunted venture into the AI-driven virtual personal assistant market space. The initiative, known as M, flips on its head the usual assumptions. Whereas most AIs would like to convince you they are human, M wants you to know it is an AI, albeit a modest one: it cheerfully chirps "I'm AI but humans help train me!" when asked about its ontological status. Arik Sosman, the author of the Medium post, became increasingly suspicious of M's ability to seemingly navigate queries well beyond any other state-of-the-art AI and undertook the task of snookering the poor thing.
What ensues is a fascinating forensic exercise into investigating a technology that is intended to replace the search engine itself. But in order to do so, Facebook must train its technology to a much higher standard. And M cannot do that without people. Eventually Sosman is able to ascertain that there is so much human activity going on behind M that the AI is actually more of a veneer than anything else – a sort of "pay no attention to the man behind the curtain" moment. Still, I think of Sosman's dissatisfaction as stemming not from that fact – after all, Facebook never tried to hide the fact that M would have some undisclosed number of human ‘handlers' to assist it. Rather, he was upset that M dissembled in its presentation of itself, pretending to be an AI more than it actually was.
I seem to have strayed from the argument I promised you, though. What happened to class, gender and the rest of the categories that ought to be shaping technology? We shouldn't let the rich ironies of Sosman's anecdote distract us from what is really at stake. As Wired wrote on the occasion of M's launch:
Facebook is, by design, rolling out its new assistant in a community in which the users are demographically similar to the M trainers who will be thinking up gifts for their spouses and fun vacation destinations for them… Will M be as good at helping users in the Bronx access food stamps? How about coming to the aid of the single mother in Oklahoma who has a last-minute childcare issue?
Thus the end game for M is clear: you start with what you know, and from there you eventually digest the rest of the world. M needs the data so that it can reach everyone else: identifying who they are, their needs and preferences, and consequently what kinds of ads and other services they might be most inclined to consume. I don't think anyone knows how much more is needed, but one thing that has become clear in AI research is that it's not how clever your algorithms are, but how much data you have to throw at them. So it would be reasonable to posit that the amount of data required is infinite, or at least indeterminate.
Will M actually achieve such reach? It's impossible to say at the moment, but in the meantime the people who benefit from M are those who are most similar, in terms of socio-economic signifiers, to its creators (indeed, Sosman himself is exactly one of those people, recalling the adage that it takes a thief to catch a thief). But even if M successfully reached all 700 million users currently on Facebook's Messenger app, that would still be less than 10% of the global population. An optimist might say that this just demonstrates how much more room there is to grow, but, given the rate of technological failure, it would be just as realistic to bet that M will only ever remain useful to those users in its initial demographic.
Despite the uncertainty of its success, M's brief is wide and the resources behind it are vast. Since it aspires to be all things to all people (or at least those people who are on Messenger), M doesn't really shed very much light on the selective application of technology to various social segments. It's more instructive to look at the various niches that robots are beginning to fill in this regard. And since robots have come up, I have to perform the obligatory turn towards Japan. (I apologize for such a hackneyed gesture, and I hope that at some point someone will disabuse me of the need for such a cliché.)
What makes robots useful in this discussion is the fact that, unlike a search engine or a virtual personal assistant, they must be designed for a fairly specific purpose. As embodied technologies, they will stick around and keep their shape until they break or are rendered obsolete. And as embodied technology, they traffic much more explicitly in our concepts of empathy; the designed intention is to both invoke empathy, and to materialize empathy in return. This is what makes them effective objects. The drawback is that you have to either keep making them, or at least keep fixing them. Still, at some point the rope runs out. Thus Sony stopped making, and eventually fixing, its Aibo robot dog. A victim of insufficient sales and corporate restructuring, Aibo left hundreds of Japanese bereft of robot dog companionship, which is no small deal (see this video, documenting Shinto ceremonies to help Aibos transition to wherever Aibos go when they die).
But what's more important is that many of those left without their Aibo were senior citizens. In Japan's gradually unfolding demographic decline, there are fewer young(er) people to function as caregivers; by 2011, 22% of the population was already 65 or older. So an integral part of the Japanese narrative is not just that they are smart and gadget-obsessed; it's also that they have fewer people around to fulfill the complete assortment of jobs that a well-functioning modern society requires. Hence robots, and if a robot dog is no longer around then perhaps a robot seal will be an adequate substitute.
Similarly, robots are targeting other Japanese demographics. Witness this odd video that was just uploaded to YouTube a few days ago, in which a lonely young woman finds companionship with her robot pal. There is bike riding (the robot sits in a basket with its arms raised), dance parties and burger-eating. There are even disagreements, fights and tears, although nothing that can't be reconciled in the end. And finally the young woman goes on a date, and meets a nice boy, and gets a ‘good job' wink from her robot companion, who is benevolently lurking in the background while the couple dances. At the end of the video the robot fades into silhouette, and its LED eyes glow with an ominous sort of friendliness. The fading words are "You were me, I was you." I should add that, for whatever inscrutable reason, interspersed between these scenes are lines from William Blake's Auguries Of Innocence.
Aside from being supremely creepy, the video, a promotion for the SOTA line of robots, really delivers the argument. Even if it is marketing, the implication is that machines can help people go on, even in the absence of human contact. Whether we are talking about senior citizens or insecure youth, the point of insertion is the same: machines can help you feel less lonely, at least until you either meet someone new, or you die. Extending this principle further leads us to a very strange vision of society, which is this: software and hardware are cheap, and humans are messy, unpredictable and expensive. Therefore it is not unreasonable to postulate that only wealthy people, or people at the privileged ends of the various social spectra, will be able to afford the services of other humans. Since this essay has gone on long enough already, I will flesh out what this kind of a world might look like next time.
Monday, November 09, 2015
"People for them were just sand, the fertilizer of history."
~ Chernobyl interviewee VM Ivanov
For a few years, if you were on Twitter and you used the word "inconceivable" in a tweet, you would almost immediately receive an odd, unsolicited response. Hailing from the account of someone named @iaminigomontoya, it would announce "You keep using that word. I do not think it means what you think it means." Whether you were just musing to the world in general, or engaging in the vague dissatisfaction of what passes for conversation on Twitter, this Inigo Montoya fellow would be summoned, like some digital djinn, merely by invoking this one word.
Now, those of us who possessed the correct slice of pop culture knowledge immediately recognized Inigo Montoya as one of the characters of the film "The Princess Bride". Splendidly played by Mandy Patinkin, Montoya was a swashbuckling Spaniard, an expert swordsman and a drunk. Allied to the criminal mastermind Vizzini, played by Wallace Shawn, Montoya had to listen to Vizzini mumble "inconceivable" every time events in the film turned against him. Montoya was eventually exasperated enough to respond with the above phrase. Like many other quotes from the 1987 film, it is a bit of a staple, and has since been promoted to the hallowed status of meme for the Internet age.
Of course, it's fairly obvious that no human being could be vigilant (let alone interested) enough to monitor Twitter for every instance of "inconceivable" as it arises. What we have here is a bot: a few lines of code that sift through some subset of Twitter messages, on the lookout for some pattern or other. Once the word is picked up, @iaminigomontoya does its thing. Now, and through absolutely no fault of their own, there will always be a substantial number of people not in on the joke. These unfortunates, assuming that they have just been trolled by some unreasonable fellow human being, will engage further, such as the guy who responded "Do you always begin conversations this way?"
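For the curious, the mechanics are nearly as banal as they sound. Here is a minimal sketch of such a keyword-watching bot in Python, assuming the tweepy library and the pre-2023 Twitter API; the credentials are placeholders and the exact method names vary across library versions, so read it as an illustration of the pattern rather than a reconstruction of @iaminigomontoya itself.

import time
import tweepy

QUOTE = "You keep using that word. I do not think it means what you think it means."

def run_bot(api, keyword="inconceivable"):
    me = api.verify_credentials().screen_name
    seen = set()  # remember tweets we have already answered
    while True:
        # Search the most recent public tweets containing the trigger word.
        for tweet in api.search_tweets(q=keyword, count=20):
            if tweet.id in seen or tweet.user.screen_name == me:
                continue
            seen.add(tweet.id)
            # Reply with the canned quote; mentioning the author threads the reply.
            api.update_status(
                status="@{} {}".format(tweet.user.screen_name, QUOTE),
                in_reply_to_status_id=tweet.id,
            )
        time.sleep(60)  # wait a bit, out of respect for the rate limiter

if __name__ == "__main__":
    auth = tweepy.OAuth1UserHandler("KEY", "SECRET", "TOKEN", "TOKEN_SECRET")
    run_bot(tweepy.API(auth))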
So here we have an interesting example of contemporary digital life. In the (fairly) transparent world of Twitter, we can witness people talking to software in the belief that it is in fact other people, while the more informed among us already understand that this is not the case. Ironically, it is only thanks to the lumpy and arbitrary distribution of pop culture knowledge that we may at all have a chance to tell the difference, at least without finding ourselves involuntarily engaged in a somewhat embarrassing mini-Turing Test. But these days, we pick up our street smarts where we can.
Except we rarely pay attention to the lumpy, arbitrary nature of technology, and nowhere less so than in its latest, apotheotic form: social media. It's this idea of technology as the great leveler, and this is perhaps the principal myth that we are relentlessly fed, as if we were geese on a foie gras farm. And like those geese, we never seem to get tired of the feeding. Nor is there any shortage of those queueing up to do the feeding. Just this weekend I attended a fairly abysmal conference sponsored by the Guggenheim Museum, and had to listen to what I thought were otherwise discerning minds discuss how, for example, the ability of people to participate in a real-time discussion on Twitter about the Ferguson riots made true the claim that it was no longer possible to be ‘outside' of events – or rather, that the only people who were on the ‘outside' were those who were on the receiving end of the obsolete ‘broadcast media', ie, television and radio.
This idea – that people who are passive receivers of information constitute a lesser class of citizenry than those who seek to ‘actively participate' in media – is not just problematic. In fact, let's just call it out for what it is: a barely disguised elitism. Consider the hurdles that you have to overcome to access this allegedly level landscape. You have to know what the Internet is and be able to access it; you have to know what Twitter is and be willing to use it, which is itself no mean feat; and you have to care enough about all of these things, as well as the specific phenomenon of the Ferguson riots, in order to ‘participate' in it. Only at that point are you ready to suffer the slings and arrows of your fellow discussants. Thus the resulting population that jumps through all these hoops is a deeply self-selected one. Not only are the necessary cultural and technological proficiencies required to even get to this conversation substantial, but they are inevitably accompanied by – if not simply borne out of – all the attendant structural inequalities that constitute the context of society in the first place. How many people who are subject to discriminatory policing are not online, simply because they are poor, or uneducated, or most likely, just unconnected? In order to reach a putative place of ‘no outside', one must have all the tacit and consequential social, financial and cultural resources to be able to navigate quite a lot of layers of ‘inside'.
On the other hand, those belonging to the latter group of ‘passive consumers' may be more varied than one suspects. To stay with the example of Ferguson, if I watched the riots on cable news, but did so with friends and family, or with strangers in a bar or an airport lounge, and then had a meaningful discussion, well, it's almost as if this didn't happen, since my participation can't be measured in terms of tweets or likes or what-have-yous. It's just conversation, or private contemplation, as has been the case for quite some time. But if it can't be data-mined then of what use is it? At the same time, it bears mentioning that the ‘conversation' that happens on Twitter or anywhere else in social media is by no means guaranteed to be meaningful, simply because that's where it happens. The technorati merely encourage this sort of magical thinking in order to nudge us into a form of participation that occurs much more on their platforms' terms than we might think. When was the last time you went online seeking to have your opinion changed by someone, whether it was a friend or family member – let alone a complete stranger?
Why is this the case? There is the old (at least by Internet standards) chestnut that, in real life, no one is as happy as they pretend to be on Facebook, nor as angry as they pretend to be on Twitter. So when self-selecting populations opt into participating on a specific platform, the subtle but influential effects on the participants' behavior result in a discourse that is deeply mediated. This occurs not only as a result of the platform itself (ie, the way graphic and textual elements are constructed and arranged on screen, and how users are allowed and incentivized to participate), but also thanks to how people expect their performance to be received by others, and who those others are.
We attempt to shape our online presences to be reflections of who we think we are in the first place. To think that this will suddenly give rise to some unprecedented sort of diversity – that we will step outside of ourselves to embrace new and uncomfortable truths – is naïve. I am not talking about pleasure-seeking or hedonistic pursuits (although, given the ongoing way GamerGate has problematized the seemingly innocuous pastime of video gaming, it's increasingly difficult to say that social media is capable of treating anything as a mere hobby). Rather, I mean to counter the Pollyanna-ish stance held by many techno-pundits that somehow the arc of social media bends towards justice. It may, or it may not. Perhaps the safest thing that can be said is that it will only make us more of who we are already, for better and for worse.
This is what I mean when I claim that the qualities and consequences of technology are lumpy and arbitrary. In reality, the idea that the world is flat has only ever held true for those people with the financial and social resources to make it so. Theirs is a frictionless world. The rest of us must make do with a pale imitation of this: the world seems flat to us only because we successfully ignore vast swathes of it, and social media is an excellent tool for creating the illusion that we are not ignoring anything really important, and that in fact we are paying more attention than ever before. Who can point fingers and say you're not concerned about social injustice when you've clearly been expressing your outrage by liking, sharing and hashtagging all over the damn place? Which is to say, to your friends and friends of friends and perhaps a few other random passers-by who, by definition, must be on the same platform as you. It is this lumpiness and arbitrariness that is really worth our attention.
On the face of it, an innocuous Twitter bot like @iaminigomontoya doesn't seem to have anything in common with the grand hypothesis that social media, as it is currently constituted, may not be doing us any great favors. But it will indeed take us to the next stage of the argument. I claimed above that social media is the apotheotic form of technology. Aside from being awfully pretentious, this claim is almost certainly already false, in the sense that social media is being augmented and perhaps gradually supplanted by the emergence of artificial intelligence; agents of varying autonomy, veracity and interactivity; and robots of many stripes. But since every stage of technological evolution builds upon already existing infrastructure, social media is where much of this change is manifesting itself.
More importantly, this is happening not just because all this stuff is new and clever, but because we want to talk to anything we possibly can, and we fervently desire for those things to talk back to us. This has already been amply proven by our proclivities to talk to dogs, cats and houseplants. But talking to technology is going to bring matters to a completely different level, because what is unique to technology is its ability to create massive, long-lived feedback loops that are initiated and sustained by our talk.
Here are a few examples of the things that we are building that are designed to talk to us. In addition to @iaminigomontoya, there are many such bots on Twitter, which, due to its restrictive 140-character format, is fertile ground for such experimentation. There are bots that, like our friend, will blithely reply to tweets or insert themselves into conversations, but in order to correct your grammatical and homophonic misdemeanors ("your" vs "you're"; "sneak peek" vs "sneak peak"). There are more aspirational creations as well. One of my favorites is @pentametron, which appropriates tweets that, usually quite unintentionally, happen to have been written in perfect iambic pentameter. @pentametron goes the extra mile, though, and re-assembles the tweets into Shakespearean sonnet form, the results of which can be savored here.
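By way of illustration – and emphatically not as a description of @pentametron's actual code – here is a toy version of the kind of stress-pattern check such a bot might run, in Python. It assumes the pronouncing package, a thin wrapper around the CMU Pronouncing Dictionary, and it treats one-syllable words leniently, which is my own shortcut rather than anyone's published method.

import re
import pronouncing

def stress_pattern(line):
    """Concatenate CMU stress digits for each word; None if a word is unknown."""
    pattern = ""
    for word in re.findall(r"[a-zA-Z']+", line.lower()):
        phones = pronouncing.phones_for_word(word)
        if not phones:
            return None  # out-of-dictionary word: give up on this line
        stresses = pronouncing.stresses(phones[0])
        # One-syllable words can sit in either position, so mark them as wildcards.
        pattern += "?" if len(stresses) == 1 else stresses
    return pattern

def is_iambic_pentameter(line):
    pattern = stress_pattern(line)
    if pattern is None or len(pattern) != 10:
        return False
    # da-DUM five times over: odd-numbered syllables weak, even-numbered syllables strong.
    weak_ok = all(c in "0?" for c in pattern[0::2])
    strong_ok = all(c in "12?" for c in pattern[1::2])
    return weak_ok and strong_ok

print(is_iambic_pentameter("The sun came up behind the parking lot"))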
Of course, it's reasonable to argue that these bots are really no different than a wind-up toy. Even if you don't know precisely how it works, you know how to set it in motion, and once you've done so you get your hit of childlike wonder and then you put it down and go on with the rest of your day. But however simple, charming and/or irritating they may be on their own, when taken as a phenomenon, these bots point to a shift that has already been under way for some time. People are, to one degree or another, not just content to interact with machines in a purposive way, but they are expecting to do so, and their expectations are increasingly open-ended. Sometimes they know the terms of the conversation – that is, that they are conversing with a constructed or artificial subject. And sometimes they do not. The truth is, software doesn't even have to pretend to be human for people to seek out human-like interactions with it. It turns out that willing suspension of disbelief is not just a literary device. As Coleridge defined it, "human interest and a semblance of truth" are all that is required to bring it about.
So what happens when we take our credulous nature and jam it into the lumpy and arbitrary distribution and consequences of technology in general, and social media in particular? In next month's post, I will propose that thinking about the intersection of these two tendencies can give us the opportunity to better envision scenarios of likely technological and social futures. It helps us to avoid the sensationalistic fallacy of a Terminator- or Matrix-style dystopia, where strong AIs destroy our way of life, if not the entire planet. Rather, it is about coming to terms with what is already among us, and of how we are already deeply entangled with it. It may even suggest how we might best adapt ourselves to a world that is perhaps already aswarm with artificial subjects that are inscrutable if not nearly invisible, so accustomed have we become to their presence.
"Inconceivable!" I hear you protest. Of course, Inigo Montoya is all too happy to ask if you know what that word really means.
Monday, July 20, 2015
"We are at home with situations of legal ambiguity.
And we create flexibility, in situations where it is required."
Consider a few hastily conceived scenarios from the near future. An android charged with performing elder care must deal with an uncooperative patient. A driverless car carrying passengers must decide between suddenly stopping and causing a pile-up behind it. A robot responding to a collapsed building must choose between two people to save. The question that unifies these scenarios is not just about how to make the correct decision, but more fundamentally, how to treat the entities involved. Is it possible for a machine to be treated as an ethical subject – and, by extension, that an artificial entity may possess "robot rights"?
Of course, "robot rights" is a crude phrase that shoots us straight into a brambly thicket of anthropomorphisms; let's not quite go there yet. Perhaps it's more accurate to ask if a machine – something that people have designed, manufactured and deployed into the world – can have some sort of moral or ethical standing, whether as an agent or as a recipient of some action. What's really at stake here is the contention that a machine can act sufficiently independently in the world that it can be held responsible for its actions and, conversely, if a machine has any sort of standing such that, if it were harmed in any way, this standing would serve to protect its ongoing place and function in society.
You could, of course, dismiss all this as a bunch of nonsense: that machines are made by us exclusively for our use, and anything a robot or computer or AI does or does not do is the responsibility of its human owners. You don't sue the scalpel, rather you sue the surgeon. You don't take a database to court, but the corporation that built it – and in any case you are probably not concerned with the database itself, but with the consequence of how it was used, or maintained, or what have you. As far as the technology goes, if it's behaving badly you shut it off, wipe the drive, or throw it in the garbage, and that's the end of the story.
This is not an unreasonable point of departure, and is rooted in what's known as the instrumentalist view of technology. For an instrumentalist, technology is still only an extension of ourselves and does not possess any autonomy. But how do you account for the sort of complexity for which we are now designing our machines? Our instrumentalist proclivities whisper to us that there must be an elegant way of doing so. So let's begin with a first attempt: Isaac Asimov's Three Laws of Robotics.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Some time later, Asimov added a fourth, which was intended to precede all the others, so it's really the ‘Zeroth' Law:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The Laws, which made their first appearance in a 1942 story that is, fittingly enough, set in 2015, are what is known as a deontology: an ethical framework expressed as a set of axioms. Basically, a deontology provides the ethical ground for all further belief and action: the Ten Commandments are a classic example. But the difficulties with deontology become apparent when one examines the assumptions inherent in each axiom. For example, the First Commandment states, "Thou shalt have no other gods before me". Clearly, Yahweh is not saying that there are no other gods, but rather that any other gods must take a back seat to him, at least as far as the Israelites are concerned. The corollary is that non-Israelites can have whatever gods they like. Nevertheless, most adherents to Judeo-Christian theology would be loath to admit the possibility of polytheism. It takes a lot of effort to keep all those other gods at bay, especially if you're not an Israelite – it's much easier if there is only one. But you can't make that claim without fundamentally reinterpreting that crucial first axiom.
Asimov's axioms can be similarly poked and prodded. Most obviously, we have the presumption of perfect knowledge. How would a robot (or AI or whatever) know if an action was harmful or not? A human might scheme to split actions that are by themselves harmless across several artificial entities, and then combine their outputs to produce harmful consequences. Sometimes knowledge is impossible for both humans and robots: in the case of a stock-trading AI, there is uncertainty about whether a given trade is harmful to another human being or not. If the AI makes a profitable trade, does the other side lose money, and if so, does this constitute harm? How can the machine know if the entity on the other side is in fact losing money? Would it matter if that other entity were another machine and not a human? But don't machines ultimately represent humans in any case?
Better yet, consider a real life example:
A commercial toy robot called Nao was programmed to remind people to take medicine.
"On the face of it, this sounds simple," says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. "But even in this kind of limited task, there are nontrivial ethics questions involved." For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.
In this case, the Hippocratic ‘do no harm' has to be balanced against a more utilitarian ‘do some good'. Assuming it could, does the robot force the patient to take the medicine? Wouldn't that constitute potential harm (ie, the possibility that the robot hurts the patient in the act)? Would that harm be greater than not taking the medicine, just this once? What about tomorrow? If we are designing machines to interact with us in such profound and nuanced ways, those machines are already ethical subjects. Our recognition of them as such is already playing catch-up with the facts on the ground.
As implied by the stock-trading example, another deontological shortcoming lies in the definitions themselves: what's a robot, and what's a human? As robots become more human-like, and humans become more engineered, the line will become blurry. And in many cases, a robot will have to make a snap judgment. What's binary for "quo vadis", and what do you do with a lying human? Because humans lie for the strangest reasons.
Finally, the kind of world that Asimov's laws presuppose is one where robots run around among humans. It's a very specific sort of embodiment. In fact, it is a sort of Slavery 2.0, where robots clearly function for the benefit and in the service of humanity. The Laws are meant to facilitate a very material cohabitation, whereas the kind of broadly distributed, virtually placeless machine intelligence that we are currently developing by leveraging the Internet is much more slippery, and resembles the AI of Spike Jonze's ‘Her'. How do you tell things apart in such a dematerialized world?
The final nail in Asimov's deontological coffin is the assumption of ‘hard-wiring'. That is, Asimov claims that the Laws would be a non-negotiable part of the basic architecture of all robots. But it is wiser to prepare for the exact opposite: the idea that any machine of sufficient intelligence will be able to reprogram itself. The reasons why are pretty irrelevant – it doesn't have to be some variant of SkyNet suddenly deciding to destroy humanity. It may just sit there and not do anything. It may disappear, as the AIs did in ‘Her'. Or, as in William Gibson's Neuromancer, it may just want to become more of itself, and decide what to do with that later on. Gibson never really tells us why the two AIs – which function as the true protagonists of the novel – even wanted to do what they did.
This last thought indicates a fundamental marker in the machine ethics debate. A real difference is emerging here, and that is the notion of inscrutability. In order for the stance of instrumentality to hold up, you need a fairly straight line of causality. I saw this guy on the beach, I pulled the trigger, and now the guy is dead. It may be perplexing, I may not be sure why I pulled the trigger at that moment, but the chain of events is clear, and there is a system in place to handle it, however problematic. On the other hand, how or why a machine comes to a conclusion or engages in a course of action may be beyond our scope to determine. I know this sounds a bit odd, since after all we built the things. But a record of a machine's internal decision-making would have to be a deliberate part of its architecture, and this is expensive and perhaps not commensurate with the agenda of its designers: for example, Diebold made both ATMs and voting machines. Only the former provided receipts, making it at least theoretically easy to steal an election.
If Congress is willing to condone digitally supervised elections without paper trails, imagine how far away we are from the possibility of regulating the Wild West of machine intelligence. And in fact AIs are being designed to produce results without any regard for how they get to a particular conclusion. One such deliberately opaque AI is Rita, mentioned in a previous essay. Rita's remit is to deliver state-of-the-art video compression technology, but how it arrives at its conclusions is immaterial to the fact that it manages to get there. In the comments to that piece, a friend added that "it is a regular occurrence here at Google where we try to figure out what our machine learning systems are doing and why. We provide them input and study the outputs, but the internals are now an inscrutable black box. Hard to tell if that's a sign of the future or an intermediate point along the way."
Nevertheless, we can try to hold on to the instrumentalist posture and maintain that a machine's black box nature still does not merit the treatment accorded to an ethical subject; that it is still the results or consequences that count, and that the owners of the machine retain ultimate responsibility for it, whether or not they understand it. Well, who are the owners, then?
Of course, ethics truly manifests itself in society via the law. And the law is a generally reactive entity. In the Anglo-American case law tradition, laws, codes and statutes are passed or modified (and less often, repealed) only after bad things happen, and usually only in response to those specific bad things. More importantly for the present discussion, recent history shows that the law (or to be more precise, the people who draft, pass and enforce it) has not been nearly as eager to punish the actions of collectives and institutions as it has been to pursue individuals. Exhibit A in this regard is the number of banks found guilty of vast criminality following the 2008 financial crisis and, by corollary, the number of bankers thrown in jail for same. Part of the reason for this is the way that the law already treats non-human entities. I am reminded of Mitt Romney on the Presidential campaign trail a few years ago, benignly musing that "corporations are people, my friend".
Corporate personhood is a complex topic but at its most essential it is a great way to offload risk. Sometimes this makes sense – entrepreneurs can try new ideas and go bankrupt but not lose their homes and possessions. Other times, as with the Citizens United decision, the results can be grotesque and impactful in equal measure. But we ought to look to the legal history of corporate personhood as a possible test case for how machines may become ethical subjects in the eyes of the law. Not only that, but corporations will likely be the owners of these ethical subjects – from a legal point of view, they will look to craft the legal representation of machines as much to their advantage as possible. To not be too cynical about it, I would imagine this would involve minimal liability and maximum profit. This is something I have not yet seen discussed in machine ethics circles, where the concern seems to be more about the instantiation of ethics within the machines themselves, or in highly localized human-machine interactions. Nevertheless, the transformation of the ethical machine-subject into the legislated machine-subject – put differently, the machines as subjects of a legislative gaze – will be of incredibly far-reaching consequence. It will all be in the fine print, and I daresay deliberately difficult to parse. When that day comes, I will be sure to hire an AI to help me make sense of it all.
Monday, June 22, 2015
Artificially Flavored Intelligence
"I see your infinite form in every direction,
with countless arms, stomachs, faces, and eyes."
~ Bhagavad-Gītā 11.16
About ten days ago, someone posted an image on Reddit, a sprawling site that is the Internet's version of a clown car that's just crashed into a junk shop. The image, appropriately uploaded to the 'Creepy' corner of the website, is kind of hard to describe, so, assuming that you are not at the moment on any strong psychotropic substances, or are not experiencing a flashback, please have a good, long look before reading on.
What the hell is that thing? Our sensemaking gear immediately kicks into overdrive. If Cthulhu had had a pet slug, this might be what it looked like. But as you look deeper into the picture, all sorts of other things begin to emerge. In the lower left-hand corner there are buildings and people, and people sitting on buildings which might themselves be on wheels. The bottom center of the picture seems to be occupied by some sort of a lurid, lime-colored fish. In the upper right-hand corner, half-formed faces peer out of chalices. The background wallpaper evokes an unholy copulation of brain coral and astrakhan fur. And still there are more faces, or at least eyes. There are indeed more eyes than in an Alex Grey painting, and they hew to none of the neat symmetries that make for a safe world. In fact, the deeper you go into the picture, the less perspective seems to matter, as solid surfaces dissolve into further cascades of phantasmagoria. The same effect applies to the principal thing, which has not just an indeterminate number of eyes, ears or noses, but even heads.
The title of the thread wasn't very helpful, either: "This image was generated by a computer on its own (from a friend working on AI)". For a few days, that was all anyone knew, but it was enough to incite another minor-scale freakout about the nature and impending arrival of Our Computer Overlords. Just as we are helpless not to over-interpret the initial picture, so we are all too willing to titillate ourselves with alarmist speculations concerning its provenance. This was presented as a glimpse into the psychedelic abyss of artificial intelligence; an unspeakable, inscrutable intellect briefly showed us its cards, and it was disquieting, to put it mildly. Is that what AI thinks life looks like? Or stated even more anxiously, is that what AI thinks life should look like?
Alas, our giddy Lovecraftian fantasies weren't allowed to run amok for more than a few days, since the boffins at Google tipped their hand with a blog post describing what was going on. The image, along with many others, was the result of a few engineers playing around with neural networks, and seeing how far they could push them. In this case, a neural network is ‘trained' to recognize something when it is fed thousands of instances of that thing. So if the engineers want to train a neural network to recognize the image of a dog, they will keep feeding it images of the same, until it acquires the ability to identify dogs in pictures it hasn't seen before. For the purposes of this essay, I'll just leave it at that, but here is a good explanation of how neural networks ‘learn'.
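To make the mechanics a little less hand-wavy, here is a minimal sketch of such a training loop in Python, using the PyTorch library. The folder of labeled images, the model size and the hyperparameters are all placeholders of my own choosing; nothing here describes Google's actual setup, only the general shape of supervised training.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Assumes a folder of images sorted into one sub-folder per class (dog/, squirrel/, ...).
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("training_images/", transform=transform)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(data.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                          # each epoch is another pass over the pile
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)    # how wrong were the guesses?
        loss.backward()                          # which weights were responsible?
        optimizer.step()                         # nudge them a little, and repeat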
The networks in question were trained to recognize animals, people and architecture. But things got interesting when the Google engineers took a trained neural net and fed it only one input – over and over again. Once slightly modified, the image was then re-submitted to the network. If it were possible to imagine the network having a conversation with itself, it may go something like this:
First pass: Ok, I'm pretty good at finding squirrels and dogs and fish. Does this picture have any of these things in it? Hmmm, no, although that little blob looks like it might be the eye of one of those animals. I'll make a note of that. Also that lighter bit looks like fur. Yeah. Fur.
Second pass: Hey, that blob definitely looks like an eye. I'll sharpen it up so that it's more eye-like, since that's obviously what it is. Also, that fur could look furrier.
Third pass: That eye looks like it might go with that other eye that's not that far off. That other dark bit in between might just be the nose that I'd need to make it a dog. Oh wow – it is a dog! Amazing.
The results are essentially thousands of such decisions made across dozens of layers of the network. Each layer of ‘neurons' hands over its interpretation to the next layer up the hierarchy, and a final decision of what to emphasize or de-emphasize is made by the last layer. The fact that half of a squirrel's face may be interpellated within the features of the dog's face is, in the end, irrelevant.
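For the more literal-minded, the whole amplify-what-you-already-see loop can be compressed into a short sketch; this one uses PyTorch and a stock pretrained network rather than Google's released code, and the layer choice, step size and iteration count are arbitrary assumptions on my part. The point is only the shape of the procedure: gradient ascent on the input image, boosting whatever the network already responds to.

import torch
from PIL import Image
from torchvision import models, transforms as T

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)
layer = model.features[:20]            # stop partway up the hierarchy of layers

img = T.Compose([T.Resize(448), T.ToTensor()])(Image.open("input.jpg")).unsqueeze(0)
img.requires_grad_(True)

for step in range(200):                # the human-chosen wrapper around the loop
    activations = layer(img)
    # "That blob looks like an eye; make it more eye-like": push the pixels in the
    # direction that increases whatever the chosen layer already responds to.
    loss = activations.norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)               # keep the result a displayable image

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")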
But I also feel very wary about having written this fantasy monologue, since framing the computational process as a narrative is something that makes sense to us, but in fact isn't necessarily true. By way of comparison, the philosopher Jacques Derrida was insanely careful about stating what he could claim in any given act of writing, and did so while he was writing. Much to the consternation of many of his readers, this act of deconstructing the text as he was writing it was nevertheless required for him to be accurate in making his claims. Similarly, while the anthropomorphic cheat is perhaps the most direct way of illustrating how AI ‘works', it is also very seductive and misleading. I offer up the above with the exhortation that there is no thinking going on. There is no goofy conversation. There is iteration, and interpretation, and ultimately but entirely tangentially, weirdness. The neural network doesn't think it's weird, however. The neural network doesn't think anything, at least not in the overly generous way in which we deploy that word.
So, echoing a deconstructionist approach, we would claim that the idea of ‘thinking' is really the problem. It is a sort of absent center, where we jam in all the unexamined assumptions that we need in order to keep the system intact. Once we really ask what we mean by ‘thinking' then the whole idea of intelligence, whether we are speaking of our own human one, let alone another's, becomes strange and unwhole. So if we then try to avoid the word – and therefore the idea behind the word – ‘thinking' as ascribed to a computer program, then how ought we think about this? Because – sorry – we really don't have a choice but to think about it.
I believe that there are more accurate metaphors to be had, ones that rely on narrower views of our subjectivity, not the AI's. For example, there is the children's game of telephone, where a phrase is whispered from one ear to the next. Given enough iterations, what emerges is a garbled, nonsensical mangling of the original, but one that is hopefully still entertaining. But if it amuses, this is precisely because it remains within the realm of language. The last person does not recite a random string of alphanumeric characters. Rather, our drive to recognize patterns, also known as apophenia, yields something that can still be spoken. It is just weird enough, which is a fine balance indeed.
What did you hear? To me, it sounds obvious that a female voice is repeating "no way" to oblivion. But other listeners have variously reported window, welcome, love me, run away, no brain, rainbow, raincoat, bueno, nombre, when oh when, mango, window pane, Broadway, Reno, melting, or Rogaine.
This illustrates the way that our expectations shape our perception…. We are expecting to hear words, and so our mind morphs the ambiguous input into something more recognisable. The power of expectation might also underlie those embarrassing situations where you mishear a mumbled comment, or even explain the spirit voices that sometimes leap out of the static on ghost hunting programmes.
Even more radical are Steve Reich's tape loop pieces, which explore what happens when a sound gradually goes out of phase with itself. In fact, 2016 will be the 50th anniversary of "Come Out", one of the seminal explorations of this idea. While the initial phrase is easy to understand, as the gap in phase widens we struggle to maintain its legibility. Not long into the piece, the words are effectively erased, and we find ourselves swimming in waves of pure sound. Nevertheless, our mental apparatus still seeks to make some sort of sense of it all; it's just that the patterns don't hold for long enough for any specific interpretation to persist.
Of course, the list of contraptions meant to isolate and provoke our apophenic tendencies is substantial, and oftentimes touted as having therapeutic benefits. We slide into sensory deprivation tanks to gape at the universe within, and assemble mail-order DIY ‘brain machines' to ‘expand our brain's technical skills'. This is mostly bunk, but all are predicated on the idea that the brain will produce its own stimuli when external ones are absent, or if there is only a narrow band of stimulus available. In the end, what we experience here is not so much an epiphany, as apophany.
In effect, what Google's engineers have fabricated is an apophenic doomsday machine. It does one thing – search for patterns in the ways it knows how – and it does that one thing very, very well. A neural network trained to identify animals will not suddenly begin to find architectural features in a given input image. It will, if given the picture of a building façade, find all sorts of animals that, in its judgment, already lurk there. The networks are even capable of teasing out the images with which they are familiar if given a completely random picture – the graphic equivalent of static. These are perhaps the most compelling images of all. It's the equivalent of putting a neural network in an isolation tank. But is it? The slide into anthropomorphism is so effortless.
And although the Google blog post isn't clear on this, I suspect that there is also no clear point at which the network is ‘finished'. An intrinsic part of thinking is knowing when to stop, whereas iteration needs some sort of condition wrapped around the loop, otherwise it will never end. You don't tell a computer to just keep adding numbers, you tell it to add only the first 100 numbers you give it. Otherwise the damned thing won't stop. The engineers ran the iterations up until a certain point, and it doesn't really matter if that point was determined by a pre-existing test condition (eg, ‘10,000 iterations') or a snap aesthetic judgment (eg, ‘This is maximum weirdness!'). The fact is that human judgment is the wrapper around the process that creates these images. So if we consider that a fundamental feature of thinking is knowing when to stop doing so, then we find this trait lacking in this particular application of neural networks.
In addition to knowing when to stop, there is another critical aspect of thinking as we know it, and that is forgetting. In ‘Funes el memorioso', Jorge Luis Borges speculated on the crippling consequences of a memory so perfect that nothing was ever lost. Among other things, the protagonist Funes can only live a life immersed in an ocean of detail, "incapable of general, platonic ideas". In order to make patterns, we have to privilege one thing over another, and dismiss vast quantities of sensory information as irrelevant, if not outright distracting or even harmful.
Interestingly enough, this relates to a theory concerning the nature of the schizophrenic mind (in a further nod to the deconstructionist tendency, I concede that the term ‘schizophrenia' is not unproblematic, but allow me the assumption). The ‘hyperlearning hypothesis' claims that schizophrenic symptoms can arise from a surfeit of dopamine in the brain. As a key neurotransmitter, dopamine plays a crucial role in memory formation:
When the brain is rewarded unexpectedly, dopamine surges, prompting the limbic "reward system" to take note in order to remember how to replicate the positive experience. In contrast, negative encounters deplete dopamine as a signal to avoid repeating them. This is a key learning mechanism which also involves memory-formation and motivation. Scientists believe the brain establishes a new temporary neural network to process new stimuli. Each repetition of the same experience triggers the identical neural firing sequence along an identical neural journey, with every duplication strengthening the synaptic links among the neurons involved. Neuroscientists say, "Neurons that fire together wire together." If this occurs enough times, a secure neural network is established, as if imprinted, and the brain can reliably access the information over time.
The hyperlearning hypothesis posits that schizophrenics have too much dopamine in their brains, too much of the time. Take the process described above and multiply it by orders of magnitude. The result is a world that a schizophrenic cannot make sense of, because literally everything is important, or no one thing is less important than anything else. There is literally no end to thinking, no conditional wrapper to bring anything to a conclusion.
Unsurprisingly, the artificial neural networks discussed above are modeled on precisely this process of reinforcement, except that the dopamine is replaced by an algorithmic stand-in. In 2011, Uli Grasemann and Risto Miikkulainen took the logical next step: they took a neural network called DISCERN and cranked up its virtual dopamine.
Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information – not as distinct units, but as statistical relationships of words, sentences, scripts and stories.
In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.
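To see the flavor of the idea without any of DISCERN's machinery, here is a toy sketch in Python of a tiny Hebbian associative memory in which the learning rate stands in for dopamine. It is my own illustration, not a reconstruction of the Grasemann and Miikkulainen experiment: with a modest rate, each cue retrieves mostly its own pattern; crank the rate up and the saturating 'synapses' let the latest experience stamp itself over everything else, and recall degrades toward noise.

import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(5, 64))   # five unrelated 'memories'

def train(patterns, rate):
    w = np.zeros((64, 64))
    for p in patterns:
        w += rate * np.outer(p, p)                 # 'neurons that fire together wire together'
        np.fill_diagonal(w, 0)
        w = np.clip(w, -1, 1)                      # synapses saturate; a huge rate slams them to the rails
    return w

def recall(w, cue):
    return np.sign(w @ cue)

for rate in (0.05, 5.0):
    w = train(patterns, rate)
    accuracy = np.mean([np.mean(recall(w, p) == p) for p in patterns])
    print("learning rate {}: mean recall accuracy {:.2f}".format(rate, accuracy))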
Even though I find DISCERN's confession infinitely more terrifying than a neural net's ability to create a picture of a multi-headed dog-slug-squirrel, I still contend that there is no thinking going on, as we would like to imagine it. And we would very much like to imagine it: even the article cited above has as its headline ‘Scientists Afflict Computers with Schizophrenia to Better Understand the Human Brain'. It's almost as if schizophrenia is something you can pack into a syringe, virtual or otherwise, and inject it into the neural network of your choice, virtual or otherwise. (The actual peer-reviewed article is more soberly titled ‘Using computational patients to evaluate illness mechanisms in schizophrenia'.) We would be much better off understanding these neural networks as tools that provide us with a snapshot of a particular and narrow process. They are no more anthropomorphic than the shapes that clouds may suggest to us on a summer's afternoon. But we seem incapable of remembering this. If we cannot learn to restrain our relentless pattern-seeking, consider what awaits us on the other end of the spectrum: it is not coincidental that the term ‘apophenia' was coined in 1958 by Klaus Conrad in a monograph on the inception of schizophrenia.
Monday, May 25, 2015
The “Invisible Web” Undermines Health Information Privacy
by Jalees Rehman
"The goal of privacy is not to protect some stable self from erosion but to create boundaries where this self can emerge, mutate, and stabilize. What matters here is the framework— or the procedure— rather than the outcome or the substance. Limits and constraints, in other words, can be productive— even if the entire conceit of "the Internet" suggests otherwise.
Evgeny Morozov in "To Save Everything, Click Here: The Folly of Technological Solutionism"
We cherish privacy in health matters because our health has such a profound impact on how we interact with other humans. If you are diagnosed with an illness, it should be your right to decide when and with whom you share this piece of information. Perhaps you want to hold off on telling your loved ones because you are worried about how it might affect them. Maybe you do not want your employer to know about your diagnosis because it could get you fired. And if your bank finds out, they could deny you a mortgage loan. These and many other reasons have resulted in laws and regulations that protect our personal health information. Family members, employers and insurers have no access to your health data unless you specifically authorize it. Even healthcare providers from two different medical institutions cannot share your medical information unless they can document your consent.
The recent study "Privacy Implications of Health Information Seeking on the Web" conducted by Tim Libert at the Annenberg School for Communication (University of Pennsylvania) shows that we have a for more nonchalant attitude regarding health privacy when it comes to personal health information on the internet. Libert analyzed 80,142 health-related webpages that users might come across while performing online searches for common diseases. For example, if a user uses Google to search for information on HIV, the Center for Disease Control and Prevention (CDC) webpage on HIV/AIDS (http://www.cdc.gov/hiv/) is one of the top hits and users will likely click on it. The information provided by the CDC will likely provide solid advice based on scientific results but Libert was more interested in investigating whether visits to the CDC website were being tracked. He found that by visiting the CDC website, information of the visit is relayed to third-party corporate entities such as Google, Facebook and Twitter. The webpage contains "Share" or "Like" buttons which is why the URL of the visited webpage (which contains the word "HIV") is passed on to them – even if the user does not explicitly click on the buttons.
Libert found that 91% of health-related pages relay the URL to third parties, often unbeknownst to the user, and in 70% of the cases, the URL contains sensitive information such as "HIV" or "cancer" which is sufficient to tip off these third parties that you have been searching for information related to a specific disease. Most users probably do not know that they are being tracked, which is why Libert refers to this form of tracking as the "Invisible Web", which can only be unveiled by analyzing the hidden HTTP requests between servers. Here are some of the most common (invisible) partners which participate in the third-party exchanges:
[Table: Entity | Percent of health-related pages]
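Making a sliver of this chatter visible doesn't require much. Here is a rough sketch in Python that fetches a single page and lists the third-party hosts its HTML asks your browser to contact; it is a crude, static approximation of Libert's method, which observed actual browser traffic, and the example URL is simply the CDC page mentioned above.

import re
import requests
from urllib.parse import urlparse

def third_party_hosts(url):
    page_host = urlparse(url).hostname
    base = ".".join(page_host.split(".")[-2:])      # eg 'cdc.gov'
    html = requests.get(url, timeout=10).text
    hosts = set()
    # Every src= or href= pointing at another domain is a potential tracking request;
    # the full URL of the page you are reading rides along in the Referer header.
    for link in re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(link).hostname
        if host and not host.endswith(base):
            hosts.add(host)
    return sorted(hosts)

for host in third_party_hosts("http://www.cdc.gov/hiv/"):
    print(host)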
What do the third parties do with your data? We do not really know because the laws and regulations are rather fuzzy here. We do know that Google, Facebook and Twitter primarily make money by advertising so they could potentially use your info and customize the ads you see. Just because you visited a page on breast cancer does not mean that the "Invisible Web" knows your name and address but they do know that you have some interest in breast cancer. It would make financial sense to send breast cancer related ads your way: books about breast cancer, new herbal miracle cures for cancer or even ads by pharmaceutical companies. It would be illegal for your physician to pass on your diagnosis or inquiry about breast cancer to an advertiser without your consent but when it comes to the "Invisible Web" there is a continuous chatter going on in the background about your health interests without your knowledge.
Some users won't mind receiving targeted ads. "If I am interested in web pages related to breast cancer, I could benefit from a few book suggestions by Amazon," you might say. But we do not know what else the information is being used for. The appearance of the data broker Experian on the third-party request list should serve as a red flag. Experian's main source of revenue is not advertising but amassing personal data for reports such as credit reports which are then sold to clients. If Experian knows that you are checking out breast cancer pages, then you should not be surprised if this information ends up stored in some personal data file about you.
How do we contain this sharing of personal health information? One obvious approach is to demand accountability from the third parties regarding the fate of your browsing history. We need laws that regulate how information can be used, whether it can be passed on to advertisers or data brokers, and how long the information is stored. Consider, for example, the breadth of uses listed in WebMD's own privacy policy:
We may use information we collect about you to:
· Administer your account;
· Provide you with access to particular tools and services;
· Respond to your inquiries and send you administrative communications;
· Obtain your feedback on our sites and our offerings;
· Statistically analyze user behavior and activity;
· Provide you and people with similar demographic characteristics and interests with more relevant content and advertisements;
· Conduct research and measurement activities;
· Send you personalized emails or secure electronic messages pertaining to your health interests, including news, announcements, reminders and opportunities from WebMD; or
· Send you relevant offers and informational materials on behalf of our sponsors pertaining to your health interests.
Perhaps one of the most effective solutions would be to make the "Invisible Web" more visible. If health-related pages were mandated to disclose all third-party requests in real time, for example via pop-ups ("Information about your visit to this page is now being sent to Amazon"), and to ask for consent in each case, users would be far more aware of the threat to personal privacy posed by health-related pages. Health privacy and potential threats to it are routinely addressed in the real world, and there is no reason why this awareness should not extend to online information.
Libert, Tim. "Privacy implications of health information seeking on the Web" Communications of the ACM, Vol. 58 No. 3, Pages 68-77, March 2015, doi: 10.1145/2658983 (PDF)
Monday, March 23, 2015
You're on the Air!
by Carol A. Westbrook
The excitement of a live TV broadcast...a breaking news story...a presidential announcement...an appearance of the Beatles on Ed Sullivan. These words conjure up a time when all America would tune in to the same show, and families would gather round their TV set to watch it together.
This is not how we watch TV anymore. It is watched at different times and on different devices--mobile phones, computers and other mobile devices--from previously recorded shows on your DVR, or via streaming services such as Netflix and, soon, Apple. Live news can be viewed on the web, via cell phone apps, or as tweets. An increasing number of people are foregoing TV completely to get news and entertainment from other sources, with content that is never "on the air" (see the chart below, from the Nov 24, 2013 Business Insider). Many Americans don't even own a television set!
We take it for granted that we will have instant access to video content--whether digital or analog, television, cell phone or iPad. But video itself has its roots in television; the word "television" literally means "to view over a distance." The story of TV broadcasting is a fascinating one about technology development, entrepreneurship, engineering, and even space exploration. It is an American story, and it is a story worth telling.
At first, America was tuned in to radio. From the early 1920s through the 1940s, people would gather around their radios to listen to music and variety shows, serial dramas, news, and special announcements. Yet they dreamed of seeing moving pictures over the airwaves, like they did in newsreels and movies. A series of technical breakthroughs were needed to make this happen.
The first important breakthrough was the invention, in 1927, of a way to send and view moving images electronically--Farnsworth's "television." There followed a series of patent wars, but at the end of the day, we had television sets which could be used to view moving pictures transmitted over the airwaves. In 1939, RCA televised the opening of the New York World's Fair, including a speech by the first President to appear on TV, President Franklin D. Roosevelt. There were few televisions to watch it on, though, until after the end of World War II, when America's demand for commercial television rapidly increased.
This led to the next big advance in television--network broadcasting. The big radio broadcast companies such as RCA (Radio Corporation of America) and CBS (Columbia Broadcasting System) naturally expanded into this medium, but their infrastructure was limited. Though the frequencies used for AM radio transmission, from 540 to 1780 kHz (a kilohertz is a thousand cycles per second), can travel long distances from their transmitting stations, each channel can only carry a limited amount of information; in other words, it has a narrow bandwidth. Much higher frequencies, in the megahertz range (millions of cycles per second), are required for television so they can carry the additional information needed for picture as well as sound. As a result there was a scramble for higher frequencies, which was mediated by the FCC (Federal Communications Commission), the entity that regulates broadcasting. In 1948 the FCC allocated the higher frequency bands, designating which ones would be reserved for radio and which for television, and assigned channel numbers to the TV bands. The VHF television channels were designated 2 through 13. Channel 1 was reallocated to public and emergency communications, which explains why your TV starts with Channel 2! Several higher frequencies, designated as UHF, were reserved for later TV use, including channels 32 to 70. The FCC also froze the number of station licenses at 108 in 1948.
Because the number of broadcast stations was limited, TV was available only if you lived within range of a broadcast network, primarily CBS, NBC or ABC. In other words, if you lived in a large city--New York, Chicago, Washington, Philadelphia, Boston, Los Angeles, Seattle or Salt Lake City. Outside of these areas, you might have a chance if you lived on a hill, put up a very high antenna, and prayed for a thermal inversion or a charged ionosphere to propagate the signal beyond its normal range to your television. My husband Rick, an electrical engineer and amateur radio buff, recounts that he watched the coronation of Queen Elizabeth in 1953 from his TV set in a small town in Pennsylvania, thanks to an environmental quirk (sunspots?), but everyone else had to wait for the films to cross the Atlantic and be shown on their local station.
Yet, for those of us who lived in a prime location, there was an ever-expanding number of programs to watch, such as the Texaco Star Theater, the Milton Berle Show, and a variety of news shows. Many of us grew up on Howdy Doody, or on shows created locally and televised live. I recall walking home from grade school for lunch as a child in Chicago, spending an hour watching "Lunchtime Little Theater" before returning to school to finish the afternoon's lessons! Many of these early shows have been lost, as they were never recorded, and videotape had not yet been invented.
Television broadcasting eventually went nationwide, thanks to microwave transmission, which developed out of WWII radar. This technology was used to relay television broadcasts to local affiliate stations, which could then broadcast them on their regular channels in the local area. Microwaves use point-to-point transmission, from one microwave tower to the next, and microwave towers were constructed to span the continent. The FCC increased the number of television station licenses, and the broadcast companies truly became "networks." Finally, everyone could watch the same shows at the same time.
But TV was still limited geographically--it could not cross the ocean. This problem was not solved until the third important technology was developed, that of satellite broadcasting. Sputnik, the first space satellite, was launched in 1957. Five years later, July 23, 1962, the first satellite-based transatlantic broadcast took place using the Telstar satellite to relay TV signals from the US ground station in Andover, Maine, to the receiving stations in Goonhilly Downs, England and Pleumeur-Bodou, France.
It's fun to watch this broadcast, which was introduced by Walter Cronkite and began with a split screen showing the Statue of Liberty on the left and the Eiffel Tower on the right. The satellite transmission was followed by a live broadcast of an ongoing baseball game at Chicago's Wrigley Field between the Philadelphia Phillies and the Chicago Cubs, and also included live remarks from President Kennedy, as well as footage from Cape Canaveral, Florida, Seattle, and Canada. I've included a short clip of the Kennedy broadcast.
If you looked up at the night sky in 1962, you might see the Telstar satellite zoom across your backyard sky. It took about 20 minutes to traverse, passing over every 2.5 hours. Broadcast signals could be relayed up to Telstar and back down to land stations on either side of the Atlantic only during this 20-minute transit, so the tracking satellite dishes had to be fast-moving; they also had to be very large to capture such a weak signal. It is impressive to see the massive size of the dishes in these satellite ground stations, and to imagine how quickly they had to move to sweep the sky. This picture of Goonhilly Downs gives you an idea of their size.
Although Telstar demonstrated that satellite transmission was possible for long-range broadcasting, the equipment and precision needed for tracking a rapidly-moving low-earth satellite was onerous. So NASA and its industry partners launched the next generation of satellites, named "Syncom," into high earth orbit at just the right distance from the earth so that their orbital period matched the earth's rotation. When orbiting directly above the equator, the Syncom satellites appeared to be stationary over a single geographic location. Thus, the geostationary (or geosynchronous) satellite was born.
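How far away is "just the right distance"? For readers who enjoy the arithmetic, here is a quick calculation of the geostationary altitude from Kepler's third law. The constants (Earth's gravitational parameter, the length of a sidereal day, Earth's radius) are standard values I am supplying; they are not figures from the text.

import math

mu = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
T = 86164.1             # one sidereal day, in seconds
earth_radius_m = 6.378e6

# Kepler's third law for a circular orbit: T^2 = 4 * pi^2 * r^3 / mu
r = (mu * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - earth_radius_m) / 1000.0
print(round(altitude_km))   # roughly 35,786 km above the equator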
Stationary satellites paved the way for a tremendous expansion in telecommunications, and are still in widespread use. Satellites enabled the rise of cable TV networks such as HBO and CNN in the 1970s, which broadcast without having to go through FCC-regulated television transmitting stations. Instead, their programming was sent via satellite to the cable service, and from there selected programs went by cable to the TVs of paying subscribers. These stations could also be accessed through a satellite TV subscription, such as Galaxy, which broadcast them directly to customers' satellite dishes. Because early satellites could only carry a limited number of cable channels, multiple satellites had to be accessed to provide the purchased programming. Moveable satellite dishes of about four to twelve feet in diameter were positioned in subscribers' yards or on their roofs. Satellite TV further expanded Americans' access to television, reaching rural communities that had limited (or no) cable service and poor antenna reception; it also provided special paid programming, such as sports events watched at bars. This picture shows a 10-foot moveable dish in my yard in Indiana.
Stationary TV dishes--such as DirecTV antennas--were not feasible until satellites were able to carry more programming, so that the dish could stay parked on a single geosynchronous satellite. The technical advance which allowed this was the development of digital video in the late 1990s. Digital video would eventually displace analog--remember when the DVD was introduced, rendering VCRs obsolete in just a few years' time? Each geosynchronous satellite could now carry many more simultaneous channels than before, since a digital channel takes up only a small fraction of the bandwidth of an analog signal. Digital signals also increased the capacity of traditional TV broadcast from ground towers, which eventually transitioned to the HDTV standards and the higher-capacity UHF frequencies. The transition was completed in June 2009, when the TV networks abandoned analog transmission on the old VHF channels, though many stations still carry the old channel numbers (2 - 13). TV viewers are surprised to learn that they can watch their favorite channels on newer HDTV sets using only a simple indoor antenna, and many are giving up their pricey cable services. Digital video signals were also ready for growth in other media, as they could be transmitted over the internet or by cell phone, and could be stored easily for re-broadcast.
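As a rough illustration of why digital transmission multiplied capacity, consider the over-the-air case. The bit rates below (about 19.4 Mbit/s for an ATSC digital broadcast channel and a few Mbit/s for a standard-definition MPEG-2 program) are typical published figures I am supplying for illustration, not numbers from the text.

# One 6 MHz channel that used to carry a single analog program can now carry
# several digital programs at once.
atsc_payload_mbps = 19.4      # approximate ATSC payload of a 6 MHz channel
sd_program_mbps = 3.5         # a typical MPEG-2 standard-definition stream
programs_per_channel = int(atsc_payload_mbps // sd_program_mbps)
print(programs_per_channel)   # about 5 programs where analog carried only 1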
Yet one more step was needed before widespread internet and cellular-based video could occur, allowing us to watch television programs as we do now. This was not a technical advance but an economic one--the sharp drop in the price of computer memory, which happened around 2009. Prior to that, computers had far less memory and storage capacity. Perhaps you remember the agony of trying to watch a YouTube video in its early years? Or of waiting for your browser to load? Now we take it for granted that we can view digitized images, create them, share them, watch pre-recorded programs, and record on our TiVo from multiple sources. There seems to be no limit to the ways that we can enjoy television, truly viewing "pictures at a distance." It is a far cry from the early years of television that many of us still remember, when we all gathered around a small, black-and-white screen with poor sound to watch John, Paul, George and Ringo sing "She Loves You." Now those were the days!
Thanks to my husband Rick Rikoski, for his patient and helpful explanations of the technology of television and its early development.
Monday, March 31, 2014
Sharing Our Sorrow Via Facebook
by Jalees Rehman
Geteiltes Leid ist halbes Leid ("Shared sorrow is half the sorrow") is a popular German proverb which refers to the importance of sharing bad news and troubling experiences with others. The therapeutic process of sharing takes on many different forms: we may take comfort in the fact that others have experienced similar forms of sorrow, we are often reassured by the empathy and encouragement we receive from friends, and even the mere process of narrating the details of what is troubling us can be beneficial. Finding an attentive audience that is willing to listen to our troubles is not always easy. In a highly mobile, globalized world, some of our best friends may be located thousands of kilometers away, unable to meet face-to-face. The omnipresence of social media networks may provide a solution. We are now able to stay in touch with hundreds of friends and family members, and commiserate with them. But are people as receptive to sorrow shared via Facebook as they are in face-to-face contacts?
A team of researchers headed by Dr. Andrew High at the University of Iowa recently investigated this question and published their findings in the article "Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook". The researchers created three distinct Facebook profiles of a fictitious person named Sara Thomas who had just experienced a break-up. The three profiles were identical in all respects except for how much information was conveyed about the recent (fictitious) break-up. In their article, High and colleagues use the expression "emotional bandwidth" to describe the extent of emotions conveyed in the Facebook profile.
In the low bandwidth scenario, the profile contained the following status update:
"sad and depressed:("
The medium bandwidth profile included a change in relationship status to "single" in the timeline, in addition to the low bandwidth profile update "sad and depressed:(".
Finally, the high emotional bandwidth profile not only contained the updates of the low and medium bandwidth profiles, but also included a picture of a crying woman (the other two profiles had no photo, just the standard Facebook shadow image).
The researchers then surveyed 84 undergraduate students (enrolled in communications courses, average age 20, 53% female) and presented them with screenshots of one of the three profiles.
They asked the students to imagine that the person in the profile was a member of their Facebook network. After reviewing the assigned profile, each student completed a questionnaire asking about their willingness to provide support for Sara Thomas using a 9-point scale (1 = strongly disagree; 9 = strongly agree). The survey contained questions that evaluated the willingness to provide emotional support (e.g. "Express sorrow or regret for her situation") and network support (e.g. "Connect her with people whom she may turn to for help''). In addition to being queried about their willingness to provide distinct forms of support, the students were also asked about their sense of community engendered by Facebook (e.g., "Facebook makes me feel I am a part of a community'') and their preference for online interactions over face-to-face interactions (e.g., "I prefer communicating with other people online rather than face-to-face'').
High and colleagues hypothesized that the high emotional bandwidth profile would elicit greater support from the students. In face-to-face interactions, it is quite common for us to provide greater support to a person – friend or stranger – if we see them overtly crying, so the researchers' hypothesis was quite reasonable. To their surprise, the researchers found the opposite. The willingness to provide emotional or network support was significantly lower among students who viewed the high emotional bandwidth profile! For example, average emotional support scores were 7.8 among students who saw only Sara's "sad and depressed:(" update (low bandwidth), but the scores were only 6.5 among students who also saw the image of Sara crying and her relationship status changed to single (high bandwidth). Interestingly, students who preferred online interactions over face-to-face interactions, or those who felt that Facebook created a strong sense of community, responded positively to the high bandwidth profile.
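For readers curious what "significantly lower" means in practice, here is a minimal sketch of the kind of comparison involved: mean ratings on a 9-point scale in two groups, compared with an independent-samples t-test. The ratings below are invented purely for illustration; they are not the study's data.

from scipy import stats

# hypothetical 9-point support ratings from two groups of respondents
low_bandwidth_ratings = [8, 7, 9, 8, 7, 8, 9, 7, 8, 9]
high_bandwidth_ratings = [6, 7, 5, 7, 6, 6, 7, 8, 5, 6]

t_statistic, p_value = stats.ttest_ind(low_bandwidth_ratings, high_bandwidth_ratings)
print(sum(low_bandwidth_ratings) / len(low_bandwidth_ratings))    # group mean
print(sum(high_bandwidth_ratings) / len(high_bandwidth_ratings))  # group mean
print(round(p_value, 4))   # a small p-value suggests the gap is unlikely to be chance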
There are some important limitations of the study. The students were asked to evaluate whether they would provide support to a fictitious person by imagining that she was part of their Facebook friends network. This is a rather artificial situation because actual supportive Facebook interactions occur among people who know each other. It is not easy to envision support for a fictitious person whose profile one sees for the first time. Furthermore, "emotional bandwidth" is a broad concept and it is difficult to draw general conclusions about "emotional bandwidth" from the limited differences between the three profiles. Increasing the sample size of the study subjects as well as creating a broader continuum of emotional bandwidth differences (e.g. including profiles which include pictures of a fictitious Sara Thomas who is not crying, using other status updates, etc.), and also considering scenarios that are not just related to break-ups (e.g. creating profiles of a fictitious grieving person who has lost a loved one) would be useful for an in-depth analysis of "emotional bandwidth".
The study by High and colleagues is an intriguing and important foray into the cyberpsychology of emotional self-disclosure and supportive communication on Facebook. This study raises important questions about how cyberbehavior differs from real world face-to-face behavior, and the even more interesting question of why these behaviors are different. Online interactions omit the dynamic gestures, nuanced intonations and other cues which play a critical role in determining our face-to-face behavior. When we share emotions via Facebook, our communication partners are often spatially and temporally displaced. This allows us to carefully "edit" what we disclose about ourselves, but it also allows our audience to edit their responses, unlike the comparatively spontaneous responses of a person sitting next to us. Facebook invites us to use the "Share" button, but we need to remember that online "sharing" is a sharing between heavily edited and crafted selves that is very different from traditional forms of "sharing".
Acknowledgments: The images from the study profiles were provided by Dr. Andrew High, copyright of the images - Dr. Andrew High.
Reference: High AC, Oeldorf-Hirsch A, Bellur S (2014). Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook. Computers in Human Behavior 34, 79-88.
Monday, March 17, 2014
Why Amazon Reminds Me of the British Empire
by Emrys Westacott
"Life—that is: being cruel and inexorable against everything about us that is growing old and weak….being without reverence for those who are dying, who are wretched, who are ancient." (Friedrich Nietzsche, The Gay Science)
A recent article by George Packer in The New Yorker about Amazon is both eye-opening and thought-provoking. In "Cheap Words" Packer describes Amazon's business practices, the impact of these on writers, publishers, and booksellers, and the seemingly limitless ambitions of Amazon's founder and CEO Jeff Bezos whose "stroke of business genius," he says, was "to have seen in a bookstore a means to world domination."
Amazon began as an online bookstore, but US book sales now account for only about seven percent of the seventy-five billion dollars it takes in each year. Through selling books, however, Amazon developed, perhaps better than any other business, two strategies that have been key to its success: it makes full use of sophisticated, computerized collection and analysis of data about its customers, and it makes the interaction between buyer and seller maximally simple and convenient. It also, of course, typically offers lower prices than its competitors. Bezos' plan to one day have drones provide same-day delivery of items that have been stocked in warehouses near you in anticipation of your order is the logical next step in this drive toward creating a frictionless customer experience.
Amazon's impact on the world of books has been massive. Over the past twenty years the number of independent bookstores in the US has been cut in half from four thousand to two thousand, and this number continues to dwindle. Because Amazon is by far the biggest bookseller, no publisher can afford to not use its services, and Amazon exploits this situation to the hilt. Publishers are required to pay Amazon millions of dollars in "marketing discount" fees. Those that balked at paying the amount demanded had the ‘Buy' button removed from their titles on Amazon's web site. Amazon used the same tactic to try to force Macmillan to agree to its terms regarding digital books. And of course Amazon's Kindle dominates the world of e-books, another major threat to traditional publishers and booksellers.
The argument for viewing Amazon in a positive light is not difficult to make.
They offer the customer a bigger selection of books than anyone else, usually at lower prices. Buying online as a returning customer with a registered credit card is laughably easy. Any wannabe writer can self-publish with Amazon, and those whose books sell receive a much higher percentage in royalties. In opening up this opportunity to all, and in basing its advertising and promotional decisions on computer analysis of customer behavior rather than on some self-styled expert's opinion, Amazon eliminates the unnecessary middlemen, professional tastemakers, and elitist gatekeepers that have controlled—and constrained—publishing for so long, replacing them with the dynamic democracy of the digital marketplace.
For all that, more than one person I know reacted to Packer's article by pledging to avoid buying stuff from Amazon in future, at least as far as and for as long as this is possible (which judging from the way things are going may not be too far or very long). Why this reaction? Well, when I told my daughter about Packer's article her immediate response was to say that Amazon sounded a bit like the British Empire. Which set me thinking.
What parallels can be found between the premier online retailer and the largest empire in history? I see similarities in three areas: beliefs and attitudes; practices; and impact on affected populations. Let's consider these in turn.
According to Packer's account, the prevailing attitude among those in charge at Amazon is arrogance. Here is where I think the echoes of imperialism are most apparent. British imperialists typically viewed themselves as superior to those they displaced or ruled on various counts: birth, race, heritage, education, culture, morals, religion, ability, and character, all resulting in and backed up by superior political and military power. The proof of this superiority could be seen on any map of the world that showed the extent of Britannia's rule. The Amazon execs are indifferent, of course, to such things as birth or pedigree; what matters to them is being smart. But thinking of themselves as smart is the basis for a particular kind of arrogance which they seem to share with other successful types in places like Silicon Valley and Wall Street. The way one top exec is described to Packer by a colleague is revealing: he's said to be "the smartest guy in the room at a company where everyone believes himself to be just that."
This fetishism of smartness is certainly not confined to techies, but it assumes a specific and perhaps especially intense form among them. Obviously, there are many different ways of being intelligent. One can excel at abstract reasoning, creative problem-solving, learning languages, understanding people, remembering information, noticing patterns and connections, interpreting works of art, manipulating people and events, mastering a practical skill, recognizing opportunities, artistic creativity, witty repartee—the list is virtually endless. So there are many people out there who are smart in various ways. But at any particular time and place, certain kinds of intelligence will be especially valued. It might be the ability to track an animal, or plan a battle, or discourse fluently in Latin, or demonstrate erudition, or make accurate and discriminating observations, or solve technical problems using mathematics and logic. These are all forms of smartness that at different times have been applauded and rewarded. And of course one kind of smartness is to recognize just what kind of smarts the present or immediate future will reward.
Today we live in an age when science enjoys cultural hegemony and most educated people earn a living by processing information. Naturally enough, therefore, certain kinds of smartness are now much in demand and are rewarded accordingly. Prominent among these is fluency in computer science and technology. The market value of knowledge and skills in this area has been greatly enhanced by the growth of the internet since this has expanded to an unprecedented degree the potential customer base or audience for any online enterprise.
The fetishism of smartness at places like Amazon is thus, naturally enough, oriented towards technological fluency and business acumen. But it seems to be accompanied by a moral subtext. Our success is not due to chance or luck; it's due to our intelligence; therefore it's deserved. On the face of it, this might seem dissimilar to the attitude of a British imperialist who, after all, could hardly claim credit for being born British (Cecil Rhodes supposedly said that "to be born English is to win first prize in the lottery of life"). But it is similar insofar as the British attributed their success in conquering and ruling much of the world to their possession of certain qualities—intelligence, industry, organization, moral and cultural superiority. The similarity extends also to the contemptuous attitude felt and sometimes expressed toward those who suffer as a result of this success. One former Amazon employee cited by Packer says that execs at Amazon view the older publishers as "antediluvian losers" and describe whole sections of the print world as the "Rust Belt media." Imperialists like Winston Churchill, who cheerfully helped to destroy the settlements, property, and whole way of life of native populations while serving as a military officer in Africa, regularly referred to those populations as "primitive," "backward," "barbarous," "ignorant," "savage," and "improvident."
In the eyes of both, what legitimizes this contempt—and reinforces the arrogance—is the conviction that they are on the side of history. As Jeff Bezos said to Charlie Rose: "Amazon is not happening to bookselling. The future is happening to bookselling." The attitude is a form of Social Darwinism. Countries with superior military power and political organization will naturally dominate people who are lacking in these. ("Whatever happens, we have got / The Maxim Gun, and they have not.") Businesses that know how to use the latest technology effectively will inevitably send to the wall those that still rely on dated, less efficient methods: that's the way capitalism functions. The ultimate and unarguable proof of superiority is real world success: the subjugation of native populations; the growth of market share. Might is right.
Seeing themselves as aligned with the forces of inevitable historical change is accompanied, naturally enough, by the belief that they are agents of progress, that the changes they help bring about are desirable. Obviously, this self-perception can be self-serving; but that doesn't make it foolish. There is an idealistic strain in enterprises like Amazon, Google, or Facebook that is not simply a piece of self-deception or a marketing strategy. Amazon really does make books available to people who lack a local bookstore (although in some cases, of course, this lack may be largely due to the local bookstore being put out of business by Amazon). Their constantly expanding inventory–Bezos' eventual goal is to warehouse copies of every book ever written–means that it is now much easier than ever before to buy obscure and out of print titles. Electronic self-publishing makes it easier and cheaper for all writers to put their work before the public. British imperialists also saw themselves as benefiting the world. Churchill, reflecting on what the British had achieved in Africa, thought that future historians would judge them to be "a people, of whom at least it may be said, that they have added to the happiness, the learning and the liberties of mankind." Cecil Rhodes was bracingly blunt: "I contend that we are the first race in the world, and the more of the world we inhabit the better it is for the human race."
Moving from attitudes to actions, we should first of all be fair to Amazon. They don't massacre by the thousand those who resist their growing power; they don't torch villages in acts of punitive reprisal; they don't use gunboats to force the Chinese to keep buying opium from British drug traffickers. But within the parameters of legal business operations, they do seem to be pretty ruthless. Some of their success is undoubtedly due to their clever use of up-to-date methods, from automated, individual-oriented advertising to warehouses staffed by non-unionized workers who are already being replaced by robots. But according to Packer their success in bookselling is also largely due to a strategy whereby they "created dependency and harshly exploited its leverage." Refusing to sell books by publishers who won't cough up a sufficiently large "marketing discount" fee is a case in point. This is, in effect, a legal extortion racket. To be sure, it isn't as crude as the way the British persuaded the Chinese to sign the Treaty of Nanking, which required China to hand over twenty-one million dollars, grant all sorts of trading concessions, and cede control of Hong Kong (the British method was to threaten Nanking with gunboats). But the underlying mentality isn't so different. Where one isn't constrained by moral considerations, all that remains is a power struggle; and all that ultimately matters in that struggle is who wins. As Quirrell says in Harry Potter and the Philosopher's Stone, echoing Machiavelli, Hobbes, and Nietzsche: "There is no good and evil, there is only power and those too weak to seek it."
Of course, Jeff Bezos is hardly the first capitalist to play hardball, so it wouldn't make much sense to single out his company as singularly ruthless in its business strategies. The ethics of Amazon are pretty much the ethics of any big business striving toward monopoly status. What is troubling, though, about the mindset described by Packer is the seeming indifference to, or even satisfaction over, the negative impact of the company's actions on significant numbers of people. Packer reports that among "people who care about reading, Amazon's unparalleled power generates endless discussion, along with paranoia, resentment, confusion, and yearning." This could equally stand as a description of those who found themselves powerless to resist British rule. But in both cases, the view from the seat of power is that those who aren't with the program either don't recognize what's in their best interests or deserve to disappear.
"Innovate or die." "Move fast and break things" Such mantras are associated with the technological revolution, but there is nothing essentially new here. They express the essential spirit–and reality– of capitalism that Marx describes in The Communist Manifesto. Those who find themselves surfing the waves of innovation naturally enough sing the praises of the new. So much is understandable. It feels good to be a winner, doubly good if you sense the wind of history at your back, and triply good if you believe you're making the world a better place. British imperialists felt good on all three counts, yet we are now critical of their attitude in large part because of their indifference to the individuals, communities and cultures they affected and in many cases destroyed. They could have done with more humility and more humanity. The same goes for the Amazon execs described by Packer. What is unbecoming, even ugly, in both groups is the callousness drifting into contempt toward those who, also understandably, lament the destruction of something they cherish, whether it be a secure job (like working in a bookstore), a respected occupation (like print publishing), a skill that is no longer marketable (like editing), a pleasure that may soon no longer be available (like browsing in used bookstores) or, indeed, an entire form of life.
Monday, March 03, 2014
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3 in 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3 in 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not the ends.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it, but it is when the latter occurs that internet usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require deism, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users in the United States said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence", but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion, so it may be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive internet pundit quotes which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion and its mythical or supernatural aspects, which are often essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition for religions but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity, whether by accessing computers, seeking out wireless networks, or checking emails and social media updates so frequently that the checking exceeds one's pragmatic needs, could be considered a ritual of Internet-centrism. If one feels the need to check emails and Facebook or Twitter updates every one to two minutes, despite the fact that it is unlikely one has received a message requiring urgent action, it may be an indicator of the important role that this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss regular prayers or lose the rosaries that allow them to commune with their God, and it appears that for some humans the ritual of internet connectivity plays a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers and cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just as "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits of Silicon Valley that people were outraged at the NSA, a government institution, for violating our privacy, yet there was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism, since the internet (in both its physical and its mythical form) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and social interactions are key for many religions, but Internet-centrism provides more tools than any other ideology, cultural movement or religion for us to interact with others. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of Internet-centrism. One of its goals is to provide opportunities for self-actualization. We all may be connected via the internet, but Internet-centrists also want us to believe that this connectivity provides a path for self-actualization. We can modify settings to customize our web browsing experience, we can pick and choose from millions of options of what online courses we want to take, videos we want to watch or music we want to listen to. The sense of connectedness and omnipotentiality is what provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to a spiritual experience of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that at least in the short-term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Monday, January 06, 2014
Synthetic Biology: Engineering Life To Examine It
by Jalees Rehman
Two scientific papers that were published in the journal Nature in the year 2000 marked the beginning of engineering biological circuits in cells. The paper "Construction of a genetic toggle switch in Escherichia coli" by Timothy Gardner, Charles Cantor and James Collins created a genetic toggle switch by introducing an artificial DNA plasmid into bacterial cells. This DNA plasmid contained two promoters (DNA sequences which regulate the expression of genes) and two repressors (genes that encode proteins which suppress the expression of other genes), as well as a gene encoding green fluorescent protein that served as a read-out for the system. The repressors used were sensitive to either selected chemicals or temperature. In one of the experiments, the system was turned ON by adding the chemical IPTG (a modified sugar) and nearly all the cells became green fluorescent within five to six hours. Upon raising the temperature to activate the temperature-sensitive repressor, the cells began losing their green fluorescence within an hour and returned to the OFF state. Many labs had used chemical or temperature switches to turn on gene expression in the past, but this paper was the first to assemble multiple genes together and construct a switch which allowed cells to be toggled back and forth between stable ON and OFF states.
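The logic of the toggle switch can be captured in a few lines of simulation. The sketch below uses the dimensionless two-repressor equations published by Gardner, Cantor and Collins, but the parameter values are illustrative choices of mine rather than the paper's fitted numbers; it simply shows that the same circuit settles into two different stable states depending on where it starts.

import numpy as np
from scipy.integrate import odeint

def toggle(y, t, a1=10.0, a2=10.0, beta=2.0, gamma=2.0):
    u, v = y
    du = a1 / (1.0 + v**beta) - u    # repressor 1 is inhibited by repressor 2
    dv = a2 / (1.0 + u**gamma) - v   # repressor 2 is inhibited by repressor 1
    return [du, dv]

t = np.linspace(0, 50, 500)
state_a = odeint(toggle, [5.0, 0.1], t)   # starts with repressor 1 dominant
state_b = odeint(toggle, [0.1, 5.0], t)   # starts with repressor 2 dominant
print(state_a[-1])   # settles into the "u high, v low" stable state
print(state_b[-1])   # settles into the "u low, v high" stable state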
The same issue of Nature contained a second landmark paper which also described the engineering of gene circuits. The researchers Michael Elowitz and Stanislas Leibler described the generation of an engineered gene oscillator in their article "A synthetic oscillatory network of transcriptional regulators". By introducing three repressor genes which constituted a negative feedback loop, along with a green fluorescent protein as a marker of the oscillation, the researchers created a molecular clock in bacteria with an oscillation period of roughly 150 minutes. The genes and the proteins they encode were not part of any natural biological clock, and none of them would have oscillated if they had been introduced into the bacteria on their own. The beauty of the design lay in the combination of three serially repressing genes, and the periodicity of this engineered clock reflected the half-life of the protein encoded by each gene as well as the time it took for each protein to act on the subsequent member of the gene loop.
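Again, a short simulation makes the design concrete. The sketch below follows the dimensionless mRNA/protein equations from Elowitz and Leibler's paper for three genes arranged in a repression ring; the parameter values are illustrative choices of mine that produce sustained oscillations, not the published experimental fits.

import numpy as np
from scipy.integrate import odeint

def repressilator(y, t, alpha=216.0, alpha0=0.216, n=2.0, beta=2.0):
    m1, m2, m3, p1, p2, p3 = y
    dm1 = -m1 + alpha / (1.0 + p3**n) + alpha0   # gene 1 is repressed by protein 3
    dm2 = -m2 + alpha / (1.0 + p1**n) + alpha0   # gene 2 is repressed by protein 1
    dm3 = -m3 + alpha / (1.0 + p2**n) + alpha0   # gene 3 is repressed by protein 2
    dp1 = -beta * (p1 - m1)                      # proteins track their mRNAs
    dp2 = -beta * (p2 - m2)
    dp3 = -beta * (p3 - m3)
    return [dm1, dm2, dm3, dp1, dp2, dp3]

t = np.linspace(0, 100, 2000)
sol = odeint(repressilator, [1.0, 0.0, 0.0, 2.0, 1.0, 3.0], t)
p1 = sol[:, 3]
# the protein level keeps swinging between low and high: a sustained oscillation
print(p1[1000:].min(), p1[1000:].max())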
Both papers described the introduction of plasmids encoding for multiple genes into bacteria but this itself was not novel. In fact, this has been a routine practice since the 1970s for many molecular biology laboratories. The panache of the work lay in the construction of functional biological modules consisting of multiple genes which interacted with each other in a controlled and predictable manner. Since the publication of these two articles, hundreds of scientific papers have been published which describe even more intricate engineered gene circuits. These newer studies take advantage of the large number of molecular tools that have become available to query the genome as well as newer DNA plasmids which encode for novel biosensors and regulators.
Synthetic biology is an area of science devoted to engineering novel biological circuits, devices, systems, genomes or even whole organisms. This rather broad description of what "synthetic biology" encompasses reflects the multidisciplinary nature of this field which integrates ideas derived from biology, engineering, chemistry and mathematical modeling as well as a vast arsenal of experimental tools developed in each of these disciplines. Specific examples of "synthetic biology" include the engineering of microbial organisms that are able to mass produce fuels or other valuable raw materials, synthesizing large chunks of DNA to replace whole chromosomes or even the complete genome in certain cells, assembling synthetic cells or introducing groups of genes into cells so that these genes can form functional circuits by interacting with each other. Synthesis in the context of synthetic biology can signify the engineering of artificial genes or biological systems that do not exist in nature (i.e. synthetic = artificial or unnatural), but synthesis can also stand for integration and composition, a meaning which is closer to the Greek origin of the word. It is this latter aspect of synthetic biology which makes it an attractive area for basic scientists who are trying to understand the complexity of biological organisms. Instead of the traditional molecular biology focus on studying just one single gene and its function, synthetic biology is engineering biological composites that consist of multiple genes and regulatory elements of each gene. This enables scientists to interrogate the interactions of these genes, their regulatory elements and the proteins encoded by the genes with each other. Synthesis serves as a path to analysis.
One goal of synthetic biologists is to create complex circuits in cells to facilitate biocomputing--building biological computers that are as powerful as, or even more powerful than, traditional computers. While engineered gene circuits and cells have some degree of memory and computing power, they are no match for the comparatively gigantic computing power of even small digital computers. Nevertheless, we have to keep in mind that the field is very young and advancing at a rapid pace.
One of the major recent advances in synthetic biology occurred in 2013, when an MIT research team led by Rahul Sarpeshkar and Timothy Lu created analog computing circuits in cells. Most synthetic biology groups that engineer gene circuits to create biological computers have taken their cues from contemporary computer technology. Nearly all of the computers we use are digital computers, which process data using discrete values such as 0s and 1s. Analog data processing, on the other hand, uses a continuous range of values instead of 0s and 1s. Digital computers have supplanted analog computing in nearly all areas of life because they are easy to program, highly efficient, and able to process analog signals by converting them into digital data. Nature, on the other hand, processes data and information using both analog and digital approaches. Some biological states are indeed discrete, such as heart cells which are electrically depolarized and then repolarized at periodic intervals in order to keep the heart beating. Such discrete states of cells (polarized / depolarized) can be modeled using the ON and OFF states in the biological circuit described earlier. However, many biological processes, such as inflammation, occur on a continuous scale. Cells do not just exist in uninflamed and inflamed states; instead there is a continuum of inflammation, from minimal inflammatory activation of cells to massive inflammation. Environmental signals that are critical for cell behavior, such as temperature, tension or shear stress, occur on a continuous scale, and there is little evidence to indicate that cells convert these analog signals into digital data.
Most of the attempts to create synthetic gene circuits and study information processing in cells have been based on a digital computing paradigm. Sarpeshkar and Lu instead wondered whether one could construct analog computation circuits and take advantage of the analog information processing systems that may be intrinsic to cells. The researchers created an analog synthetic gene circuit using only three proteins that regulate gene expression and the fluorescent protein mCherry as a read-out. This synthetic circuit was able to perform additions or ratiometric calculations in which the cumulative fluorescence of the mCherry was either the sum or the ratio of selected chemical input concentrations. Constructing a digital circuit with similar computational power would have required a much larger number of components.
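As a toy illustration of the analog idea (and only a toy: this is not the actual circuit built by the researchers), imagine a single fluorescent reporter whose production rate is the sum of two saturating responses, one for each chemical input. At steady state the fluorescence then reports an analog sum of the two inputs, which is the kind of computation described above. All parameter values here are arbitrary.

def production(inducer, vmax=100.0, k=1.0):
    # simple saturating response to an inducer concentration
    return vmax * inducer / (k + inducer)

def steady_state_fluorescence(inducer_a, inducer_b, decay_rate=1.0):
    # at steady state, total production balances decay of the reporter
    return (production(inducer_a) + production(inducer_b)) / decay_rate

print(steady_state_fluorescence(0.5, 0.0))   # response to input A alone
print(steady_state_fluorescence(0.0, 2.0))   # response to input B alone
print(steady_state_fluorescence(0.5, 2.0))   # approximately the sum of the two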
The design of analog gene circuits represents a major turning point in synthetic biology and will likely spark a wave of new research which combines analog and digital computing when trying to engineer biological computers. In our day-to-day lives, analog computers have become more or less obsolete. However, the recent call for unconventional computing research by the US Defense Advanced Research Projects Agency (DARPA) is seen by some as one indicator of a possible paradigm shift towards re-examining the value of analog computing. If other synthetic biology groups can replicate the work of Sarpeshkar and Lu and construct even more powerful analog or analog-digital hybrid circuits, then the renaissance of analog computing could be driven by biology. It is difficult to make any predictions regarding the construction of biological computing machines which rival or surpass the computing power of contemporary digital computers. What we can say is that synthetic biology is becoming one of the most exciting areas of research, one that will provide amazing insights into the complexity of biological systems and may provide a path to revolutionize biotechnology.
Reference: Daniel R, Rubens JR, Sarpeshkar R, & Lu TK (2013). Synthetic analog computation in living cells. Nature 497 (7451), 619-23. PMID: 23676681
Monday, December 09, 2013
Google Zeitgeist: Annoying Philosophers, Weird Germans and White Pakistanis
by Jalees Rehman
The Autocomplete function of Google Search is both annoying and fascinating. When you start typing the first letters or words of your search into the Google search box, Autocomplete takes a guess at what you are looking for and "completes" the search phrase by offering you multiple query phrases. The queries offered by Autocomplete are "a reflection of the search activity of users and the content of web pages indexed by Google". Considering that more than five billion Google searches are conducted on an average day, the Autocomplete function has a huge database of search information to reference. This also means that the Autocomplete suggestions are quite dynamic and can vary over time. A popular new song lyric, the name of a viral video or a recent movie quote can catapult itself to the top of the Autocomplete suggestion list within a matter of hours or days if millions of users start searching for that specific phrase. Autocomplete may also take a user's browsing history or location into account, which explains why it may offer a varying set of suggestions to different users.
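For the technically curious, suggestions like these can also be fetched programmatically. The sketch below uses an unofficial Google suggestion endpoint that many third-party tools rely on; the URL and its parameters are my assumption rather than a documented API, and the endpoint may change, be rate-limited, or return different results than the search box in your browser.

import json
import urllib.parse
import urllib.request

def autocomplete(prefix):
    # unofficial, undocumented endpoint; response format may change at any time
    url = ("https://suggestqueries.google.com/complete/search?client=firefox&q="
           + urllib.parse.quote(prefix))
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8"))
    return data[1]   # the second element holds the list of suggested phrases

print(autocomplete("scientists are"))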
Autocomplete can be quite annoying because the suggested lists of queries are based on their web popularity and can thus consist of bizarre combinations which are not at all related to one's intended searches. On the other hand, Autocomplete is also a fascinating tool to provide a window into the Zeitgeist of web users, revealing what kinds of phrases are most commonly used on the web, and by inference, what contemporary ideas are currently associated with the entered keywords. The Google Zeitgeist website reveals the most widely searched terms to help identify cultural trends - based on the frequency of Google search engine queries - during any given year.
The United Nations Entity for Gender Equality and the Empowerment of Women (UN Women) recently used the Google Search Autocomplete function in an ad campaign to highlight the extent of misogyny on the web. Searching for "women should…" or "women need to…" was autocompleted to phrases such as "women should be slaves" or "women need to be put in their place". The fact that Autocomplete suggested these phrases means that probably hundreds of thousands of internet users have used these phrases in their search queries or on web pages indexed by Google – a reminder of how much gender injustice still exists in our world.
A recent article in Slate pointed towards another form of bias unveiled by Autocomplete: Occupational prejudice. The search phrase "scientists are…." was autocompleted to suggest that scientists were either liars, liberal or stupid. I tried it out and received similar suggestions by Autocomplete:
I guess we scientists have been upgraded from merely being stupid to being idiots. I was curious whether other professions fare better.
Well, apparently bankers do not.
And doctors are not only as stupid as scientists, they are also overpaid, arrogant and dangerous.
I can understand that doctors are thought to be overpaid, but it is a bit of a surprise that folks on the web think that professors are overpaid, especially considering the fact that many of them have spent a decade or more in postgraduate education before they become professors and still earn far less than non-academic colleagues in the private industry.
Philosophers, on the other hand, are not perceived as being stupid by the Google Zeitgeist. They are wise and annoying with a tinge of depression.
The next time you contact your editors, please remember that they are people, too.
The fact that Autocomplete suggests these phrases means that they are frequently used in searches and web pages but there is no way to know who is using them and what the intent is behind their usage.
What does the Google Zeitgeist tell us about people of different nationalities?
Germans are not seen in a very positive light, but the prejudices regarding Germans being rude, cold and weird should not come as a surprise to anyone who watches Hollywood movies which love to propagate such clichés.
Interestingly, search queries suggest that both Americans and Germans may come across as weird and rude.
Maybe the web collective feels that members of all nationalities are weird and rude – even the Canadians, who are also known to be nice even though they are afraid of the dark.
When I queried the characteristics of Pakistanis with the "Pakistanis are…." phrase, I was surprised to find that Autocomplete offered very different suggestions than those for Germans and North Americans. The latter were described by adjectives such as rude, weird, nice or cold – but when it came to Pakistanis, the search queries instead focused on their ethnic identity.
Are Pakistanis white or not white? Are they mostly Indians or do they have Arab origins? The odd thing is that I have had conversations around these questions with many Pakistanis, who often try to convince me that they indeed have "white" roots. Some Pakistanis I know – especially those who are proud of their fair skin color – frequently mention their possible Greek origins (dating back to the times of Alexander the Great and his invasion of the Indian subcontinent), while others emphasize the fact that the people who currently reside in Pakistan may have had Arab forefathers from the time the Arabs invaded the Indian subcontinent. On the other hand, I also know plenty of Pakistanis who see themselves as people with a primarily Indian heritage. The fact that this is a hotly debated topic among Pakistanis suggests that maybe the internet queries suggested by Autocomplete were in fact based on queries or web pages of Pakistanis who are interested in discussing this topic.
When it comes to Arabs, their ethnic identity is also apparently a popular topic in internet queries, and again my personal interactions with American Arabs mirror the Autocomplete suggestions. I have often heard American Arabs mention that they feel they ought to be accepted as part of the American "white" population ("Hello – I just received a phone call, Dr. Frantz Fanon is on hold for you on line 1.").
I first thought that perhaps the desire to identify oneself with being "white" was a remnant of one's colonial past, but my search for "Nigerians are…" did not support this hypothesis.
The Web seems to hold extremely positive views of Nigerians – smart, intelligent and educated.
Moving beyond searches for nationalities, what characteristics do web users associate with members of other groups?
Well, religions do not fare well.
Christianity and Islam are seen as evil and full of falsehood, and (oddly enough) may not even be religions.
In contrast, atheism is not labeled as evil. The suggested queries instead revolve around the question of whether or not atheism is a religion.
How about a cultural ideology?
Ok, Google Zeitgeist tells us that postmodernism is BS and dead.
The human emotion of Schadenfreude, on the other hand, is very much alive.
Autocomplete is not only a tool to identify biases and phrases used on the web; it has also become an inspiration for poets. The Google Poetics blog is run by Sampsa Nuotio and Raisa Omaheimo and collects Google poems, recognizing that Autocomplete suggestions sometimes contain a Dadaist beauty and are in essence prose poems. Inspired by their collection of Google poems, I sometimes enter words or verses from famous poems to generate Autocomplete's mutant versions of those famous verses:
Here is a Google Autocomplete poem based on "Do not go gentle into that good night" by Dylan Thomas:
Do not go
do not go where the path may lead
do not go gentle poem
do not go my love
Do not go beyond what is written
And one based on the line "Let us go then, you and I" from T.S. Eliot's ‘The Love Song of J. Alfred Prufrock':
let us entertain you
let us entertain you gift cards
let us play with your look
let us go then you and i
I would like to now close with a final ode to Google:
google is evil
google is god
google is your friend
google is down