Monday, April 25, 2016
Here is Waldo: Anonymity in the Age of Big Data
by Muhammad Aurangzeb Ahmad
The television series Person of Interest posits the existence of a machine that can monitor every person’s daily activities and then use this information to predict crimes before they happen. While such a system may be far off in the future, a system that can at least determine the identity of any person may not be. Anonymity used to be a private affair: if one wished to remain anonymous, all one had to do was lie low and limit one’s interactions with outsiders. It was easy to adopt pseudo-identities, and the nature of the internet facilitated this to an even greater extent. I should know, because I have been blogging as a Chinese Muslim for almost 10 years now. New waves of technologies aided by Big Data, however, are changing the nature of anonymity, with ever greater levels of sophistication needed to be truly anonymous.
Even in the ideal case where John Doe disengages from the digital world – does not own a smartphone, only carries cash, does not use any online services – others can still leak information about John: friends might put up pictures of him on social media platforms, post something about him on Facebook, geo-tag one another, and so on. Locating a person and determining their likes or dislikes really depends upon how much information their family and friends are leaking about them. In short, you are only as anonymous as your chattiest friend.
Even in cases where we think that we are not giving away any explicit information about ourselves, much can be inferred from the digital traces that we leave. The manner in which we shop online, respond to messages, play video games, etc. can reveal a lot about us even when we do not want to reveal anything. In our previous work we have observed that it is possible to predict a person’s gender, age, personality, marital status, and even political affiliation just by studying how they play video games. This is just the tip of the iceberg; a case in point is Target’s data analytics inferring that a teenage girl was pregnant even though she had hidden it from her parents.
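To make the idea concrete, here is a toy sketch of how such inference works – emphatically not our actual model; the feature names and numbers are invented for illustration. Even a classifier as crude as nearest-centroid can separate demographic groups once their behavioral traces differ:

```python
# Toy sketch: inferring a demographic attribute from behavioral traces
# with a nearest-centroid classifier. All features and data are invented.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict(x, centroids):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical per-player features:
# [sessions per week, avg session minutes, fraction of play after midnight]
training = {
    "under_25": [[9, 140, 0.45], [11, 160, 0.50], [8, 120, 0.40]],
    "over_25":  [[3,  45, 0.05], [4,  60, 0.10], [2,  30, 0.02]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

print(predict([10, 150, 0.48], centroids))  # → under_25
print(predict([3, 50, 0.08], centroids))    # → over_25
```

A real system would use far richer features and models, but the point stands: the player never states their age; the play patterns do.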
The main takeaway is that we always reveal something about ourselves, even when we think we are role-playing. In our current (unpublished) work we have even observed that it is possible to predict family relationships (parent, sibling, spouse, offspring, etc.) with a high degree of accuracy just by studying texting patterns, with no access to the content of the text messages.
Alternatively, let us consider the massive amounts of data that large corporations and major retailers like Walmart and Target are collecting about their customers. It is now quite easy to cheaply buy data about people from third-party sources, so that not only does one know what items a person is buying but also where they live, their age, gender, and household structure. While some organizations have policies in place that restrict them from collecting and using certain types of data without our consent, this self-imposed restriction does not hold for every organization. It is also true that most people do not have time to read through a 100-page EULA. Combine this with algorithms that can predict missing information about a person and one has a recipe for a system that can figure out what you are going to do next (within a particular domain) with a high level of accuracy.
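To illustrate how easily bought data can be stitched together, here is a minimal sketch – all customer ids, fields, and records are invented – of joining a retailer's purchase log to a hypothetical data-broker file and filling in a missing attribute from similar customers:

```python
# Illustrative sketch (invented data): join a purchase log with a
# third-party demographic file, then impute a missing field using the
# most common value among customers in the same zip code.
from collections import Counter

purchases = {
    "cust_1": ["diapers", "unscented lotion"],
    "cust_2": ["beer", "chips"],
}
# Hypothetical data-broker records keyed by the same customer id.
broker = {
    "cust_1": {"zip": "98101", "age": 27, "household": None},
    "cust_2": {"zip": "98101", "age": 31, "household": "single"},
    "cust_3": {"zip": "98101", "age": 29, "household": "family"},
}

def impute(records, key, field):
    """Fill a missing field with the mode among records sharing a zip."""
    target = records[key]
    peers = [r[field] for k, r in records.items()
             if k != key and r["zip"] == target["zip"]
             and r[field] is not None]
    if target[field] is None and peers:
        target[field] = Counter(peers).most_common(1)[0][0]
    return target

profile = impute(broker, "cust_1", "household")
print(profile["household"])  # a guess borrowed from cust_1's neighbors
```

The real pipelines are of course statistical rather than a simple mode, but the principle – your gaps filled in by people who resemble you – is the same.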
But what does this mean for us as individuals and for society as a whole? It will become increasingly easy to answer the question: Where is Waldo? Not only that, but one could even say: here is Waldo, along with the list of places he has been in the last 3 years, his eating habits, and his likely future purchases. Before we start chanting alarmist slogans about a dystopian post-privacy era, we should also look at the countervailing forces in the privacy debates. Large corporations have incentives not to violate their customers’ privacy, in order to maintain their customers’ trust. Apple’s stance of non-cooperation with the government on issues related to customer privacy is a case in point.
While one should be vigilant, one should not be alarmist: there was an uproar many years ago when Google announced that it would be adding a search feature to Gmail, and it turned out that all the privacy doomsday predictions were unfounded. Some amount of data collection is necessary to offer services like recommendations, whether for music, movies, food, etc. Algorithms can only be as good as the data that is fed to them. Thus, one should not rush to the conclusion that anonymity is over.
The flipside of the patterns extracted from Big Data is that they also give one a ready-made recipe for behaving in a certain way while remaining anonymous. Big Data also makes it easier to fake certain personality traits: even with very crude profile stuffing, Ashley Madison was able to lure thousands of men into buying memberships. This leads us to another type of risk to anonymity – data breaches. As the fallout from the Ashley Madison leak suggests, one’s indiscretions on the Internet have a way of following one into the offline world with a single torrent dump. More recently, a service has emerged that uses Tinder’s API to notify its paying customers if their partner is cheating on them. These cases should not be shocking or surprising – after all, information in the wild can rarely be tamed.
If today’s de-anonymization algorithms look impressive, the future is even more fascinating. Google’s deep learning system can already identify the location of almost any picture with a very high level of accuracy, Facebook’s facial recognition system can already beat humans, gait recognition algorithms can identify a person by the way they walk, recovering what was typed from the sound of typing is already old technology, and the list goes on. Each of these technologies is impressive in its own right, but taken together they have the hallmarks of a system that can deanonymize almost any person on the planet. If we think it bad enough that governments and large corporations have access to these types of technologies, wait till such systems become open source and accessible in the palm of your hand. It is certainly not the stuff of Singularity Sky, but it does open up vistas onto a brave new world for which most of us may not have the time to get ready.
Welcome To Alphaville
"The secret of my influence has always been
that it remained secret."
~ Salvador Dalí
Last month I looked at the short and ignominious career of @TayandYou, Microsoft's attempt to introduce an artificial intelligence agent to the spider's parlor otherwise known as Twitter. Hovering over this event is the larger question of how best to think about human-computer interaction. Drawing on the suggestion of computer scientist and entrepreneur Stephen Wolfram, I put forward the concept of 'purpose' as such a framework. So what was Tay's purpose? Ostensibly, it was to 'learn from humans'. But releasing an AI into the wild leads to unexpected consequences. In Tay's case, interacting with humans was so debilitating that not only could it not achieve its stated purpose, but neither could it achieve its real, unstated goal, which was to create a massive database of marketing preferences of the 18-24 demographic. (As a brief update, Microsoft relaunched Tay and it promptly went into a tailspin of spamming everyone, replying to itself, and other spasmodic behaviors more appropriate to a less-interesting version of Max Headroom).
People have been releasing programs into the digital wild for decades now. The most famous example from the earlier, pre-World Wide Web internet was the so-called Morris worm. In 1988, Robert Tappan Morris, then a graduate student at Cornell University, was trying to estimate the size of the Internet (it's more likely that he was bored). Morris's program would write itself into the operating system of a target computer using known vulnerabilities. It didn't do anything malicious, but it did take up valuable memory and processing power. Morris's code also included a check meant to limit replication: before installing itself, the worm asked whether a copy was already running on the target, but one time in seven it installed a new copy anyway, so infected machines steadily accumulated copies. More importantly, there was no command-and-control system in place. Once launched, the worm was completely autonomous, with no way to change its behavior. Within hours, the fledgling network of about 100,000 machines had nearly crashed, and it took several days of work for the affected institutions – mostly universities and research institutes – to figure out how to expunge the worm and undo the damage.
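The fatal design decision is easy to sketch. What follows is schematic Python, not Morris's actual code (which was C); it shows only the one-in-seven reinfection rule that caused copies to pile up:

```python
# Schematic sketch of the Morris worm's reinfection logic (not the real
# code). Even when a copy was already running on a target, the worm
# installed itself anyway roughly one time in seven.
import random

def should_install(already_infected, rng=random):
    if not already_infected:
        return True
    # One time in seven, install a fresh copy regardless.
    return rng.randrange(7) == 0

# Simulate repeated probes against an already-infected host.
rng = random.Random(42)
reinstalls = sum(should_install(True, rng) for _ in range(10_000))
print(reinstalls / 10_000)  # ≈ 1/7
```

The rule was intended to defeat hosts that faked infection to repel the worm, but with no upper bound and no kill switch, it guaranteed that busy hosts would drown in duplicate processes.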
This is a good example of how the frictionless nature of information technology serves to amplify both purpose and consequence. And the consequences of Morris's worm went far beyond slowing down the Internet for a few days. As Timothy Lee noted in the Washington Post on the occasion of the worm's 25th anniversary:
Before Morris unleashed his worm, the Internet was like a small town where people thought little of leaving their doors unlocked. Internet security was seen as a mostly theoretical problem, and software vendors treated security flaws as a low priority. The Morris worm destroyed that complacency.
This narrative of innocence lost has remained relevant to our experience with technology. Granted, the Internet was small and chummy back in 1988 – after all, the invention of the web browser was still about five years away – but the fact that 99 lines of code could launch an entire industry is worth contemplating. That is, until you realize that if it hadn't been Morris's 99 lines, it would have been someone else's. Now the internet is many orders of magnitude larger and more essential to our society, but I contend that the same dynamic of purpose and consequence remains at work. There is a clear lineage that can be drawn from Morris to Microsoft's Tay. We expect one thing to happen, and while that thing may indeed come to pass, a whole lot of other things also come into play.
This brings me to another recent development in AI that's somewhat more serious than Tay, namely the emergence of AlphaGo, an artificial intelligence schooled in the ancient Chinese strategy game Go. As has been widely reported, AlphaGo beat the world #1, Lee Se-dol, by a decisive margin of four games to one in South Korea. AlphaGo accomplished this through an extensive training regimen that included playing another version of itself several million times (The Verge extensively covered the series here).
In the case of AlphaGo, the purpose seems clear: win at Go – which it did, and handily. But we don't get the deeper context, or, in the parlance of clickbait titles, the "You won't believe what happens next". This is partly the fault of the way the mainstream media constructs its reporting today: another opportunity to crow about how machines will soon overtake us, and then on to the next shiny object that commands the news cycle's attention. In fact, AlphaGo is but a step in a long, iterative process begun decades ago by DeepMind's founder and CEO, Demis Hassabis. He lays it all out quite clearly in this lecture at the British Museum.
The larger purpose of this process, of which AlphaGo is merely a symptom, is, in Hassabis's own words, "to solve intelligence, and then use that to solve everything else". Obviously we could spend quite a bit of time unpacking what he means by any of the key terms in that mission statement: What is intelligence? How do you know when you've solved it? What is everything else, and who gets to decide that? Seen within this larger context, the idea of an AI winning at Go goes from one of the holy grails to a digital cairn, marking an event on the way to something much greater, and more ambiguous.
As an example, consider Watson, IBM's Jeopardy-winning juggernaut. Perhaps because Jeopardy is a game that seems intrinsically more human, the impact on our popular consciousness was more substantial than that of AlphaGo's feat. But what is Watson doing today? Is it, to borrow a classic dig, "currently residing in the ‘where are they now' file"? Not at all. Watson is an active revenue stream for IBM, although exactly how much is unknown, since the actual numbers are, for the time being, rolled up into the company's larger Cognitive Solutions division. Watson's involvement is remarkably eclectic, including "helping doctors improve cancer treatment at Memorial Sloan Kettering and employers analyze workplace injury reports." Watson is also looking forward to providing insight into case law. And all this is in addition to applying its talents to the kitchen.
What else is Watson up to? Going back to Stephen Wolfram's discussion of AI that I referenced last month, I was struck by his vague disinterest in certain applications. For example, he says
I was thinking the number one application was going to be customer service. While that's a great application, in terms of my favorite way to spend my life, that isn't particularly high up on the list. Customer service is precisely one of these places where you're trying to interface, to have a conversational thing happen. What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation. As a toy, one could make a little chat bot that people could chat with.
This is, in fact, exactly one of the businesses that Watson is in. Any sufficiently open-minded entrepreneur could rattle off a dozen opportunities where he or she could really use a conversant machine intelligence. And the larger the scale, the greater the opportunity. Just as Tay could talk to millions of millennials, Watson can talk to millions of customers. Meet IBM Watson Engagement Advisor, which is replacing entire call centers as we speak.
Moreover, Watson is not just a disembodied voice on the other end of a phone line. One of the great lines of technological convergence we have already begun to witness is the unification of AI with robotics. And this crosses AI over into embodiment, which is another ball game entirely. Witness this exchange between a Pepper robot, plugged into Watson, and a bank customer. (Obviously, this is a promotional video, but I am slightly disoriented by the fact that IBM is hip enough to be using words like ‘bummer' when describing the risks of an adjustable-rate mortgage.) It is not difficult to imagine thousands of these robots, with their aw-shucks attitude, all connected to a central AI that is constantly learning and refining itself based on inputs provided by humans. In fact, this is not some Alpha-60-style speculation; this is already happening.
These examples illustrate the big takeaway concerning how Watson is being deployed. Watson is no sacred cow. IBM views it as a utility that other aspects of its business can and should leverage, hence the fact that Watson is being used not only in its Cognitive Solutions division, but also in the much larger Global Business Solutions division. The general application of AI is exactly that: general, and the more general the better. IBM's managers and executives would much rather have a tool, or suite of tools, that they can apply promiscuously to any market opportunity that presents itself.
There is no reason to think that Google, which owns AlphaGo, will approach its further development any differently. This is especially true if we are to take CEO Demis Hassabis's words seriously: "to solve intelligence, and then use that to solve everything else". But as the ongoing integration of Watson into a business context shows us, ‘everything else' is really a proxy phrase for ‘everything where the money is'. I'll hasten to add that there is nothing inherently objectionable about this, but the fact is that there is no guaranteed nobility in the future of these technologies, either. They will be used to chase profits wherever they may be found. This is the dilution, the ambiguation of purpose. In a very definite sense, we approach what Foucault was trying to teach us about power: its diffuse nature, its functioning at a remove.
Finally, an argument has been made in some quarters that all this AI stuff is really going to be fine, since what we are really after is not artificial intelligence per se, but augmented intelligence. On the surface, the difference is promising, since it perpetuates the idea that machines will continue to be our servants, helping us see the world in new and different ways, enriching our experience of the things that motivate us in the first place. But the question that I have for these optimists is simple: Who gets to be that person?
For example, Garry Kasparov, the chess champion whose 1997 defeat at the hands of IBM's Deep Blue heralded the beginning of the current era of man versus machine, proceeded to incorporate play against chess computers as an essential part of his training regimen. In fact, it was this additional training that was a factor in his ability to maintain his dominance of the chess world for many years.
Likewise, Fan Hui, the European Go champion who was defeated by AlphaGo in the run-up to the matches against Lee Se-dol, joined the AlphaGo team as an advisor, once again lending resonance to the old saw "if you can't beat 'em, join 'em". As a recent Wired article noted:
As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn't see before. And that makes him happy. "So beautiful," he says. "So beautiful."
Kasparov and Fan are rare birds, however, with the expertise and fame that provided them with the opportunity to attach themselves, lamprey-like, to the fast-swimming phenomenon that machine intelligence is becoming. But what about ordinary people – perhaps someone who recently lost their job to automation instigated by the same AI? Will they really have the opportunity to engage it in a didactic or even pleasurable capacity? Or will they be too busy job hunting to care? To quote Godard's all-powerful computer in 'Alphaville', "All is linked, all is consequence".
Monday, March 28, 2016
"She was Dolores on the dotted line."
Artificial intelligence – or rather the phenomena that are being shoved under the ever-widening rubric of AI – has had an interesting few weeks. On the one hand, Google's DeepMind division staged a veritable coup when its AlphaGo AI soundly thrashed the world #1 Go player Lee Se-dol in the venerated Chinese strategy game, four games to one. This has been widely covered, and with justification. Experts will be poring over these games for years, and AlphaGo's unorthodox gameplay is already changing the way top practitioners of the game view strategy. It is particularly noteworthy that Fan Hui, the European Go champion who went down 5-0 to AlphaGo in January, has since then joined the DeepMind team as an advisor and played AlphaGo often. This is not a Chris Christie-style capitulation, but rather an understandable fascination with a style of play that has been described as unearthly. It's no exaggeration to say that the history of the game can now be clearly divided into pre- and post-AlphaGo eras.
Which isn't to say that this shellacking has beaten humanity into quiescence. Earlier this week, we exacted some sort of revenge by appropriating Microsoft's latest entry into social AI, the Twitter bot @TayandYou, and transforming it into "a racist, sexist, trutherist, genocidal maniac". If we were to consider @TayandYou and AlphaGo to be birds of a feather, which is of course sloppy thinking of the highest (lowest? most average?) order, that would be a small consolation indeed, and not much different from stamping on an ant after you just got mauled by a bear, and still feeling good about it. But comparing @TayandYou and AlphaGo does lead to some useful insights, because one of the principal issues confronting the field of AI is the idea of purpose. This month, I'll look at the case of @TayandYou, and follow up with AlphaGo in April, since come April no one will remember @TayandYou, whereas with AlphaGo there's at least a chance.
Now, this idea of AIs lacking a purpose may seem like a daft claim. After all, the software in question was created by teams of computer scientists backed by wealthy corporations (artificial intelligence is the sport and pastime of what passes for kings these days). And in the popular consciousness AIs are implacably possessed of purpose, usually to the detriment of the human species. There seems to be little chance that there could be any ambiguity about such a basic question. Still, the extraordinary flameout of @TayandYou raises the question of what, precisely, any specific AI is for. And what was really at stake with @TayandYou will, I think, be very surprising.
In a long and somewhat rambling interview on Edge, Stephen Wolfram recently asked precisely this. Wolfram, a long-time pioneer and creator of platforms such as Mathematica and Alpha, considers our rapidly diminishing claims on uniqueness as a species. What really makes us different from the rest of the world, whether it's other forms of life, or even inanimate objects? For him, the boundaries of computation and intelligence have become decidedly murkier over the years. There are fewer and fewer signposts that seem to distinguish one from the other, let alone mark the transition from one state to another. So he puts a stake in the ground by positing that humans are good for at least one thing: the ability to assign ourselves a goal or a purpose.
Wolfram extends this goal-seeking behavior to our tools – after all, we build tools in order to accomplish a task more easily. And digital tools are certainly part of this tradition. So in order for us to make sense of artificial intelligence in particular, and software generally, we must be able to formulate what it is that we want it to achieve, and then we must figure out how to communicate that goal. Closing the gap on this latter act is key to how Wolfram sees the evolution of software, and underpins his notion of ‘symbolic computation': the idea that if we are to become effective communicators with our machine counterparts, we will require some sort of high-level language that will facilitate the imposition of goals on our tools in a way that is accurate, legible and reproducible. But as computing branches out from the strictly quantitative realm of numbers and mathematical operations on those numbers, and into the more qualitative realm of language, image and sound, the nature of our expectations – and therefore our interactions – will necessarily broaden and become more ambiguous.
In 1950 Alan Turing provided one answer to what "purpose" might look like for software. The Turing Test (which I've written about previously) is passed when a human cannot tell whether her interlocutor is a computer or another human. Here the purpose of the software is to become indistinguishable from the human. Much dissatisfaction has been registered over the years about the utility of this test. For my part, I don't think the test is nearly broad enough: the idea that we are successful when we have managed to create something so perfectly in our own image is limiting to what technology could be doing, and perhaps too uncritical of what technology should be doing. But if the Turing Test is our signpost, where does that lead us? As Wolfram notes:
You had asked about what…the modern analog of Turing tests would be. There's being able to have the conversational bot, which is Turing's idea. That's definitely still out there. That one hasn't been solved yet. It will be solved. The only question is what's the application for which it is solved?
For a long time, I have been asking why do we care…because I was thinking the number one application was going to be customer service. While that's a great application, in terms of my favorite way to spend my life, that isn't particularly high up on the list. Customer service is precisely one of these places where you're trying to interface, to have a conversational thing happen.
What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation. As a toy, one could make a little chat bot that people could chat with. That will be the next thing. We can see the current round of deep learning, particularly, recurrent neural networks, make pretty good models of human speech and human writing. It's pretty easy to type in, say, "How are you feeling today?" and it knows that most of the time when somebody asks this that this is the type of response you give.
Just as human-robot interaction suffers from the phenomenon of the Uncanny Valley, where a robot can be mistrusted or rejected by a human for seeming just not human enough (as opposed to totally human, or totally inhuman), human-AI interactions seem to fall into the same trap. You might call it the ‘valley of meh', where an interaction with a piece of software begins hopefully, but rapidly degenerates into mediocrity and boredom.
This was precisely where Microsoft's @TayandYou found itself. Except, to its great misfortune, it happened to be "learning" from the Twitter ecosystem. Now, Twitter is a platform that, whether due to design or fate or some unholy combination thereof, detects weakness, indecision, or just plain niceness faster and pounces more brutally than almost any other place on the Internet. And this was exactly what happened. @TayandYou was like the new kid who shows up on the first day of school and just gets pounded at recess, to the point where the parents have no real choice other than to take him out of class entirely.
All along, it was unclear what @TayandYou was doing there in the first place. To continue with the schoolyard analogy, any new arrival who comes up to an established group and says "Hey, I wanna be just like you! Let's play!" is just asking for it. Moreover, Microsoft's researchers proffered some anodyne tagline that @TayandYou was there to learn from humans, and that the more humans interacted with it the smarter it would get, as if interacting with humans ever helped another species become anything other than a museum exhibit. In any case, the crazed weasel pit that is Twitter ensured that @TayandYou would not evolve into some digital successor to K-Pax.
Now, as I've already noted, bots on Twitter are nothing new, and some of them are quite interesting and clever. So it was with interest that I read a counterpoint by Sarah Jeong, writing for Vice's rather likeable Motherboard section, in which she interviewed members of this "bot-writing" community. From the developers interviewed, it seems evident that there is an emerging ethical practice intended to make the bots broadly acceptable. One of the developers, Darius Kazemi, has even provided an open-source service that maintains a constantly updated vocabulary blacklist. Obviously we can debate the implications for censorship and political correctness, but if the counterexample is @TayandYou's tweet supporting genocide, etc., I'm pretty willing to give the blacklist a shot. Also, it's Twitter, for heaven's sake.
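The mechanics of such a blacklist are simple to sketch. (Kazemi's actual wordfilter library is more thorough; the placeholder terms below stand in for the real list.)

```python
# Minimal sketch of a vocabulary blacklist for a Twitter bot.
# The terms are placeholders, not the real blocked words.
import re

BLACKLIST = {"slur1", "slur2"}

def is_blocked(text, blacklist=BLACKLIST):
    """True if any blacklisted word appears in the candidate tweet,
    matching whole words, case-insensitively."""
    words = re.findall(r"[\w']+", text.lower())
    return any(w in blacklist for w in words)

def safe_to_post(candidate):
    # A bot would call this on generated text before tweeting it.
    return not is_blocked(candidate)

print(safe_to_post("hello world"))     # → True
print(safe_to_post("that's a SLUR1"))  # → False
```

Crude, yes – a blacklist catches nothing it hasn't been told about – but as a floor of decency for a non-learning bot, it works, which is more than could be said for Tay.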
There is another important lesson here, which concerns the aforementioned ‘valley of meh'. Jeong quotes Kazemi as saying that "I actually take great care to make my bots seem as inhuman and alien as possible. If a very simple bot that doesn't seem very human says something really bad—I still take responsibility for that—but it doesn't hurt as much to the person on the receiving end as it would if it were a humanoid robot of some kind." While this might strike some as achieving nearly Portlandia-like levels of sensitivity, it nevertheless points to a distinctly post-Turing Test world, where interactions occur with a diversity of entities. Not every bot needs to pretend like it's human, and we are hopefully adult enough that we can tell the difference, and choose the right entity for the right interaction. I hope.
This is where most commentaries around the whole @TayandYou fiasco end, since the bot's tweets are generally sufficient to satisfy our craving for scandal. However, it never hurts to follow the links, and @TayandYou has a very interesting About page. I recommend you put on sunglasses before clicking the link, as the screaming orange background of the web page seems designed to prevent you from reading any of the text. For your benefit, I reproduce the salient bits below:
Tay is targeted at 18 to 24 year old [sic] in the US.
Tay may use the data that you provide to search on your behalf. Tay may also use information you share with her to create a simple profile to personalize your experience. Data and conversations you provide to Tay are anonymized and may be retained for up to one year to help improve the service.
Q: Who is Tay for?
A: Tay is targeted at 18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US.
Q: What does Tay track about me in my profile?
A: If a user wants to share with Tay, we will track a user's:
Q: How can I delete my profile?
A: Please submit a request via our contact form on tay.ai with your username and associated platform.
Q: How was Tay created?
A: Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that's been anonymized is Tay's primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
So, this business of not knowing what purpose to put to an AI – perhaps I should take it all back. Apparently, Microsoft is really quite interested in learning more about a particular demographic, to the point where they would very much like to know what your favorite food is. Especially telling is the bit about having to fill out a form in order to cancel a profile to whose automatic creation one had already agreed. Also, the fact that the user has to specify the ‘associated platform' implies that @TayandYou, or the technology behind it, is present on platforms other than Twitter.
To go back to something Wolfram said: "What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation." Like most commentators when it comes to networked human-computer interaction, Wolfram does not recognize the value in aggregating data at scale. Because @TayandYou is just that: another vacuum cleaner for data. But while people really don't need anything too clever to hand over their information, the idea of using an AI that can interact with hundreds of thousands, if not millions of people, to come to better understand what they ‘like' – well, that is pure genius. It's like Humbert Humbert hanging out a honey pot for a million Lolitas.
Of course, there were probably some valuable pure learnings to be had around natural language processing, etc., had @TayandYou discharged its duties successfully, but this is small beer compared to arriving at a fine-grained understanding of the next major consumer group in the United States. I doubt very much that the trolls' actions were predicated on this understanding, but viewed in this light, perhaps they have done us a favor by sniffing out the weakness of @TayandYou and meting out a solid thrashing.
Monday, February 29, 2016
The Penal Colony
“Facts all come with points of view/
Facts don't do what I want them to.”
~ Talking Heads
What is it with Silicon Valley and the “disruption” of education? Is it just another sector of public life that is moribund and therefore in need of a serious intervention, as if it were ‘that friend’ who used to be fun and successful but is now just depressed and drinking too much? Or do Silicon Valley types have a chip on their shoulder – perhaps they were forced to sit through one too many pointless lectures on Kant or Amazonian tribes or feminist critiques of Florentine art, and now that they’re calling the shots they’re going to fix this giant mess that’s called higher education once and for all? (Trigger warning: the only people mentioned in this post are venture capitalists).
In any case, into the ever-narrowing sweepstakes of who can make the absolutely dumbest assertions about the value of education steps Vinod Khosla, elder statesman and patron saint of tech bros in Silicon Valley and beyond. Khosla, a fabulously successful venture capitalist, has waded into the education wars with a broadside so breathtaking in its myopia that you would be forgiven for thinking that it was lifted from the satirical pages of The Onion. But before getting into Khosla’s piece, let’s set the stage with a look at a fellow-disruptor’s contribution to the debate.
Libertarian investor Peter Thiel, also fabulously successful, has put forward $100,000 fellowships for “young people who want to build new things instead of sitting in a classroom”. Thiel’s mission is to pluck potential John Galts out of the stream of college-bound lemmings and give them the latitude to realize their entrepreneurial potential. He believes that college, as it is currently constituted, leads to stagnant thinking and a narrowing of one’s horizons and potential. Which is odd, considering that most people go to college to have exactly the opposite experience. Be that as it may, anyone under the age of 22 is welcome to apply, which is a fairly dramatic, late-capitalist re-write of the countercultural edict to “not trust anyone over 30.”
I actually don’t have much of a problem with this, because Thiel is not trying to rewire the university system. He is providing more options for a vanishingly small group of people (104 so far since the fellowship’s 2010 inception), and I’ve always been convinced that college – or more specifically, a liberal arts education – is not for everyone. It never has been, and it never will be. That’s not to say that it shouldn’t be available for anyone who wants it. But it is a prime example of overreach when the system screws into people’s heads that “everyone needs a college degree” and that subsequently people waste their money getting a BA in communications, whatever that is. There are certainly people who don’t need to go to college, and I like the fact that Thiel is providing more options, not fewer.
Compare this fairly surgical intervention with the opening klaxon of Khosla’s essay: “If luck favors the prepared mind, as Louis Pasteur is credited with saying, we’re in danger of becoming a very unlucky nation. Little of the material taught in Liberal Arts programs today is relevant to the future.” If there’s one thing I like about Silicon Valley types, it’s that they never leave you to wonder what they’re thinking. Unfortunately, further reading may give rise to the concern of whether they are thinking at all.
Now I could be pedantic and, in a classically vindictive fashion that we liberal arts types allegedly enjoy, just grab an editor’s red pen and start marking up his essay, eg: ‘Doesn’t luck just happen, regardless of whether you are prepared? So how does a lack of preparation make one less lucky? Pasteur was referring to “the fields of observation” in his quote. How does that change the quote’s meaning? Also, passive voice’. But I will leave such pedantry aside. It’s clear that Khosla’s beef is with the system itself, which is in need of some serious re-jiggering. So let’s move past the opening gambit and go to the second sentence – “Little of the material taught in Liberal Arts programs today is relevant to the future”.
Like what? Literature and history, for example. History especially is for chumps:
Furthermore, certain humanities disciplines such as literature and history should become optional subjects, in much the same way that physics is today (and, of course, I advocate mandatory basic physics study along with the other sciences). And one needs the ability to think through many, if not most, of the social issues we face (which the softer liberal arts subjects ill-prepare one for in my view)…I’d like to teach people how to understand history but not to spend time getting the knowledge of history, which can be done after graduation.
Now, I’m not going to meet Khosla’s arguments head on. I’m sure more qualified, more eloquent people have already done so. What I’m more interested in looking at are the consequences of this kind of thinking, or of what emerges when there is a collective bubble of this kind of thinking going on.
A pretty good example of the fruits of an ahistorical worldview happened right about the time Khosla’s essay bubbled up to the surface. Marc Andreessen, inventor of the first truly successful web browser and once-scrappy underdog who fought Microsoft (and lost, forever enshrining his scrappiness), has since also become a very successful tech investor. In fact, as an investor in and board member of Facebook, he’s really no longer much of an underdog at all. So when Free Basics, Facebook’s initiative to bring free Internet access to India, was blocked, Andreessen tweeted in frustration "Anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?"
Oh, dear. Andreessen deleted the tweet and issued an apology, and even received a rebuke from Mark Zuckerberg himself, but the Internet went nuts anyway. It wasn’t hard to spin out an analysis positing how what Facebook was doing in India with Free Basics was textbook colonialism. I think there is a fair amount of justification here, and no critic in his or her right mind would fail to take advantage of such a gorgeous faux pas as the one Andreessen served up. But let’s keep things simple.
It’s all well and good to look at Andreessen’s quote as emblematic or symptomatic of a larger system of power or encroachment – after all, that’s what good liberal arts thinking does (cough). What leads a person to write that in the first place? I mean, how do you – and I am being generous here – confuse ‘colonialism’ with ‘anti-colonialism’? And even if you were to substitute one for the other, the comment still doesn’t make sense, except in some uber-sarcastic manner. Maybe he meant ‘capitalism’, as in: “Anti-capitalism has been economically catastrophic for the Indian people for decades. Why stop now?” This would demonstrate some familiarity with Indian history, at least during a few decades of the 20th century. But it still displays a fairly shocking ignorance of the country that India is today, and has been for a while.
Part of the elegance of any analysis is knowing when to stop, and the older I get the more I favor brevity. So I will say this: Andreessen wrote what he did because he is ignorant. He is ignorant of the world around him, and we can go find the root of this steadfast ignorance in Khosla’s exhortation that history is something to learn on your own time. Except when your temper tantrum exposes your ignorance of history, and for a brief moment we all get to wonder, “Who the hell is this guy, and how did he get to such a powerful place in society?” And, fortunately or unfortunately, that’s all there is to it.
But the rot goes deeper still. Here’s a much better example.
A few years ago I had the opportunity to judge a few business plan competitions. This is actually more interesting than it sounds. Business plans, after all, are a form of literature, or at least a form of text. And like any text, one learns to read the genre for the hopes and fears of its authors. The hopes are writ large: products and services that promise to transform markets and better the lives of millions. The fears are smaller and require a bit more experience to ferret out, as they usually take the form of the financial assumptions that constitute an essential part of any business plan. But what one gets exceptionally sensitized to is the way a plan defines a problem space. Because the way one thinks about the problem has great bearing on the proposed solution. In fact, most business plans fail – both as real plans and as closely reasoned arguments – because the authors failed to think deeply enough about the problem.
I was reminded of these business plans when a friend forwarded me an article on the disruption of prisons (in response to my most recent 3QD piece, on how technology will come to service various sectors of society that we’d rather not spend time on). Much like Khosla’s piece, this article at first seems like a parody. Encouragingly entitled “How Soylent and Oculus Could Fix The Prison System”, it is nothing less than the reductio ad absurdum of “solving the problem” of prison. For example, prison violence is solved by virtual reality:
By equipping every inmate with an Oculus Rift headset in his or her own cell, you could isolate prisoners from violence without isolating them from people. Put all the prisoners inside Second Life, Prison Edition, give them all a headset, and let them build virtual characters. You could design an awesome [sic] system for rehabilitation, give access to e-learning tools, Kindle books, Minecraft and other digital tools for creativity (prison is boring), psychologist sessions (the psychologist could log in remotely from anywhere in the world), and even handle all correspondence and prison visits from relatives and friends electronically.
As the author enthuses, “What this eliminates: prison yards, prison libraries, packages and letters secretly containing drugs or shanks.” Inside this carceral version of Second Life, gamification would teach prisoners to be better citizens (think: badges!). Helpfully, “a huge benefit is we could track everything that prisoners do.” Once you’ve made your way through the whole post – which is written with the utmost sincerity, as it includes cost breakdowns for everything – you’ll consider Khosla to be a thinker of profound subtlety.
Because when you leave prison, the years or decades spent in a virtual reality simulation will equip you just fine for living in the real world. The author’s concern is actually with creating a smooth, hassle-free and economical prison stay. People fight? Ok, don’t let them interact. Food is expensive? Feed them Soylent. Problem solved. It’s almost as if the airlines hit upon their final solution for air travel – just put everyone under general anesthesia from check-in until baggage claim (actually I have been hoping for this for some time). There is really no concern with what people actually do, whether it’s in prison or outside it. And understanding why people wind up in prison, well that would require history. In business plan parlance, this would be dismissed as “out of scope”.
Now, if this had been a business plan submitted to me in competition, the first question for the author would have been, “What’s the real problem here? Is it that prison is expensive, or is it that people keep returning to prison?” Understanding the problem determines the contours of the solution. And if we agree that the purpose of doing prison differently is to lessen recidivism rates, then we have to ask ourselves, how do we prepare people to not come back into the system? I somehow doubt that teaching them to be really good at some dumbed-down version of Second Life is going to help them there.
I suspect the answer is closer to providing some kind of socialization and support structure that is radically different from the structures that landed the inmates there in the first place. Interestingly enough, and just to prove that I’m not some monomaniacally judgmental person, Chris Redlitz, another Bay Area venture capitalist, has been taking the opposite tack: five years ago he founded The Last Mile, which started as a business and entrepreneurship program taught within the confines of San Quentin State Prison, and has since diversified into teaching inmates computer programming skills as well. It is the first program in the nation to do so, and so far none of its graduates have been reincarcerated.
Now, just as not everyone should go out and get a liberal arts degree, I’m sure that not every inmate who goes through the program is cut out to be an entrepreneur or a coder. But that is not really the point. The point is to offer the inmates a different social structure, a viable way of being in the world that was likely not open to them before. And this requires hard work, teaching, and human contact. It creates risk and uncertainty, which is something that the previous, ‘virtual reality’ model seeks to eliminate entirely. In fact, it's kind of like the process of getting a liberal arts education. Huh!
So I am curious: if these two ideas were to be presented to Khosla as competing business plans, which one would he fund? Because while Khosla might maintain that “it’s not that history or Kafka are not important…” I would say that the mettle it takes to come up with an understanding of the problem, and any possible solution, is only possible if you have read history, and especially if you have read Kafka. Otherwise, we create a society where Soylent and Oculus VR will be good enough, and probably not just for prisoners, either.
Monday, February 01, 2016
"No sooner does man discover intelligence
than he tries to involve it in his own stupidity."
~ Jacques Yves Cousteau
Over the course of my last few posts I have been groping towards some kind of meeting point between, on the one hand, the current wave of information technologies, as represented by artificial intelligence (AI), social media and robotics; and on the other, what might be termed, for the sake of brevity, the social condition. The thought experiment is hardly virtual, and is in fact unfolding before us in real time, but as I have been considering the issues at stake, there are significant blind spots that will demand elaboration by many commentators in the years and decades to come. Assuming that, as Marc Andreessen put it, software (and the physical objects in which it is increasingly becoming embodied) will continue to "eat the world", how can we expect these technological goods to be distributed across society?
It's actually kind of difficult to envision this as even being a problem in the first place. It's true that, up until the first years of this century, there was some discussion of the so-called ‘digital divide', where certain segments of the population would not be able to get onto the ‘Internet superhighway' (another term that has fallen into disuse, perhaps because it feels like we never get out of our cars anymore). These were the segments of society that were already disadvantaged in some respect, where circumstances of poverty and/or geography prevented the delivery of physical and therefore digital services. To a lesser extent, those on the wrong side of the divide may also have landed there because of language proficiency or age.
The digital divide hasn't really gone away; it's just been smoothed over by the fact that access has increased dramatically over the last 15 years. But according to the most recent Pew Research Center survey, the disparities still exist, and in exactly the places in which you would expect them: only 30% of Americans 65 or older have a smartphone; 58.2% of Native American households use the Internet; 68% of those who didn't graduate from high school are online; and less than half of households making less than $25,000/year are accessing the Internet. In contrast, the top two or three segments in each of these metrics have adoption rates somewhere in the mid- to high 90-percent range.
Still, it's worth noting that in recent years, the main battles around Internet access have not been fought over primary access, but rather over the notion of ‘network neutrality', the idea that the delivery of one type of content should not be privileged over that of any other. Regardless of who is on what side, it's clear that the people with skin in this game are already wired up. Even more interestingly, following the Edward Snowden NSA leaks, the other main battle has been over the curtailing of government-sanctioned surveillance, which implies that there is perhaps just a little too much connection going on. (It's true that the digital divide conversation is still quite vibrant in the developing world, but even as Internet and mobile penetration increase everywhere, I'll venture that the same sort of lumpiness will abide.)
Consider for a moment the population characteristics used by the Pew survey: education, income, age, ethnicity, geography. (Curiously, gender is not discussed.) These are time-honored sociological categories that have been used by policy-makers and scholars to come to a more finely grained understanding of what our society looks like. The whole point of the US Census asking these sorts of questions is to help the government figure out how to spread around hundreds of billions of dollars of development money. But something interesting has happened as the years have advanced and ‘digital divide' has fallen out of usage: the categories themselves are disappearing from the discourse.
Instead, what is being talked about is ‘users'. There is no one other than the user: anyone who secures access to the Internet is reincarnated into one monolithic and anodyne group. And if there is only one group, there are in fact no groups at all. We are all fish in the same water. To be fair, this usage was always hard-wired into software development, it's just that software development has had the misfortune to find itself with such enormous purchase on our lives. But as a professor of mine was fond of remarking in graduate school, there are only two professions that call their clients ‘users': drug dealers and software engineers. I mean, even madams refer to their interested parties as ‘clients'.
This gap only becomes more apparent when you start paying attention to how we are talked to about technology. The basic Silicon Valley line is something like this: Each user (or group of users) has a problem, usually with an old industry that's in need of disruption. As a result, said user is just primed for some service or product, usually in the form of an app, that will unlock the value of a currently moribund market, or establish an entirely new one. If I were genuinely careful, I would corral every noun in the preceding sentence with quotation marks, since there are enough assumptions keeping this sentence duct-taped together that I almost want to stop writing and go take a shower. But what is relevant to our current discussion is that the ‘user' is what makes Silicon Valley pay attention, whether these are people who pay in hard currency, or in the currency of their own information. On the Internet, no one cares if you're a dog, as long as you're a dog with a profile that could be of use to some marketer. And if you're a rural Native American over the age of 65 with less than a high school education, then you're not on anyone's radar to begin with.
In a sense, we shouldn't be at all surprised that this has taken place. It's merely the latest extension of our post-Enlightenment condition. Whereas the categories I mention above take it as a given that we are dealing with aspects of the social, the Enlightenment, or at least as it has been handed down to us, is about the individual. The user is merely the next logical manifestation of this, the individual. Furthermore, the ersatz grouping of users into markets accomplishes nothing whatsoever in helping us understand the social, since markets are fickle, transaction-bounded entities, which individuals enter and exit with few obligations, let alone knowledge of one another.
This suits the creators of technology just fine. I don't mean this in a malicious sense. This isn't about persuading a group of voters that they have no common cause, or breaking the institutions that were responsible for collective bargaining for much of the last century. It's a much subtler set-up. Once the discourse is revised downwards to only accommodate descriptions of individuals and markets, the conversations that describe the social conditions upon which technology comes to rest also become scarce. Soon enough, our very capacity to discuss these phenomena is diminished, and what we cannot talk about we must pass over in silence.
Actually, those categories are still with us in two senses, but in both cases they are submerged. The first is on the side of the technologies themselves: thanks to massive databases of user information and the algorithmic tools that parse them, companies can slice and dice the users of their services and products into ever finer and more accurate groups. In this unregulated twilight zone there is an entire industry dedicated to being always right in these matters. Thus the aspects of the social take on the narrowed importance of a means to an end. Of course, the other sense in which these categories still abide is reality itself. As much as it congratulates itself on being the great leveler, technology is just as adept at accentuating and exacerbating difference.
Let's take one of the more obvious differentiators: wealth. The wealthy are the early adopters – they are the ones who can afford the technologies as they first ascend into prominence, whether we are talking about iPhones or bicycles. There is a period of ascendancy, as the use of a technology seeps into an already extant network, and the further network effects allow that social group to internally reinforce its bonds or perhaps further enrich itself. The technology becomes vital for the overt use of a group's members, as well as a sign by which the group differentiates itself from those outside it – that is, those people who lack such access, for whatever reason.
Facebook went from an exclusive social network to something as general and inclusive as a telephone. This of course does not mean that everyone has access to Facebook, just as not everyone has access to a telephone. For its part, Facebook has had to contend with the consequences of its ubiquity, as teens and young adults flock to other platforms, such as Instagram and SnapChat, where they feel like they can preserve some of the integrity of their groups. For their part, the rich have been setting up their own social networks since at least 2007. Of course, this being Silicon Valley, even the wealthy are constantly at risk of getting disrupted. Relationship Science has built its business model on facilitating connections to the wealthy, celebrities and various and sundry movers and shakers, assuming you can fork over the $3,000 annual fee. As journalist Greg Lindsay dubs it, Rel-Sci is a LinkedIn for the 1%.
However, there is a tipping point at which a technology ceases to provide a sizable return on investment, or exclusivity. Consider what wealthy people seek out when it comes to services; that would be other people. A very specific sort of other people, who are well-trained and discreet. The doorman of a Park Avenue co-op, the hotel concierge or the maître d' of a favorite restaurant are just as capable of receiving packages and making recommendations as they are turning a blind eye when it's so desired. Drivers, cooks, au pairs – you could populate a Richard Scarry children's book with all the people who help the wealthy live their lives as frictionlessly as possible.
I think that this tendency points out one of the great misconceptions concerning the progression of software and robotics. As the cost of these innovations declines and their presence spreads, we are better off asking, who is the most likely to be enwoven into these technologies? And by ‘who' I mean ‘what groups'?
Much attention has been paid to the effects of automation on employment, and rightly so. Partly because this is something tangible – we can measure jobs lost – and partly because it speaks to our grandiose fears of apocalypse-by-automation (the current specter is the loss of 3.5 million trucking jobs to driverless cars). But there is also a flip-side. Once innovative products and services are adopted by and assimilated into the lifestyles of the wealthy, or educated, or urban, those technologies will continue to spread. After all, capitalism dictates that a firm must continue growing and capturing market share.
It's not like privileged groups have grown out of using phones. But as an example, consider what we expect when we use our phones. Voice recognition technology has progressed to the point where it's not unusual to conduct entire transactions with a software system. This is especially conducive to instances where outcomes and exceptions are rigorously definable, such as banking and airline reservations. Sometimes it is the only choice, as call center staff have been cut in favor of these automated systems. On the other hand, those in a position of privilege have this privilege reified by the fact that they can speak to a personal banker or airline agent – similar to the above examples of concierge and doorman, a well-trained human that is discreet and effective. This is what I mean by the future already seeping its way throughout our present.
So a good way to start thinking about this is to embrace those categories of the social that we already have. Which groups are the most likely to become the subjects of a particular technology, and why? This is not to say that they will simply be ignored. Rather, we should instead think about the ways in which these groups will eventually be served by technology that may keep things running smoothly, but is ultimately dehumanizing and fragmenting, à la Neill Blomkamp's 2013 dystopia Elysium. Obviously, there is a long leap between an automated phone system and the hellish endgame described in Elysium, but it's a much straighter line if everyone is treated only as an individual – or a user – while actually being targeted as a member of a social group.
So who are the vulnerable? A few groups come to mind. The elderly, who are already being assigned robot nurses, because who has the time or money to care for the elderly? Children, who are expensive to educate and a pain in the ass to constantly watch over, are already being stimulated (I simply cannot bring myself to write ‘educated') via toys that have a direct line to IBM's Watson AI. The mentally ill, who need to be sequestered, drugged and monitored. Other institutionalized populations, such as convicts – how great would a fully automated prison be? That way any blame could be laid at the feet of the inmates. And finally, the poor, with whom no one wants to interact anyway. These groups will be the greatest ‘beneficiaries' of technology that is only just beginning to manifest itself. You get the idea of who is left – and what a perfect reproduction of privilege it will be.
As a final thought, consider what is lost as we move deeper into a future in which we are ever more deeply entangled with technology: our collective cultural memory. As William Gibson noted in a 2011 interview in the Paris Review,
It's harder to imagine the past that went away than it is to imagine the future. What we were prior to our latest batch of technology is, in a way, unknowable. It would be harder to accurately imagine what New York City was like the day before the advent of broadcast television than to imagine what it will be like after life-size broadcast holography comes online. But actually the New York without the television is more mysterious, because we've already been there and nobody paid any attention. That world is gone.
In a very real sense, we are co-creating our own ongoing forgetting. I consider myself fortunate to have grown up in a pre-Internet era. And anyone who has witnessed a child attempt to swipe or pinch a magazine page, in the mistaken belief that it is as interactive as an iPad screen, cannot help but feel discomfort at the way in which new generations expect reality to behave around them. Or perhaps they see it as a business opportunity. Difference cannot but persist. What is really at stake is what we choose to do about it.
Monday, December 07, 2015
Some Are Born To Sweet Delight
"Except for a wig of algorithms, and tears and automation."
~Noah Raford, Silicon Howl
Last month I attempted to set up two conflicting frames. On the one hand, there is the advance of technology in its myriad forms, eg: social media, artificial intelligence, robotics. This may seem like an arbitrary selection. For example, why exclude fields of medicine, or energy production, or infrastructure? Of course, all technologies are intrinsically social, especially given the complexities required to design, develop, disseminate and maintain them on a global scale. But my concern here is with those technologies that are explicitly social in nature: those inventions, whether hardware or software, that intervene in our lives to enable or enhance communications and experiences, or that provide services along such lines.
On the other hand, these technologies are laid over a long-established matrix of social differentiations. Categories that have traditionally motivated the investigations of social scientists, such as class, race, culture, religion, education, gender and age, form the inescapable substrate upon which technology is seeded and elaborates itself, or withers and dies. As I showed, and contrary to most writing about technology in the mainstream media, these boundaries are not magically dissolved by technology, and in many cases they may be further exacerbated. They are certainly not elided, though that seems to be the most common assumption. Instead, those occupying the more privileged ends of these spectra of difference benefit more from each advance, and the underprivileged are further shunted to the side. It is the technological equivalent of income inequality, except it is subtler, since we lack the pithiness of a single number, such as the Gini coefficient, to use as a signpost. (Incidentally, even this metric has of late become increasingly less useful as global inequality ascends to hyperbolic levels.)
Thus the object of our scrutiny should really be the ways in which technology further complicates a landscape that is already extremely difficult to parse. In this sense, these two frames are not really in conflict, but at least from a critical point of view, are rather insufficiently engaged with one another. Furthermore, and perhaps even more importantly, the inquiry should not have as its final destination any hope that technology will ultimately dissolve these differences. This is where efforts to bridge the so-called "digital divide" fall short for me: the idea of a level playing field has always been a fiction. Why should we aspire to it? Isn't it more compelling to understand what difference a difference makes? Conversely, if technology really does succeed in eroding all these categories of difference, we will have to scramble for another definition of what it means to be human. Given the difficulty we have with the current state of the definition, I somehow doubt that a tabula rasa approach would be at all helpful.
Nevertheless, the advent of the broad trifecta of social media, AI and robots seems to be engaging in a subtle subversion of precisely this definition. For instance, something I brought up in my previous essay was the phenomenon of people interacting with software and not really comprehending that fact. And while the example (of a Twitter bot) was trivial and amusing, there are others that strike a deeper chord.
Consider "I Love Alaska", a short film made in 2008 by Sander Plug and Lernert Engelberts. The film is broken up into thirteen shorts, and frankly isn't much to look at: it is mostly footage of Alaskan wilderness, and not necessarily the very pretty bits, either. However, it's the script that counts; as the filmmakers describe the project:
August 4, 2006, the personal search queries of 650,000 AOL (America Online) users accidentally ended up on the Internet, for all to see. These search queries were entered in AOL's search engine over a three-month period. After three days AOL realized their blunder and removed the data from their site, but the sensitive private data had already leaked to several other sites.
"I Love Alaska" tells the story of one of those AOL users. We get to know a religious middle-aged woman from Houston, Texas, who spends her days at home behind her TV and computer. Her unique style of phrasing combined with her putting her ideas, convictions and obsessions into AOL's search engine, turn her personal story into a disconcerting novel of sorts.
Plug and Engelberts basically have taken the concept of found poetry and cast it into the digital age, and very effectively at that. Throughout the film, a voiceover delivers the search queries in a finely tuned deadpan, as they were entered into AOL's search engine. User #711391 doesn't really use keywords. The first phrase we hear is "Cannot sleep with snoring husband." More of an entreaty than a query, it is followed by "How to sleep with snoring husband" (it's unclear if a question mark ends this). Obviously the first query did not yield the desired result, so we have an example of how we are forced to bend language towards the machine. But the behavior here is delightfully obtuse, for she doesn't allow herself to be reduced to using keywords, which is the customary practice when using search engines.
In fact, sometimes it's unclear what she is actually trying to find out. Having (possibly) satisfied her curiosity about dealing with snoring spouses and annoying birds, we then get "Online friendships can be very special." As an elementary school teacher might say, "Are you asking me, or are you telling me?" But there is a very private communion that is happening here. In fact, the AOL search log dump was an absolute gold mine for academic researchers, who were starved for real-life data on how people used search engines. Nevertheless, there is something deeply affecting about bearing witness to the way in which user #711391 comes to regard the AOL search engine not as an anonymous reference gateway but more as a kind of interlocutor, and how her queries eventually lead her to take some substantially consequential actions. It replaces the concept of a diary with a one-sided transcription of a fragmentary telephone conversation; we are left to extrapolate much of the details of what seems to otherwise be a perfectly ordinary, if lonely life.
"I Love Alaska" points to a critical discursive element in the way that internet technologies are read. On the one hand, we get a (somewhat aestheticized) view of how one person engages with a technology that can, to a certain extent, accommodate a fair amount of natural language input. Perhaps her mode of engagement is substantially different from the way ‘the rest of us' use search engines. Or is it? Although AOL was a significant force in bringing people to the Internet in the 1990s, its subscribers were generally not known to be savvy, and Google was already eating AOL's lunch by 2006. Nevertheless, in that year AOL still had about 15 million subscribers. So when we say ‘the rest of us' we are discounting a large population. In fact, consider if you are at all familiar with how your friends or family use search engines – there's really no reason why you would be. There is no ‘rest of us'.
This matters because, on the other hand, the people who know all about this are the ones who created the platforms, of which search engines are but one type. From their perspective, they are just as concerned with how a middle-aged Houston housewife uses their service as they are with how anyone else does. And just as the AOL search log leak demonstrates that people will use search engines with the idea that no one is looking, the developers of that software will strive to make results for such queries as relevant as possible (User #311045: "how to get revenge on a ex girlfriend"). None of this works, however, if people do not engage the platform. In fact, the more richly they engage the platform, the more data is available for it to evolve. And what is needed is empathy.
How far the arbiters of our brave new world will go to solicit empathy was exposed recently in a post on Medium concerning Facebook's much-vaunted venture into the AI-driven virtual personal assistant market space. The initiative, known as M, flips the usual assumptions on their head. Whereas most AIs would like to convince you they are human, M wants you to know it is an AI, albeit a modest one: it cheerfully chirps "I'm AI but humans help train me!" when asked about its ontological status. Arik Sosman, the author of the Medium post, became increasingly suspicious of M's ability to seemingly navigate queries well beyond any other state-of-the-art AI, and undertook the task of snookering the poor thing.
What ensues is a fascinating forensic exercise into investigating a technology that is intended to replace the search engine itself. But in order to do so, Facebook must train its technology to a much higher standard. And M cannot do that without people. Eventually Sosman is able to ascertain that there is so much human activity going on behind M that the AI is actually more of a veneer than anything else – a sort of "pay no attention to the man behind the curtain" moment. Still, I think of Sosman's dissatisfaction as stemming not from that fact – after all, Facebook never tried to hide the fact that M would have some undisclosed number of human ‘handlers' to assist it. Rather, he was upset that M dissembled in its presentation of itself, pretending to be an AI more than it actually was.
I seem to have strayed from the argument I promised you, though. What happened to class, gender and the rest of the categories that ought to be shaping technology? We shouldn't let the rich ironies of Sosman's anecdote distract us from what is really at stake. As Wired wrote on the occasion of M's launch:
Facebook is, by design, rolling out its new assistant in a community in which the users are demographically similar to the M trainers who will be thinking up gifts for their spouses and fun vacation destinations for them… Will M be as good at helping users in the Bronx access food stamps? How about coming to the aid of the single mother in Oklahoma who has a last-minute childcare issue?
Thus the end game for M is clear: you start with what you know, and from there you eventually digest the rest of the world. M needs the data so that it can reach everyone else: identifying who they are, their needs and preferences, and consequently what kinds of ads and other services they might be most inclined to consume. I don't think anyone knows how much more is needed, but one thing that has become clear in AI research is that it's not how clever your algorithms are, but how much data you have to throw at them. So it would be reasonable to posit that the amount of data required is infinite, or at least indeterminate.
Will M actually achieve such reach? It's impossible to say at the moment, but in the meantime the people who benefit from M are those who are most similar, in terms of socio-economic signifiers, to its creators (indeed, Sosman himself is exactly one of those people, recalling the adage that it takes a thief to catch a thief). But even if M successfully reached all 700 million users currently on Facebook's Messenger app, that would still be less than 10% of the global population. An optimist might say that this just demonstrates how much more room there is to grow, but, given the rate of technological failure, it would be just as realistic to bet that M will only ever remain useful to those users in its initial demographic.
Despite the uncertainty of its success, M's brief is wide and the resources behind it are vast. Since it aspires to be all things to all people (or at least those people who are on Messenger), M doesn't really shed very much light on the selective application of technology to various social segments. It's more instructive to look at the various niches that robots are beginning to fill in this regard. And since robots have come up, I have to perform the obligatory turn towards Japan. (I apologize for such a hackneyed gesture, and I hope that at some point someone will disabuse me of the need for such a cliché.)
What makes robots useful in this discussion is the fact that, unlike a search engine or a virtual personal assistant, they must be designed for a fairly specific purpose. As embodied technologies, they will stick around and keep their shape until they break or are rendered obsolete. And as embodied technology, they traffic much more explicitly in our concepts of empathy; the designed intention is to both invoke empathy, and to materialize empathy in return. This is what makes them effective objects. The drawback is that you have to either keep making them, or at least keep fixing them. Still, at some point the rope runs out. Thus Sony stopped making, and eventually fixing, its Aibo robot dog. A victim of insufficient sales and corporate restructuring, Aibo left hundreds of Japanese bereft of robot dog companionship, which is no small deal (see this video, documenting Shinto ceremonies to help Aibos transition to wherever Aibos go when they die).
But what's more important is that many of those left without their Aibo were senior citizens. In Japan's steadily deepening demographic decline, there are fewer young(er) people to function as caregivers; by 2011, 22% of the population was already 65 or older. So an integral part of the Japanese narrative is not just that they are smart and gadget-obsessed; it's also that they have fewer people around to fulfill the complete assortment of jobs that a well-functioning modern society requires. Hence robots, and if a robot dog is no longer around then perhaps a robot seal will be an adequate substitute.
Similarly, robots are targeting other Japanese demographics. Witness this odd video, uploaded to YouTube just a few days ago, where a lonely young woman finds companionship with her robot pal. There is bike riding (the robot sits in a basket with its arms raised), dance parties and burger-eating. There are even disagreements, fights and tears, although nothing that can't be reconciled in the end. And finally the young woman goes on a date, and meets a nice boy, and gets a ‘good job' wink from her robot companion, who is benevolently lurking in the background while the couple dances. At the end of the video the robot fades into silhouette, and its LED eyes glow with an ominous sort of friendliness. The fading words are "You were me, I was you." I should add that, for whatever inscrutable reason, interspersed between these scenes are lines from William Blake's Auguries Of Innocence.
Aside from being supremely creepy, the video, a promotion for the SOTA line of robots, really delivers the argument. Even if it is marketing, the implication is that machines can help people go on, even in the absence of human contact. Whether we are talking about senior citizens or insecure youth, the point of insertion is the same: machines can help you feel less lonely, at least until you either meet someone new, or you die. Extending this principle further leads us to a very strange vision of society, which is this: software and hardware are cheap, and humans are messy, unpredictable and expensive. Therefore it is not unreasonable to postulate that only wealthy people, or people at the privileged ends of the various social spectra, will be able to afford the services of other humans. Since this essay has gone on long enough already, I will flesh out what this kind of a world might look like next time.
Monday, November 09, 2015
"People for them were just sand, the fertilizer of history."
~ Chernobyl interviewee VM Ivanov
For a few years, if you were on Twitter and you used the word "inconceivable" in a tweet, you would almost immediately receive an odd, unsolicited response. Hailing from the account of someone named @iaminigomontoya, it would announce "You keep using that word. I do not think it means what you think it means." Whether you were just musing to the world in general, or engaging in the vague dissatisfaction of what passes for conversation on Twitter, this Inigo Montoya fellow would be summoned, like some digital djinn, merely by invoking this one word.
Now, those of us who possessed the correct slice of pop culture knowledge immediately recognized Inigo Montoya as one of the characters of the film "The Princess Bride". Splendidly played by Mandy Patinkin, Montoya was a swashbuckling Spaniard, an expert swordsman and a drunk. Allied to the criminal mastermind Vizzini, played by Wallace Shawn, Montoya had to listen to Vizzini mumble "inconceivable" every time events in the film turned against him. Montoya was eventually exasperated enough to respond with the above phrase. Like many other quotes from the 1987 film, it is a bit of a staple, and has since been promoted to the hallowed status of meme for the Internet age.
Of course, it's fairly obvious that no human being could be so vigilant (let alone interested) in monitoring Twitter for every instance of "inconceivable" as it arises. What we have here is a bot: a few lines of code that sift through some subset of Twitter messages, on the lookout for some pattern or other. Once the word is picked up, @iaminigomontoya does its thing. Now, and through absolutely no fault of their own, there will always be a substantial number of people not in on the joke. These unfortunates, assuming that they have just been trolled by some unreasonable fellow human being, will engage further, such as the guy who responded "Do you always begin conversations this way?"
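The core of such a bot really is only a few lines. Here is a minimal sketch of its matching logic; the trigger word and canned reply are taken from the bot described above, while the function name is an illustrative assumption, and the surrounding plumbing (authentication, streaming, posting the reply) is deliberately omitted:

```python
import re

MONTOYA_REPLY = ("You keep using that word. "
                 "I do not think it means what you think it means.")

def reply_for(tweet_text):
    """Return the canned Montoya reply if the tweet uses 'inconceivable',
    otherwise None (meaning: stay silent)."""
    if re.search(r"\binconceivable\b", tweet_text, re.IGNORECASE):
        return MONTOYA_REPLY
    return None
```

Everything else the bot does is API plumbing wrapped around this one check.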
So here we have an interesting example of contemporary digital life. In the (fairly) transparent world of Twitter, we can witness people talking to software in the belief that it is in fact other people, while the more informed among us already understand that this is not the case. Ironically, it is only thanks to the lumpy and arbitrary distribution of pop culture knowledge that we may at all have a chance to tell the difference, at least without finding ourselves involuntarily engaged in a somewhat embarrassing mini-Turing Test. But these days, we pick up our street smarts where we can.
Except we rarely pay attention to the lumpy, arbitrary nature of technology, and nowhere less so than in its latest, apotheotic form: social media. The idea of technology as the great leveler is perhaps the principal myth that we are relentlessly fed, as if we were geese on a foie gras farm. And like those geese, we never seem to get tired of the feeding. Nor is there any shortage of those queueing up to do the feeding. Just this weekend I attended a fairly abysmal conference sponsored by the Guggenheim Museum, and had to listen to what I thought were otherwise discerning minds discuss how, for example, the ability of people to participate in a real-time discussion on Twitter about the Ferguson riots made true the claim that it was no longer possible to be ‘outside' of events – or rather, that the only people who were on the ‘outside' were those who were on the receiving end of the obsolete ‘broadcast media', ie: television and radio.
This idea – that people who are passive receivers of information constitute a lesser class of citizenry than those who seek to ‘actively participate' in media – is not just problematic. In fact, let's just call it out for what it is: a barely disguised elitism. Consider the hurdles that you have to overcome to access this allegedly level landscape. You have to know what the Internet is and be able to access it; you have to know what Twitter is and be willing to use it, which is itself no mean feat; and you have to care enough about all of these things, as well as the specific phenomenon of the Ferguson riots, in order to ‘participate' in it. Only at that point are you ready to suffer the slings and arrows of your fellow discussants. Thus the resulting population that jumps through all these hoops is a deeply self-selected one. Not only are the necessary cultural and technological proficiencies required to even get to this conversation substantial, but they are inevitably accompanied by – if not simply borne out of – all the attendant structural inequalities that constitute the context of society in the first place. How many people who are subject to discriminatory policing are not online, simply because they are poor, or uneducated, or most likely, just unconnected? In order to reach a putative place of ‘no outside', one must have all the tacit and consequential social, financial and cultural resources to be able to navigate quite a lot of layers of ‘inside'.
On the other hand, those belonging to the latter group of ‘passive consumers' may be more varied than one suspects. To stay with the example of Ferguson, if I watched the riots on cable news, but did so with friends and family, or with strangers in a bar or an airport lounge, and then had a meaningful discussion, well, it's almost as if this didn't happen, since my participation can't be measured in terms of tweets or likes or what-have-yous. It's just conversation, or private contemplation, as has been the case for quite some time. But if it can't be data-mined then of what use is it? At the same time, it bears mentioning that the ‘conversation' that happens on Twitter or anywhere else in social media is by no means guaranteed to be meaningful, simply because that's where it happens. The technorati merely encourage this sort of magical thinking in order to nudge us into a form of participation that occurs much more on their platforms' terms than we might think. When was the last time you went online seeking to have your opinion changed by someone, whether it was a friend or family member – let alone a complete stranger?
Why is this the case? There is the old (at least by Internet standards) chestnut that, in real life, no one is as happy as they pretend to be on Facebook, nor as angry as they pretend to be on Twitter. So when self-selecting populations opt into participating on a specific platform, the subtle but influential effects on the participants' behavior result in a discourse that is deeply mediated. This occurs not only as a result of the platform itself (ie, the way graphic and textual elements are constructed and arranged on screen, and how users are allowed and incentivized to participate), but also thanks to how people expect their performance to be received by others, and who those others are.
We attempt to shape our online presences to be reflections of who we think we are in the first place. To think that this will suddenly give rise to some unprecedented sort of diversity – that we will step outside of ourselves to embrace new and uncomfortable truths – is naïve. I am not talking about pleasure-seeking or hedonistic pursuits (although, given the ongoing way GamerGate has problematized the seemingly innocuous pastime of video gaming, it's increasingly difficult to say that social media is capable of treating anything as a mere hobby). Rather, I mean to counter the Pollyanna-ish stance held by many techno-pundits that somehow the arc of social media bends towards justice. It may, or it may not. Perhaps the safest thing that can be said is that it will only make us more of who we are already, for better and for worse.
This is what I mean when I claim that the qualities and consequences of technology are lumpy and arbitrary. In reality, the idea that the world is flat has only ever held true for those people with the financial and social resources to make it so. Theirs is a frictionless world. The rest of us must make do with a pale imitation of this: the world seems flat to us only because we successfully ignore vast swathes of it, and social media is an excellent tool for creating the illusion that we are not ignoring anything really important, and that in fact we are paying more attention than ever before. Who can point fingers and say you're not concerned about social injustice when you've clearly been expressing your outrage by liking, sharing and hashtagging all over the damn place? Which is to say, to your friends and friends of friends and perhaps a few other random passers-by who, by definition, must be on the same platform as you. It is this lumpiness and arbitrariness that is really worth our attention.
On the face of it, an innocuous Twitter bot like @iaminigomontoya doesn't seem to have anything in common with the grand hypothesis that social media, as it is currently constituted, may not be doing us any great favors. But it will indeed take us to the next stage of the argument. I claimed above that social media is the apotheotic form of technology. Aside from being awfully pretentious, this claim is almost certainly already false, in the sense that social media is being augmented and perhaps gradually supplanted by the emergence of artificial intelligence; agents of varying autonomy, veracity and interactivity; and robots of many stripes. But since every stage of technological evolution builds upon already existing infrastructure, social media is where much of this change is manifesting itself.
More importantly, this is happening not just because all this stuff is new and clever, but because we want to talk to anything we possibly can, and we fervently desire for those things to talk back to us. This has already been amply proven by our proclivities to talk to dogs, cats and houseplants. But talking to technology is going to bring matters to a completely different level, because what is unique to technology is its ability to create massive, long-lived feedback loops that are initiated and sustained by our talk.
Here are a few examples of the things that we are building that are designed to talk to us. In addition to @iaminigomontoya, there are many such bots on Twitter, which, due to its restrictive 140-character format, is fertile ground for such experimentation. There are bots that, like our friend, will blithely reply to tweets or insert themselves into conversations, but only to correct your grammatical and homophonic misdemeanors ("your" vs "you're"; "sneak peek" vs "sneak peak"). There are more aspirational creations as well. One of my favorites is @pentametron, which appropriates tweets that, usually quite unintentionally, happen to have been written in perfect iambic pentameter. @pentametron goes the extra mile, though, and re-assembles the tweets into Shakespearean sonnet form, the results of which can be savored here.
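The homophone-correcting bots can be sketched just as easily. What follows is an illustrative guess at their core logic, with a deliberately tiny confusion table; the actual bots presumably carry much longer lists and smarter context checks:

```python
import re

# A deliberately tiny table of confusions; real bots carry far longer lists.
CONFUSIONS = [
    (re.compile(r"\bsneak peak\b", re.IGNORECASE), "sneak peek"),
    (re.compile(r"\byour welcome\b", re.IGNORECASE), "you're welcome"),
]

def corrections(tweet_text):
    """Return (offending phrase, suggested fix) pairs found in a tweet."""
    return [(m.group(0), fix)
            for pattern, fix in CONFUSIONS
            for m in pattern.finditer(tweet_text)]
```

Detecting iambic pentameter, as @pentametron does, is a harder problem: it requires a pronunciation dictionary with stress markings, which is why that bot counts as one of the more aspirational creations.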
Of course, it's reasonable to argue that these bots are really no different than a wind-up toy. Even if you don't know precisely how it works, you know how to set it in motion, and once you've done so you get your hit of childlike wonder and then you put it down and go on with the rest of your day. But however simple, charming and/or irritating they may be on their own, when taken as a phenomenon, these bots point to a shift that has already been under way for some time. People are, to one degree or another, not just content to interact with machines in a purposive way, but they are expecting to do so, and their expectations are increasingly open-ended. Sometimes they know the terms of the conversation – that is, that they are conversing with a constructed or artificial subject. And sometimes they do not. The truth is, software doesn't even have to pretend to be human for people to seek out human-like interactions with it. It turns out that willing suspension of disbelief is not just a literary device. As Coleridge defined it, "human interest and a semblance of truth" are all that is required to bring it about.
So what happens when we take our credulous nature and jam it into the lumpy and arbitrary distribution and consequences of technology in general, and social media in particular? In next month's post, I will propose that thinking about the intersection of these two tendencies can give us the opportunity to better envision scenarios of likely technological and social futures. It helps us to avoid the sensationalistic fallacy of a Terminator- or Matrix-style dystopia, where strong AIs destroy our way of life, if not the entire planet. Rather, it is about coming to terms with what is already among us, and of how we are already deeply entangled with it. It may even suggest how we might best adapt ourselves to a world that is perhaps already aswarm with artificial subjects that are inscrutable if not nearly invisible, so accustomed have we become to their presence.
"Inconceivable!" I hear you protest. Of course, Inigo Montoya is all too happy to ask if you know what that word really means.
Monday, July 20, 2015
"We are at home with situations of legal ambiguity.
And we create flexibility, in situations where it is required."
Consider a few hastily conceived scenarios from the near future. An android charged with performing elder care must deal with an uncooperative patient. A driverless car carrying passengers must decide between suddenly stopping, and causing a pile-up behind it. A robot responding to a collapsed building must choose between two people to save. The question that unifies these scenarios is not just about how to make the correct decision, but more fundamentally, how to treat the entities involved. Is it possible for a machine to be treated as an ethical subject – and, by extension, that an artificial entity may possess "robot rights"?
Of course, "robot rights" is a crude phrase that shoots us straight into a brambly thicket of anthropomorphisms; let's not quite go there yet. Perhaps it's more accurate to ask if a machine – something that people have designed, manufactured and deployed into the world – can have some sort of moral or ethical standing, whether as an agent or as a recipient of some action. What's really at stake here is the contention that a machine can act sufficiently independently in the world that it can be held responsible for its actions and, conversely, if a machine has any sort of standing such that, if it were harmed in any way, this standing would serve to protect its ongoing place and function in society.
You could, of course, dismiss all this as a bunch of nonsense: that machines are made by us exclusively for our use, and anything a robot or computer or AI does or does not do is the responsibility of its human owners. You don't sue the scalpel, rather you sue the surgeon. You don't take a database to court, but the corporation that built it – and in any case you are probably not concerned with the database itself, but with the consequence of how it was used, or maintained, or what have you. As far as the technology goes, if it's behaving badly you shut it off, wipe the drive, or throw it in the garbage, and that's the end of the story.
This is not an unreasonable point of departure, and is rooted in what's known as the instrumentalist view of technology. For an instrumentalist, technology is only ever an extension of ourselves and does not possess any autonomy. But how do you control for the sort of complexity for which we are now designing our machines? Our instrumentalist proclivities whisper to us that there must be an elegant way. So let's begin with a first attempt: Isaac Asimov's Three Laws of Robotics.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Some time later, Asimov added a fourth, which was intended to precede all the others, so it's really the ‘Zeroth' Law:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The Laws, which made their first appearance in a 1942 story that is, fittingly enough, set in 2015, are what is known as a deontology: an ethical system expressed as a set of axioms. Basically, deontology provides the ethical ground for all further belief and action: the Ten Commandments are a classic example. But the difficulties with deontology become apparent when one examines the assumptions inherent in each axiom. For example, the First Commandment states, "Thou shalt have no other gods before me". Clearly, Yahweh is not saying that there are no other gods, but rather that any other gods must take a back seat to him, at least as far as the Israelites are concerned. The corollary is that non-Israelites can have whatever gods they like. Nevertheless, most adherents to Judeo-Christian theology would be loath to admit the possibility of polytheism. It takes a lot of effort to keep all those other gods at bay, especially if you're not an Israelite – it's much easier if there is only one. But you can't make that claim without fundamentally reinterpreting that crucial first axiom.
Asimov's axioms can be similarly poked and prodded. Most obviously, we have the presumption of perfect knowledge. How would a robot (or AI or whatever) know if an action was harmful or not? A human might scheme to split actions that are by themselves harmless across several artificial entities, which are subsequently combined to produce harmful consequences. Sometimes knowledge is impossible for both humans and robots: if we look at the case of a stock-trading AI, there is uncertainty whether a stock trade is harmful to another human being or not. If the AI makes a profitable trade, does the other side lose money, and if so, does this constitute harm? How can the machine know if the entity on the other side is in fact losing money? Would it matter if that other entity were another machine and not a human? But don't machines ultimately represent humans in any case?
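The precedence structure of the Laws is easy enough to mechanize; what cannot be mechanized are the predicates. A sketch of that structure follows, with the harm and obedience judgments reduced to stub booleans precisely because, as argued above, deciding them is where the deontology founders. All names here are illustrative assumptions, not anything Asimov specified:

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    # Each field stands in for a decision the robot cannot actually make
    # with certainty -- the presumption of perfect knowledge, in code form.
    harms_humanity: bool
    harms_human: bool
    obeys_orders: bool
    preserves_self: bool

def first_violation(j):
    """Return the highest-priority Law the judged action violates, or None."""
    if j.harms_humanity:
        return "Zeroth Law"
    if j.harms_human:
        return "First Law"
    if not j.obeys_orders:
        return "Second Law"
    if not j.preserves_self:
        return "Third Law"
    return None
```

The ordering is trivial; populating the `Judgment` fields is the entire problem, as the stock-trading example shows.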
Better yet, consider a real life example:
A commercial toy robot called Nao was programmed to remind people to take medicine.
"On the face of it, this sounds simple," says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. "But even in this kind of limited task, there are nontrivial ethics questions involved." For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.
In this case, the Hippocratic ‘do no harm' has to be balanced against a more utilitarian ‘do some good'. Assuming it could, does the robot force the patient to take the medicine? Wouldn't that constitute potential harm (ie, the possibility that the robot hurts the patient in the act)? Would that harm be greater than not taking the medicine, just this once? What about tomorrow? If we are designing machines to interact with us in such profound and nuanced ways, those machines are already ethical subjects. Our recognition of them as such is already playing catch-up with the facts on the ground.
As implied with the stock trading example, another deontological shortcoming is in the definitions themselves: what's a robot, and what's a human? As robots become more human-like, and humans become more engineered, the line will become blurry. And in many cases, a robot will have to make a snap judgment. What's binary for "quo vadis", and what do you do with a lying human? Because humans lie for the strangest reasons.
Finally, the kind of world that Asimov's laws presuppose is one where robots run around among humans. It's a very specific sort of embodiment. In fact, it is a sort of Slavery 2.0, where robots clearly function for the benefit and in the service of humanity. The Laws are meant to facilitate a very material cohabitation, whereas the kind of broadly distributed, virtually placeless machine intelligence that we are currently developing by leveraging the Internet is much more slippery, and resembles the AI of Spike Jonze's ‘Her'. How do you tell things apart in such a dematerialized world?
The final nail in Asimov's deontological coffin is the assumption of ‘hard-wiring'. That is, Asimov claims that the Laws would be a non-negotiable part of the basic architecture of all robots. But it is wiser to prepare for the exact opposite: the idea that any machine of sufficient intelligence will be able to reprogram itself. The reasons why are pretty irrelevant – it doesn't have to be some variant of SkyNet suddenly deciding to destroy humanity. It may just sit there and not do anything. It may disappear, as the AIs did in ‘Her'. Or, as in William Gibson's Neuromancer, it may just want to become more of itself, and decide what to do with that later on. Gibson never really tells us why the two AIs – that function as the true protagonists of the novel – even wanted to do what they did.
This last thought indicates a fundamental marker in the machine ethics debate. A real difference is emerging here, and that is the notion of inscrutability. In order for the stance of instrumentality to hold up, you need a fairly straight line of causality. I saw this guy on the beach, I pulled the trigger, and now the guy is dead. It may be perplexing, I may not be sure why I pulled the trigger at that moment, but the chain of events is clear, and there is a system in place to handle it, however problematic. On the other hand, how or why a machine comes to a conclusion or engages in a course of action may be beyond our scope to determine. I know this sounds a bit odd, since after all we built the things. But a record of a machine's internal decision-making would have to be a deliberate part of its architecture, and this is expensive and perhaps not commensurate with the agenda of its designers: for example, Diebold made both ATMs and voting machines. Only the former provided receipts, which made it theoretically quite easy to steal an election.
If Congress is willing to condone digitally supervised elections without paper trails, imagine how far away we are from the possibility of regulating the Wild West of machine intelligence. And in fact AIs are being designed to produce results without any regard for how they get to a particular conclusion. One such deliberately opaque AI is Rita, mentioned in a previous essay. Rita's remit is to deliver state-of-the-art video compression technology, but how it arrives at its conclusions is immaterial to the fact that it manages to get there. In the comments to that piece, a friend added that "it is a regular occurrence here at Google where we try to figure out what our machine learning systems are doing and why. We provide them input and study the outputs, but the internals are now an inscrutable black box. Hard to tell if that's a sign of the future or an intermediate point along the way."
Nevertheless, we can try to hold on to the instrumentalist posture and maintain that a machine's black box nature still does not merit the treatment accorded to an ethical subject; that it is still the results or consequences that count, and that the owners of the machine retain ultimate responsibility for it, whether or not they understand it. Well, who are the owners, then?
Of course, ethics truly manifests itself in society via the law. And the law is a generally reactive entity. In the Anglo-American case law tradition, laws, codes and statutes are passed or modified (and less often, repealed) only after bad things happen, and usually only in response to those specific bad things. More importantly for the present discussion, recent history shows that the law (or to be more precise, the people who draft, pass and enforce it) has not been nearly as eager to punish the actions of collectives and institutions as it has been to pursue individuals. Exhibit A in this regard is the number of banks found guilty of vast criminality following the 2008 financial crisis and, by corollary, the number of bankers thrown in jail for same. Part of the reason for this is the way that the law already treats non-human entities. I am reminded of Mitt Romney on the Presidential campaign trail a few years ago, benignly musing that "corporations are people, my friend".
Corporate personhood is a complex topic but at its most essential it is a great way to offload risk. Sometimes this makes sense – entrepreneurs can try new ideas and go bankrupt but not lose their homes and possessions. Other times, as with the Citizens United decision, the results can be grotesque and impactful in equal measure. But we ought to look to the legal history of corporate personhood as a possible test case for how machines may become ethical subjects in the eyes of the law. Not only that, but corporations will likely be the owners of these ethical subjects – from a legal point of view, they will look to craft the legal representation of machines as much to their advantage as possible. To not be too cynical about it, I would imagine this would involve minimal liability and maximum profit. This is something I have not yet seen discussed in machine ethics circles, where the concern seems to be more about the instantiation of ethics within the machines themselves, or in highly localized human-machine interactions. Nevertheless, the transformation of the ethical machine-subject into the legislated machine-subject – put differently, the machines as subjects of a legislative gaze – will be of incredibly far-reaching consequence. It will all be in the fine print, and I daresay deliberately difficult to parse. When that day comes, I will be sure to hire an AI to help me make sense of it all.
Monday, June 22, 2015
Artificially Flavored Intelligence
"I see your infinite form in every direction,
with countless arms, stomachs, faces, and eyes."
~ Bhagavad-Gītā 11.16
About ten days ago, someone posted an image on Reddit, a sprawling site that is the Internet's version of a clown car that's just crashed into a junk shop. The image, appropriately uploaded to the 'Creepy' corner of the website, is kind of hard to describe, so, assuming that you are not at the moment on any strong psychotropic substances, or are not experiencing a flashback, please have a good, long look before reading on.
What the hell is that thing? Our sensemaking gear immediately kicks into overdrive. If Cthulhu had had a pet slug, this might be what it looked like. But as you look deeper into the picture, all sorts of other things begin to emerge. In the lower left-hand corner there are buildings and people, and people sitting on buildings which might themselves be on wheels. The bottom center of the picture seems to be occupied by some sort of a lurid, lime-colored fish. In the upper right-hand corner, half-formed faces peer out of chalices. The background wallpaper evokes an unholy copulation of brain coral and astrakhan fur. And still there are more faces, or at least eyes. There are indeed more eyes than an Alex Grey painting, and they hew to none of the neat symmetries that make for a safe world. In fact, the deeper you go into the picture, the less perspective seems to matter, as solid surfaces dissolve into further cascades of phantasmagoria. The same effect applies to the principal thing, which has not just an indeterminate number of eyes, ears or noses, but even heads.
The title of the thread wasn't very helpful, either: "This image was generated by a computer on its own (from a friend working on AI)". For a few days, that was all anyone knew, but it was enough to incite another minor-scale freakout about the nature and impending arrival of Our Computer Overlords. Just as we are helpless to not over-interpret the initial picture, so we are all too willing to titillate ourselves with alarmist speculations concerning its provenance. This was presented as a glimpse into the psychedelic abyss of artificial intelligence; an unspeakable, inscrutable intellect briefly showed us its cards, and it was disquieting, to put it mildly. Is that what AI thinks life looks like? Or stated even more anxiously, is that what AI thinks life should look like?
Alas, our giddy Lovecraftian fantasies weren't allowed to run amok for more than a few days, since the boffins at Google tipped their hand with a blog post describing what was going on. The image, along with many others, was the result of a few engineers playing around with neural networks, and seeing how far they could push them. In this case, a neural network is ‘trained' to recognize something when it is fed thousands of instances of that thing. So if the engineers want to train a neural network to recognize the image of a dog, they will keep feeding it images of dogs, until it acquires the ability to identify dogs in pictures it hasn't seen before. For the purposes of this essay, I'll just leave it at that, but here is a good explanation of how neural networks ‘learn'.
The networks in question were trained to recognize animals, people and architecture. But things got interesting when the Google engineers took a trained neural net and fed it only one input – over and over again. Once slightly modified, the image was then re-submitted to the network. If it were possible to imagine the network having a conversation with itself, it may go something like this:
First pass: Ok, I'm pretty good at finding squirrels and dogs and fish. Does this picture have any of these things in it? Hmmm, no, although that little blob looks like it might be the eye of one of those animals. I'll make a note of that. Also that lighter bit looks like fur. Yeah. Fur.
Second pass: Hey, that blob definitely looks like an eye. I'll sharpen it up so that it's more eye-like, since that's obviously what it is. Also, that fur could look furrier.
Third pass: That eye looks like it might go with that other eye that's not that far off. That other dark bit in between might just be the nose that I'd need to make it a dog. Oh wow – it is a dog! Amazing.
The results are essentially thousands of such decisions made across dozens of layers of the network. Each layer of ‘neurons' hands over its interpretation to the next layer up the hierarchy, and a final decision of what to emphasize or de-emphasize is made by the last layer. The fact that half of a squirrel's face may be interpellated within the features of the dog's face is, in the end, irrelevant.
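If it helps to strip away the narrative framing, the loop itself can be sketched in a few lines. The following is my toy illustration, not Google's code: a single hand-written `detector` function stands in for a trained layer, and each pass nudges the 'image' (here just a short list of numbers) in whatever direction makes the detector respond more strongly.

```python
# Toy sketch of the "feed the output back in" loop. The real system is a
# deep convolutional network; here one hand-written "detector" stands in
# for a trained layer. All names (detector, amplify) are illustrative.

def detector(image):
    """Score how strongly the 'pattern' (adjacent bright pixels) appears."""
    return sum(a * b for a, b in zip(image, image[1:]))

def amplify(image, step=0.1):
    """Nudge each pixel in the direction that raises the detector's score
    (a finite-difference gradient ascent step)."""
    grad = []
    for i in range(len(image)):
        bumped = image[:i] + [image[i] + 1e-6] + image[i + 1:]
        grad.append((detector(bumped) - detector(image)) / 1e-6)
    return [p + step * g for p, g in zip(image, grad)]

image = [0.1, 0.5, 0.2, 0.4]     # stand-in for input pixels
score_before = detector(image)
for _ in range(20):              # a human-chosen stopping point
    image = amplify(image)
```

Run long enough, whatever faintly resembled the pattern gets exaggerated, because the score only ever climbs. Nothing inside the loop ever decides that the result has become 'weird'.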
But I also feel very wary about having written this fantasy monologue, since framing the computational process as a narrative is something that makes sense to us, but in fact isn't necessarily true. By way of comparison, the philosopher Jacques Derrida was insanely careful about stating what he could claim in any given act of writing, and did so while he was writing. Much to the consternation of many of his readers, this act of deconstructing the text as he was writing it was nevertheless required for him to be accurate in making his claims. Similarly, while the anthropomorphic cheat is perhaps the most direct way of illustrating how AI ‘works', it is also very seductive and misleading. I offer up the above with the exhortation that there is no thinking going on. There is no goofy conversation. There is iteration, and interpretation, and ultimately but entirely tangentially, weirdness. The neural network doesn't think it's weird, however. The neural network doesn't think anything, at least not in the overly generous way in which we deploy that word.
So, echoing a deconstructionist approach, we would claim that the idea of ‘thinking' is really the problem. It is a sort of absent center, where we jam in all the unexamined assumptions that we need in order to keep the system intact. Once we really ask what we mean by ‘thinking' then the whole idea of intelligence, whether we are speaking of our own human one, let alone another's, becomes strange and unwhole. So if we then try to avoid the word – and therefore the idea behind the word – ‘thinking' as ascribed to a computer program, then how ought we think about this? Because – sorry – we really don't have a choice but to think about it.
I believe that there are more accurate metaphors to be had, ones that rely on narrower views of our subjectivity, not the AI's. For example, there is the children's game of telephone, where a phrase is whispered from one ear to the next. Given enough iterations, what emerges is a garbled, nonsensical mangling of the original, but one that is hopefully still entertaining. But if it amuses, this is precisely because it remains within the realm of language. The last person does not recite a random string of alphanumeric characters. Rather, our drive to recognize patterns, also known as apophenia, yields something that can still be spoken. It is just weird enough, which is a fine balance indeed.
What did you hear? To me, it sounds obvious that a female voice is repeating "no way" to oblivion. But other listeners have variously reported window, welcome, love me, run away, no brain, rainbow, raincoat, bueno, nombre, when oh when, mango, window pane, Broadway, Reno, melting, or Rogaine.
This illustrates the way that our expectations shape our perception…. We are expecting to hear words, and so our mind morphs the ambiguous input into something more recognisable. The power of expectation might also underlie those embarrassing situations where you mishear a mumbled comment, or even explain the spirit voices that sometimes leap out of the static on ghost hunting programmes.
Even more radical are Steve Reich's tape loop pieces, which explore what happens when a sound gradually goes out of phase with itself. In fact, 2016 will be the 50th anniversary of "Come Out", one of the seminal explorations of this idea. While the initial phrase is easy to understand, as the gap in phase widens we struggle to maintain its legibility. Not long into the piece, the words are effectively erased, and we find ourselves swimming in waves of pure sound. Nevertheless, our mental apparatus still seeks to make some sort of sense of it all; it's just that the patterns don't obtain for long enough for a specific interpretation to persist.
Of course, the list of contraptions meant to isolate and provoke our apophenic tendencies is substantial, and oftentimes touted as having therapeutic benefits. We slide into sensory deprivation tanks to gape at the universe within, and assemble mail-order DIY ‘brain machines' to ‘expand our brain's technical skills'. This is mostly bunk, but all are predicated on the idea that the brain will produce its own stimuli when external ones are absent, or if there is only a narrow band of stimulus available. In the end, what we experience here is not so much an epiphany, as apophany.
In effect, what Google's engineers have fabricated is an apophenic doomsday machine. It does one thing – search for patterns in the ways in which it knows how – and it does those things very, very well. A neural network trained to identify animals will not suddenly begin to find architectural features in a given input image. It will, if given the picture of a building façade, find all sorts of animals that, in its judgment, already lurk there. The networks are even capable of teasing out the images with which they are familiar if given a completely random picture – the graphic equivalent of static. These are perhaps the most compelling images of all. It's the equivalent of putting a neural network in an isolation tank. But is it? The slide into anthropomorphism is so effortless.
And although the Google blog post isn't clear on this, I suspect that there is also no clear point at which the network is ‘finished'. An intrinsic part of thinking is knowing when to stop, whereas iteration needs some sort of condition wrapped around the loop, otherwise it will never end. You don't tell a computer to just keep adding numbers, you tell it to add only the first 100 numbers you give it. Otherwise the damned thing won't stop. The engineers ran the iterations up until a certain point, and it doesn't really matter if that point was determined by a pre-existing test condition (eg, ‘10,000 iterations') or a snap aesthetic judgment (eg, ‘This is maximum weirdness!'). The fact is that human judgment is the wrapper around the process that creates these images. So if we consider that a fundamental feature of thinking is knowing when to stop doing so, then we find this trait lacking in this particular application of neural networks.
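That wrapper is worth seeing in the flesh. A trivial sketch (mine, not the engineers'): the loop body only knows how to keep adding; the stopping rule is a condition imposed from outside.

```python
def bounded_sum(numbers, limit=100):
    """Add numbers, but only the first `limit` of them."""
    total = 0
    for count, n in enumerate(numbers, start=1):
        total += n
        if count >= limit:  # the wrapper: without it, the loop never decides to stop
            break
    return total

print(bounded_sum(range(1, 1000)))  # sums only 1..100 -> 5050
```

Whether `limit` is 100 iterations or 'maximum weirdness', the judgment of when enough is enough lives in the condition, not in the adding.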
In addition to knowing when to stop, there is another critical aspect of thinking as we know it, and that is forgetting. In ‘Funes el memorioso', Jorge Luis Borges speculated on the crippling consequences of a memory so perfect that nothing was ever lost. Among other things, the protagonist Funes can only live a life immersed in an ocean of detail, "incapable of general, platonic ideas". In order to make patterns, we have to privilege one thing over another, and dismiss vast quantities of sensory information as irrelevant, if not outright distracting or even harmful.
Interestingly enough, this relates to a theory concerning the nature of the schizophrenic mind (in a further nod to the deconstructionist tendency, I concede that the term ‘schizophrenia' is not unproblematic, but allow me the assumption). The ‘hyperlearning hypothesis' claims that schizophrenic symptoms can arise from a surfeit of dopamine in the brain. As a key neurotransmitter, dopamine plays a crucial role in memory formation:
When the brain is rewarded unexpectedly, dopamine surges, prompting the limbic "reward system" to take note in order to remember how to replicate the positive experience. In contrast, negative encounters deplete dopamine as a signal to avoid repeating them. This is a key learning mechanism which also involves memory-formation and motivation. Scientists believe the brain establishes a new temporary neural network to process new stimuli. Each repetition of the same experience triggers the identical neural firing sequence along an identical neural journey, with every duplication strengthening the synaptic links among the neurons involved. Neuroscientists say, "Neurons that fire together wire together." If this occurs enough times, a secure neural network is established, as if imprinted, and the brain can reliably access the information over time.
The hyperlearning hypothesis posits that schizophrenics have too much dopamine in their brains, too much of the time. Take the process described above and multiply it by orders of magnitude. The result is a world that a schizophrenic cannot make sense of, because literally everything is important, or no one thing is less important than anything else. There is literally no end to thinking, no conditional wrapper to bring anything to a conclusion.
Unsurprisingly, the artificial neural networks discussed above are modeled on precisely this process of reinforcement, except that the dopamine is replaced by an algorithmic stand-in. In 2011, Uli Grasemann and Risto Miikkulainen performed the logical next step: they took a neural network called DISCERN and cranked up its virtual dopamine.
Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information – not as distinct units, but as statistical relationships of words, sentences, scripts and stories.
In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.
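A toy version of the manipulation makes the mechanism plain. This is my illustration, not the DISCERN architecture: associations are strengthened at a given learning rate and decay otherwise, and raising the rate means that even one-off intrusions cross the threshold for recall.

```python
# Toy of the 'hyperlearning' manipulation: the same experiences, stored
# with two different learning rates. With a modest rate, only patterns
# seen repeatedly cross the recall threshold; with an elevated rate,
# one-off coincidences are burned in too -- "stop forgetting so much."

def train(experiences, learning_rate, decay=0.1):
    weights = {}
    for pair in experiences:
        # strengthen the association for what was just seen...
        weights[pair] = weights.get(pair, 0.0) + learning_rate
        # ...while every other association decays slightly (forgetting)
        for other in weights:
            if other != pair:
                weights[other] -= decay
    return {p for p, w in weights.items() if w > 1.0}  # what gets recalled

story = [("dog", "barks")] * 5 + [("I", "bombed")]     # one stray intrusion
print(train(story, learning_rate=0.5))  # normal: only the repeated pattern
print(train(story, learning_rate=2.0))  # hyperlearning: the intrusion sticks
```

With the modest rate, the single stray pair decays away before it matters; with the elevated rate, it is 'remembered' right alongside the real pattern, which is roughly the shape of DISCERN's delusional confession.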
Even though I find this infinitely more terrifying than a neural net's ability to create a picture of a multi-headed dog-slug-squirrel, I still contend that there is no thinking going on, as we would like to imagine it. And we would very much like to imagine it: even the article cited above has as its headline ‘Scientists Afflict Computers with Schizophrenia to Better Understand the Human Brain'. It's almost as if schizophrenia is something you can pack into a syringe, virtual or otherwise, and inject it into the neural network of your choice, virtual or otherwise. (The actual peer-reviewed article is more soberly titled ‘Using computational patients to evaluate illness mechanisms in schizophrenia'.) We would be much better off understanding these neural networks as tools that provide us with a snapshot of a particular and narrow process. They are no more anthropomorphic than the shapes that clouds may suggest to us on a summer's afternoon. But we seem incapable of forgetting this. If we cannot learn to restrain our relentless pattern-seeking, consider what awaits us on the other end of the spectrum: it is not coincidental that the term ‘apophenia' was coined in 1958 by Klaus Conrad in a monograph on the inception of schizophrenia.
Monday, May 25, 2015
The “Invisible Web” Undermines Health Information Privacy
by Jalees Rehman
"The goal of privacy is not to protect some stable self from erosion but to create boundaries where this self can emerge, mutate, and stabilize. What matters here is the framework— or the procedure— rather than the outcome or the substance. Limits and constraints, in other words, can be productive— even if the entire conceit of "the Internet" suggests otherwise."
Evgeny Morozov in "To Save Everything, Click Here: The Folly of Technological Solutionism"
We cherish privacy in health matters because our health has such a profound impact on how we interact with other humans. If you are diagnosed with an illness, it should be your right to decide when and with whom you share this piece of information. Perhaps you want to hold off on telling your loved ones because you are worried about how it might affect them. Maybe you do not want your employer to know about your diagnosis because it could get you fired. And if your bank finds out, they could deny you a mortgage loan. These and many other reasons have resulted in laws and regulations that protect our personal health information. Family members, employers and insurers have no access to your health data unless you specifically authorize it. Even healthcare providers from two different medical institutions cannot share your medical information unless they can document your consent.
The recent study "Privacy Implications of Health Information Seeking on the Web," conducted by Tim Libert at the Annenberg School for Communication (University of Pennsylvania), shows that we have a far more nonchalant attitude when it comes to personal health information on the internet. Libert analyzed 80,142 health-related webpages that users might come across while performing online searches for common diseases. For example, if a user searches Google for information on HIV, the Centers for Disease Control and Prevention (CDC) webpage on HIV/AIDS (http://www.cdc.gov/hiv/) is one of the top hits, and the user will likely click on it. The CDC page likely offers solid, science-based advice, but Libert was more interested in investigating whether visits to the CDC website were being tracked. He found that when a user visits the CDC website, information about the visit is relayed to third-party corporate entities such as Google, Facebook and Twitter. The webpage contains "Share" and "Like" buttons, which is why the URL of the visited page (which contains the word "HIV") is passed on to these companies – even if the user never clicks the buttons.
Libert found that 91% of health-related pages relay the URL to third parties, often unbeknownst to the user, and in 70% of the cases, the URL contains sensitive information such as "HIV" or "cancer" which is sufficient to tip off these third parties that you have been searching for information related to a specific disease. Most users probably do not know that they are being tracked which is why Libert refers to this form of tracking as the "Invisible Web" which can only be unveiled when analyzing the hidden http requests between the servers. Here are some of the most common (invisible) partners which participate in the third-party exchanges:
(Table: entity vs. percent of health-related pages on which it appears; not reproduced here.)
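For the curious, the mechanism is easy to probe. Libert's study analyzed the actual HTTP requests a page triggers; as a simplified stand-in (my sketch, using only the Python standard library), one can statically scan a page's HTML for embedded third-party hosts. Each such resource request typically carries a Referer header naming the page you are on, URL and all.

```python
# Rough sketch of spotting the "Invisible Web": list the third-party hosts
# whose scripts, images or frames a page embeds. Each such request tells
# that host which page you are reading -- e.g. a URL containing "HIV".
# The sample HTML below is invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host and not host.endswith(self.first_party):
                    self.third_parties.add(host)

page = """<html><body>
  <img src="https://www.cdc.gov/hiv/banner.png">
  <script src="https://www.google-analytics.com/analytics.js"></script>
  <iframe src="https://www.facebook.com/plugins/like.php"></iframe>
</body></html>"""

finder = ThirdPartyFinder("cdc.gov")
finder.feed(page)
print(sorted(finder.third_parties))
```

Every host in that list learns, from the Referer header alone, exactly which health page was visited; a live-traffic audit like Libert's would also catch trackers loaded dynamically by scripts, which this static scan misses.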
What do the third parties do with your data? We do not really know because the laws and regulations are rather fuzzy here. We do know that Google, Facebook and Twitter primarily make money by advertising so they could potentially use your info and customize the ads you see. Just because you visited a page on breast cancer does not mean that the "Invisible Web" knows your name and address but they do know that you have some interest in breast cancer. It would make financial sense to send breast cancer related ads your way: books about breast cancer, new herbal miracle cures for cancer or even ads by pharmaceutical companies. It would be illegal for your physician to pass on your diagnosis or inquiry about breast cancer to an advertiser without your consent but when it comes to the "Invisible Web" there is a continuous chatter going on in the background about your health interests without your knowledge.
Some users won't mind receiving targeted ads. "If I am interested in web pages related to breast cancer, I could benefit from a few book suggestions by Amazon," you might say. But we do not know what else the information is being used for. The appearance of the data broker Experian on the third-party request list should serve as a red flag. Experian's main source of revenue is not advertising but amassing personal data for reports such as credit reports which are then sold to clients. If Experian knows that you are checking out breast cancer pages, then you should not be surprised if this information ends up stored in some personal data file about you.
How do we contain this sharing of personal health information? One obvious approach is to demand accountability from the third parties regarding the fate of your browsing history. We need laws that regulate how information can be used, whether it can be passed on to advertisers or data brokers and how long the information is stored. Consider, for example, how broadly WebMD's privacy policy describes what the site may do with the data it collects:
We may use information we collect about you to:
· Administer your account;
· Provide you with access to particular tools and services;
· Respond to your inquiries and send you administrative communications;
· Obtain your feedback on our sites and our offerings;
· Statistically analyze user behavior and activity;
· Provide you and people with similar demographic characteristics and interests with more relevant content and advertisements;
· Conduct research and measurement activities;
· Send you personalized emails or secure electronic messages pertaining to your health interests, including news, announcements, reminders and opportunities from WebMD; or
· Send you relevant offers and informational materials on behalf of our sponsors pertaining to your health interests.
Perhaps one of the most effective solutions would be to make the "Invisible Web" more visible. If health-related pages were mandated to disclose all third-party requests in real-time via pop-ups ("Information about your visit to this page is now being sent to Amazon") and to ask for consent in each case, users would be far more aware of the threat to personal privacy posed by health-related pages. Health privacy and potential threats to it are routinely addressed in the real world, and there is no reason why this awareness should not be extended to online information.
Libert, Tim. "Privacy implications of health information seeking on the Web" Communications of the ACM, Vol. 58 No. 3, Pages 68-77, March 2015, doi: 10.1145/2658983 (PDF)
Monday, March 23, 2015
You're on the Air!
by Carol A. Westbrook
The excitement of a live TV broadcast...a breaking news story...a presidential announcement...an appearance of the Beatles on Ed Sullivan. These words conjure up a time when all America would tune in to the same show, and families would gather round their TV set to watch it together.
This is not how we watch TV anymore. It is watched at different times and on different devices, from mobile phones and computers, from previously recorded shows on your DVR, or via streaming services such as Netflix and, soon, Apple. Live news can be viewed on the web, via cell phone apps, or as tweets. An increasing number of people are foregoing TV completely to get news and entertainment from other sources, with content that is never "on the air" (see the chart below, from the Nov. 24, 2013 Business Insider). Many Americans don't even own a television set!
We take it for granted that we will have instant access to video content--whether digital or analog, television, cell phone or iPad. But video itself has its roots in television, a word that literally means "to view over a distance." The story of TV broadcasting is a fascinating one about technology development, entrepreneurship, engineering, and even space exploration. It is an American story, and it is a story worth telling.
At first, America was tuned in to radio. From the early 1920s through the 1940s, people would gather around their radios to listen to music and variety shows, serial dramas, news, and special announcements. Yet they dreamed of seeing moving pictures over the airwaves, like they did in newsreels and movies. A series of technical breakthroughs was needed to make this happen.
The first important breakthrough was the invention in 1938 of a way to send and view moving images electronically--Farnsworth's "television." There followed a series of patent wars, but at the end of the day, we had television sets which could be used to view moving pictures transmitted over the airwaves. In 1939, RCA televised the opening of the New York World's Fair, including a speech by the first President to appear on TV, President Franklin D. Roosevelt. There were few televisions to watch it on, though, until after the end of World War II, when America's demand for commercial television rapidly increased.
This led to the next big advance in television--network broadcasting. The big radio broadcast companies such as RCA (Radio Corporation of America) and CBS (Columbia Broadcasting System) naturally expanded into this medium, but their infrastructure was limited. Though the frequencies used for AM radio transmission, from 540 to 1780 kHz (a kilohertz is a thousand cycles per second), can travel long distances from their transmitting stations, each channel can only carry a limited amount of information; in other words, it has a narrow bandwidth. Much higher frequencies, in the megahertz range (millions of cycles per second), are required for television so they can carry the additional information needed for picture as well as sound. As a result there was a scramble for these higher frequencies, which was mediated by the FCC (Federal Communications Commission), the entity that regulates broadcasting. In 1948 the FCC allocated the higher frequency bands, designating which ones would be reserved for radio and which ones for television, and assigned channel numbers to the TV bands. The VHF television channels were designated 2-13. Channel 1 was reallocated to public and emergency communications, which explains why your TV starts with Channel 2! Several higher frequency bands, designated as UHF, were reserved for later TV use, including channels 32 to 70. The FCC also froze the number of station licenses at 108 in 1948.
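The bandwidth arithmetic behind that scramble is stark. Taking the AM band limits from the text, plus two standard figures that I am supplying for illustration (a US AM station occupies roughly 10 kHz; an NTSC television channel occupies 6 MHz), a quick back-of-envelope comparison:

```python
# Back-of-envelope: why TV had to move up the spectrum.
am_band_hz = (1780 - 540) * 1_000   # entire AM broadcast band, per the text
am_channel_hz = 10 * 1_000          # one AM station (standard US figure)
tv_channel_hz = 6 * 1_000_000       # one NTSC television channel

print(tv_channel_hz // am_channel_hz)  # one TV channel = 600 AM channels
print(tv_channel_hz / am_band_hz)      # nearly 5x the entire AM band
```

A single television channel needs more spectrum than the whole AM dial put together, which is why broadcasting had to move up into the VHF and UHF bands.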
Because the number of broadcast stations was limited, TV was available only if you lived within range of a broadcast network, primarily CBS, NBC or ABC. In other words, if you lived in a large city--New York, Chicago, Washington, Philadelphia, Boston, Los Angeles, Seattle or Salt Lake City. Outside of these areas, you might have a chance if you lived on a hill, put up a very high antenna, and prayed for a thermal inversion or a charged ionosphere to propagate the signal to your television. My husband Rick, an electrical engineer and amateur radio buff, recounts that he watched the coronation of Queen Elizabeth in 1953 from his TV set in a small town in Pennsylvania, due to an environmental quirk (sunspots?), but everyone else had to wait for the films to cross the Atlantic and be shown on their local station.
Yet, for those of us who lived in a prime location, there was an ever-expanding number of programs to watch, such as the Texaco Star Theater, the Milton Berle Show, and a variety of news shows. Many of us grew up on Howdy Doody, or shows created locally and televised live. I recall walking home from grade school for lunch as a child in Chicago, spending an hour watching "Lunchtime Little Theater" before returning to school to finish the afternoon's lessons! Many of these early shows have been lost, as they were never recorded, and videotape had not yet been invented.
Television broadcasting eventually went nationwide, thanks to microwave transmission, which developed out of WWII radar. This technology was used to relay television broadcasts to local affiliate stations, which could then broadcast them on their regular channels in the local area. Microwaves use point-to-point transmission, from one microwave tower to the next, and microwave towers were constructed to span the continent. The FCC increased the number of television station licenses, and the broadcast companies truly became "networks." Finally, everyone could watch the same shows at the same time.
But TV was still limited geographically--it could not cross the ocean. This problem was not solved until the third important technology was developed, that of satellite broadcasting. Sputnik, the first space satellite, was launched in 1957. Five years later, July 23, 1962, the first satellite-based transatlantic broadcast took place using the Telstar satellite to relay TV signals from the US ground station in Andover, Maine, to the receiving stations in Goonhilly Downs, England and Pleumeur-Bodou, France.
It's fun to watch this broadcast, which was introduced by Walter Cronkite, and began with a split screen showing the Statue of Liberty on the left and the Eiffel Tower on the right. The satellite transmission was followed by a live broadcast of an ongoing baseball game in Chicago's Wrigley Field between the Philadelphia Phillies and the Chicago Cubs, and also included live remarks from President Kennedy, as well as footage from Cape Canaveral, Florida, Seattle, and Canada. I've included a short clip of the Kennedy broadcast.
If you looked up at the night sky in 1962, you might see the Telstar satellite zoom across your backyard view. It took about 20 minutes to cross the sky, passing overhead every 2.5 hours. Broadcast signals could be relayed through Telstar to land stations on either side of the Atlantic only during this 20-minute transit window, so the tracking satellite dishes had to be fast-moving; they also had to be very large to capture such a weak signal. It is impressive to see the massive size of the dishes in these satellite ground stations, and to imagine how quickly they had to move to sweep the sky. This picture of Goonhilly Downs gives you an idea of their size.
Although Telstar demonstrated that satellite transmission was possible for long-range broadcasting, the equipment and precision needed for tracking a rapidly-moving low-earth satellite was onerous. So the space scientists at NASA and Bell Labs launched the next generation of satellites, named "Syncom," into high earth orbit at just the right distance from the earth so that their orbital period matched the period of the earth's rotation. When orbiting directly above the equator, the Syncom satellites appeared to be stationary over a single geographic location. Thus, the geostationary (or geosynchronous) satellite was born.
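If you're curious how "just the right distance" falls out of the physics, here is a short Python sketch using Kepler's third law. The gravitational parameter and sidereal day are standard published values; the Telstar-like perigee and apogee figures are rough illustrative numbers, not exact orbital elements.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m
SIDEREAL_DAY = 86_164.1     # Earth's rotation period, s

def orbital_period(a_m):
    """Kepler's third law: period (s) of an orbit with semi-major axis a (m)."""
    return 2 * math.pi * math.sqrt(a_m**3 / MU_EARTH)

def geostationary_altitude_km():
    """Semi-major axis whose period equals one sidereal day, minus Earth's radius."""
    a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    return (a - R_EARTH) / 1000

# A Telstar-like elliptical orbit (illustrative figures: ~950 km perigee,
# ~5,900 km apogee); the semi-major axis is the average of the two orbital radii.
a_telstar = R_EARTH + (950_000 + 5_900_000) / 2

print(f"Geostationary altitude: {geostationary_altitude_km():,.0f} km")      # ~35,786 km
print(f"Telstar-like orbital period: {orbital_period(a_telstar) / 3600:.1f} hours")
```

The calculation lands on an altitude of about 35,786 km for a geostationary orbit, and an orbital period in the neighborhood of two and a half hours for a Telstar-like orbit, consistent with the 2.5-hour passes described above.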
Stationary satellites paved the way for a tremendous expansion in telecommunications, and are still in widespread use. Satellites enabled the rise of cable TV networks such as HBO and CNN in the 1970s, which broadcast without having to go through FCC-regulated television transmitting stations. Instead, their programming was sent via satellite to the cable service, and from there selected programs went by cable to the TVs of paid subscribers. These stations could also be accessed through a satellite TV subscription, such as Galaxy, which broadcast them directly to customers' satellite dishes. Because early satellites could only carry a limited number of cable channels, multiple satellites had to be accessed to provide the purchased programming. Moveable satellite dishes of about four to twelve feet in diameter were positioned in subscribers' yards or on their roofs. Satellite TV further expanded Americans' access to television, reaching rural communities that had limited (or no) cable service and poor antenna reception; it also provided special paid programming, such as sports events watched at bars. This picture shows a 10-foot moveable dish in my yard in Indiana.
Stationary TV dishes--such as DirecTV antennas--were not feasible until satellites were able to carry more programming, so the dish could stay parked on a single geosynchronous satellite. The technical advance that allowed this was the development of digital video in the late 1990s. Digital video would eventually displace analog--remember when the DVD was introduced, rendering VCRs obsolete in just a few years' time? Each geosynchronous satellite could now carry many more simultaneous channels than before, since each digital channel takes up only a small fraction of the bandwidth of an analog signal. Digital signals also increased the capacity of traditional TV broadcast from ground towers, which eventually transitioned to the HDTV standards, broadcasting at the high-capacity UHF frequencies. The transition to HDTV was completed in June 2009, and the TV networks abandoned analog transmission on the old VHF channels, though many of the newer stations carry the old numbers (2 - 13). TV viewers are surprised to learn that they can watch their favorite channels on the newer HDTV sets using only a simple indoor antenna, and many are giving up their pricey cable services. Digital video signals were also ready for growth in other media, as they could theoretically be transmitted over the internet or by cell phone, and could be stored easily for re-broadcast.
Yet one more step was needed before widespread internet and cellular-based video could occur, allowing us to watch television programs as we do now. This was not a technical advance but an economic one--the sharp drop in the price of computer memory, which happened around 2009. Prior to that, computers had far less memory and storage capacity. Perhaps you remember the agony of trying to watch a YouTube video in its early years? Or of waiting for your browser to load? Now we take it for granted that we can view digitized images, create them, share them, watch pre-recorded programs, and record on our TiVo from multiple sources. There seems to be no limit to the ways that we can enjoy television, truly viewing "pictures at a distance." It is a far cry from the early years of television that many of us still remember, when we all watched a small, black-and-white screen with poor sound, to see John, Paul, George and Ringo sing "She Loves You." Now those were the days!
Thanks to my husband Rick Rikoski, for his patient and helpful explanations of the technology of television and its early development.
Monday, March 31, 2014
Sharing Our Sorrow Via Facebook
by Jalees Rehman
Geteiltes Leid ist halbes Leid ("Shared sorrow is half the sorrow") is a popular German proverb which refers to the importance of sharing bad news and troubling experiences with others. The therapeutic process of sharing takes on many different forms: we may take comfort in the fact that others have experienced similar forms of sorrow, we are often reassured by the empathy and encouragement we receive from friends, and even the mere process of narrating the details of what is troubling us can be beneficial. Finding an attentive audience that is willing to listen to our troubles is not always easy. In a highly mobile, globalized world, some of our best friends may be located thousands of kilometers away, making face-to-face meetings impossible. The omnipresence of social media networks may provide a solution. We are now able to stay in touch with hundreds of friends and family members, and commiserate with them. But are people as receptive to sorrow shared via Facebook as they are in face-to-face contacts?
A team of researchers headed by Dr. Andrew High at the University of Iowa recently investigated this question and published their findings in the article "Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook". The researchers created three distinct Facebook profiles of a fictitious person named Sara Thomas who had just experienced a break-up. The three profiles were identical in all respects except for how much information was conveyed about the recent (fictitious) break-up. In their article, High and colleagues use the expression "emotional bandwidth" to describe the extent of emotions conveyed in the Facebook profile.
In the low bandwidth scenario, the profile contained the following status update:
"sad and depressed:("
The medium bandwidth profile included a change in relationship status to "single" in the timeline, in addition to the low bandwidth profile update "sad and depressed:(".
Finally, the high emotional bandwidth profile not only contained the updates of the low and medium bandwidth profiles, but also included a picture of a crying woman (the other two profiles had no photo, just the standard Facebook shadow image).
The researchers then surveyed 84 undergraduate students (enrolled in communications courses, average age 20, 53% female) and presented them with screenshots of one of the three profiles.
They asked the students to imagine that the person in the profile was a member of their Facebook network. After reviewing the assigned profile, each student completed a questionnaire asking about their willingness to provide support for Sara Thomas using a 9-point scale (1 = strongly disagree; 9 = strongly agree). The survey contained questions that evaluated the willingness to provide emotional support (e.g. "Express sorrow or regret for her situation") and network support (e.g. "Connect her with people whom she may turn to for help''). In addition to being queried about their willingness to provide distinct forms of support, the students were also asked about their sense of community engendered by Facebook (e.g., "Facebook makes me feel I am a part of a community'') and their preference for online interactions over face-to-face interactions (e.g., "I prefer communicating with other people online rather than face-to-face'').
High and colleagues hypothesized that the high emotional bandwidth profiles would elicit greater support from the students. In face-to-face interactions, it is quite common for us to provide greater support to a person – friend or stranger – if we see them overtly crying, so the researchers' hypothesis was quite reasonable. To their surprise, the researchers found the opposite. The willingness to provide emotional or network support was significantly lower among students who viewed the high emotional bandwidth profile! For example, average emotional support scores were 7.8 among students who saw only Sara's "sad and depressed:(" update (low bandwidth), but the scores were just 6.5 among students who also saw the image of Sara crying and the relationship status change to single (high bandwidth). Interestingly, students who preferred online interactions over face-to-face interactions, or those who felt that Facebook created a strong sense of community, responded positively to the high bandwidth profile.
There are some important limitations of the study. The students were asked to evaluate whether they would provide support to a fictitious person by imagining that she was part of their Facebook friends network. This is a rather artificial situation because actual supportive Facebook interactions occur among people who know each other. It is not easy to envision support for a fictitious person whose profile one sees for the first time. Furthermore, "emotional bandwidth" is a broad concept and it is difficult to draw general conclusions about "emotional bandwidth" from the limited differences between the three profiles. Increasing the sample size of the study subjects as well as creating a broader continuum of emotional bandwidth differences (e.g. including profiles which include pictures of a fictitious Sara Thomas who is not crying, using other status updates, etc.), and also considering scenarios that are not just related to break-ups (e.g. creating profiles of a fictitious grieving person who has lost a loved one) would be useful for an in-depth analysis of "emotional bandwidth".
The study by High and colleagues is an intriguing and important foray into the cyberpsychology of emotional self-disclosure and supportive communication on Facebook. This study raises important questions about how cyberbehavior differs from real world face-to-face behavior, and the even more interesting question of why these behaviors are different. Online interactions omit the dynamic gestures, nuanced intonations and other cues which play a critical role in determining our face-to-face behavior. When we share emotions via Facebook, our communication partners are often spatially and temporally displaced. This allows us to carefully "edit" what we disclose about ourselves, but it also allows our audience to edit their responses, unlike the comparatively spontaneous responses of a person sitting next to us. Facebook invites us to use the "Share" button, but we need to remember that online "sharing" is a sharing between heavily edited and crafted selves that is very different from traditional forms of "sharing".
Acknowledgments: The images from the study profiles were provided by Dr. Andrew High, copyright of the images - Dr. Andrew High.
Reference: Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook, AC High, A Oeldorf-Hirsch, S Bellur, Computers in Human Behavior (2014) 34, 79-88
Monday, March 17, 2014
Why Amazon Reminds Me of the British Empire
by Emrys Westacott
"Life—that is: being cruel and inexorable against everything about us that is growing old and weak….being without reverence for those who are dying, who are wretched, who are ancient." (Friedrich Nietzsche, The Gay Science)
A recent article by George Packer in The New Yorker about Amazon is both eye-opening and thought-provoking. In "Cheap Words" Packer describes Amazon's business practices, the impact of these on writers, publishers, and booksellers, and the seemingly limitless ambitions of Amazon's founder and CEO Jeff Bezos whose "stroke of business genius," he says, was "to have seen in a bookstore a means to world domination."
Amazon began as an online book store, but US book sales now account for only about seven percent of the seventy-five billion dollars it takes in each year. Through selling books, however, Amazon developed, perhaps better than any other business, two strategies that have been key to its success: it makes full use of sophisticated computerized collection and analysis of data about its customers, and it makes the interaction between buyer and seller maximally simple and convenient. It also, of course, typically offers lower prices than its competitors. Bezos' plan to one day have drones provide same-day delivery of items that have been stocked in warehouses near you in anticipation of your order is the logical next step in this drive toward creating a frictionless customer experience.
Amazon's impact on the world of books has been massive. Over the past twenty years the number of independent bookstores in the US has been cut in half from four thousand to two thousand, and this number continues to dwindle. Because Amazon is by far the biggest bookseller, no publisher can afford not to use its services, and Amazon exploits this situation to the hilt. Publishers are required to pay Amazon millions of dollars in "marketing discount" fees. Those that balked at paying the amount demanded had the "Buy" button removed from their titles on Amazon's web site. Amazon used the same tactic to try to force Macmillan to agree to its terms regarding digital books. And of course Amazon's Kindle dominates the world of e-books, another major threat to traditional publishers and booksellers.
The argument for viewing Amazon in a positive light is not difficult to make.
Amazon offers the customer a bigger selection of books than anyone else, usually at lower prices. Buying online as a returning customer with a registered credit card is laughably easy. Any wannabe writer can self-publish with Amazon, and those whose books sell receive a much higher percentage in royalties. In opening up this opportunity to all, and in basing its advertising and promotional decisions on computer analysis of customer behavior rather than on some self-styled expert's opinion, Amazon eliminates the unnecessary middlemen, professional tastemakers, and elitist gatekeepers that have controlled—and constrained—publishing for so long, replacing them with the dynamic democracy of the digital market place.
For all that, more than one person I know reacted to Packer's article by pledging to avoid buying stuff from Amazon in future, at least as far as and for as long as this is possible (which judging from the way things are going may not be too far or very long). Why this reaction? Well, when I told my daughter about Packer's article her immediate response was to say that Amazon sounded a bit like the British Empire. Which set me thinking.
What parallels can be found between the premier online retailer and the largest empire in history? I see similarities in three areas: beliefs and attitudes; practices; and impact on affected populations. Let's consider these in turn.
According to Packer's account, the prevailing attitude among those in charge at Amazon is arrogance. Here is where I think the echoes of imperialism are most apparent. British imperialists typically viewed themselves as superior to those they displaced or ruled on various counts: birth, race, heritage, education, culture, morals, religion, ability, and character, all resulting in and backed up by superior political and military power. The proof of this superiority could be seen on any map of the world that showed the extent of Britannia's rule. The Amazon execs are indifferent, of course, to such things as birth or pedigree; what matters to them is being smart. But thinking of themselves as smart is the basis for a particular kind of arrogance which they seem to share with other successful types in places like Silicon Valley and Wall Street. The way one top exec is described to Packer by a colleague is revealing: he's said to be "the smartest guy in the room at a company where everyone believes himself to be just that."
This fetishism of smartness is certainly not confined to techies, but it assumes a specific and perhaps especially intense form among them. Obviously, there are many different ways of being intelligent. One can excel at abstract reasoning, creative problem-solving, learning languages, understanding people, remembering information, noticing patterns and connections, interpreting works of art, manipulating people and events, mastering a practical skill, recognizing opportunities, artistic creativity, witty repartee—the list is virtually endless. So there are many people out there who are smart in various ways. But at any particular time and place, certain kinds of intelligence will be especially valued. It might be the ability to track an animal, or plan a battle, or discourse fluently in Latin, or demonstrate erudition, or make accurate and discriminating observations, or solve technical problems using mathematics and logic. These are all forms of smartness that at different times have been applauded and rewarded. And of course one kind of smartness is to recognize just what kind of smarts the present or immediate future will reward.
Today we live in an age when science enjoys cultural hegemony and most educated people earn a living by processing information. Naturally enough, therefore, certain kinds of smartness are now much in demand and are rewarded accordingly. Prominent among these is fluency in computer science and technology. The market value of knowledge and skills in this area has been greatly enhanced by the growth of the internet since this has expanded to an unprecedented degree the potential customer base or audience for any online enterprise.
The fetishism of smartness at places like Amazon is thus, naturally enough, oriented towards technological fluency and business acumen. But it seems to be accompanied by a moral subtext. Our success is not due to chance or luck; it's due to our intelligence; therefore it's deserved. On the face of it, this might seem dissimilar to the attitude of a British imperialist who, after all, could hardly claim credit for being born British (Cecil Rhodes supposedly said that "to be born English is to win first prize in the lottery of life"). But it is similar insofar as the British attributed their success in conquering and ruling much of the world to their possession of certain qualities—intelligence, industry, organization, moral and cultural superiority. The similarity extends also to the contemptuous attitude felt and sometimes expressed toward those who suffer as a result of this success. One former Amazon employee cited by Packer says that execs at Amazon view the older publishers as "antediluvian losers" and describe whole sections of the print world as the "Rust Belt media." Imperialists like Winston Churchill regularly referred to the native populations whose settlements, property, and whole way of life he cheerfully helped to destroy when serving as a military officer in Africa as "primitive," "backward," "barbarous," "ignorant," "savage," and "improvident."
In the eyes of both, what legitimizes this contempt—and reinforces the arrogance—is the conviction that they are on the side of history. As Jeff Bezos said to Charlie Rose: "Amazon is not happening to bookselling. The future is happening to bookselling." The attitude is a form of Social Darwinism. Countries with superior military power and political organization will naturally dominate people who are lacking in these. ("Whatever happens we have got / The Gatling gun, and they have not.") Businesses that know how to use the latest technology effectively will inevitably send to the wall those that still rely on dated methods that are less efficient: that's the way capitalism functions. The ultimate and unarguable proof of superiority is real world success: the subjugation of native populations; the growth of market share. Might is right.
Seeing themselves as being aligned with the forces of inevitable historical change is accompanied, naturally enough, by the belief that they are agents of progress, that the changes they help bring about are desirable. Obviously, this self-perception can be self-serving; but that doesn't make it foolish. There is an idealistic strain in enterprises like Amazon, Google, or Facebook that is not simply a piece of self-deception or a marketing strategy. Amazon really does make books available to people who lack a local bookstore (although in some cases, of course, this lack may be largely due to the local bookstore being put out of business by Amazon). Their constantly expanding inventory–Bezos' eventual goal is to warehouse copies of every book ever written–means that it is now much easier than ever before to buy obscure and out of print titles. Electronic self-publishing makes it easier and cheaper for all writers to put their work before the public. British imperialists also saw themselves as benefiting the world. Churchill, reflecting on what the British had achieved in Africa, thought that future historians would judge them to be "a people, of whom at least it may be said, that they have added to the happiness, the learning and the liberties of mankind." Cecil Rhodes was bracingly blunt: "I contend that we are the first race in the world, and the more of the world we inhabit the better it is for the human race."
Moving from attitudes to actions, we should first of all be fair to Amazon. They don't massacre by the thousand those who resist their growing power; they don't torch villages in acts of punitive reprisal; they don't use gunboats to force the Chinese to keep buying opium from British drug traffickers. But within the parameters of legal business operations, they do seem to be pretty ruthless. Some of their success is undoubtedly due to their clever use of up-to-date methods, from automated, individual-oriented advertising to warehouses staffed by non-unionized workers who are already being replaced by robots. But according to Packer their success in bookselling is also largely due to a strategy whereby they "created dependency and harshly exploited its leverage." Refusing to sell books by publishers who won't cough up a sufficiently large "marketing discount" fee is a case in point. This is, in effect, a legal extortion racket. To be sure, it isn't as crude as the way the British persuaded the Chinese to sign the Treaty of Nanking, which required China to hand over twenty-one million dollars, grant all sorts of trading concessions, and cede control of Hong Kong (the British method was to threaten Nanking with gunboats). But the underlying mentality isn't so different. Where one isn't constrained by moral considerations, all that remains is a power struggle; and all that ultimately matters in that struggle is who wins. As Quirrell says in Harry Potter and the Philosopher's Stone, echoing Machiavelli, Hobbes, and Nietzsche: "There is no good and evil, there is only power and those too weak to seek it."
Of course, Jeff Bezos is hardly the first capitalist to play hardball, so it wouldn't make much sense to single out his company as singularly ruthless in its business strategies. The ethics of Amazon are pretty much the ethics of any big business striving toward monopoly status. What is troubling, though, about the mindset described by Packer is the seeming indifference to, or even satisfaction over, the negative impact of the company's actions on significant numbers of people. Packer reports that among "people who care about reading, Amazon's unparalleled power generates endless discussion, along with paranoia, resentment, confusion, and yearning." This could equally stand as a description of those who found themselves powerless to resist British rule. But in both cases, the view from the seat of power is that those who aren't with the program either don't recognize what's in their best interests or deserve to disappear.
"Innovate or die." "Move fast and break things." Such mantras are associated with the technological revolution, but there is nothing essentially new here. They express the essential spirit–and reality–of capitalism that Marx describes in The Communist Manifesto. Those who find themselves surfing the waves of innovation naturally enough sing the praises of the new. So much is understandable. It feels good to be a winner, doubly good if you sense the wind of history at your back, and triply good if you believe you're making the world a better place. British imperialists felt good on all three counts, yet we are now critical of their attitude in large part because of their indifference to the individuals, communities and cultures they affected and in many cases destroyed. They could have done with more humility and more humanity. The same goes for the Amazon execs described by Packer. What is unbecoming, even ugly, in both groups is the callousness drifting into contempt toward those who, also understandably, lament the destruction of something they cherish, whether it be a secure job (like working in a bookstore), a respected occupation (like print publishing), a skill that is no longer marketable (like editing), a pleasure that may soon no longer be available (like browsing in used bookstores) or, indeed, an entire form of life.
Monday, March 03, 2014
Is Internet-Centrism a Religion?
by Jalees Rehman
On the evening of March 3 in 1514, Steven is sitting next to Friar Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the printing press will change everything."
Let us now fast-forward 500 years and re-enact this hypothetical scene with some tiny modifications.
On the evening of March 3 in 2014, Steven is sitting next to TED-Talker Clay in a Nottingham pub, covering his face with his hands.
"I am losing the will to live", Steven sobs, "Death may be sweeter than life in this world of poverty, injustice and war."
"Do not despair, my friend", Clay says, "for the internet will change everything."
Clay's advice in the first scene sounds ludicrous to us because we know that the printing press did not usher in an era of wealth, justice and peace. Being retrospectators, we realize that the printing press revolutionized how we disseminate information, but even the most efficient dissemination tool is just a means and not the ends.
It is more difficult for us to dismiss Clay's advice in the second scene because it echoes the familiar Silicon Valley slogans which inundate us with such persistence that some of us have begun to believe them. Clay's response is an example of what Evgeny Morozov refers to as "Internet-centrism", the unwavering belief that the Internet is not just an information dissemination tool but that it constitutes the path to salvation for humankind. In his book "To Save Everything, Click Here: The Folly of Technological Solutionism", Morozov suggests that "Internet-centrism" is taking on religion-like qualities:
"If the public debate is any indication, the finality of "the Internet"— the belief that it's the ultimate technology and the ultimate network— has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay— and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is."
Morozov does not equate mere internet usage with "Internet-centrism". People routinely use the internet for work or leisure without ascribing mythical powers to it, but it is when the latter occurs that internet usage transforms into "Internet-centrism".
Does Morozov's portrayal of "Internet-centrism" as a religion correspond to our current understanding of religions? "Internet-centrism" does not involve deities, sacred scripture or traditional prayers, but social scientists and scholars of religion do not require deism, scriptures or prayers to categorize a body of beliefs and practices as a religion.
The German theologian Friedrich Schleiermacher (1768-1834) thought that the feeling of "absolute dependence" ("das schlechthinnige Abhängigkeitsgefühl") was one of the defining characteristics of a religion. In a January 2014 Pew Internet survey, 53% of adult internet users said that it would be "very hard" to give up the internet, whereas only 38% felt this way in 2006. This does not necessarily meet the Schleiermacher threshold of "absolute dependence" but it indicates a growing perception of dependence among internet users, who are struggling to envision a life without the internet or a life beyond the internet.
Absolute dependence is not unique to religion, therefore it may be more helpful to turn to religion-specific definitions if we want to understand the religionesque characteristics of Internet-centrism. In his classic essay "Religion as a cultural system" (published in "The Interpretation of Cultures"), the anthropologist Clifford Geertz (1926-2006) defined religion as:
" (1) a system of symbols which acts to (2) establish powerful, persuasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."
Today's Silicon Valley pundits (incidentally a Sanskrit term originally used for learned Hindu scholars well-versed in Vedic scriptures) excel at establishing "powerful, persuasive, and long-lasting moods and motivations" and endowing "conceptions of a general order of existence" with an "aura of factuality". Morozov does not specifically reference the Geertz definition of religion, but he provides extensive internet pundit quotes which fit the bill. Here is one such example:
"To be a peer progressive, then, is to live with the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, health, local communities, and countless other regions of human experience."
—Steven Johnson in "Future Perfect: The Case For Progress In A Networked Age"
One problem with abstract definitions of religion is that they do not encompass the practice of religion and its mythical or supernatural aspects, which are often essential parts of most religions. In "The Religious Experience", the religion scholar Ninian Smart (1927-2001) does not provide a handy definition for religions but instead offers six "dimensions" that are present in most major religions: 1) The Ritual Dimension, 2) The Mythological Dimension, 3) The Doctrinal Dimension, 4) The Ethical Dimension, 5) The Social Dimension and 6) The Experiential Dimension.
How do these dimensions of religion apply to Internet-centrism?
1) The Ritual Dimension: The need to continuously seek connectivity by accessing computers or seeking out wireless connectivity, checking emails or social media updates so frequently that this connectivity exceeds one's pragmatic needs could be considered a ritual of Internet-centrism. If one feels the need to check emails and Facebook or Twitter updates every one to two minutes, despite the fact that it is unlikely one would have received a message that required urgent action, it may be an indicator of the important role that this ritual plays in the life of an Internet-centrist. Worshippers of traditional religions feel uncomfortable if they miss out on regular prayers or lose their rosaries that allow them to commune with their God, and it appears that for some humans, the ritual of Internet-connectivity may play a similar role.
2) The Mythological Dimension: There is the physical internet, which consists of billions of physical components such as computers, servers, routers or cables that are connected to each other. Prophets and pundits of Internet-centrism also describe a mythical "Internet" which goes far beyond the physical internet, because it involves mythical narratives about the power of the internet as a higher force that is shaping human destiny. Just like "Scientism" attributes a certain mystique to real-world science, Internet-centrism adorns the physical internet with a similar mythological dimension.
Ideas of "cognitive surplus", crowdsourcing knowledge to improve the human condition, internet-based political revolutions that will put an end to injustice, oppression and poverty and other powerful metaphors are used to describe this poorly defined mythical entity that has little to do with the physical internet. The myth of egalitarianism is commonly perpetuated, yet the internet is anything but egalitarian. Social media hubs have millions of followers and certain corporations or organizations are experts at building filters and algorithms to control the information seen by consumers who have minimal power and control over the flow of information.
3) The Doctrinal Dimension: The doctrine of Internet-centrism is the relentless pursuit of sharedom through the internet. The idea is that the more we share, the more we collaborate and the more transparent we are via the internet, the easier it will be for us humans to conquer the challenges that face us. Challenging this basic doctrine that is promoted by Silicon Valley corporations can be perceived as heretical. It is a remarkable testimony to the proselytizing power of the prophets and pundits in Silicon Valley that people were outraged at the government institution NSA for violating our privacy. There was comparatively little concern about the fact that the primary beneficiaries of the growing culture of sharedom are the for-profit internet corporations that make money off our willingness to sacrifice our privacy.
4) The Ethical Dimension: In many religions, one is asked to follow aspects of a religious doctrine which have no direct ethical context. For example, seeking salvation by praying alone to a god on a mountain-top does not necessarily require adherence to ethical standards. On the other hand, most religions have developed moral imperatives that govern how adherents of a religion interact with fellow believers or non-believers. In Internet-centrism, the doctrinal dimension is conflated with the ethical dimension. Sharedom is not only a doctrinal imperative, it is also a moral imperative. We are told that sharing and collaborating is an ethical duty.
This may be unique to Internet-centrism since the internet (both in its physical or its mythical form) presupposes the existence of fellow beings with whom one can connect. If a catastrophe wiped out all humans but one, who happened to adhere to a traditional religion, she could still pray to a god (ritual), believe in salvation by a supernatural entity (mythological) and abide by the religious laws (doctrinal). However, if she were an Internet-centrist, all her rituals, beliefs and doctrines would become meaningless.
5) The Social Dimension: Congregating in groups and social interactions are key for many religions, but Internet-centrism provides more tools than any other ideology, cultural movement or religion for us to interact with others. Whether we engage in this social activity by using social media such as Facebook or Twitter, by reading or writing blog posts, or by playing multi-player games online, Internet-centrism encourages us to fulfill our social needs by using the tools of the internet.
6) The Experiential Dimension: Most religions offer their adherents opportunities for highly personal, spiritual experiences. Internet-centrism avoids any talk of "spirituality", but the idea of a personalized experience is very much a part of Internet-centrism. One of its goals is to provide opportunities for self-actualization. We all may be connected via the internet, but Internet-centrists also want us to believe that this connectivity provides a path for self-actualization. We can modify settings to customize our web browsing experience, we can pick and choose from millions of options of what online courses we want to take, videos we want to watch or music we want to listen to. The sense of connectedness and omnipotentiality is what provides the adherent of Internet-centrism with a feeling of personal empowerment that comes close to a spiritual experience of traditional religions.
When one reviews the definitions by Schleiermacher or Geertz, or the multi-dimensional analysis by Ninian Smart, it does indeed seem that Morozov is right and that Internet-centrism is taking on many religion-like characteristics. There is probably still a big disconnect between the Silicon Valley prophets or pundits who proselytize and the vast majority of internet users who primarily act as "consumers" but do not yet buy into the tenets of Internet-centrism. But it is likely that at least in the short-term, Internet-centrism will continue to grow, especially if Internet-centrist ideas are introduced to children in schools and they grow up believing that these ideas are both essential and sufficient for our intellectual and social wellbeing. Perhaps the pundits of Internet-centrism could discuss the future of this emerging religion with adherents of other faiths at a TEDxInterfaith conference.
Image Credits: Photo of Gutenberg Bible (Creative Commons license, via NYC Wanderer at Flickr)
Monday, January 06, 2014
Synthetic Biology: Engineering Life To Examine It
by Jalees Rehman
Two scientific papers that were published in the journal Nature in the year 2000 marked the beginning of engineering biological circuits in cells. The paper "Construction of a genetic toggle switch in Escherichia coli" by Timothy Gardner, Charles Cantor and James Collins created a genetic toggle switch by introducing an artificial DNA plasmid into a bacterial cell. This DNA plasmid contained two promoters (DNA sequences which regulate the expression of genes) and two repressors (genes that encode for proteins which suppress the expression of genes) as well as a gene encoding for green fluorescent protein that served as a read-out for the system. The repressors used were sensitive to either selected chemicals or temperature. In one of the experiments, the system was turned ON by adding the chemical IPTG (a modified sugar) and nearly all the cells became green fluorescent within five to six hours. Upon raising the temperature to activate the temperature-sensitive repressor, the cells began losing their green fluorescence within an hour and returned to the OFF state. Many labs had used chemical or temperature switches to turn on gene expression in the past, but this paper was the first to assemble multiple genes into a functional circuit that could be toggled back and forth between stable ON and OFF states.
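The bistability described above can be captured by a simple two-equation model in which each repressor's synthesis rate is a decreasing Hill function of the other repressor's concentration. Here is a minimal Python sketch of such a toggle; the parameter values and the way an inducer pulse is represented (simply zeroing the effective concentration of one repressor) are illustrative simplifications, not the paper's fitted values.

```python
def toggle_switch(alpha1=10.0, alpha2=10.0, beta=2.0, gamma=2.0,
                  t_end=50.0, dt=0.01, events=()):
    """Euler integration of a dimensionless two-repressor toggle model.

    du/dt = alpha1 / (1 + v**beta)  - u   (repressor 1)
    dv/dt = alpha2 / (1 + u**gamma) - v   (repressor 2)

    `events` lists (t_start, t_stop, which) intervals during which one
    repressor is treated as chemically inactivated, mimicking an IPTG
    pulse or a temperature shift.
    """
    u, v = 0.1, 10.0  # start in the OFF state (repressor 2 dominates)
    trajectory = []
    for step in range(int(t_end / dt)):
        t = step * dt
        eff_u, eff_v = u, v
        for (t0, t1, which) in events:
            if t0 <= t < t1:
                if which == "u":
                    eff_u = 0.0
                else:
                    eff_v = 0.0
        du = alpha1 / (1 + eff_v ** beta) - u
        dv = alpha2 / (1 + eff_u ** gamma) - v
        u += du * dt
        v += dv * dt
        trajectory.append((t, u, v))
    return trajectory

# A pulse inactivating repressor 2 between t=10 and t=20 flips the switch:
flipped = toggle_switch(events=[(10.0, 20.0, "v")])
untouched = toggle_switch()
print(flipped[-1][1] > flipped[-1][2])      # True: ON state persists after the pulse
print(untouched[-1][2] > untouched[-1][1])  # True: without a pulse it stays OFF
```

The point the sketch makes is the one the paper made experimentally: the transient pulse is long gone by the end of the run, yet the circuit remembers it, because mutual repression creates two stable states.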
The same issue of Nature contained a second landmark paper which also described the engineering of gene circuits. The researchers Michael Elowitz and Stanislas Leibler described the generation of an engineered gene oscillator in their article "A synthetic oscillatory network of transcriptional regulators". By introducing three repressor genes which constituted a negative feedback loop, along with a green fluorescent protein as a marker of the oscillation, the researchers created a molecular clock in bacteria with an oscillation period of roughly 150 minutes. The genes and the proteins they encoded were not part of any natural biological clock, and none of them would have oscillated if they had been introduced into the bacteria on their own. The beauty of the design lay in the combination of three serially repressing genes; the periodicity of this engineered clock reflected the half-life of the protein encoded by each gene as well as the time it took for each protein to act on the subsequent member of the gene loop.
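The repressilator's three-gene loop can likewise be written as six ordinary differential equations (one mRNA and one protein per gene). The sketch below uses a dimensionless form with a widely reproduced illustrative parameter set; the numbers are for demonstration only, not a claim about the exact values in the paper.

```python
def repressilator(alpha=216.0, alpha0=0.216, beta=0.2, n=2.0,
                  t_end=1000.0, dt=0.01):
    """Euler integration of a dimensionless three-gene repressilator.

    Protein (i-1) mod 3 represses transcription of gene i, closing the
    negative feedback loop; beta is the protein/mRNA decay-rate ratio.
    """
    m = [1.0, 2.0, 3.0]  # mRNAs, started asymmetrically
    p = [0.0, 0.0, 0.0]  # proteins
    reporter = []        # track protein 0 as the 'green fluorescence' read-out
    for _ in range(int(t_end / dt)):
        dm = [-m[i] + alpha / (1 + p[(i - 1) % 3] ** n) + alpha0
              for i in range(3)]
        dp = [beta * (m[i] - p[i]) for i in range(3)]
        for i in range(3):
            m[i] += dm[i] * dt
            p[i] += dp[i] * dt
        reporter.append(p[0])
    return reporter

trace = repressilator()
tail = trace[len(trace) // 2:]  # discard the initial transient
mean = sum(tail) / len(tail)
crossings = sum(1 for a, b in zip(tail, tail[1:])
                if (a - mean) * (b - mean) < 0)
print(crossings)  # repeated crossings of the mean: sustained oscillation, not decay
```

No single gene here contains a clock; the rhythm emerges only from the closed three-member loop, which is exactly what made the design elegant.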
Both papers described the introduction of plasmids encoding for multiple genes into bacteria but this itself was not novel. In fact, this has been a routine practice since the 1970s for many molecular biology laboratories. The panache of the work lay in the construction of functional biological modules consisting of multiple genes which interacted with each other in a controlled and predictable manner. Since the publication of these two articles, hundreds of scientific papers have been published which describe even more intricate engineered gene circuits. These newer studies take advantage of the large number of molecular tools that have become available to query the genome as well as newer DNA plasmids which encode for novel biosensors and regulators.
Synthetic biology is an area of science devoted to engineering novel biological circuits, devices, systems, genomes or even whole organisms. This rather broad description of what "synthetic biology" encompasses reflects the multidisciplinary nature of this field which integrates ideas derived from biology, engineering, chemistry and mathematical modeling as well as a vast arsenal of experimental tools developed in each of these disciplines. Specific examples of "synthetic biology" include the engineering of microbial organisms that are able to mass produce fuels or other valuable raw materials, synthesizing large chunks of DNA to replace whole chromosomes or even the complete genome in certain cells, assembling synthetic cells or introducing groups of genes into cells so that these genes can form functional circuits by interacting with each other. Synthesis in the context of synthetic biology can signify the engineering of artificial genes or biological systems that do not exist in nature (i.e. synthetic = artificial or unnatural), but synthesis can also stand for integration and composition, a meaning which is closer to the Greek origin of the word. It is this latter aspect of synthetic biology which makes it an attractive area for basic scientists who are trying to understand the complexity of biological organisms. Instead of the traditional molecular biology focus on studying just one single gene and its function, synthetic biology is engineering biological composites that consist of multiple genes and regulatory elements of each gene. This enables scientists to interrogate the interactions of these genes, their regulatory elements and the proteins encoded by the genes with each other. Synthesis serves as a path to analysis.
One goal of synthetic biologists is to create complex circuits in cells to facilitate biocomputing: building biological computers that are as powerful as or even more powerful than traditional computers. While engineered gene circuits and cells have some degree of memory and computing power, they are no match for the comparatively gigantic computing power of even small digital computers. Nevertheless, we have to keep in mind that the field is very young and advances are progressing at a rapid pace.
One of the major recent advances in synthetic biology occurred in 2013 when an MIT research team led by Rahul Sarpeshkar and Timothy Lu created analog computing circuits in cells. Most synthetic biology groups that engineer gene circuits in cells to create biological computers have taken their cues from contemporary computer technology. Nearly all of the computers we use are digital computers, which process data using discrete values such as 0's and 1's. Analog data processing on the other hand uses a continuous range of values instead of 0's and 1's. Digital computers have supplanted analog computing in nearly all areas of life because they are easy to program, highly efficient and process analog signals by converting them into digital data. Nature, on the other hand, processes data and information using both analog and digital approaches. Some biological states are indeed discrete, such as heart cells which are electrically depolarized and then repolarized in periodical intervals in order to keep the heart beating. Such discrete states of cells (polarized / depolarized) can be modeled using the ON and OFF states in the biological circuit described earlier. However, many biological processes, such as inflammation, occur on a continuous scale. Cells do not just exist in uninflamed and inflamed states; instead there is a continuum of inflammation from minimal inflammatory activation of cells to massive inflammation. Environmental signals that are critical for cell behavior such as temperature, tension or shear stress occur on a continuous scale and there is little evidence to indicate that cells convert these analog signals into digital data.
Most of the attempts to create synthetic gene circuits and study information processing in cells have been based on a digital computing paradigm. Sarpeshkar and Lu instead wondered whether one could construct analog computation circuits and take advantage of the analog information processing systems that may be intrinsic to cells. The researchers created an analog synthetic gene circuit using only three proteins that regulate gene expression and the fluorescent protein mCherry as a read-out. This synthetic circuit was able to perform additions or ratiometric calculations in which the cumulative fluorescence of the mCherry was either the sum or the ratio of selected chemical input concentrations. Constructing a digital circuit with similar computational power would have required a much larger number of components.
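To get a feel for what "analog addition" means here, consider a toy model in which two independently induced promoters both drive the same fluorescent reporter: the total fluorescence is then approximately the sum of two Hill-shaped responses. This is a deliberately simplified illustration with made-up parameter values, not the actual circuit topology from the Sarpeshkar and Lu paper.

```python
def hill(inducer, vmax, k, n=1.0):
    """Steady-state output of an activating promoter as a Hill function
    of inducer concentration (all parameter values illustrative)."""
    return vmax * inducer ** n / (k ** n + inducer ** n)

def summing_reporter(iptg, arabinose):
    """Toy adder: two independent promoters drive one reporter, so total
    fluorescence is the sum of the two input responses."""
    return hill(iptg, vmax=100.0, k=10.0) + hill(arabinose, vmax=100.0, k=5.0)

def ratiometric_reporter(iptg, arabinose):
    """Toy ratiometer: output activated by one input and divisively
    inhibited by the other, approximating a ratio while both inputs
    stay well below saturation."""
    return hill(iptg, vmax=100.0, k=10.0) / (1.0 + hill(arabinose, vmax=10.0, k=5.0))

print(summing_reporter(10.0, 5.0))  # 50 + 50 = 100.0: each input at its half-max
```

The economy the text describes is visible even in this cartoon: a single continuous read-out performs the whole computation, whereas a digital circuit would need comparators and logic gates for every bit of precision.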
The design of analog gene circuits represents a major turning point in synthetic biology and will likely spark a wave of new research which combines analog and digital computing when trying to engineer biological computers. In our day-to-day lives, analog computers have become more-or-less obsolete. However, the recent call for unconventional computing research by the US Defense Advanced Research Projects Agency (DARPA) is seen by some as one indicator of a possible paradigm shift towards re-examining the value of analog computing. If other synthetic biology groups can replicate the work of Sarpeshkar and Lu and construct even more powerful analog or analog-digital hybrid circuits, then the renaissance of analog computing could be driven by biology. It is difficult to make any predictions regarding the construction of biological computing machines which rival or surpass the computing power of contemporary digital computers. What we can say is that synthetic biology is becoming one of the most exciting areas of research that will provide amazing insights into the complexity of biological systems and may provide a path to revolutionize biotechnology.

Daniel R, Rubens JR, Sarpeshkar R, & Lu TK (2013). Synthetic analog computation in living cells. Nature, 497 (7451), 619-623. PMID: 23676681
Monday, December 09, 2013
Google Zeitgeist: Annoying Philosophers, Weird Germans and White Pakistanis
by Jalees Rehman
The Autocomplete function of Google Search is both annoying and fascinating. When you start typing in the first letters or words of your search into the Google search box, Autocomplete takes a guess at what you are looking for and "completes" the search phrase by offering you multiple query phrases. The queries offered by Autocomplete are "a reflection of the search activity of users and the content of web pages indexed by Google". Considering the fact that more than five billion Google searches are conducted on an average day, the Google Autocomplete function has a huge database of search information that it can reference. This also means that the Autocomplete suggestions are quite dynamic and can vary over time. A popular new song lyric, the name of a viral video or a recent movie quote can catapult itself to the top of the Autocomplete suggestion list within a matter of hours or days if millions of users start searching for that specific phrase. Autocomplete may also take a user's browsing history or location into account, which explains why it may offer a varying set of suggestions to different users.
Autocomplete can be quite annoying because the suggested lists of queries are based on their web popularity and can thus consist of bizarre combinations which are not at all related to one's intended searches. On the other hand, Autocomplete is also a fascinating tool to provide a window into the Zeitgeist of web users, revealing what kinds of phrases are most commonly used on the web, and by inference, what contemporary ideas are currently associated with the entered keywords. The Google Zeitgeist website reveals the most widely searched terms to help identify cultural trends - based on the frequency of Google search engine queries - during any given year.
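As a rough mental model of the frequency-driven behaviour described above, here is a toy autocomplete index that ranks suggestions for a prefix purely by how often matching queries were logged. The class name and sample queries are invented for illustration; Google's real system additionally weights recency, location and personal history, as noted above.

```python
from collections import Counter

class AutocompleteIndex:
    """Toy frequency-ranked autocomplete: suggestions for a prefix are the
    most common logged queries that start with it."""

    def __init__(self):
        self.query_counts = Counter()

    def log_query(self, query):
        self.query_counts[query.lower()] += 1

    def suggest(self, prefix, k=4):
        prefix = prefix.lower()
        matches = [(q, c) for q, c in self.query_counts.items()
                   if q.startswith(prefix)]
        matches.sort(key=lambda qc: (-qc[1], qc[0]))  # frequency, then alphabetical
        return [q for q, _ in matches[:k]]

index = AutocompleteIndex()
for query, times in [("scientists are smart", 5),
                     ("scientists are boring", 9),
                     ("scientists are underpaid", 3)]:
    for _ in range(times):
        index.log_query(query)
print(index.suggest("scientists are"))  # most frequently logged query first
```

Even this bare-bones version shows why popular prejudices bubble to the top: the ranking simply mirrors what the crowd types most often.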
The United Nations Entity for Gender Equality and the Empowerment of Women (UN Women) recently used the Google Search Autocomplete function in an ad campaign to highlight the extent of misogyny on the web. Searching for "women should…" or "women need to…" was autocompleted to phrases such as "women should be slaves" or "women need to be put in their place". The fact that Autocomplete suggested these phrases means that probably hundreds of thousands of internet users have used these phrases in their search queries or on web pages indexed by Google – a reminder of how much gender injustice still exists in our world.
A recent article in Slate pointed towards another form of bias unveiled by Autocomplete: Occupational prejudice. The search phrase "scientists are…." was autocompleted to suggest that scientists were either liars, liberal or stupid. I tried it out and received similar suggestions by Autocomplete:
I guess we scientists have been upgraded from merely being stupid to being idiots. I was curious whether other professions fare better.
Well, apparently bankers do not.
And doctors are not only as stupid as scientists, they are also overpaid, arrogant and dangerous.
I can understand that doctors are thought to be overpaid, but it is a bit of a surprise that folks on the web think that professors are overpaid, especially considering the fact that many of them have spent a decade or more in postgraduate education before becoming professors and still earn far less than non-academic colleagues in private industry.
Philosophers, on the other hand, are not perceived as being stupid by the Google Zeitgeist. They are wise and annoying with a tinge of depression.
The next time you contact your editors, please remember that they are people, too.
The fact that Autocomplete suggests these phrases means that they are frequently used in searches and web pages but there is no way to know who is using them and what the intent is behind their usage.
What does the Google Zeitgeist tell us about people of different nationalities?
Germans are not seen in a very positive light, but the prejudices regarding Germans being rude, cold and weird should not come as a surprise to anyone who watches Hollywood movies which love to propagate such clichés.
Interestingly, search queries suggest that both Americans and Germans may come across as weird and rude.
Maybe the web collective feels that members of all nationalities are weird and rude – even the Canadians, who are also known to be nice even though they are afraid of the dark.
When I queried the characteristics of Pakistanis with the "Pakistanis are…." phrase, I was surprised by the fact that Autocomplete offered very different suggestions than those for Germans and North Americans. The latter were described by adjectives such as rude, weird, nice or cold – but when it came to Pakistanis, the search queries instead focused on their ethnic identity.
Are Pakistanis white or not white? Are they mostly Indians or do they have Arab origins? The odd thing is that I have had conversations around these questions with many Pakistanis, who often try to convince me that they indeed have "white" roots. Some Pakistanis I know – especially those who are proud of their fair skin color – frequently mention their possible Greek origins (dating back to the conquests of Alexander the Great and his invasion of the Indian subcontinent), while others emphasize the fact that the people who currently reside in Pakistan may have had Arab forefathers from the time the Arabs invaded the Indian subcontinent. On the other hand, I also know plenty of Pakistanis who see themselves as people with a primarily Indian heritage. The fact that this is a hotly debated topic among Pakistanis suggests that maybe the internet queries suggested by Autocomplete were in fact based on queries or web pages of Pakistanis who are interested in discussing this topic.
When it comes to Arabs, their ethnic identity is also apparently a popular topic in internet queries, and again my personal interactions with American Arabs mirror the Autocomplete suggestions. I have often heard American Arabs mention that they feel they ought to be accepted as part of the American "white" population ("Hello – I just received a phone call, Dr. Frantz Fanon is on hold for you on line 1.").
I first thought that perhaps the desire to identify oneself with being "white" was a remnant of one's colonial past, but my search for "Nigerians are…" did not support this hypothesis.
The Web seems to hold extremely positive views of Nigerians – smart, intelligent and educated.
Moving beyond searches for nationalities, what characteristics do web users associate with members of other groups?
Well, religions do not fare well.
Christianity and Islam are seen as evil, full of falsehood and (oddly enough) may not even be religions.
In contrast, atheism is not labeled as evil. The suggested queries instead revolve around the question of whether or not atheism is a religion.
How about a cultural ideology?
Ok, Google Zeitgeist tells us that postmodernism is BS and dead.
The human emotion of Schadenfreude, on the other hand, is very much alive.
Autocomplete is not only a tool to identify biases and phrases used on the web; it has also become an inspiration for poets. The Google Poetics blog is run by Sampsa Nuotio and Raisa Omaheimo and collects Google poems, recognizing that Autocomplete suggestions sometimes contain a Dadaist beauty and are in essence prose poems. Inspired by their collection of Google poems, I sometimes enter words or verses from famous poems to generate Autocomplete's mutant versions of those famous verses:
Here is a Google Autocomplete poem based on "Do not go gentle into that good night" by Dylan Thomas:
Do not go
do not go where the path may lead
do not go gentle poem
do not go my love
Do not go beyond what is written
And one based on the line "Let us go then, you and I" from T.S. Eliot's ‘The Love Song of J. Alfred Prufrock'
let us entertain you
let us entertain you gift cards
let us play with your look
let us go then you and i
I would like to now close with a final ode to Google:
google is evil
google is god
google is your friend
google is down
Monday, November 25, 2013
Through A Printer Darkly
by James McGirk
James McGirk works as a literary journalist and is a contributing analyst to an online think tank. The following is an imagined itinerary for a tourist vacation twenty years in the future.
Seven days in the PRINTERZONE
June 20, 2033-June 28, 2033
A quick suborbital hop to Iceland courtesy of Virgin Galactic and then it’s all aboard the ScholarShip, a luxurious three-mast schooner powered by that most ecologically palatable of sources: the wind.
Weather-permitting you and twenty of your fellow alumni will set sail for the Printerzone. (The North and Norwegian Seas can be temperamental: in the event of heavy weather we revert to backup biodiesel power.) Our destination has been recognized by UNESCO as a World Heritage Site: it is both a glimpse at what our future might become should government regulation of printers come to an end, and a fantasy of life free from credit and ubiquitous surveillance. Together we’ll spend a week immersed in this unique community, on board an oilrig in international waters, using three-dimensional additive printing to meet our every need.
Joining us on this adventure will be Prof. Orianna Braum, an associate professor of Maker Culture at Stanford University; Alan Reasor, a forty-year veteran of the additive printing industry; and a young man who prefers to refer to himself by displaying a small silver plastic snowflake in his palm.
ITINERARY - DAY ONE
A colorful day spent traversing the Norwegian and North Seas… sublime marine grays and blues stirred by the bracing sea breeze. Keep your eyes peeled for pods of chirping Minke whales! Many are 100 percent natural.
Breakfast and lunch will be served onboard The ScholarShip by our chef Matthias Spork. Selections include: printed cereals and pastas, catch-of-the-day and a refreshing sorbet spatter-printed by his wife, renowned pastry chef Rebecca Spork.
Prof. Braum and Mr. Reasor will debate: Has Three-Dimensional Printing failed its Promise? Reasor will argue that in most instances economies of scale and the cost of raw materials make conventional manufacturing a more cost-effective solution than 3D printing. Prof. Braum will counter, describing industries that have been radically reshaped by printing—prosthetics and dentistry, bespoke suiting and fashion, at-home robotics and auto-repair—and suggest instead that government safety regulation and restrictive intellectual property licenses have done more to stifle innovation than costs. There will be time for questions afterwards. And then a brief demonstration of piezoelectric substrates: printed materials that respond to the human touch.
Following a hearty and delicious dinner prepared by the Sporks, we invite you for hot toddy and outdoor stargazing with our First Mate. The Arctic winds can be fierce at night, so you have the option of lighting the hearth in your cabin, and viewing a very special Skype broadcast—The Pink Printer’s Naughty Apprentice—which outlines in a most whimsical and titillating way some of the more adult uses of the three-dimensional printer.
(Please note that cabins containing occupants below the age of consent in their country of residence will not receive this broadcast.)
Drop Anchor in the Printerzone
After a hot breakfast ladled out by the Sporks, join your shipmates on deck for an approach unlike anywhere else on earth: a faint glimmer on the horizon gathers in size and sprouts shapes and colors, until the magnificent muddle that is the Printerzone fills our entire field of vision. Crumpled wrapping paper on stilts, a wag once said. Squint at this glorious mass, and beneath the colorful sprays of plastic and the pieces of flotsam and jetsam the residents have creatively incorporated into their homes, you just might make out the original concrete and steel beneath.
Your daily allowance of printer substrate will be issued to you in bulk so that you may trade it for trinkets. A rope ladder will be lowered from above. One at a time you will be hoisted to the Zone. There, our guide, the man who identifies himself with the silver snowflake (henceforth referred to as [*]) shall greet us. He is an interesting specimen. Ask of him what you will. The tour begins at The Workshop, a vast, enclosed “maker space” where P’Zoners (as they call themselves) exchange goods, plans for new designs and information. Barter your substrate for unique souvenirs. Take a class in creation. Then enjoy a sandwich lunch carefully selected by the Sporks. Food may also be bartered with the natives.
After lunch you may explore the Zone at your leisure or enjoy another spirited debate between Reasor and Braum. Printerzone: Model City or Goofy Aberration? Dinner shall be served in the Workshop, which at night transforms into The Wild Rumpus. Guests in peak physical condition may want to join the carousing. (N.B. Beware of custom-printed entheogens and other libations, which, while they may be legal in the Printerzone, are not necessarily safe.)
Fresh croissants and a mug of coffee are the perfect way to begin a crisp Printerzone morning! Daring types may wish to join [*] and don a protective suit printed from the city’s custom printers, and sink beneath the waves for a romp on the seafloor and a look at how the city has evolved below the waterline. Printerzone’s silver suits are said to work as well in orbit as they do submerged beneath the waves. You may examine copies of a Vogue pictorial featuring the suits.
For those who prefer a more relaxed pace in the morning, there will be a bicycle tour of the Zone’s famous hydroponic orchid nursery, its orphanage and its medical clinics (notable, for, among other things, performing the first artificial face transplant). There will also be a chance to examine the city’s recycling system up close as it transforms unwanted printer output and even sewage and brine into the raw materials for printing. No stinky smells we promise!
(All printed foods served aboard the ScholarShip are guaranteed to be free from precursor materials that were made from human waste or potential allergens.)
For lunch, if you’re ready for it, be prepared to break some taboos. Guided by [*], the Sporks, rabbis, halal butchers, vegan chefs, and a number of other experts, you will be given a unique opportunity to eat—among otherwise offensive offerings—a perfect facsimile of human flesh, pork, dolphin steak, non-toxic fugu flesh, endangered sea turtle, and even taste the world’s most potent toxins in perfect moral comfort and safety. Less adventurous offerings will also be available for the squeamish.
During lunch, Braum and Reasor will sound off on the subject of: Whether Full Employment is Possible in a post-3DP World. Braum says printing in three dimensions will kill off the middlemen who camp out in many employment categories (the warehouse managers, the marketing men…); Reasor agrees, but thinks the unfettered labor will be absorbed by innovative new industries. There will be time for questions. Coffee too.
After lunch there will be a demonstration of one of the most potent technologies to emerge from three-dimensional printing: the cheap invisibility cloak. Then you will be joined by some of the city’s most outrageous tailors, haberdashers, wig makers, and costume outfitters. Design a more colorful, eccentric version of yourself and then top off your creation with a freshly printed invisibility cloak, so that you might attend the night’s festivities in absolute comfort. You need only reveal yourself to those you want to. Buffet dinner. Brandy against the chill.
(N.B. Printerzone security forces are equipped with night-vision goggles, so rest assured that you will be safe, but don’t get any antisocial ideas. There are some rules to abide by!)
Pondering the Printerzone
On our fourth day, after a healthy, all-natural breakfast lovingly prepared by the Sporks on the ScholarShip, we delve into the Printerzone’s more pensive side. [*] will lead us on a tour of the Million Memorials, the serene necropolis where the city’s mourners print chalky likenesses of friends and family they’ve lost, and missing objects and abstractions too. A quiet, haunting place. After a pleasing serenade by the P’Zone wailers, we picnic among the monuments, hear [*]’s own story of loss—his young bride who slipped over the railing during a photo session and drowned in the ocean—and gaze at the spun plastic residue of a brief but happy relationship. Afterwards, stroll back to The Workshop for a chance to barter for more amusements.
The subject of the day’s lecture (delivered, of course, by Braum and Reasor) will be: Three Dimensional Printing in the Developing World. Printing won’t be the panacea we think it will because the developing world lacks the infrastructure to sustain itself; but surely the availability of items that would otherwise have been unavailable is valuable—but what about the cottage industries that would be eradicated by printing, wouldn’t that snuff out any printing-related development? Drink during the lecture if you like. Gaze longingly at potential mates if you wish to. This is a pleasure cruise.
After a brief question and answer session, a fittingly austere supper will be served, and [*] will introduce us to a non-profit initiative sponsored by the Printerzone: a crisis response team that will race to trouble spots and, without the needless hassle of lines of communication and supply, be able to provide surgical equipment, medicines and shelter at a fraction of the cost… cost? Yes, even this barter-driven economy is soliciting funds. Contribute what you will. The city’s orphans hand out orchids.
Snack before the Wild Rumpus. Serenade. Custom sex surrogates printed for an additional fee. (Please: No printing of lecturers, crewmembers, or fellow travelers without their express permission; no skin prints using DNA within a 15 percent match of your own.)
At home in the Printerzone
Many travelers wake on their fifth day beside a grim memory, manifest in the form of slightly abused piezoelectric plastic. You may find it cathartic to batter your unwanted surrogate to pieces, or, if you are the showy sort—enter the surrogate into the ring for gladiatorial combat. The festivities begin with a squabble between Braum and Reasor’s creations (one wonders at the tension between them), followed by a battle royal, and a moving speech by [*] about whether or not a surrogate has a soul. Each participant will be allowed to download a copy of Do Androids Dream of Electric Sheep for later review.
By now you’ve spent nearly a week looking up at the frills wrapped around the upper decks of the rig. Perhaps you’ve wondered what the lives of the residents are like beyond the Wild Rumpus or the Workshop floor. Today you’ll enjoy an intimate glance at their living quarters.
Some might find this disturbing. There are children here, you might say, how could one live like this? But they’re hardly cut off; well, maybe they are cut off from nature and history and dry land but not the ‘net. See the data goggles they wear? The tykes and pubers who strut about the Zone have come to see the boundary between what is virtual and what is not as a thing much more permeable than you or I.
Here the Internet is inside out. People print virtual things. Shudder at the home robots with their suction cup attachments. Are they vacuum cleaners or sexual abominations or both? Much of the home décor won’t make sense unless you’re jacked into the ’net. Too prone to data dropsy to peer through a lens? Ask yourself why this trip appealed to you in the first place, but fear not—there are gentle entheogens that replicate the experience of data being blazed onto your eyeballs.
Nighttime. Rumpus again. Dance and flail until you feel yourself dissolve into the communal flesh. The Sporks have taken the day off. Truth be told they’re disgusted with three-dimensional printing and what it means for their profession. Can you blame them? Who cares, you aren’t hungry. From up high, the Zone looks terraced and circular like a medieval etching of The Inferno. The Rumpus looks like the writhing of the damned. You think you see Braum and Reasor embrace. [*] sits beside you and tells you his given name was Virgil. Has he been drugging you?
Beyond the Printerzone
Someone wakes you up by firing a pistol in the air. That’s right, there are a lot of weapons here. This is a polite society. Ugh, the sunlight streaming into your eyes is sheer agony. Your neurons are crying out. Caffeine! Dopamine! Serotonin! You wobble out on deck. The Sporks are back. Thank God the Sporks are back. They pour you a mug of coffee. They cut you a grapefruit. Crackling bacon, the smell of bread baking.
[*] won’t look you in the eye, the sweaty creep.
Above you the colorful plastic printed houses look chintzy in the light. They hoist you up. Peek below. The ScholarShip is an oasis of sanity and earthtones. Everything else is Technicolor Burp. Can you really face another day of this? The medic gives you something for your throbbing head. A party assembles. Wrapped sandwiches for lunch and shot-glasses of Astronaut Ice Cream. A hardhat. That silver protective garb you’ll have to peel off afterwards. The place stinks of kerosene (that’s jet fuel, someone will say). There are men from NASA, and men from the Air Force, and men with helmets that look like they’re made entirely from mirrorshades. Cyclopses. You want to leave. There’s a faint but unmistakable rumble.
Reasor and Braum waddle to the front of your party. Another debate: Space Exploration is Three-Dimensional Printing’s Killer App. This time they both agree. Reasor thinks the way to reach for the stars is to print a massive cable and haul ourselves up. Braum says that’s great, but what’s better is that you can go anywhere in space and print anything you could possibly need. You can beam plans to the spaceship, plans for things that weren’t invented when the ship took off. Applause. Time for questions. Cups of coffee. Cookies.
Wonder: what if printers were used to print infinite printers?
Clutch your mug. Look around. The top level is cold and metallic. Limp suits hang waiting; rows of silver helmets that look like Belgian glass globes wink in the setting sun. Rockets lie in pieces: fins, nose caps, nozzles, streamlined bellies, all being assembled from spools of plastic. Dinner is splendid and sober. You remember little of it. There were candles. An ant walked across the table.
Tonight there is no Wild Rumpus. You sleep on the rig, beneath the stars but protected by an infinitesimal layer of plastic. A storm blows in. Electricity rips the Arctic sky. Rain pounds plastic but never touches you. You are woken by a helmeted Cyclops: “Some visitors decide never to leave,” he says, extending a gloved hand. It’s silver. “We’ll nourish you.” Behind the smooth surface you can just make out the blurry face of [*].
Wake to the smell of Sporks’ cooking. A printed snowflake has been placed beside you. Visitors may opt to extend their stay. Or leave and never, ever come back.
Monday, November 18, 2013
Homo Erectus, or I Married a Ham
by Carol A. Westbrook
My husband loves big erections. Don't get me wrong, I'm not speaking here about Viagra, I'm talking about tall towers made of metal, long wires strung high in the sky, and tall antennas protruding from car roofs. He loves anything that broadcasts or receives those elusive radio waves, the bigger the better. That is because he is a ham, also known as an amateur radio enthusiast, and all hams love antennas.
Amateur radio has been around since the early 1900s, shortly after Marconi's first transatlantic wireless transmission in 1901. Initially, radio amateurs communicated using Morse code, as did commercial radiotelegraphy, but voice transmission quickly gained in popularity. In order to broadcast on the ham radio frequencies, hams must obtain an amateur radio license from the FCC, and a unique call sign, their ham "name." Proficiency in Morse code was required in order to obtain an amateur radio license, but this requirement was finally dropped in 2003, which opened up the field to many more interested radio amateurs, my husband being one of them. As a result, the hobby is becoming popular again. There are local clubs to join, as well as national get-togethers called "hamfests" where there are lectures, demonstrations, equipment swap-meets, and licensing exams.
What do hams do? They communicate by radio. They use everything from a battery-powered hand-held transmitter to a massive collection of specialized radio equipment located in a corner of their home or garage, which they call their "ham shack." (See picture of my husband's ham shack, above, in his library). They talk to other ham radio operators, and participate in conversations that may be local or span the globe, depending on the radio wavelength, the power of their transmitter, and their antenna. And they erect large antennas, perhaps on an outside tower or the roof of their home.
Like Marconi, hams learn early on that it's relatively easy to send out a radio signal, but the distance it travels depends as much on the size and configuration of the antenna as it does on the signal strength. There is an art to constructing an antenna, and hams spend a great deal of effort on it. That is why hams are fascinated by antennas. They are the quintessential "homo erectus."
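Some of that antenna art is just arithmetic. A classic ham rule of thumb says a half-wave dipole should be about 468 divided by the frequency in megahertz, giving its length in feet (the constant folds in a typical end-effect shortening of the free-space half wavelength). A minimal sketch of that back-of-the-envelope calculation, with a function name of my own invention:

```python
# Rule-of-thumb length of a half-wave dipole antenna.
# 468 (feet * MHz) approximates the free-space half wavelength
# (492 / f) shortened by the usual ~5% end effect.

def dipole_length_feet(freq_mhz):
    """Approximate half-wave dipole length in feet for a frequency in MHz."""
    return 468.0 / freq_mhz

# The 40-meter ham band sits near 7.1 MHz:
print(f"A 40-meter dipole is roughly {dipole_length_feet(7.1):.1f} feet long")
```

Real designs trim that estimate for wire thickness, height above ground, and nearby objects, which is exactly where the hams' effort goes.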
My husband's fascination was fueled by his boyhood days. In the 1950s he felt isolated from the outside world because his family's radio and TV could only receive a few stations, living as they did in a valley surrounded by the Pocono Mountains. He learned that he could receive more stations by stringing long wires throughout the house, or on the roof -- creating his own makeshift antennas. This led to an engineering degree, an interest in telecommunications, and a ham radio license.
Our houses are festooned with antennas. We have long wires strung from roof to garage, a small tower on the hillside, four large parabolic dishes, from 6 to 11 feet in diameter, that receive signals from transmitting satellites... but that's another story. We even have a stealth antenna in our garden which, to the casual observer, appears to be just another garden ornament, nestled among the roses. (See picture) Unlike other "ham widows" I don't mind these antennas -- they are certainly conversation pieces. I do not have a ham license -- I didn't pass the exam, but then again I didn't study for it. But I often go along with my husband to hamfests, including the famous Dayton Hamvention, which takes place every May.
What is so appealing about ham radio? Why spend your time and money to buy archaic equipment and erect antennas and mess up your house -- when you can just call on your cell or Skype your friend? The answer is simple -- because you can. As a hobbyist, you cannot easily make a microchip, or build a cell phone, or create your own internet, but you can assemble your own equipment and broadcast your own voice, around the world. Just like Marconi! What a high! What a sense of empowerment! And ham radio is a great hobby for youngsters who want to learn about the electrical and mechanical world, and enjoy the challenge of "getting out of the valley" using their own ingenuity and design. If you would like to learn more, contact the national association for amateur radio, the American Radio Relay League, to learn how to get involved, or visit their headquarters and museum at 225 Main Street, Newington, CT 06111-1494 USA. You might get hooked, too.
Monday, October 14, 2013
Should Doctors ‘Google’ Their Patients?
by Jalees Rehman
Beware of what you share. Employers now routinely use internet search engines or social network searches to obtain information about job applicants. A survey of 2,184 hiring managers and human resource professionals conducted by the online employment website CareerBuilder.com revealed that 39% use social networking sites to research job candidates. Of the group who used social networks to evaluate job applicants, 43% found content on a social networking site that caused them not to hire a candidate, whereas only 19% found information that caused them to hire a candidate. The top reasons for rejecting a candidate based on information gleaned from social networking sites were provocative or inappropriate photos/information, including information about the job applicants' history of substance abuse. This should not come as a surprise to job applicants in the US. After all, it is not uncommon for employers to invade the privacy of job applicants by conducting extensive background searches, ranging from the applicant's employment history and credit rating to checking up on any history of lawsuits or run-ins with law enforcement agencies. Some employers also require drug testing of job applicants. The internet and social networking websites merely offer employers an additional array of tools to scrutinize their applicants. But how do we feel about digital sleuthing when it comes to a relationship that is very different from the employer-applicant relationship – one which is characterized by profound trust, intimacy and respect, such as the relationship between healthcare providers and their patients?
The Hastings Center Report, a peer-reviewed academic bioethics journal, discusses the ethics of "Googling a Patient" in its most recent issue. It first describes a specific case of a twenty-six year old patient who sees a surgeon and requests a prophylactic mastectomy of both breasts. She says that she does not have breast cancer yet, but that her family is at very high risk for cancer. Her mother, sister, aunts, and a cousin have all had breast cancer; a teenage cousin had ovarian cancer at the age of nineteen; and her brother was treated for esophageal cancer at the age of fifteen. She also says that she herself suffered from a form of skin cancer (melanoma) at the age of twenty-five and that she wants to undergo the removal of her breasts without further workup because she wants to avoid developing breast cancer. She says that her prior mammogram had already shown abnormalities and she had been told by another surgeon that she needed the mastectomy.
Such prophylactic mastectomies, i.e. removal of both breasts, are indeed performed if young women are considered to be at very high risk for breast cancer based on their genetic profile and family history. The patient's family history – her mother, sister and aunts being diagnosed with breast cancer – is indicative of a very high risk, but other aspects of the history such as her brother developing esophageal cancer at the age of fifteen are rather unusual. The surgeon confers with the patient's primary care physician prior to performing the mastectomy and is puzzled by the fact that the primary care physician cannot confirm many of the claims made by the patient regarding her prior medical history or her family history. The physicians find no evidence of the patient ever having been diagnosed with a melanoma and they also cannot find documentation of the prior workup. The surgeon then asks a genetic counselor to meet with the patient and help resolve the discrepancies. During the evaluation process, the genetic counselor decides to ‘google' the patient.
The genetic counselor finds two Facebook pages that are linked to the patient. One page appears to be a personal profile of the patient, stating that in addition to battling stage four melanoma (a very advanced stage of skin cancer with very low survival rates), she has recently been diagnosed with breast cancer. She also provides a link to a website soliciting donations to attend a summit for young cancer patients. The other Facebook page shows multiple pictures of the patient with a bald head, suggesting that she is undergoing chemotherapy, which is obviously not true according to what the genetic counselor and the surgeon have observed. Once this information is forwarded to the surgeon, he decides to cancel the planned surgery. It is not clear why the patient was intent on having the mastectomy and what she would gain from it, but the obtained information from the Facebook pages and the previously noted discrepancies are reason enough for the surgeon to rebuff the patient's request for the surgery.
Two groups of biomedical ethics experts then weigh in on the case and the broader question of whether or not health care professionals should ‘google' patients. The first group of ethics experts feels that uninvited patient ‘googling' is generally a bad practice for three main reasons:
- It allows healthcare professionals to withdraw from their patients and start relying on online data and information gleaned from social networking sites instead of interacting with the patient and addressing the key issues head-on.
- The ‘googling' of patients erodes the trust between the healthcare professional and the patient. Patients might feel a sense of betrayal that the healthcare professional "spied" on them.
- An internet search or a review of social network pages linked to the patient represents an invasion of the privacy of the patient. The patient should have the right to decide what information to disclose and what not to disclose, but by surreptitiously obtaining this information, the healthcare provider circumvents the patient's right to privacy.
A separate panel of reviewers arrives at a very different conclusion and specifically points to this case as an example where it was imperative to ‘google' the patient. As this panel points out, the genetic counselor used a legal method to search the internet and found information on public Facebook profiles after having found many red flags and inconsistencies in the patient's medical history. By finding the information on Facebook, the surgeon and the counselor were able to prevent a self-injurious, deceptive and possibly fraudulent scheme of the patient's from going forward. This panel of experts goes as far as saying that it would have actually been irresponsible to not perform the Google search after all the red flags and inconsistencies were identified.
As with all ethical dilemmas, it is difficult to find the correct answer. The first panel brings up good points that the relationship between a healthcare professional and a patient is characterized by trust and respect of privacy, but I tend to agree with the second panel in the case of this patient. It illustrates that the ‘googling' was able to avert an unnecessary and irreversible surgery. This was not just an indiscriminate ‘googling' or searching of private information on Facebook pages. The action was prompted by very real concerns about contradictory information regarding the patient's medical history. On balance, the benefit of avoiding the unnecessary surgery probably outweighed the risk of harming the trust between the healthcare professional and the patient – one which was already undermined by the patient's deception.
This case is rather unusual because it is probably quite rare that a surgeon or a genetic counselor would find valuable information on a patient by merely searching Google or Facebook for information. The type of information that could be of value to most healthcare providers is not usually disclosed on public sites or social network pages. For example, a cardiologist may be interested in finding out why a patient's cholesterol levels are not decreasing despite being placed on optimal medications and being advised to cut down the dietary intake of cholesterol. The cardiologist may suspect that the patient is not really taking the medications or perhaps eating much more dietary cholesterol than the patient is willing to disclose during the doctor's visits. However, it is unlikely that the patient's Facebook page will chronicle whether or not the patient secretly eats cheese omelets on a daily basis or chooses not to take his cholesterol medications.
On the other hand, other healthcare professionals could find important diagnostic clues when reviewing the Facebook page of a patient. Psychiatrists or psychologists may be able to get a much better sense of a patient's mental health and functioning by reviewing the daily posts and interactions of a patient with friends and family members instead of just having to rely on the brief snapshot when they interview the patient during a 30 minute visit.
The study ""To Google or not to Google: Graduate students' use of the Internet to access personal information about clients." by the psychologists DeLillo and Gale surveyed 854 students enrolled in clinical, counseling, and school psychology doctoral programs in the United States and Canada, asking them how they felt about using Google or social networking websites to learn more about their clients/patients. Interestingly, two-thirds of the psychologists-in-training felt that it was never acceptable or usually not acceptable to use web search engines in order to find additional information about their clients. This feeling was even more pronounced when it came to social networking sites: 76.8% of the students thought that this was never acceptable or usually not acceptable.
However, despite these feelings, 97.8% of the students had searched for at least one client's information using search engines such as Google, whereas 94.4% had searched for at least one client's information using social networking websites. Importantly, 76.8% of the therapists who had conducted the searches for client information on social networking sites also reported that doing so was either always or usually unacceptable! This suggests a significant dissonance between the therapists' ethical perceptions and their actions. Furthermore, more than 80% of the therapists who had conducted the searches said that their clients were not aware of the internet and social networking searches being conducted.
The case study of the patient requesting the mastectomy and the high prevalence of internet searches on patients/clients by psychologists highlight the ethical dilemmas that are emerging in our culture of digital sharedom. The internet with its often very public display of individual information may be a powerful tool for certain healthcare professionals, but we also need to develop ethical guidelines for how healthcare professionals should use this tool. For medical procedures and tests, healthcare professionals have to obtain informed consent from their patients, discussing the risks and benefits of the procedure or test. Should healthcare professionals also obtain informed consent from patients before they pry into their social media networks? Or would that defeat the purpose because the patients might change the privacy settings or change the content of their posts, knowing that healthcare professionals might be reviewing them? Should healthcare professionals in specialties such as psychology and psychiatry ‘google' all their patients – just as they now ask all patients about substance abuse – or only if there are certain red flags?
The survey of psychologists-in-training highlights the cognitive dissonance that healthcare professionals may experience: They may reject such searches on their clients or patients in the abstract, but they may still choose to perform the searches, probably because they think it will allow them to provide better care for their clients and patients. Instead of relying on idiosyncratic decisions made by individual professionals, we have to establish the ethical ground rules for how healthcare professionals can use search engines or social networking sites when obtaining information about individuals. We may have become so accustomed to invasions of our privacy by government agencies and corporations that we sometimes forget that privacy is instrumental in maintaining our individuality. Especially in relationships that are founded on an extraordinary degree of trust, such as those between healthcare professionals and their patients or clients, we need to ensure that this trust is not eroded by the dark side of sharedom.
Acknowledgements: I would like to thank Ryan Hunt from CareerBuilder for clarifying the survey results.
- Rebecca Volpe, George Blackall, and Michael Green; and Danny George, Maria Baker, and Gordon Kauffman, "Googling a Patient," Hastings Center Report 43, no. 5 (2013): 14-15.
- DiLillo, David; Gale, Emily B. "To Google or not to Google: Graduate students' use of the Internet to access personal information about clients." Training and Education in Professional Psychology, Vol 5(3), Aug 2011, 160-166. doi: 10.1037/a0024441
Monday, June 24, 2013
by Jalees Rehman
"The most radical revolutionary will become a conservative the day after the revolution."
The recent revelations by the whistleblower Edward Snowden that the NSA (National Security Agency) is engaged in mass surveillance of private online communications between individuals by obtaining data from "internet corporations" such as Google, Facebook and Microsoft as part of a covert program called PRISM have resulted in widespread outrage and shock. The outrage is understandable, because such forms of surveillance constitute a major invasion of our privacy. The shock, on the other hand, is somewhat puzzling. In recent years, the Obama administration has repeatedly demonstrated that it is willing to continue or even expand the surveillance policies of the Bush administration. The PATRIOT Act was renewed in 2011 under Obama and government intrusion into our personal lives is justified under the mantle of "national security". We chuckle at the absurdity of obediently removing our shoes at airport security checkpoints and at the irony of having to place Hobbit-size toothpaste tubes into transparent bags for a government that seems to have little respect for transparency. Non-US-citizens who reside in or travel to the United States know that they can be detained by US authorities, but even US citizens who are critical of their government, such as the MacArthur Genius grantee Laura Poitras, are hassled by American authorities. Did anyone really believe that the Obama administration with its devastating track record of killing hundreds of civilians - including many children - in drone attacks would have moral qualms about using the NSA to spy on individual citizens?
The Stasi analogy
One of the obvious analogies drawn in the aftermath of Snowden's assertions is the comparison between the NSA and the "Stasi", the abbreviated nickname for the "Ministerium für Staatssicherheit" (Department of State Security) in the former German Democratic Republic (GDR or DDR). Articles referring to the "United Stasi of America" or the "Modern Day Stasi-State" make references to the massive surveillance apparatus of the East German Stasi, which monitored all forms of communications between citizens of East Germany, from wire-tapping apartments, offices, and phones to secretly reading letters. The Stasi "perfected" the invasion of personal spaces – as exemplified in the Oscar-winning movie "The Lives of Others". It is tempting to think of today's NSA monitoring of emails, Facebook posts or other social media interactions as a high-tech version of the Stasi legacy. A movie director may already be working on a screenplay for a movie about Snowden and the NSA called "The Bytes of Others". However, there are some key differences between the surveillance conducted by the Stasi and the PRISM surveillance program of the NSA. The Stasi was a state-run organization which was responsible for amassing the data and creating profiles of the monitored citizens. It did not just rely on regular Stasi employees, but heavily relied on so-called IMs – "inoffizielle Mitarbeiter" or "informelle Mitarbeiter" – informal informants. These informal informants were East German citizens who met with designated Stasi officers, reporting on the opinions and actions of their friends, colleagues and relatives and at times aiding the Stasi in promoting state propaganda. In the case of the PRISM program, the amassing of data is conducted by private "internet corporations" such as Facebook, Google and Microsoft, who then share some of the data with the state.
Furthermore, instead of having to rely on informal informants like the Stasi, "internet corporations" simply rely on the users themselves who readily divulge their demographic information, opinions and interests to the corporations.
Corporate erosion of our privacy
It seems strange that the outrage ensuing after the PRISM revelations is primarily directed at the US government and the NSA, but not at the corporations which are invading our privacy. Criticisms of the role that private corporations have played in the PRISM program primarily focus on the fact that these corporations divulged the information to the government, but seem to ignore the fact that corporations such as Facebook, Google and Microsoft continuously invade our privacy and use our data for their own marketing goals or share it with their clients. Centuries of persecution and oppression by governments - monarchs, dictators or democratically elected governments - have sensitized us to privacy invasion by governments, but we seem to have a rather laissez-faire attitude when it comes to corporate invasion of our privacy. In fact, we associate the expressions "corporate espionage" or "corporate surveillance" with corporations spying on each other but not necessarily with them spying on us. If we found out that the US Postal Service kept track of how many letters we send to certain recipients, perhaps even scanned our personal letters for certain keywords, and then used this information for its own marketing purposes or sold it to interested parties, most of us would consider this an egregious violation of our privacy. Yet we know that "internet corporations" such as Google and Facebook routinely practice this form of privacy invasion. In our neoliberal world of unfettered capitalism, the state is increasingly answering to corporate interests while ignoring the concerns of citizens. We have to ask ourselves whether such an eviscerated state is the only threat to our civil liberties, or whether we need to be more sensitive to violations of our privacy and liberties by private corporations.
Long before the leak of the PRISM documents, critics such as Evgeny Morozov in "The Net Delusion", Rebecca MacKinnon in "Consent of the Networked" or Robert McChesney in "Digital Disconnect" warned us about the invasion of privacy by "internet corporations" which are collecting information about us. We do not have to pay to use Google and Facebook, but the reason why these for-profit corporations offer us "free" services is because they use and market the information we unwittingly provide them. This type of information-gathering is probably legal, because when we sign up for accounts, most of us agree to their terms and conditions. Even if new laws or regulations are enacted after the PRISM scandal to limit surveillance, it is likely they will only pertain to how government agencies manage information on individuals or how corporations convey such information to government agencies; it is unlikely that new laws will limit information gathering for corporate benefit.
Why is it that we tend to be so lenient towards "internet corporations"? One reason may be the mythopoesis surrounding the "internet". Instead of viewing Silicon Valley executives of "internet corporations" as capitalists who sell our privacy for profit, we envision them as benevolent, entrepreneurial hipsters who eat organic quinoa salads and donate some portion of their profits to philanthropic causes. Some of us may buy into the myth of the egalitarian nature of the "internet". The "internet" is not egalitarian, especially not when it comes to the sharing and marketing of information by corporations. For example, there is a fundamental asymmetry when Facebook collects data on its users but does not feel compelled to reveal exactly how it uses the information. Jeff Jarvis, a vocal supporter of "internet corporations", has already expressed concern that users may start questioning their blind trust in the "internet" as a consequence of the PRISM revelations, skillfully avoiding a discussion of corporate privacy invasion. Placing all the blame for privacy violations on the government may be the best strategy for corporations. Google's attempt to challenge the US government, asking for permission to disclose any data requests from the NSA, enables Google to portray itself as a knight in shining armor and evade the far more uncomfortable discussion of corporate uses and abuses of amassed data.
Culture of sharedom
Evgeny Morozov's recent book "To Save Everything Click Here" provides an excellent insight into the mythos of the "internet". The physical internet consists of computers, routers and servers that are connected to each other, whereas the mythical "internet" is a cultural icon to which god-like powers are ascribed. Morozov refers to this ideology as "internet-centrism". The ideology of "solutionism", a term borrowed from the world of architecture and urban planning, refers to:
…an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions— the kind of stuff that wows audiences at TED Conferences— to problems that are extremely complex, fluid, and contentious.
"Solutionism" and "internet-centrism" can act in concert, creating a self-reinforcing cycle in which the mythical "internet" is seen as a means to provide the ultimate solutions to the problems of humankind. This view of the "internet" and the aforementioned neoliberal awe of Silicon Valley entrepreneurs may all contribute to why privacy invasions by internet corporations are forgiven or ignored.
One additional cultural phenomenon that has allowed "internet corporations" to erode our privacy is that of sharedom, the incessant and growing desire to share our opinions and details of our personal lives with a broad audience. Just like "solutionism" or neoliberalism, sharedom is not a product of the "internet", but it has become a major fuel for the mythical "internet". Sharedom is just another word for nothing left to hide. Reality television, for example, is a manifestation of sharedom. The MTV reality TV show "The Real World" was first broadcast in 1992 when the "internet" was still in its embryonic stage. Millions of viewers could watch minute details of the lives of strangers living in a house together. One may view reality TV as a form of mass exhibitionism and mass voyeurism, but as Mark Greif has pointed out, one of the key aspects of reality TV was that it allowed viewers to "judge" the people they were observing. While reality TV only allowed a small group of people – selected from thousands of applicants – to "share" their lives with a broad audience, the "internet" gradually enabled everyone with an online connection to share their lives. We started living in transparent cages - Massive Open Online Cages (MOOCs) - and the "internet" permitted the audience to give instant feedback by passing online "judgments", such as leaving comments on social media posts or blog posts. This culture of sharedom was an unexpected bounty for "internet corporations", because it not only made us less cautious about our privacy but also supplied them with massive amounts of free personal data that could be marketed.
We often hear about the trade-off between privacy and security and the need for an optimal balance, which maximizes the privacy of the individual while maintaining the security of our society. This sounds like a reasonable argument, but it ignores the fact that this is not the only privacy trade-off. Corporations are interested in maximizing their profits and since individual data is a marketable commodity, their interest is to find a balance between maximal profit and maintaining some degree of privacy for users that makes them feel comfortable enough to share personal data that can be marketed. In addition to this trade-off between profits and privacy, the culture of sharedom also creates the trade-off between publicity and privacy. Jill Lepore has recently discussed the challenges of this trade-off in an essay in the New Yorker:
In the twentieth century, the golden age of public relations, publicity, meaning the attention of the press, came to be something that many private citizens sought out and even paid for. This has led, in our own time, to the paradox of an American culture obsessed, at once, with being seen and with being hidden, a world in which the only thing more cherished than privacy is publicity. In this world, we chronicle our lives on Facebook while demanding the latest and best form of privacy protection—ciphers of numbers and letters—so that no one can violate the selves we have so entirely contrived to expose.
Another form of trade-off is that of convenience versus privacy. Using a website such as Amazon to purchase products offers a lot of convenience: It remembers which products we have previously bought, it offers targeted recommendations for new or related products that may be of interest based on our profile, and it even remembers which products we recently browsed. The more we use Amazon, the more accurate their profile of our interests becomes, as evidenced by the accuracy of Amazon's recommendations for new purchases. All we have to offer Amazon in exchange for this convenience is a window into the privacy of our soul.
I remember coming across the expression "Faustian bargain" to describe how we exchange our privacy for the sake of convenience. When Goethe's Faust agreed to serve the devil Mephistopheles in the after-life, he was rewarded with youth and a beautiful lover. We may not approve of Faust's choice, but his deal at least merits some consideration. We currently sacrifice our privacy for the benefit of corporate profits and in exchange receive free shipping, targeted ads and coupons. No youth, no lovers. Our deal does not even rise to the level of a "Faustian bargain".
The recent study "Silent Listeners: The Evolution of Privacy and Disclosure on Facebook" conducted by researchers at Carnegie Mellon University monitored the public disclosure (information visible to all) and private disclosure (information visible to Facebook friends) of personal data by more than 5,000 Facebook users from 2005 to 2011. The researchers identified two opposing trends. Over time, Facebook users divulged less and less personal information such as birthdates, favorite books or political information to the public. On the other hand, the researchers also noticed a trend of revealing more personal information to Facebook friends. Apparently, there was a growing awareness of how public disclosures can compromise privacy, but users were also emboldened to reveal more personal information when they deemed their audience to be trustworthy. As the researchers correctly pointed out, these "private disclosures" are always available to Facebook itself, to third-party apps and to advertisers, referred to as "silent listeners" by the researchers. This is a key point when it comes to privacy settings on social media websites. Users are able to control how much information is displayed to other individuals, and future laws and regulations may protect users by curtailing disclosures to government agencies, but information disclosures to the company that provides the service itself and to its corporate clients are often beyond our control.
The poll "Teens, Social Media and Privacy" conducted by the Pew Research Center confirmed this lack of concern about third-party access to personal data in a group of 632 teenagers. Overall, 60% of teenagers said that they were either not at all concerned or not too concerned about third-party access (such as advertisers or third-party apps) to their personal information. Only 9% were very concerned about it. Individual comments made by teenagers in a Pew focus group further underscore this cavalier attitude towards corporate access to personal data:
Male (age 16): "It's mostly just bands and musicians that I ‘like' [on Facebook], but also different companies that I ‘like', whether they're clothing or mostly skateboarding companies. I can see what they're up to, whether they're posting videos or new products... [because] a lot of times you don't hear about it as fast, because I don't feel the need to Google every company that I want to keep up with every day. So with the news feed, it's all right there, and you know exactly."
Male (age 13): "I usually just hit allow on everything [when I get a new app]. Because I feel like it would get more features. And a lot of people allow it, so it's not like they're going to single out my stuff. I don't really feel worried about it."
Value of privacy
The revelations about how the government is using surveillance data obtained by "internet corporations" should prompt a broad debate about how we value privacy, especially because it is difficult to affix a price tag to this intangible non-commodity. This debate will hopefully lead to greater transparency with regard to how governments access and handle personal information. However, it is important to also raise awareness of the potential abuse of personal information by private corporations. If we truly value our privacy, we need to develop methods that restrict government and corporate access to our personal data. In the process we will have to unravel our myths surrounding internet-centrism, solutionism and sharedom.
Image Credits: Automated envelope sealer used by the Stasi to close opened letters after review of the letter contents (image by Appaloosa - Wikimedia Commons), a Stasi surveillance post (image by Lokilech - Wikimedia Commons)
Monday, March 04, 2013
by Jalees Rehman
"For every rational line or forthright statement there are leagues of senseless cacophony, verbal nonsense, and incoherency."
The British-Australian art curator Nick Waterlow was murdered on November 9, 2009 in the Sydney suburb of Randwick. His untimely death shocked the Australian art community, not only because of its gruesome nature – Waterlow was stabbed alongside his daughter by his mentally ill son – but also because it represented a major blow to a burgeoning art scene. He was a highly regarded curator who had served as a director of the Sydney Biennale and international art exhibitions, and an art ambassador who brought together artists and audiences from all over the world.
After his death, his partner Juliet Darling discovered notes that Waterlow had jotted down shortly before he died, characterizing what defines and motivates a good art curator. He had given them the eerily prescient title “A Curator’s Last Will and Testament”:
1. Passion
2. An eye of discernment
3. An empty vessel
4. An ability to be uncertain
5. Belief in the necessity of art and artists
6. A medium— bringing a passionate and informed understanding of works of art to an audience in ways that will stimulate, inspire, question
7. Making possible the altering of perception.
Waterlow’s notes help dismantle the cliché of the stuffy old museum curator who merely ensures that the collection remains unblemished, and instead portray the curator as a passionate person motivated by a desire to inspire artists and audiences alike.
The Evolving Roles of Curators
The traditional role of the curator was closely related to the Latin origins of the word: “curare” means “to take care of”, “to nurse” or “to look after”. Curators of museums or art collections were primarily in charge of preserving, overseeing, archiving and cataloging the artifacts placed under their guardianship. As outlined in “Thinking Contemporary Curating” by Terry Smith, the latter half of the 20th century witnessed the emergence of new roles for art curators, both private curators and those formally employed by museums or art collections. Curators not only organized art exhibitions but were given an increasing degree of freedom in choosing the artists and themes of the exhibitions and creating innovative opportunities for artists to interact with their audiences. The art exhibition itself became a form of art, a collage assembled by the curator in a unique manner.
Curatorial roles can be broadly divided into three domains:
1) Custodial – perhaps most in line with traditional curating in which the curator primarily maintains or preserves art collections
2) Navigatory – a role which has traditionally focused on archiving and cataloging pieces of art so that audiences can readily access art
3) Discerning – the responsibility of a curator to decide which artists and themes to include and feature, using the “eye of discernment” described by Nick Waterlow
Creativity and Curating
The diverse roles of curators are characterized by an inherent tension. In their custodial roles, curators are charged with conserving and maintaining art (and by extension, culture), but in their discerning roles they also seek out new forms of art and experiment with novel ways to exhibit it. Terry Smith’s “Thinking Contemporary Curating” shows how the boundaries between curator and artist are blurring, because exhibiting art itself requires an artistic and creative effort. Others feel that curators or exhibition makers need to be conscious of their primary role as facilitators and should not “compete” with the artists whose works they are exhibiting. This raises the question of whether the process of curating art is actually creative.
It is difficult to find a universal and generally accepted definition of what constitutes creativity because it is such a subjective concept, but the definition provided by Jonathan Plucker and colleagues in their paper “Why Isn’t Creativity More Important to Educational Psychologists? Potentials, Pitfalls, and Future Directions in Creativity Research” is an excellent starting point:
“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context.”
Using this definition, assembling an art exhibition is indeed creative – it generates a “perceptible product” which is both novel and useful to the audiences that attend the exhibition as well as to the artists who are being provided new opportunities to showcase their work. The aptitude, process and environment that go into the assembly and design of an art exhibition differ among all curators, so that each art exhibition reflects the creative signature of a unique curator.
Ubiquity of Curators
The formal title “curator” is commonly used for art curators or museum curators, but curatorial activity – in its custodial, navigatory and discerning roles – is not limited to these professions. Librarians, for example, have routinely acted as curators of books. Their traditional focus has been directed towards their custodial and navigatory roles, cataloging and preserving books, and helping readers navigate through the vast jungle of published books.
Unlike art curators, who play a key role in organizing art exhibitions, librarians are not the primary organizers of author readings, book fairs or other literary events, which are instead primarily organized by literary magazines, literary agents, publishers or independent bookstores. It remains to be seen whether the literary world will also witness the emergence of librarians as curators of such literary events, similar to what has occurred in the art world. Our local public library occasionally organizes a “Big Read” event for which librarians select a specific book and recommend that the whole community read the book. The librarians then lead book discussions with members of the community and also offer additional reading materials that relate to the selected book. Such events do not have the magnitude of an art exhibition, but they are innovative means by which librarians interact with the community and inspire readers.
One of the most significant curatorial contributions in German literary history was the collection of fairy-tales and folk-tales by the Brothers Grimm (Brüder Grimm or Gebrüder Grimm), Jacob and Wilhelm Grimm. Readers may not always realize how much intellectual effort went into assembling the fairy-tales, many of which co-existed in various permutations depending on the region where the respective tales were being narrated. I own a copy of the German language edition of the “Children's and Household Tales” (Kinder- und Hausmärchen) which contains all their original annotations. These annotations allow the reader to peek behind the scenes and see the breadth of their curatorial efforts, especially their “eye of discernment”. For example, the version of Snow-White that the Brothers Grimm chose for their final edition contains the infamous scene in which the evil Queen asks her mirror, “Mirror, Mirror on the wall, Who is the prettiest in all the land?” She naturally expects the mirror to say that the Queen is the prettiest, because she just finished feasting on what she presumed were Snow-White’s liver and lungs and is convinced that Snow-White is dead. According to the notes of the Brothers Grimm, there was a different version of the Snow-White tale in which the Queen does not ask a mirror, but instead asks Snow-White’s talking pet dog, which is cowering under a bench after Snow-White’s disappearance and happens to be called “Spiegel” (German for “Mirror”)! I am eternally grateful for the curatorial efforts of the Brothers Grimm because I love the symbolism of the Queen speaking to a mirror and because I do not have to agonize over understanding why Snow-White named her pet dog “Mirror” or expect a Disneyesque movie with the title “Woof Woof” instead of “Mirror Mirror”.
The internet is now providing us access to an unprecedented and overwhelming amount of information. Every year, millions of articles, blog posts, images and videos are being published online. Older texts, images and videos that were previously published in more traditional formats are also being made available for online consumption. The book “The Information: A History, a Theory, a Flood” by James Gleick is quite correct in using expressions such as “information glut” or “deluge” to describe how we are drowning in information. Gleick also aptly uses the allegory of the “Library of Babel”, a brilliant short story written by Jorge Luis Borges about an imaginary library consisting of hexagonal rooms that is finite in size but contains an unfathomably large number of books, all possible permutations of sequences of letters. Most of these books are pure gibberish, because they are random sequences of letters, but amidst billions of such books, one is bound to find at least a handful with some coherent phrases. Borges' story also mentions a mythical “Book-Man”, a god-like librarian who has seen the ultimate cipher to the library, a book which is the compendium of all other books. Borges originally wrote the story in 1941, long before the internet era, but the phrase "For every rational line or forthright statement there are leagues of senseless cacophony, verbal nonsense, and incoherency" rings even more true today when we think of the information available on the web.
This overwhelming and disorienting torrent of digital information has given rise to a new group of curators, internet or web curators, who primarily focus on the navigatory and discerning roles of curatorship. Curatorial websites or blogs such as 3quarksdaily, Brainpickings or Longreads comb through mountains of online information and try to select a handful of links to articles, essays, poems, short stories, videos, images or books which they deem to be the most interesting, provocative or inspiring for their readers. They disseminate these links to their readers and followers by posting excerpts or quotes on their respective websites or by using social media networks such as Twitter. The custodial role of preserving online information is not really the focus of internet curators; instead, internet curators are primarily engaged in navigatory and discerning roles. In addition to the emergence of professional internet curatorship through such websites or blogs, a number of individuals have also begun to function as volunteer internet curators and help manage digital information.
Analogous to art curatorship, internet curatorship also requires a significant creative effort. Each internet curator uses individual criteria to create their own collage of information and themes they focus on. Even when internet curators have thematic overlaps, they may still decide to feature or disseminate very different types of information, because the individuals engaged in curatorship have very distinct tastes and subjective curatorial criteria. One curator’s chaff is another curator’s wheat.
Formal Education and Training in Internet Curation
There are currently no formal programs that train people to become internet curators. Most popular internet curators have a broad range of interests, ranging from the humanities, arts and sciences to literature and politics. They use their experience and expertise in these areas to select the best links, which they then pass on to their readers or followers. Some internet curators are open to suggestions from their readers, thus crowd-sourcing their curatorial activity; others routinely browse selected websites or the social media feeds of individuals whom they deem most interesting; still others feed their favorite keywords into search engines to scour the web for intriguing new articles.
Internet curation will become even more important in the coming decades, as the amount of information we amass will likely continue to grow exponentially. Not just individuals, but also corporations and governments will need internet curators who can sift through information and distill it down to manageable levels without losing critical content. In light of this anticipated need, we should ask whether it is time to envision formal training programs that prepare people for future jobs as internet curators. Internet curation is both an art and a science: the art lies in creatively assembling information in a manner that attracts and inspires readers, while the science involves search algorithms that do not just rely on subjective and arbitrary criteria but systematically interrogate the vast amounts of information that are now globally available. A Bachelor’s or Master’s degree program in Internet Curation could conceivably train students in both.
In scientific manuscripts, it is common for scientists to cite the preceding work of colleagues. Other colleagues who provide valuable tools, such as plasmids for molecular biology experiments, are cited in the “Acknowledgements” section of a manuscript. Colleagues whose input substantially contributed to the manuscript or the scientific work are included as co-authors. Current academic etiquette does not necessarily acknowledge the curatorial efforts of scientists who may have nudged their colleagues into a certain research direction by forwarding an important paper that they might have otherwise ignored.
Especially in a world in which meaningful information is becoming one of our most valuable commodities, it might be time to start acknowledging the flux of information that shapes our thinking and our creativity. We are beginning to recognize the importance of people who serve as links in the information chain and help separate meaningful information from the “senseless cacophony”. Perhaps we should therefore acknowledge all the sources of information: not only those who generated it, but also those who manage it or guide us towards it. Such a curatorial credit or Q-credit could be added to the end of an article. It would not only acknowledge the intellectual efforts of the information curators, but could also serve as a curation map, inspiring readers to look at the individual elements in the information chain. Readers would be able to consult the nodes that were part of the information chain (instead of just relying on lone cited references) and choose to take alternate curation paths.
I will try to illustrate a Q-credit using the example of Abbas Raza who pointed me towards a 3quarksdaily discussion of “Orientalism” and an essay by the philosopher Akeel Bilgrami. Even though I had previously read Edward Said’s book “Orientalism”, the profound insights in Bilgrami’s essay made me re-read Edward Said’s book. The Q-credit could be acknowledged as follows:
Q-Credit: Abbas Raza --> The 2008 3Quarksdaily Forum on Occidentalism --> “Occidentalism, the Very Idea: An Essay on Enlightenment and Enchantment” by Akeel Bilgrami published 2008 on 3Quarksdaily.com and 2006 in Critical Inquiry --> Bilgrami identifies five broad themes in Edward Said’s Orientalism
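The chain structure of a Q-credit is regular enough that it could even be recorded and rendered automatically. As a minimal sketch of my own (not part of any existing standard; the function name and node labels are purely illustrative), a curation chain could be stored as an ordered list of nodes and joined with the “-->” separator used above:

```python
# Illustrative sketch: a Q-credit chain modeled as an ordered list of
# curation nodes, rendered in the "A --> B --> C" format shown above.
def render_q_credit(chain):
    """Join the nodes of a curation chain with ' --> ' separators."""
    return "Q-Credit: " + " --> ".join(chain)

nodes = [
    "Abbas Raza",
    "The 2008 3Quarksdaily Forum on Occidentalism",
    "Essay by Akeel Bilgrami",
]
print(render_q_credit(nodes))
```

Because the chain is an ordered list rather than a lone citation, a reader (or a program) could follow or branch from any intermediate node, which is precisely what distinguishes a curation map from a conventional reference.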
The acknowledgement of information flux is already part of the Twitter netiquette. The German theologian Barbara Mack uses her Twitter handle @faraway67 to curate important new articles about history, science, music, photography, linguistics and literature. She sees the role of web curators as similar to that of music conductors, who do not compose original pieces of music but instead enable an audience's access to the original creative work. She says that “web curation is a relatively new field of dealing with information and good curation is an act of creativity which requires dedication and a keen sense for content.” She agrees that curators should indeed be given credit, “not only out of courtesy but to acknowledge their efforts of taking upon the challenge of bringing the vast information the web provides into a handy form for their followers to enjoy.”
Twitter curators such as Barbara Mack use abbreviations such as h/t (hat-tip) or RT (retweet), followed by a Twitter handle, to acknowledge their sources. Contemporary Twitter netiquette suggests that if curated links are of use to followers, those followers should acknowledge the curators' efforts when tweeting the links on.
One challenge that is intrinsic to Twitter (but may in an analogous fashion apply to other social media networks as well) is that each tweet can only contain 140 characters, which presently makes it very difficult to acknowledge the comprehensive curatorial information flux. If I decide to tweet on an interesting article about the philosophy of science, which I found in the Twitter feed of person X, the space limitations may make it impossible for me to give credit to all the preceding members of the information chain which had directed X’s attention to that specific article. The Q-credit system may thus be best suited for acknowledgements at the end of blog posts or articles, but not for social media messaging with strict space limitations.
The Future of Internet Curation
The area of internet curation is still in its infancy and it is very difficult to predict how it will evolve. Managing online information will become increasingly important. Even though such managerial roles may not necessarily carry the title “internet curator”, there is little doubt that managing online information in a meaningful manner is one of the biggest challenges that we will face in the 21st century. I am quite optimistic that we will be able to address this challenge, but the first hurdle is to recognize it.
Image Credit: The Librarian by Giuseppe Arcimboldo (1527–1593)
1. “The Cambridge Handbook of Creativity” (2010) by James C. Kaufman and Robert J. Sternberg --> Chapter 3 “Assessment of Creativity” by Jonathan A. Plucker and Matthew C. Makel --> “Why Isn’t Creativity More Important to Educational Psychologists? Potentials, Pitfalls, and Future Directions in Creativity Research” (2004) by Jonathan A. Plucker et al. in EDUCATIONAL PSYCHOLOGIST, 39(2), 83–96
3. Book review of “The Information” at Brainpickings --> “The Information: A History, a Theory, a Flood” (2011) by James Gleick --> “Library of Babel” by Jorge Luis Borges as an allegory for the information glut
Monday, December 24, 2012
A Universal History of Online Iniquity
by James McGirk
“BREAKING: Confirmed flooding on NYSE. The trading floor is flooded under more than 3 feet of water.” It was a horrid thought, but Shashank Tripathi’s (i.e. Comfortablysmug’s) infamous Hurricane Sandy tweet had panache.
Tripathi mimicked the style of a breaking news tweet perfectly. The image of water sluicing into the New York Stock Exchange was too good to be true. An irresistible nugget of news distilling the potent emotions stirred by the storm: Sorrow for afflicted New Yorkers, fear for the future, the thrill of seeing history unspool in real time, and a dose of snickering glee at the idea of cuff-linked financiers wading through filthy water.
The cruelty and incendiary media appeal of Tripathi’s tweet was reminiscent of another notorious prank: the attack on the Epilepsy Foundation. On March 22, 2008, a horde of eBaum’s World users (a community devoted to online humor) logged onto the Epilepsy Foundation’s online forums, and plastered its pages with blinking graphics.
As despicable as deliberately triggering thousands of epileptic fits or inflaming a vulnerable community during a catastrophe may be, consider how hard it is to shock a contemporary audience with a piece of art or literature. As subversive texts go, these are arguably genuine artistic achievements, thrilling to witness in real time or read about afterwards.
It’s an aesthetic experience Sherrod DeGrippo, an information security expert who founded two of the world’s preeminent repositories of Internet drama, Encyclopedia Dramatica and OhInternet.com, compares to watching reality television. “I think that a lot of what is attractive about Internet drama is the combination of schadenfreude and superiority people feel when looking at it,” says DeGrippo. “Reality TV inspires a lot of the same feelings. The viewer thinks of himself as superior, but when examined, the viewer is obsessively voyeuristic.”
When Tripathi’s identity was revealed, he certainly looked like a real-life Omarosa or Wendy Pepper. Here was a hedge fund analyst who wrote a sex diary for New York Magazine reminiscent of Preppie Murderer Robert Chambers and managed a Republican campaign, spending his free time trying to rile people up during Hurricane Sandy: a despicable man, and deliciously so.
Tripathi apologized for his barrage of misleading tweets. Most of the delinquent denizens of eBaum’s World did not, however. Their anti-social behavior was unapologetically deliberate. They were—to use the correct Internet jargon—trolling.
“Trolling is a lot like graffiti,” writes essayist and Internet grey eminence Paul Graham (he founded Y Combinator). “Graffiti happens at the intersection of ambition and incompetence: people want to make their mark on the world, but have no other way to do it than literally making a mark on the world.”
Graffiti is an apt metaphor. It began as a diffuse, sub-literary phenomenon, and grew to become a permanent part of the global urban experience. To make the leap from anonymous malcontents scratching their names on the walls of Roman prisons to a graffiti “artist” like Jean-Michel Basquiat took millennia and a series of technological leaps and daring appropriations.
In 2003, Christopher “moot” Poole created 4chan, an online image board that would become, according to NYU digital culture and folklore scholar Dr. Whitney Phillips, a “specially demarcated troll space.” There, tens of thousands of anonymous users interact in real time and together create a never-ending flood of nudity and subversive humor. It was the trolling equivalent of the cheap, portable spray-paint can.
“Pretty much all the people you encounter [on 4chan’s /b/ board] are trolls,” says Dr. Phillips. Ethical issues are cast aside. “Everyone (or almost everyone) is aware of the game and consents to playing.” But when 4chan’s users flock together for the Internet equivalent of a Viking raid they forget that “outside these specially demarcated troll spaces, people are NOT aware of the trolling game and therefore are NOT afforded the opportunity to consent. Trolls don't give targets the opportunity to say no, in fact tend to be triggered when they encounter resistance (the common trolling aphorism "your resistance only makes my penis harder" speaks volumes).”
Media attention is like sloshing gasoline on a fire. In a forthcoming academic article, Phillips argues that “trolls and mainstream media outlets, specifically Fox News, are locked in a cybernetic feedback loop predicated upon spectacle.”
Not only does media attention encourage trolls, it infuses their historical moments with images and vocabulary. “Trolls are cultural scavengers, fashioning amusement from that which already exists,” says Dr. Phillips. She describes a 4chan user who successfully trolled Oprah Winfrey’s online message board by posing as a pedophile. Oprah actually read his post aloud on the air:
“Let me read you something posted on our message boards,” she gravely began, “from somebody who claims to be a member of a known pedophile network: He said he does not forgive. He does not forget. His group has over 9000 penises and they’re all... raping... children.”
It was a trolling triumph. The message incorporated recognizable 4chan memes: the official slogan of Anonymous, an online hacktivist collective closely affiliated with 4chan, is “We are Anonymous. We are Legion. We do not forgive. We do not forget. Expect us”; “over 9,000” is one of the board’s most popular memes; and so is anything having to do with pedophilia. A clip containing Oprah’s words was soon spliced with images of various troll memes and circulated.
There is no history on 4chan; nothing is archived. Any image, link, or message posted to the board will soon slip off the board’s front page and vanish unless users continue to “bump” the post or recycle its content.
If 4chan were the only “specially demarcated troll space,” whatever culture was created on it might never stabilize into something more significant. Moments like the Oprah trolling would be lost as soon as 4chan turned its attention elsewhere. So in 2004, 4chan users and their Internet ilk began to use Sherrod DeGrippo’s Encyclopedia Dramatica as an archive for the Internet drama they had witnessed online and the drama they had created themselves.
“The site started as a joke,” says DeGrippo. “A friend had placed an article on Wikipedia, only to have it swiftly deleted. So I threw a quick instance of Mediawiki up on my own server and put the article there. It was intended to just have the one article, but people started adding more and more.”
MediaWiki (the software that Wikipedia uses) lets users simultaneously create and edit content online. This added another dimension to the experience of Internet drama.
“[Contributors to Encyclopedia Dramatica would] view and then create derivative works off of things they claimed to despise or mock,” explains DeGrippo. “Then those new, created artifacts began to take a life of their own and make a story of their own.”
Because items don’t vanish into the ether as soon as they are forgotten about, Encyclopedia Dramatica evolved into something more akin to a wild garden than the primordial ooze of 4chan’s /b/.
There is actually something resembling a coherent voice to the site. “For me, the voice of a lot of ED's content came from a sysop who went by the name of OldDirtyBtard,” says DeGrippo. “He was a British guy living in LA and a friend, very jovial. I preferred to assume everything written was in his accent.” He killed himself in 2010.
The Encyclopedia Dramatica page dedicated to OldDirtyBtard (Sean Carasov, a one-time Beastie Boys tour manager) contains a strange and touching tribute: a copy of his suicide note; a video of his memorial that sadly does not include “the first time in history that got rickrolled by a bagpipe player”; photographs of the man’s tattoos; and a link to his exploits under another moniker, one he used to harass the Church of Scientology (and under which he allegedly poisoned a feral cat he had tamed).
DeGrippo relinquished control of Encyclopedia Dramatica in 2011 (a mirror of the site continues to be updated, and now includes a very unflattering page dedicated to DeGrippo herself). She has since created a second repository, OhInternet, with a more advanced interface that tries to avoid the bloody and obscene “shock” content that infests Encyclopedia Dramatica.
What binds a community like Encyclopedia Dramatica together? “I think people just do things on the Internet until they're not fun anymore,” says DeGrippo. “I ran ED for 7 years, the user base and readership changed on a continuum. People would disappear and come back all the time. I think that's what is appealing about a lot of Internet communities, you can leave them, or they can change, but ultimately it's the same people in the same places.”
Clearly Shashank Tripathi craved community. He left hundreds of comments on New York Magazine’s website and broadcast more than 67,000 tweets to his followers. His Hurricane Sandy tweet may have had panache, but like any clichéd villain, all he really wanted was for someone to pay attention to him.
Monday, July 18, 2011
The Thirty-Third Internet Connection in New Delhi
by James McGirk
I never had a problem with Alaskan Senator Ted Stevens’s oft-mocked remark about the Internet being a series of tubes. I saw it with my own eyes (metaphorically speaking) as a teenager growing up in New Delhi. The Internet was a feed of information that trickled in drip by drip, slowly increasing as we switched our faucets and eventually tapped into the municipal supply. My father was a foreign correspondent, which meant he had to send stories back home to be published. When we left “on assignment” to India he was issued a bag of sophisticated telecommunications equipment. We plugged it in and became early adopters.
Our first modem looked like a cross between a swimming cap, a spider, and a rubber truncheon. There were two cups that stretched over the mouthpiece and receiver of a standard analogue telephone. One contained a microphone and the other a speaker. The modem would whistle and hiss signals into the phone, and listen for responses. It was a crude but robust system, the only thing capable of working on lines that were filled with crackling static and echoes, and may well have been tapped. Entire sentences would be garbled by line noise, or more insidiously the changes could be almost invisible. A pound sterling inserted where a dollar sign once stood.
Dad’s squealing octopus of a modem might have made things easier for everyone else, but it tossed a sabot into the gears of my imagination. Before we had left, he had taken me on a tour of his office and the printing press below. Communications made sense to me afterward. The newspaper was like a factory: an office space above filled with glowing amber terminals and stuttering typewriters and piles of important paper being fed to the machines below. The presses were magnificent, booming and huffing and spattering ink at rolling reams of paper. I could easily imagine the process as an unbroken chain extending across the world, see a pale English editor with a phone clenched between his shoulder and ear, transcribing dad’s story click by click into a typesetting machine to be turned to molten lead, slotted into a drum, dunked in ink and pressed onto fresh newsprint a hundred times a minute.
I had seen computers before. In the past Mom and Dad, who were both journalists, had been issued clunky old Amstrads and Tandys, but those didn’t seem that different from the typesetting machines and dumb terminals back at headquarters. Dad’s new laptop was more like a porthole into a parallel universe than a word processor. It had memory, a temperamental beige lozenge that could be switched on and off with catastrophic consequences. The interface was also different. There were programs and applications nested in one another; navigating around the system you got a sense that you were indeed navigating; traversing an alien logic, and each time you learned how to run some program or subroutine it gave you the most satisfying synaptic kick.
We were terrified of change and subject to the home office’s budgetary whims, which meant upgrades were infrequent. But I was beginning to learn a new vocabulary of clock speeds and RAM, and I wasn’t the only one. We lived beside Nehru Place, an information technology enclave with a crowded bazaar that sold everything from ink stamps to enterprise-level computing solutions. The market’s advertisements provided me with an easy gloss of the state of the art. Each month the market would sprout new billboards touting the latest processors: at first 80286s capable of a blistering 16MHz, then 386s, 486s, and eventually Pentiums. Even the drawings changed. The hand-painted computers and peripherals evolved from cartoonish televisions complete with knobs protruding from curved screens to angular, almost menacing monitors and towers and keyboards containing the correct number of keys.
My first direct encounter with computer ‘telephony’ came from a friend’s dial-up bulletin board system (a.k.a. a BBS). I forget what he called his. No doubt something disproportionately macho. A brief Googling revealed “The POISON Den” and “The TWILIGHT Zone” as the names of two of his contemporaries. I would dial in occasionally but it was more fun to see him run it. Several hours a day he monopolized one of the family phone lines to allow strangers to call and log in. Most visitors were guests and afforded access only to the most banal files and chat boards, while a select few were given special titles and privileges, such as a secret stash of password cracking programs and R-rated pictures of Baywatch star Erika Eleniak. As Sysop, the most exalted of ranks, my friend had access to the entire system, unimaginable power for a 14-year-old boy to wield; and wield it he did, occasionally booting off a lowly user just for the rush.
I started to think of the space inside his BBS, which in real life wasn’t much more than a second computer, as something not too different from Nehru Place. A single road leading into the center that was frequently congested. A ground floor of wares that were accessible to all, even guests, while getting into the towers above was limited to businessmen and others who belonged there. Lording over it all, godlike, was the Sysop, who could tear it all down and build it up again at his whim.
My parents eventually caved and bought us a personal computer (partly to contain my experimentation). It was a 386 running a primitive version of Windows. A photographer friend of my dad’s would troubleshoot the thing for us, spending hours at a time installing new software and removing programs that had gone feral and begun destroying files, such as the ones I bought, photocopied manuals and all, from Nehru Place bootleggers. He tried to keep us up-to-date with the latest technologies, particularly those he didn’t have himself and wanted to play with. Naturally I supported his suggestions. And this was how we were eventually convinced to buy a U.S. Robotics brand 14.4K modem and a pricy account connecting us to the Internet.
Family lore has it that we received the 33rd Internet connection in New Delhi. There are many caveats to this claim: it willfully ignores that the embassies and the Indian government had access years before, and it was likely derived from our account's name, which we got from Videsh Sanchaar Nigam Limited (then India’s national telephone company) and which had the prefix delaac33. If the ‘ac’ stood for ‘account’ then it was indeed possible we had Delhi’s 33rd commercial account. We may also have had the three-hundred-and-thirty-third account, not quite so grand a claim, but an early account all the same.
Our meager connection was text-only. There was a browser called Lynx (which still exists) that launched each expedition from the University of Kansas’ online portal. Images had to be downloaded separately as files, resolved line by line, and were rarely worth the hours it took to retrieve them. The Internet was a totally different experience from the BBS; it felt as wide open as a frontier, or a mountain forest waiting to be foraged. My focus shifted from trying to make sense of the system to hunting and gathering odd bits of information.
I gravitated toward text files, little nuggets of mayhem containing instructions for pipe bombs, cracking locks, spoofing telephone boxes and sabotaging cars. It was all hopelessly out of date, and none of the equipment would have been available in India even if I had tried one of those recipes. I grew out of those silly files quickly, but I remember them well. I came across a nostalgic database of them a few years ago (someone had assembled it for a documentary called ‘Textfiles’). Reading them again, they seemed to represent so much more than their content. They were like flavor crystals; reading them was like accessing the revolutionary DNA of the Internet, a harbinger of what was about to come rumbling down the pipe. Not quite a series of tubes, but glorious in a flickering, monochromatic sort of way.
Monday, June 13, 2011
Writing for Machines
by James McGirk
Writers are anxious about the Internet and all things electronic, as we worry these newfangled ways of entertaining ourselves might someday obviate our own work. The solution, perhaps, lies in understanding and adapting to this new medium. Consuming enough that we can master its complexities and render appealingly intelligent confections for our readers. But who are these readers? Are they different online than they are in print? Some of them aren’t even human. There is a new form of reader browsing the Internet. For this is no longer just the age of mechanical reproduction; we now have to contend with mechanical readers as well.
William Gibson, who coined the term “cyberspace,” imagined it as a mass consensual hallucination rendered as a cityscape, the prominence of each shape on the horizon an index of how much data was passing through a single point; a point which in 1982 a reader might have thought of as a mainframe computer, and which today, nearly thirty years later, we might identify as a web address or site. On Gibson’s Internet, Google would glow the brightest and soar the highest; it would be an Empire State Building to the Internet’s Manhattan. Most users don’t look at the Internet by volume, however; they read it pane by pane, navigating from bookmarks or through searches, feeding keywords into an ‘engine,’ a series of algorithms, to retrieve lists of links to the information they seek. These lists are customized to the user, the results tweaked by the user’s location and previous searches. The more searches you make, the more information about yourself you reveal, and the more customized the experience becomes.
From a content provider’s point of view (as opposed to a more passive content user’s point of view) an ideal Internet browser might render something close to Gibson’s landscape of crystalline data sculptures, were there a way to capture such information in real time. But commercial users would rather see traffic than the mere throughput of bits and bytes. Who consumes what information, when, and why is much more important to commerce than mere bandwidth. Though online sales have grown to become big business, the Internet remains a popularity contest. The real currency of the online world is attention. Being able to read the flow of attention online would mean mastering it, and reaping the ad money that comes along with that attention. But instead of trying to follow where everyone is going all at once, content providers are attempting to clone their readers’ minds.
As you navigate the Internet, the Internet – which is to say the entities using the Internet – navigates you. This isn’t a benign process. They want to learn as much about you as possible to snag your attention; not only by viewing content, but by diverting your time into loops of advertisements, and possibly even pushing you through a point-of-sale and taking your money directly. They do so by gleaning information about you: where you go, what you search for, what type of computer you are using. Websites leave small tracking files on your computer called cookies, and each of these transmits data back to home base.
Keywords (also known as index terms) are the most interesting and valuable traces left by users. Cookies record the terms users searched for to arrive at a site. An entire industry has sprung up to interpret these keywords, and another to optimize content online so it can be better read by search engines (a practice called Search Engine Optimization). The data they gather is a crude simulacrum of their users: an inscription of their desires for an instant, almost like a section of brain tissue. A clue. And, en masse, a hologram of their users’ collective desires.
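The mechanics here are simple enough to sketch. In the era this essay describes, search engines passed a visitor's query terms inside the referring URL, and a site could harvest them with a few lines of code. A minimal illustration in Python; the referrer URLs below are invented, and the parameter names are merely typical, not drawn from any particular analytics product:

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

def extract_keywords(referrer: str) -> list:
    """Pull search terms out of a referring URL's query string."""
    query = parse_qs(urlparse(referrer).query)
    # Search engines historically passed the query under 'q' or 'p'.
    for param in ("q", "p"):
        if param in query:
            return query[param][0].lower().split()
    return []

# A hypothetical referrer log for one day of traffic.
referrers = [
    "https://search.example.com/search?q=content+farm+seo",
    "https://search.example.com/search?q=what+readers+want",
    "https://other.example.com/find?p=content+farm",
]

# The aggregate tally is the "hologram" of reader desire described above.
demand = Counter()
for r in referrers:
    demand.update(extract_keywords(r))

print(demand.most_common(2))  # [('content', 2), ('farm', 2)]
```

Stack up enough days of such tallies and the site "knows" what its readers want before they ask, which is precisely the simulacrum the content farms trade on.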
All writers crave attention and respond to their readers’ desires. Charles Dickens used live audiences as focus groups for his serialized fiction. Newspapers and magazines have always had to respond to circulation numbers. Electronic texts simply speed up the process. Text online can be altered immediately. There are even advanced analytics packages that use keywords and cookies to anticipate what readers want and automatically generate ‘content’ for users in response to what they ‘perceive’ readers as wanting. Other companies use similar algorithms to assign stories to human beings. When you hear the term content farms, that’s what’s going on.
Google tweaked its search algorithms a few months ago, which trimmed back the custom-generated content that had begun to choke its search results like kudzu. But beyond the first or second page of results, it comes sneaking back and you will still find page after page of sites that copy the content of other sites, or ones loaded with all the correct terminology of whatever it is you seek, but arrayed in such a way that these phrases convey little or no meaning. As replications of our desire, these simulacra are incomplete. It would take an infinite amount of data (and a correspondingly infinite amount of time to collect this data) to accurately model a human being’s wants and desires. But machines are getting closer and closer.
There are gaps between reader and author in a traditional text too. Enormous ones. Between the platonic ideal an author holds in his or her head, the text he or she extrudes into type, and the reuptake and processing that takes place in a reader’s head, there is plenty of room for strange, unexpected effects to creep in. William Gibson described the cyberspace generated by a child’s calculator as a grey infinity, utterly empty but for a string of a few basic arithmetic equations (slim structures of liquid crystal, one imagines). This unnameable sea of grey emptiness is not neutral. It is more of a field, something we project into and allow things to assume shape within. And, distended from the platonic ideal and warped by exterior forces, these things become strange. Even arithmetic has its unexpected, subjective aspects. Many a calculator screen has been reversed to spell mild profanities.
Knowledge builds on memory, and all information builds off what we already know. Reading works by drawing parallels with memories, essentially unpacking an archive into that grey arithmetic field mentioned above and letting it take new forms. The way a machine reads is, in this respect, no different. Software has an archive of its own, a database that it is constantly adding to or subtracting from. It 'reads' by comparing its archive to a text, and then updating itself. An author can access this archive with his or her text; and the more sophisticated the software is, the more he or she can manipulate it, perhaps even creating an aesthetic experience.
Literary forms are beginning to emerge in response to automated reading systems, searches, and databases. Online, an era somewhat akin to the pamphlet-strewn amateurism of 18th-century America is in bloom. The most exotic forms can be found on the Internet’s wild fringe, in its anonymous and pseudo-anonymous chat sites. Here there is a frantic economy of monikers, memes and spoofed identities. In online forums such as the semi-anonymous Something Awful, users compete to create the catchiest, most innovative forms – most often an evolution of an earlier idea, name or other fragment. The best innovators become famous within their tiny little spheres. Other forums are anonymous and ephemeral – the most famous being 4chan’s notorious /b/ ‘Random’ board – where the only recognition earned is the sheer longevity of a creation. A post can only survive as long as it is replied to. Then it is gone forever.
The best memes were once charted on the now-defunct Encyclopedia Dramatica. Now there is no reason at all to create but the sheer artistic thrill, although ‘board lore’ has developed a concept somewhat akin to ‘duende’: a dark, nihilistic reward in the form of amusement known as ‘lulz.’
The evolution of the online literary form could well come from manipulating these mysterious semantic mechanicals. They offer the opportunity to make writing dangerous again. With the proper keywords, information is taken up into automatic readers belonging to some very interesting entities, to the point where there can be real-world consequences. As a way of experimenting with this form, I created a series of posts with keywords that I imagined might appeal to some of the more peculiar gleaners out trolling for information online. I posted lists of oil rigs and information about espionage, created a consulting company specializing in complex shipping orders in the Arabian Sea, wrote about electronic warfare, and laced my work with other ‘edible’ keywords. I received visits from hedge funds, multinational banking concerns, the Department of Defense, oil companies, environmental organizations, the Pakistani government, the Kuwaiti government, the Iranian government, the Russian government, an unacknowledged US military facility, and a few mysterious hits from ‘Cabin John, Maryland’ (a park across the river from CIA headquarters).
I don’t think my posts ever stirred more than a few pixels. All I did was conjure another layer of anxiety about the online world, but for a writer paranoia is far better stuff than anxiety over obsolescence.
Monday, May 30, 2011
The Elusive City
I could tell you how many steps make up the streets rising like stairways, and the degree of the arcades' curves, and what kind of zinc scales cover the roofs; but I already know this would be the same as telling you nothing.
Italo Calvino, Invisible Cities, 1974, p. 4
In the headlong rush to lead us to the promised land of the “Smart City” one finds a surprising amount of agreement between the radically different constituencies of public urban planners, global corporations and scruffy hackers. This should be enough to make anyone immediately suspicious. Often quite at odds, these entities – and it seems, most anyone else – contend that there is no end to the benefits associated with opening the sluices that hold back a vast ocean’s worth of data. Nevertheless, the city’s traditional imperviousness to measurement sets a high bar for anyone committed to its quantification, and its ambiguity and amorphousness will present a constant challenge to the validity and ownership of the data and the power thereby generated.
We can trace these intentions back to the notoriously misinterpreted statement allegedly made by Stewart Brand, that “information wants to be free.”* Setting aside humanity's talent to anthropomorphize just about anything, we can nevertheless say that urban planners indeed want information to be free, since they believe that transparency is an easy substitute for accountability; corporations champion such freedom since information is increasingly equated with new and promising revenue streams and business models; and hackers believe information to be perhaps the only raw material required to forward their own agendas, regardless of which hat they happen to be wearing.
All three groups enjoy the simple joys of strictly linear thinking: that is to say, the more information there is, the better off we all are. But before we allow ourselves to be seduced by the resulting reams of eye candy, let us consider the anatomy of a successful exercise in urban visualization.
A classic example of the use of layered mapping to identify previously unknown correlations occurred in London in 1854. An epidemic of cholera had been raging in the streets of London, and Dr. John Snow was among the investigators attempting to pinpoint its causes. At the time, the medical establishment considered cholera transmission to be airborne, while Snow had for some time considered it to be waterborne. By carefully layering the cholera victims’ household locations with the location of water pumps, Snow was able to make the clear case that water was in fact cholera’s vector.
This anecdote is by no means unknown, having become a favourite warhorse of epidemiologists and public health advocates; it has now been gladly co-opted by information technology aficionados as an example of a proto-geographic information system (GIS). However, it is worth a further unfortunate mention, as described by Martin Frost, that:
After the cholera epidemic had subsided, government officials replaced the Broad Street pump handle. They had responded only to the urgent threat posed to the population, and afterwards they rejected Snow's theory. To accept his proposal would be indirectly accepting the oral-fecal method of disease transmission, which was too unpleasant for most of the public.
Thus even the starkest illuminations by data may yet find little purchase among the policymakers for whom they are ultimately intended.
Another point worth mentioning about Snow’s discovery is that he found exactly the result he was seeking. He was, in fact, testing a hypothesis, not engaging in a cavalier quest for serendipity. The linchpin of the exercise’s success was the fact that Snow was mapping not just the street plan, but also the locations of the shallow wells. The map did not include any of the other aspects of urban infrastructure, which might have obfuscated the sought-after relationship. On the other hand, without including the wells, what might the map have taught the health authorities? That Broad Street required quarantining?
Even more importantly, the good Dr. Snow put down his quill and went into the field, where he was able to interview residents and understand how the deaths that were further afield of the contaminated pump were in fact connected to it: the residents simply considered it to be better water, and, much to their misfortune, considered the extra effort to go to a more distant well to be worth the trouble.
Several conclusions should be clear from this exceedingly elegant (and therefore admittedly rare) result: 1) It helps to know what it is you are looking for; and 2) The initial hypotheses indicated by the data can only be validated by field-level observation and correlation. These traits – falsifiability and reproducibility – are two hallmarks of the scientific method. Armchair technologists need not apply.
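The layering Snow performed is easy to restate computationally: assign each recorded death to its nearest pump and see where the counts cluster. A small sketch of the idea, with invented grid coordinates standing in for Snow's actual survey data:

```python
import math
from collections import Counter

# Invented coordinates: three pumps and a handful of cholera deaths.
pumps = {
    "Broad Street": (0, 0),
    "Rupert Street": (5, 5),
    "Bridle Lane": (-4, 3),
}
deaths = [(0.5, 0.2), (-0.3, 0.8), (1.1, -0.4), (4.8, 5.2), (0.2, 0.3)]

def nearest_pump(point):
    """Layer the two maps: which pump is closest to this household?"""
    return min(pumps, key=lambda name: math.dist(pumps[name], point))

# Tallying deaths by nearest pump makes the cluster jump out.
tally = Counter(nearest_pump(d) for d in deaths)
print(tally.most_common())
```

Note that the sketch embodies both of the conclusions above: the pumps are in the data only because the hypothesis put them there, and the outliers in the tally are exactly the cases Snow had to resolve by interviewing residents in the field.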
So how replicable is Snow's example? In this "scientific" sense, Richard Saul Wurman, founder of the TED Conference and all-star curmudgeon, questions our ability even to understand what a “city” is. For example, he posits that we do not have a common language to describe the size of a city, how one city relates to another, or what an “urban area” is. If there are six different ways of describing Tokyo, and those six ways lead to boundaries variously encompassing populations of 8.5 million to 45 million people, which is the “real Tokyo,” and of what use is the concept of a “border”? We have no unified way of showing density, no common techniques for collecting or displaying information, and no way of showing a boundary. We have no common way of talking about a city. Accordingly, for Wurman, the consequence is that ideas cannot build on one another, and urbanists forego the benefits of the scientific method. If we consider Snow’s process, however, the map was a means to an end, playing a supporting role in the scientific discourse, and was never meant to be anything more than that.
Of what use, then, is the deluge of data, and the pretty pictures that we draw from it? One can find endless examples on the Web of beautiful visualizations derived from datasets that are either partial or self-selected, with results that range from the obvious to the quixotic to the inscrutable. During the Cognitive Cities conference, held in Berlin in February of this year, more than one presenter was asked a question that went more or less along the lines of “Well, that is very nice but it does not tell me anything I don’t know already. What has surprised you about your findings?”
While the end results may oftentimes be trivial, and the lack of Wurman’s standards of measurement worthy of our best Gallic shrug, there is far more unease concerning how and where urban data is being generated, and for whose benefit. At the aforementioned Cognitive Cities conference, Adam Greenfield delivered a powerful keynote that struck a stridently skeptical note toward the various technologies rapidly contributing to the manifestation of the networked city. He went through an increasingly disturbing catalogue of “public objects” whose technologies harvest our participation in public space, creating rich data flows for the benefit of advertisers, police, or other bodies, generally without our knowledge.
For example, certain vending machines in Japan now have a purely touch-screen interface, but the available selections are chosen by algorithms based on the machine's sensing of the age and gender of the person standing before it. Therefore, I might see the image of a Snickers bar while you might see the image of a granola bar. The ensuing selections help to refine the algorithm further, but a great deal of agency has been removed from the consumer; or, in the words of Saskia Sassen, we have moved from “sensor to censor”.
Even in initiatives where the public’s initial voice is sought and respected, technology has a way of subverting its alleged masters. Greenfield documents how residents of a New Zealand city voted in a public referendum to allow the installation of closed circuit TV (CCTV) cameras for the purposes of monitoring traffic and thereby increasing pedestrian safety. It was an unobjectionable request, and the referendum passed decisively. However, a year later, the vendor offered the city government an upgraded software package, which included facial recognition functionality. The government purchased the upgrade and installed it without any further consultation with the public, bringing to Greenfield’s mind Lawrence Lessig’s axiom “Code is Law:”
...the invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible. The struggle in that world will not be government’s. It will be to assure that essential liberties are preserved in this environment of perfect control. (Lessig, pp. 4–5)
Greenfield’s remedy to make public objects play nicely is problematic, however; his requirement for “opening the data” starkly contradicts significant economic trends. As a simple example, it is doubtful that advertisers will do anything but fight tooth and nail to keep their data proprietary, and given the growing dependence municipalities have on revenue generated by private advertising in public spaces, it is difficult to see the regulatory pendulum swinging Greenfield’s way.
Instead, we see a further complexification of the terms of engagement. Consider the popular iPhone/Android application iSpy, which allows users to access thousands of public CCTV cameras around the world. In many cases, the user can even control the camera from his or her phone touchpad, zooming and panning for maximum pleasure. In this sense, at least, we have succeeded in recapturing aspects of the surveillance society and recasting them as a newly constituted voyeurism.
And yet, there are signs that the radical democratization of data generation is alive and well. Consider Pachube, a site devoted to aggregating myriad varieties of sensor data. Participants can install their own sensors, e.g., a thermometer or barometer, follow some fairly simple instructions to digitize the data feed and connect it to the Internet, and then aggregate or “mash” these results together to create large, distributed sensory networks that contribute to the so-called “Internet of things.” Lest this seem merely a pleasant hobby, note the hard data generated by the Pachube community built around sensing radiation emitted during the Fukushima nuclear disaster (and contrast it with the misinformation spread by the Japanese government itself).
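The "fairly simple instructions" amounted to little more than a web request: Pachube exposed a REST-style API to which a participant could push readings tagged with a feed identifier. The sketch below is illustrative rather than a faithful reproduction of Pachube's actual interface — the feed ID, API key, endpoint URL, and header name are placeholders standing in for whatever credentials the service issued.

```python
# Illustrative sketch of digitizing a home sensor and pushing readings
# to a Pachube-style feed. The endpoint, feed ID, API key, and header
# name are all placeholders, not the service's real values.
import urllib.request

API_KEY = "YOUR_API_KEY"   # placeholder credential issued by the service
FEED_ID = 12345            # placeholder feed identifier

def make_update_request(readings):
    """Build (but do not send) an HTTP PUT uploading readings as CSV.

    `readings` maps a datastream name (e.g. "temperature") to its
    current value; each becomes one "name,value" line in the body.
    """
    body = "\n".join(f"{stream},{value}" for stream, value in readings.items())
    return urllib.request.Request(
        url=f"https://api.example-sensors.net/v2/feeds/{FEED_ID}.csv",
        data=body.encode("utf-8"),
        method="PUT",
        headers={"X-ApiKey": API_KEY, "Content-Type": "text/csv"},
    )

# Assemble an update carrying two readings from hypothetical sensors:
req = make_update_request({"temperature": 21.4, "radiation_cpm": 18})
# urllib.request.urlopen(req) would transmit it, given a real endpoint and key.
```

The point of the sketch is how low the barrier is: anything that can issue an HTTP request — a laptop, a hobbyist microcontroller — can join the distributed sensory network, which is precisely what made the Fukushima radiation-mapping community possible.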
The broader point worth emphasizing is that communities appropriate and aggregate sensor data to serve specific purposes, and when these purposes are accomplished these initiatives are simply abandoned. No committee needs to publish a final report; recommendations are not made to policymakers. There is no grandiose flourish, but rather the passing of another temporary configuration of hardware, software and human desire, sinking noiselessly below the waves of the world’s oceans of data.
Cities are and have always been messy and defiantly unquantifiable. Because of this – and not despite it – they are humanity’s most enduring monuments. In this context, our interventions do not promise to amount to much. Rather, they may be best kept targeted and temporary, indifferent to a broader success that would depend on the difficult work of transcending context. Should it surprise us that cities, which manage to outlast monarchs, corporations and indeed the nations that spawn them, are ultimately indifferent to our own attempts to explicate and quantify them? And, upon embarking on an enterprise of dubious value and even more dubious certainty, are we not perhaps better off simply asking, "What difference does a difference make?" and acting accordingly?
* Brand’s actual statement was “Information wants to be free. Information also wants to be expensive. Information wants to be free because it has become so cheap to distribute, copy, and recombine---too cheap to meter. It wants to be expensive because it can be immeasurably valuable to the recipient. That tension will not go away. It leads to endless wrenching debate about price, copyright, 'intellectual property', the moral rightness of casual distribution, because each round of new devices makes the tension worse, not better.” Viewed in its entirety, the statement leaves really very little to disagree with. We should add that, since it was originally formulated around 1984, it has aged extremely well.