The Elusive City

by Misha Lepetic

I could tell you how many steps make up the streets rising like stairways, and the degree of the arcades’ curves, and what kind of zinc scales cover the roofs; but I already know this would be the same as telling you nothing.

Italo Calvino, Invisible Cities, 1974, p. 4

In the headlong rush to lead us to the promised land of the “Smart City,” one finds a surprising amount of agreement between the radically different constituencies of public urban planners, global corporations and scruffy hackers. This should be enough to make anyone immediately suspicious. Often quite at odds, these entities – and, it seems, most anyone else – contend that there is no end to the benefits associated with opening the sluices that hold back a vast ocean’s worth of data. Nevertheless, the city’s traditional imperviousness to measurement sets a high bar for anyone committed to its quantification, and its ambiguity and amorphousness will present a constant challenge to the validity and ownership of the data and the power thereby generated.

We can trace these intentions back to the notoriously misinterpreted statement by Stewart Brand that “information wants to be free.”* Setting aside humanity’s talent for anthropomorphizing just about anything, we can nevertheless say that urban planners indeed want information to be free, since they believe that transparency is an easy substitute for accountability; corporations champion such freedom, since information is increasingly equated with new and promising revenue streams and business models; and hackers believe information to be perhaps the only raw material required to forward their own agendas, regardless of which hat they happen to be wearing.

All three groups enjoy the simple joys of strictly linear thinking: that is to say, the more information there is, the better off we all are. But before we allow ourselves to be seduced by the resulting reams of eye candy, let us consider the anatomy of a successful exercise in urban visualization.

A classic example of the use of layered mapping to identify previously unknown correlations occurred in London in 1854. An epidemic of cholera had been raging in the city’s streets, and Dr. John Snow was among the investigators attempting to pinpoint its cause. At the time, the medical establishment considered cholera transmission to be airborne, while Snow had for some time considered it to be waterborne. By carefully layering the cholera victims’ household locations with the locations of water pumps, Snow was able to make a clear case that water was in fact cholera’s vector.
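The layering itself is simple enough to sketch in a few lines of code: treat the deaths and the pumps as two layers of coordinates, attribute each death to its nearest pump, and see where the counts pile up. The coordinates and pump names below are invented for illustration; this is a toy spatial join, not Snow’s data or method.

```python
from collections import Counter
from math import hypot

# Two "layers": cholera deaths and water pumps, as (x, y) map coordinates.
# All coordinates and pump names are invented for illustration only.
deaths = [(2.1, 3.4), (2.3, 3.1), (2.0, 3.6), (5.8, 1.2), (2.2, 3.3)]
pumps = {"Broad St": (2.2, 3.3), "Rupert St": (5.5, 1.0), "Warwick St": (8.0, 7.5)}

def nearest_pump(point, pumps):
    """Return the name of the pump closest to a given death location."""
    return min(pumps, key=lambda name: hypot(point[0] - pumps[name][0],
                                             point[1] - pumps[name][1]))

# Overlaying the two layers amounts to a spatial join: each death is
# attributed to its nearest pump, and the counts expose the cluster.
attribution = Counter(nearest_pump(d, pumps) for d in deaths)
print(attribution.most_common())  # e.g. [('Broad St', 4), ('Rupert St', 1)]
```

The point of the overlay is exactly the clustering that Snow saw by eye: most deaths attach themselves to a single pump.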

This anecdote is by no means unknown, having become a favourite warhorse of epidemiologists and public health advocates; it has now been gladly co-opted by information technology aficionados as an example of a proto-geographic information system (GIS). However, a further, unfortunate coda is worth mentioning, as described by Martin Frost:

After the cholera epidemic had subsided, government officials replaced the Broad Street pump handle. They had responded only to the urgent threat posed to the population, and afterwards they rejected Snow’s theory. To accept his proposal would be indirectly accepting the oral-fecal method of transmission of disease, which was too unpleasant for most of the public.

Thus even the starkest illuminations by data may yet find little purchase among the policymakers for whom they are ultimately intended.

Another point worth mentioning about Snow’s discovery is that he found exactly the result he was seeking. He was, in fact, testing a hypothesis, not engaging in a cavalier quest for serendipity. The linchpin of the exercise’s success was that Snow mapped not just the street plan, but also the locations of the shallow wells. The map did not include any other aspects of urban infrastructure, which might have obfuscated the sought-after relationship. On the other hand, without including the wells, what might the map have taught the health authorities? That Broad Street required quarantining?

Even more importantly, the good Dr. Snow put down his quill and went into the field, where he was able to interview residents and understand how the deaths farther afield from the contaminated pump were in fact connected to it: those residents simply considered its water to be better and, much to their misfortune, thought the extra walk to the more distant well worth the trouble.

Several conclusions should be clear from this exceedingly elegant (and therefore admittedly rare) result: 1) It helps to know what it is you are looking for; and 2) The initial hypotheses indicated by the data can only be validated by field-level observation and correlation. These traits – falsifiability and reproducibility – are two hallmarks of the scientific method. Armchair technologists need not apply.

So how replicable is Snow’s example? In this “scientific” sense, Richard Saul Wurman, founder of the TED Conference and all-star curmudgeon, questions our ability even to understand what a “city” is. For example, he posits that we have no common language to describe the size of a city, how one city relates to another, or what an “urban area” is. If there are six different ways of describing Tokyo, and those six ways lead to boundaries variously encompassing populations of 8.5 million to 45 million people, which is the “real Tokyo,” and of what use is the concept of a “border”? We have no unified way of showing density, no common means of collecting information, no shared display techniques, and no way of showing a boundary. In short, we have no common way of talking about a city. For Wurman, the consequence is that ideas cannot be built on one another, and urbanists forgo the benefits of the scientific method. However, if we consider Snow’s process, the map was a means to an end, playing a supporting role in the scientific discourse; it was never meant to be anything more than that.

Of what use, then, is the deluge of data, and the pretty pictures that we draw from it? One can find endless examples on the Web of beautiful visualizations derived from datasets that are either partial or self-selected, with results that range from the obvious to the quixotic to the inscrutable. During the Cognitive Cities conference, held in Berlin in February of this year, more than one presenter was asked a question along the lines of “Well, that is very nice, but it does not tell me anything I don’t know already. What has surprised you about your findings?”

***

While the end results may oftentimes be trivial, and the lack of Wurman’s standards of measurement worthy of our best Gallic shrug, there is far more unease concerning how and where urban data is being generated, and for whose benefit. At the aforementioned Cognitive Cities conference, Adam Greenfield delivered a powerful keynote that struck a stridently skeptical note towards the various technologies rapidly contributing to the manifestation of the networked city. He went through an increasingly disturbing catalogue of “public objects” whose technologies harvest our participation in public space, creating rich data flows for the benefit of advertisers, police or other bodies, generally entirely without our knowledge.

For example, certain vending machines in Japan now have a purely touch-screen interface, and the items on offer are selected by an algorithm based on the machine’s sensing of the age and gender of the person standing before it. Therefore, I might see the image of a Snickers bar while you might see the image of a granola bar. The ensuing selections help to refine the algorithm further, but a great deal of agency has been removed from the consumer, or, in the words of Saskia Sassen, we have moved from “sensor to censor”.
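The mechanics need not be elaborate. As a deliberately crude sketch (the vendors’ actual algorithms are proprietary, so every category and rule below is invented), the display logic might amount to nothing more than a tag filter:

```python
# A crude sketch of demographic filtering; the real vending-machine algorithms
# are proprietary, so the categories and rules here are invented placeholders.
CATALOGUE = [
    {"item": "Snickers bar", "tags": {"male", "young"}},
    {"item": "granola bar",  "tags": {"female", "adult"}},
    {"item": "green tea",    "tags": {"adult", "senior"}},
]

def items_to_display(estimated_gender: str, estimated_age_band: str) -> list:
    """Offer only items whose tags overlap with what the camera thinks it sees."""
    profile = {estimated_gender, estimated_age_band}
    return [row["item"] for row in CATALOGUE if row["tags"] & profile]

# The sensed profile, not the consumer, decides what is even on the menu.
print(items_to_display("male", "young"))    # ['Snickers bar']
print(items_to_display("female", "adult"))  # ['granola bar', 'green tea']
```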

Even in initiatives where the public’s initial voice is sought and respected, technology has a way of subverting its alleged masters. Greenfield documents how residents of a New Zealand city voted in a public referendum to allow the installation of closed circuit TV (CCTV) cameras for the purposes of monitoring traffic and thereby increasing pedestrian safety. It was an unobjectionable request, and the referendum passed decisively. However, a year later, the vendor offered the city government an upgraded software package, which included facial recognition functionality. The government purchased the upgrade and installed it without any further consultation with the public, bringing to Greenfield’s mind Lawrence Lessig’s axiom “Code is Law:”

…the invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible. The struggle in that world will not be government’s. It will be to assure that essential liberties are preserved in this environment of perfect control. (Lessig, pp. 4-5)

Greenfield’s remedy to make public objects play nicely is problematic, however; his requirement for “opening the data” starkly contradicts significant economic trends. As a simple example, it is doubtful that advertisers will do anything but fight tooth and nail to keep their data proprietary, and given the growing dependence municipalities have on revenue generated by private advertising in public spaces, it is difficult to see the regulatory pendulum swinging Greenfield’s way.

Instead, we see a further complexification of the terms of engagement. Consider the popular iPhone/Android application iSpy, which allows users to access thousands of public CCTV cameras around the world. In many cases, the user can even control the camera from his or her phone touchpad, zooming and panning for maximum pleasure. In this sense, at least, we have succeeded in recapturing aspects of the surveillance society and recasting them as a newly constituted voyeurism.

And yet, there are signs that the radical democratization of data generation is alive and well. Consider Pachube, a site devoted to aggregating myriad varieties of sensor data. Participants can install their own sensors, e.g., a thermometer or barometer, follow some fairly simple instructions to digitize the data feed and connect it to the Internet, and then aggregate or “mash” these results together to create large, distributed sensor networks that contribute to the so-called “Internet of things.” Lest one consider this merely a pleasant hobby, consider the hard data being generated by the Pachube community that has formed around sensing the radiation released during the Fukushima nuclear disaster (and contrast it with the misinformation spread by the Japanese government itself).
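The “digitize the data feed and connect it” step really is that modest. The sketch below shows the idea under stated assumptions: the endpoint URL, feed id and API-key header are placeholders rather than the literal Pachube API, and the thermometer read is a stub.

```python
import json
import time
import urllib.request

# A minimal sketch of pushing a sensor reading to an aggregation service.
# The endpoint, feed id and header below are placeholders, not the literal
# Pachube API; real credentials and datastream names would replace them.
FEED_URL = "https://api.example.com/v2/feeds/12345/datastreams/temperature"
API_KEY = "YOUR-API-KEY"  # hypothetical credential

def read_thermometer() -> float:
    """Stand-in for a real sensor driver; replace with actual hardware I/O."""
    return 21.5

def push_reading(value: float) -> None:
    payload = json.dumps({
        "at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "value": str(value),
    }).encode("utf-8")
    request = urllib.request.Request(
        FEED_URL, data=payload, method="PUT",
        headers={"X-ApiKey": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # a 2xx status means the datapoint was accepted

if __name__ == "__main__":
    while True:
        push_reading(read_thermometer())
        time.sleep(60)  # one reading per minute is plenty for a hobby feed
```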

The broader point worth emphasizing is that communities appropriate and aggregate sensor data to serve specific purposes, and when these purposes are accomplished, these initiatives are simply abandoned. No committee needs to publish a final report; recommendations are not made to policymakers. There is no grandiose flourish, but rather the passing of another temporary configuration of hardware, software and human desire, sinking noiselessly below the waves of the world’s oceans of data.

Cities are and have always been messy and defiantly unquantifiable. Because of this – and not despite it – they are humanity’s most enduring monuments. In this context, our interventions do not promise to amount to much. Rather, these interventions may be best conceived as targeted, temporary and indifferent to any broader success, which would in any case depend on the difficult work of transcending context. Should it surprise us that cities, which manage to outlast monarchs, corporations and indeed the nations that spawn them, are ultimately indifferent to our own attempts to explicate and quantify them? And, upon embarking on an enterprise of dubious value and even more dubious certainty, are we not perhaps better off simply asking “What difference does a difference make?” and acting accordingly?

* Brand’s actual statement was “Information wants to be free. Information also wants to be expensive. Information wants to be free because it has become so cheap to distribute, copy, and recombine—too cheap to meter. It wants to be expensive because it can be immeasurably valuable to the recipient. That tension will not go away. It leads to endless wrenching debate about price, copyright, ‘intellectual property’, the moral rightness of casual distribution, because each round of new devices makes the tension worse, not better.” Viewed in its entirety, the statement leaves very little to disagree with. We should add that, since it was originally formulated around 1984, it has aged extremely well.