2011-11-14

The following is a series of short posts WIRED commissioned me to write as part of their “Change Accelerators” promotion for BMW. Due to some infelicities of site design, I had a lot of people ask me if the posts would be consolidated anywhere, or otherwise reposted in a way that simplified the process of reading them as a continuous thread of argument. Here they are, then: formatted as a single post, but otherwise preserving the transitions and other artifacts of serialization.

If you’re familiar with my work, there’s not, frankly, likely to be a hell of a lot new here. You’ve heard me say this all before, generally in more detail and depth. Nevertheless, for those completists among you who wanted to see what I had to say, but didn’t want to wrestle with the original framing…here you go.

One final note: the content here closely corresponds with the basic talk I’ve been giving over the last few months (at PICNIC in Amsterdam, Strelka in Moscow, and at GSAPP the other day). If you missed any of those occasions, you can get a pretty strong flavor of what you would have heard me say here. I hope you find it useful.

Monday.

My name is Adam Greenfield, and I’m the founder and managing director of a New York City-based design practice called Urbanscale, which is dedicated to design “for networked cities and citizens.”

I understand if this description causes you to scratch your head a little. You’re probably familiar, though, with the notion of the “smart city,” and the idea that ubiquitous information technology is transforming the way humanity designs, understands and lives its urban settlements. It’s fair to say that this is the domain we work in at Urbanscale.

The interest in what happens at the intersection of the urban and the technological is natural — and possibly even inevitable, given the convergence of two seemingly ineluctable trends. The first is the ongoing urbanization of our planet. There’s an oft-quoted observation from the statisticians of the United Nations Population Division that the end of 2008 marked the first moment in human history at which more than half of us lived in cities. In the wake of this finding, it’s reasonable to argue that henceforth any consideration of the human is necessarily a consideration of the urban…and vice versa. We are apparently a citying species.

At this same moment in time, we see an ever-greater proportion of the objects, surfaces and relations we encounter in our everyday experience of these cities colonized by information technology. Increasingly, we live in places where thoroughly ordinary things like buses, recycling bins, and parking spaces are instrumented with embedded sensors, where nearly everybody walking down the street carries a device that is nothing less than an aperture onto the global network (and an interface to whatever functionality is connected to it).

Somewhere in the merging of these two tendencies is the very potent idea that the environment in which the majority of twenty-first century humanity lives can consciously be reimagined as a platform for computational applications and services — as a “smart city.”

As it happens, though, we don’t actually use this phrase in our practice, nor do we entirely endorse many of the assumptions that are bound up in it. What I’d like to share with you over the next few days is an accounting of the reasons why. I’m going to be challenging some of the orthodoxies that have already cropped up in the short life of this idea, some of the failures of imagination that are preventing us from making the best possible use of networked technology in the cities of this urban century. And together, maybe we can grasp some of the more radical potential we see in the space.

Tuesday.

Yesterday we discussed the increasing prominence of rhetoric around something called the “smart city.” As it’s generally described — and, increasingly, built and delivered — this is a place in which the buildings, streets, infrastructural elements and other aspects of the built environment have been equipped with embedded sensors. The flow of water through a city’s pipes, traffic through its streets, and people through its public spaces is mapped and modeled. Information derived from the widest possible array of municipal agencies and activities — applications for building permits, visits to drop-in clinics, restaurant health-and-cleanliness inspections — is gathered and subjected to computational scrutiny. The cleverer versions of this even use sentiment analysis applied to geocoded posts on Twitter to assess the collective mood of a place. The intention is to make every unfolding process of the city visible, to render the previously opaque or indeterminate not merely knowable, but known — and actionable.

There’s probably no better current example of this tendency than the “intelligent operations center” IBM’s Smarter City unit built for the city of Rio de Janeiro, billed as a “citywide monitoring and response-management system.” This is municipal government reimagined as some combination of automotive dashboard and war room, with live data used to direct and inform the disposition of a city’s available resources in something close to real time. It’s not a terrible idea, and it has perfectly honorable antecedents — notably the Cybersyn operations network cyberneticist Stafford Beer built for the Chilean government of Salvador Allende between 1970 and 1973 (!).

But there are certain problems with this approach, problems that as far as I can see are unacknowledged in any of the hype around the project. For one thing, any data gathered by a grid like the one IBM envisioned in Rio is never “just” the data, not at any point a neutral, objective quantity. As Laura Kurgan — director of the Spatial Information Design Lab at Columbia University’s Graduate School of Architecture, Planning and Preservation, and one of my intellectual heroes — has pointed out, we measure the quantities that it is politically expedient to measure. We deploy the sensors that are cheap to deploy. There is always contingency, always a selection process, always a choice of what to gather…and always decisions made by some historical agent about how to label, characterize and represent the information that does get collected.

In the overwhelming majority of the discussions I’ve seen around IBM’s “intelligent operations center” and the many proposals like it, this mystification of “the data” goes unremarked upon and unchallenged. The result is that inherently political and interested decisions acquire an entirely unearned gloss of technical neutrality. Ironically, an ever-so-slightly different, more sensitive design of the system would allow users of data to see and correct for its inevitable bias — or to ask different and potentially more fruitful questions of the same grid of inputs. But to be blunt, we’re not likely to ever see that craft or care in design from institutions like IBM or its vendors.

If there’s a saving grace in any of this, it’s that Rio is at least a genuine place — nothing if not an environment with its own distinct history and texture. In tomorrow’s post, we’ll see how that sets it apart from a great many of the current crop of “smart cities.”

Wednesday.

At present, any time you hear the phrase “smart city,” the odds are very good indeed that your interlocutor is referring to nominally futuristic visions like Korea’s New Songdo, Masdar City in the United Arab Emirates, or the unfortunately named PlanIT Valley in Portugal — settlements built from scratch, on what urban planners call “greenfield” sites. In other words, these are places where there wasn’t anything, or anyone, before.

By building their cities up from nothing, in the middle of nowhere (or, in the case of Songdo, on land that was reclaimed from the Yellow Sea and literally did not exist ten years ago), the developers of these places get to live perpetually in that always-just-around-the-corner time researchers Genevieve Bell and Paul Dourish call the “proximate future.” They don’t have to reckon with all that messy history, with existing neighborhoods and the claims and constituencies they inevitably give rise to, with the dense mesh of ways of doing and being that makes any real place what it is.

What’s worse, these are barely cities: Masdar City is being designed for 90,000, PlanIT Valley for 150,000. Even at a projected population of 500,000 — which, if my observations over this last summer are any guide, will take years to fill out — Songdo is best thought of as an appendix to the immense Seoul-Incheon-Ansan conurbation. On a planet of seven billion, we don’t believe this makes any sense. Despite the blistering pace of construction in places like the Pearl River Delta, these ground-up cities aren’t places where the overwhelming majority of us live, or ever will.

At Urbanscale we defer to the wisdom of the legendary American bank robber Willie Sutton, who targeted banks “because that’s where the money is.” If we’re going to imagine urban interventions based on networked information technology, we’re going to design them for the cities people already live in.

There’s another reason why we might want to do this. Whether they quite know it or not, anyone proposing to deploy “smart city” technology necessarily partakes of one of two alternative conceptions of urban order. (I should point out that my reading here owes a lot to James C. Scott’s framing of things in Seeing Like A State, a fantastic book I unreservedly recommend.)

The first approach is something that can be broadly characterized as “watchfulness from above,” which Scott identifies with the modernist architect Le Corbusier. In this construction of things, cities ought to be designed so that they observe a clear visual order, with rigid segregation between land uses (residential, industrial, commercial), and between those uses and the systems of circulation that support them.

This is intended to make it easier for an administrator to disentangle the various threads that make up the urban skein, to quite literally see what’s going on, to facilitate managerial intervention and regulation. It’s an aesthetic marked by distaste for the messiness and complexity of metropolitan life, and one with clear political implications: the Corbusian city is one consecrated to administration, where the potential for any organic development is subordinated to the needs of managers. (Le Corbusier was thinking and writing in the first decades of the twentieth century, but this fetish for clear visual order only really comes into its own in the age of Google Earth.)

And it’s this vision that’s inscribed, knowingly or otherwise, in most contemporary descriptions of the “smart city.” IBM’s descriptions of the serene and masterful guidance of the city-as-machine-for-living are strikingly reminiscent of Le Corbusier, albeit couched in a different register of language.

But we can contrast this with a very different process of urban development, something that I think of as “spontaneous order from below,” and which Scott identifies with the great American urbanist Jane Jacobs. Tomorrow we’ll come back to this notion of spontaneous order from below, and what it might have to offer us as an appealing alternative model of networked cities.

Thursday.

Yesterday we discussed the first of two competing conceptions of urban order, a top-down vision that has its origins in the failed high modernism of Le Corbusier, and survives in the contemporary rhetoric around cities like Masdar and New Songdo and PlanIT Valley.

But there’s another way of thinking of things, which strikes me as not merely more appealing, but more empirical, more pragmatic…and ultimately more effective. This is a perspective often associated with urbanist Jane Jacobs, who devotes considerable space in her 1961 classic The Death and Life of Great American Cities to the ways in which a functioning urban community produces order from the bottom up, in an infinity of small, unconscious acts.

Her most famous example has to do with the safety of well-trafficked streets, in which a diversity of building uses and schedules generates a reliable flow of passers-by throughout the day, and well into the night. This, in turn, produces what economists call a positive externality: nobody’s intentionally setting out to patrol the neighborhood, but with so many “eyes on the street,” untoward incidents are that much less likely to occur, and can be rapidly and appropriately responded to when they do.

Anna Minton, in her recent Ground Control, provides a potent — and disturbing — illustration of how the difference between these two conceptions plays out in the contemporary city. It’s a case study drawn from London, a city which deploys one CCTV camera for every seven of its 7.7 million residents, making it the most surveilled city on the face of the planet.

Minton poses the obvious, but heretofore apparently unspeakable, question as to whether all those cameras actually make Londoners any safer. Her research finds that, perversely, the opposite is true — that the presence of a CCTV camera makes pedestrians less likely to take personal responsibility for emergent situations like accidents, muggings or acts of harassment. People apparently assume that there’s someone (uniformed?) on the other end of the camera lens, duly empowered to respond to such incidents. They don’t need to intervene themselves…so they don’t. Minton’s findings suggest that CCTV fails entirely in the roles of crime deterrence and prevention. It’s Jane Jacobs’s point all over again: a functioning human community is bound together by an elaborate weave of organic relations that takes years or decades to build up, which can be destroyed in weeks or months through the clumsy application of technology.

What does any of this have to do with smart cities? Rather than the heavy — indeed, heroic — infrastructural investments involved in the Masdar/Songdo/PlanIT Valley way of doing things, rather than the necessity of starting the city all over again from scratch, mightn’t we imagine interventions that have a lot more to do with the places we already live in and the devices the great majority of us already have? Is there any possibility that we could use networked technology to preserve the intricate order and innate, pre-existing intelligence of our great urban places?

Tomorrow, in our final installment, we explore just what this might look like.

Friday.

I believe that there’s a final reason why the vision of the smart city appears so vividly at this particular moment in history. At least in the United States, government is retreating from the provision of many services it used to provide as a matter of course; we’ve stumbled into a vicious cycle in which ascendant neoliberal rhetoric interacts with, and reinforces, a collapsing tax base and a brutal underlying economic reality. The result is an urgent imperative on the part of municipal administrations to do more with less, and a palpable hunger for any tools that will help them achieve this aim.

Into the breach step the theoreticians of the smart city, promising improved managerial oversight, greater resource-utilization efficiency, and predictive models to help keep the chaos at bay. I ultimately think that many of these interventions will prove heir to all the philosophical weaknesses and limitations of the Corbusian model, and will largely fail to deliver on their promise.

Is there a valid competing vision of the networked city, something that we might offer instead? One of the most fascinating things I’ve witnessed in the last year was a management consultant from McKinsey — the most buttoned-down, Oxford-and-chinos kind of guy you could possibly imagine — forthrightly describe a vision of networked, self-organized place that would not have sounded out of place at the Barcelona Telephone Exchange in 1936, during the period that it was successfully managed by the anarchist CNT-FAI union.

You can certainly accuse me of making a virtue of a necessity, but I found this hopeful. If someone that entrenched in contemporary modes of technological development is comfortable with the thought that the art of municipal management hasn’t reached its final form, that in fact we may be on the verge of frankly radical reassessments, then the potential scope for creativity in everything that’s coming may well be far greater than we might have suspected. The downside is that most of us are going to have to take a lot more responsibility for managing the circumstances of our own lives. But the opportunity, the wonderful thing…is that most of us will get to take a lot more responsibility for managing the circumstances of our own lives.

What I want to emphasize is that the constraints aren’t primarily technological. We already have everything we need to achieve this aim, materially and conceptually. What limits us is a collective dearth of imagination, and a recourse to the same brain-dead processes of specification, procurement and development that resulted in the shoddy information-technological tools so many of us are perforce compelled to work with. (Anyone who’s ever tried to use an SAP tool to file an expense report knows precisely what I mean.)

It’s almost as if the space of possibility we’re now presented with is so large and daunting that we’re collectively more comfortable retreating to the relative certainties of the ways we’ve been doing things for ages, whether or not they make any particular sense amid our present circumstances. If we want to design supple, responsive networked places — if we want to invest all the considerable power of contemporary informatic technology in making places that are worth living in — I believe we can surely do so, but it will mean taking bold and decisive steps beyond the stale rhetoric and dubious intellectual heritage of the “smart city.”
