As Web companies and government agencies analyze ever more information about our lives, it’s tempting to respond by passing new privacy laws or creating mechanisms that pay us for our data. Instead, we need a civic solution, because democracy is at risk.
In 1967, The Public Interest, then a leading venue for highbrow policy debate, published a provocative essay by Paul Baran, one of the fathers of the data transmission method known as packet switching. Titled “The Future Computer Utility,” the essay speculated that someday a few big, centralized computers would provide “information processing … the same way one now buys electricity.”
Our home computer console will be used to send and receive messages—like telegrams. We could check to see whether the local department store has the advertised sports shirt in stock in the desired color and size. We could ask when delivery would be guaranteed, if we ordered. The information would be up-to-the-minute and accurate. We could pay our bills and compute our taxes via the console. We would ask questions and receive answers from “information banks”—automated versions of today’s libraries. We would obtain up-to-the-minute listing of all television and radio programs … The computer could, itself, send a message to remind us of an impending anniversary and save us from the disastrous consequences of forgetfulness.
It took decades for cloud computing to fulfill Baran’s vision. But he was prescient enough to worry that utility computing would need its own regulatory model. Here was an employee of the RAND Corporation—hardly a redoubt of Marxist thought—fretting about the concentration of market power in the hands of large computer utilities and demanding state intervention. Baran also wanted policies that could “offer maximum protection to the preservation of the rights of privacy of information”:
Highly sensitive personal and important business information will be stored in many of the contemplated systems … At present, nothing more than trust—or, at best, a lack of technical sophistication—stands in the way of a would-be eavesdropper … Today we lack the mechanisms to insure adequate safeguards. Because of the difficulty in rebuilding complex systems to incorporate safeguards at a later date, it appears desirable to anticipate these problems.
Sharp, bullshit-free analysis: techno-futurism has been in decline ever since.
All the privacy solutions you hear about are on the wrong track.
To read Baran’s essay (just one of the many on utility computing published at the time) is to realize that our contemporary privacy problem is not contemporary. It’s not just a consequence of Mark Zuckerberg’s selling his soul and our profiles to the NSA. The problem was recognized early on, and little was done about it.
Almost all of Baran’s envisioned uses for “utility computing” are purely commercial. Ordering shirts, paying bills, looking for entertainment, conquering forgetfulness: this is not the Internet of “virtual communities” and “netizens.” Baran simply imagined that networked computing would allow us to do things that we already do without networked computing: shopping, entertainment, research. But also: espionage, surveillance, and voyeurism.
If Baran’s “computer revolution” doesn’t sound very revolutionary, it’s in part because he did not imagine that it would upend the foundations of capitalism and bureaucratic administration that had been in place for centuries. By the 1990s, however, many digital enthusiasts believed otherwise; they were convinced that the spread of digital networks and the rapid decline in communication costs represented a genuinely new stage in human development. For them, the surveillance triggered in the 2000s by 9/11 and the colonization of these pristine digital spaces by Google, Facebook, and big data were aberrations that could be resisted or at least reversed. If only we could now erase the decade we lost and return to the utopia of the 1980s and 1990s by passing stricter laws, giving users more control, and building better encryption tools!
A different reading of recent history would yield a different agenda for the future. The widespread feeling of emancipation through information that many people still attribute to the 1990s was probably just a prolonged hallucination. Both capitalism and bureaucratic administration easily accommodated themselves to the new digital regime; both thrive on information flows, the more automated the better. Laws, markets, or technologies won’t stymie or redirect that demand for data, as all three play a role in sustaining capitalism and bureaucratic administration in the first place. Something else is needed: politics.
Even programs that seem innocuous can undermine democracy.
First, let’s address the symptoms of our current malaise. Yes, the commercial interests of technology companies and the policy interests of government agencies have converged: both are interested in the collection and rapid analysis of user data. Google and Facebook are compelled to collect ever more data to boost the effectiveness of the ads they sell. Government agencies need the same data—they can collect it either on their own or in coöperation with technology companies—to pursue their own programs.
Many of those programs deal with national security. But such data can be used in many other ways that also undermine privacy. The Italian government, for example, is using a tool called the redditometro, or income meter, which analyzes receipts and spending patterns to flag people who spend more than they claim in income as potential tax cheaters. Once mobile payments replace a large percentage of cash transactions—with Google and Facebook as intermediaries—the data collected by these companies will be indispensable to tax collectors. Likewise, legal academics are busy exploring how data mining can be used to craft contracts or wills tailored to the personalities, characteristics, and past behavior of individual citizens, boosting efficiency and reducing malpractice.
On another front, technocrats like Cass Sunstein, the former administrator of the Office of Information and Regulatory Affairs at the White House and a leading proponent of “nanny statecraft” that nudges citizens to do certain things, hope that the collection and instant analysis of data about individuals can help solve problems like obesity, climate change, and drunk driving by steering our behavior. A new book by three British academics—Changing Behaviours: On the Rise of the Psychological State—features a long list of such schemes at work in the U.K., where the government’s nudging unit, inspired by Sunstein, has been so successful that it’s about to become a for-profit operation.
Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy, or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return?
This logic of preëmption is not different from that of the NSA in its fight against terror: let’s prevent problems rather than deal with their consequences. Even if we tie the hands of the NSA—by some combination of better oversight, stricter rules on data access, or stronger and friendlier encryption technologies—the data hunger of other state institutions would remain. They will justify it. On issues like obesity or climate change—where the policy makers are quick to add that we are facing a ticking-bomb scenario—they will say a little deficit of democracy can go a long way.
Here’s what that deficit would look like: the new digital infrastructure, thriving as it does on real-time data contributed by citizens, allows the technocrats to take politics, with all its noise, friction, and discontent, out of the political process. It replaces the messy stuff of coalition-building, bargaining, and deliberation with the cleanliness and efficiency of data-powered administration.
This phenomenon has a meme-friendly name: “algorithmic regulation,” as Silicon Valley publisher Tim O’Reilly calls it. In essence, information-rich democracies have reached a point where they want to try to solve public problems without having to explain or justify themselves to citizens. Instead, they can simply appeal to our own self-interest—and they know enough about us to engineer a perfect, highly personalized, irresistible nudge.
Privacy is a means to democracy, not an end in itself.
Another warning from the past. The year was 1985, and Spiros Simitis, Germany’s leading privacy scholar and practitioner—at the time the data protection commissioner of the German state of Hesse—was addressing the University of Pennsylvania Law School. His lecture explored the very same issue that preoccupied Baran: the automation of data processing. But Simitis didn’t lose sight of the history of capitalism and democracy, so he saw technological changes in a far more ambiguous light.
He also recognized that privacy is not an end in itself. It’s a means of achieving a certain ideal of democratic politics, where citizens are trusted to be more than just self-contented suppliers of information to all-seeing and all-optimizing technocrats. “Where privacy is dismantled,” warned Simitis, “both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.”
Three technological trends underpinned Simitis’s analysis. First, he noted, even back then, every sphere of social interaction was mediated by information technology—he warned of “the intensive retrieval of personal data of virtually every employee, taxpayer, patient, bank customer, welfare recipient, or car driver.” As a result, privacy was no longer solely a problem of some unlucky fellow caught off-guard in an awkward situation; it had become everyone’s problem. Second, new technologies like smart cards and videotex not only were making it possible to “record and reconstruct individual activities in minute detail” but also were normalizing surveillance, weaving it into our everyday life. Third, the personal information recorded by these new technologies was allowing social institutions to enforce standards of behavior, triggering “long-term strategies of manipulation intended to mold and adjust individual conduct.”
Modern institutions certainly stood to gain from all this. Insurance companies could tailor cost-saving programs to the needs and demands of patients, hospitals, and the pharmaceutical industry. Police could use newly available databases and various “mobility profiles” to identify potential criminals and locate suspects. Welfare agencies could suddenly unearth fraudulent behavior.
But how would these technologies affect us as citizens—as subjects who participate in understanding and reforming the world around us, not just as consumers or customers who merely benefit from it?
In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation. As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”
If you think Simitis was describing a future that never came to pass, consider a recent paper on the transparency of automated prediction systems by Tal Zarsky, one of the world’s leading experts on the politics and ethics of data mining. He notes that “data mining might point to individuals and events, indicating elevated risk, without telling us why they were selected.” As it happens, the degree of interpretability is one of the most consequential policy decisions to be made in designing data-mining systems. Zarsky sees vast implications for democracy here:
A non-interpretable process might follow from a data-mining analysis which is not explainable in human language. Here, the software makes its selection decisions based upon multiple variables (even thousands) … It would be difficult for the government to provide a detailed response when asked why an individual was singled out to receive differentiated treatment by an automated recommendation system. The most the government could say is that this is what the algorithm found based on previous cases.
This is the future we are sleepwalking into. Everything seems to work, and things might even be getting better—it’s just that we don’t know exactly why or how.
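To make the opacity Zarsky describes concrete, here is a minimal sketch in Python (purely illustrative, not drawn from Zarsky's paper; the data, model, and variables are all invented): a black-box classifier trained on past cases assigns a score to a new record, and the richest "explanation" available is the score itself.

```python
# Illustrative sketch only: a synthetic "risk scoring" pipeline whose output
# cannot be traced back to a human-readable reason. All data here is fabricated.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 5,000 fictional "previous cases," each described by 200 anonymous variables.
X, y = make_classification(n_samples=5000, n_features=200, n_informative=50,
                           random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X, y)  # the system "learns" from past cases

# A new individual arrives; the system assigns a risk score.
individual = X[:1]
risk = model.predict_proba(individual)[0, 1]
print(f"Risk score assigned by the model: {risk:.2f}")

# The only justification an operator could offer is the one Zarsky anticipates:
# "this is what the algorithm found based on previous cases." The decision
# emerges from hundreds of trees voting over hundreds of variables, with no
# single human-readable reason attached.
```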
Too little privacy can endanger democracy. But so can too much privacy.
Simitis got the trends right. Free from dubious assumptions about “the Internet age,” he arrived at an original but cautious defense of privacy as a vital feature of a self-critical democracy—not the democracy of some abstract political theory but the messy, noisy democracy we inhabit, with its never-ending contradictions. In particular, Simitis’s most crucial insight is that privacy can both support and undermine democracy.
Traditionally, our response to changes in automated information processing has been to view them as a personal problem for the affected individuals. A case in point is the seminal article “The Right to Privacy,” by Louis Brandeis and Samuel Warren. Writing in 1890, they sought a “right to be let alone”—to live an undisturbed life, away from intruders. According to Simitis, they expressed a desire, common to many self-made individuals at the time, “to enjoy, strictly for themselves and under conditions they determined, the fruits of their economic and social activity.”
A laudable goal: without extending such legal cover to entrepreneurs, modern American capitalism might have never become so robust. But this right, disconnected from any matching responsibilities, could also sanction an excessive level of withdrawal that shields us from the outside world and undermines the foundations of the very democratic regime that made the right possible. If all citizens were to fully exercise their right to privacy, society would be deprived of the transparent and readily available data that’s needed not only for the technocrats’ sake but—even more—so that citizens can evaluate issues, form opinions, and debate (and, occasionally, fire the technocrats).
This is not a problem specific to the right to privacy. For some contemporary thinkers, such as the French historian and philosopher Marcel Gauchet, democracies risk falling victim to their own success: having instituted a legal regime of rights that allow citizens to pursue their own private interests without any reference to what’s good for the public, they stand to exhaust the very resources that have allowed them to flourish.
When all citizens demand their rights but are unaware of their responsibilities, the political questions that have defined democratic life over centuries—How should we live together? What is in the public interest, and how do I balance my own interest with it?—are subsumed into legal, economic, or administrative domains. “The political” and “the public” no longer register as domains at all; laws, markets, and technologies displace debate and contestation as preferred, less messy solutions.
But a democracy without engaged citizens doesn’t sound much like a democracy—and might not survive as one. This was obvious to Thomas Jefferson, who, while wanting every citizen to be “a participator in the government of affairs,” also believed that civic participation involves a constant tension between public and private life. A society that believes, as Simitis put it, that the citizen’s access to information “ends where the bourgeois’ claim for privacy begins” won’t last as a well-functioning democracy.
Thus the balance between privacy and transparency is especially in need of adjustment in times of rapid technological change. That balance itself is a political issue par excellence, to be settled through public debate and always left open for negotiation. It can’t be settled once and for all by some combination of theories, markets, and technologies. As Simitis said: “Far from being considered a constitutive element of a democratic society, privacy appears as a tolerated contradiction, the implications of which must be continuously reconsidered.”
Laws and market mechanisms are insufficient solutions.
In the last few decades, as we began to generate more data, our institutions became addicted. If you withheld the data and severed the feedback loops, it’s not clear whether they could continue at all. We, as citizens, are caught in an odd position: our reason for disclosing the data is not that we feel deep concern for the public good. No, we release data out of self-interest, on Google or via self-tracking apps. We are too cheap not to use free services subsidized by advertising. Or we want to track our fitness and diet, and then we sell the data.
Simitis knew even in 1985 that this would inevitably lead to the “algorithmic regulation” taking shape today, as politics becomes “public administration” that runs on autopilot so that citizens can relax and enjoy themselves, only to be nudged, occasionally, whenever they are about to forget to buy broccoli.
Habits, activities, and preferences are compiled, registered, and retrieved to facilitate better adjustment, not to improve the individual’s capacity to act and to decide. Whatever the original incentive for computerization may have been, processing increasingly appears as the ideal means to adapt an individual to a predetermined, standardized behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen.
What Simitis is describing here is the construction of what I call “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance, imposes severe constraints on how we mature politically and socially. The German philosopher Jürgen Habermas was right to warn—in 1963—that “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes—the social engineers and the inmates of closed social institutions.”
The invisible barbed wire of big data limits our lives to a space that might look quiet and enticing enough but is not of our own choosing and that we cannot rebuild or expand. The worst part is that we do not see it as such. Because we believe that we are free to go anywhere, the barbed wire remains invisible. Worse, there’s no one to blame: certainly not Google, Dick Cheney, or the NSA. It’s the result of many different logics and systems—of modern capitalism, of bureaucratic governance, of risk management—that get supercharged by the automation of information processing and by the depoliticization of politics.
The more information we reveal about ourselves, the denser but more invisible this barbed wire becomes. We gradually lose our capacity to reason and debate; we no longer understand why things happen to us.
But all is not lost. We could learn to perceive ourselves as trapped within this barbed wire and even cut through it. Privacy is the resource that allows us to do that and, should we be so lucky, even to plan our escape route.
This is where Simitis expressed a truly revolutionary insight that is lost in contemporary privacy debates: no progress can be achieved, he said, as long as privacy protection is “more or less equated with an individual’s right to decide when and which data are to be accessible.” The trap that many well-meaning privacy advocates fall into is thinking that if only they could provide the individual with more control over his or her data—through stronger laws or a robust property regime—then the invisible barbed wire would become visible and fray. It won’t—not if that data is eventually returned to the very institutions that are erecting the wire around us.
Think of privacy in ethical terms.
If we accept privacy as a problem of and for democracy, then popular fixes are inadequate. For example, in his book Who Owns the Future?, Jaron Lanier proposes that we disregard one pole of privacy—the legal one—and focus on the economic one instead. “Commercial rights are better suited for the multitude of quirky little situations that will come up in real life than new kinds of civil rights along the lines of digital privacy,” he writes. On this logic, by turning our data into an asset that we might sell, we accomplish two things. First, we can control who has access to it, and second, we can make up for some of the economic losses caused by the disruption of everything analog.
Lanier’s proposal is not original. In Code and Other Laws of Cyberspace (first published in 1999), Lawrence Lessig enthused about building a property regime around private data. Lessig wanted an “electronic butler” that could negotiate with websites: “The user sets her preferences once—specifies how she would negotiate privacy and what she is willing to give up—and from that moment on, when she enters a site, the site and her machine negotiate. Only if the machines can agree will the site be able to obtain her personal data.”
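As a rough illustration of the machine-to-machine negotiation Lessig imagines, here is a toy sketch in Python (entirely hypothetical; the preference fields, purposes, and retention terms are invented for the example): the "butler" releases a piece of personal data only when a site's request fits within the user's standing preferences.

```python
# Toy sketch of an "electronic butler": the user sets preferences once, and each
# site's data request is checked against them. Hypothetical fields throughout.
from dataclasses import dataclass

@dataclass
class Request:
    field: str          # which datum the site wants, e.g. "email"
    purpose: str        # what the site says it will use it for
    retention_days: int  # how long the site wants to keep it

# The user "sets her preferences once": per-field rules on purpose and retention.
preferences = {
    "email":    {"allowed_purposes": {"order_confirmation"}, "max_retention_days": 30},
    "location": {"allowed_purposes": set(),                  "max_retention_days": 0},
}

def negotiate(request: Request) -> bool:
    """Return True only if the machines can 'agree', i.e. the site's request
    fits entirely within the user's standing preferences."""
    rule = preferences.get(request.field)
    if rule is None:
        return False  # never release data the user has set no rule for
    return (request.purpose in rule["allowed_purposes"]
            and request.retention_days <= rule["max_retention_days"])

# A shopping site asks for an email address to confirm an order: granted.
print(negotiate(Request("email", "order_confirmation", 14)))   # True
# The same site asks to keep location data for ad targeting: refused.
print(negotiate(Request("location", "ad_targeting", 365)))     # False
```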
It’s easy to see where such reasoning could take us. We’d all have customized smartphone apps that would continually incorporate the latest information about the people we meet, the places we visit, and the information we possess in order to update the price of our personal data portfolio. It would be extremely dynamic: if you are walking by a fancy store selling jewelry, the store might be willing to pay more to know your spouse’s birthday than it is when you are sitting at home watching TV.
The property regime can, indeed, strengthen privacy: if consumers want a good return on their data portfolio, they need to ensure that their data is not already available elsewhere. Thus they either “rent” it the way Netflix rents movies or sell it on the condition that it can be used or resold only under tightly controlled conditions. Some companies already offer “data lockers” to facilitate such secure exchanges.
So if you want to defend the “right to privacy” for its own sake, turning data into a tradable asset could resolve your misgivings. The NSA would still get what it wanted; but if you’re worried that our private information has become too liquid and that we’ve lost control over its movements, a smart business model, coupled with a strong digital-rights-management regime, could fix that.
Meanwhile, government agencies committed to “nanny statecraft” would want this data as well. Perhaps they might pay a small fee or promise a tax credit for the privilege of nudging you later on—with the help of the data from your smartphone. Consumers win, entrepreneurs win, technocrats win. Privacy, in one way or another, is preserved also. So who, exactly, loses here? If you’ve read your Simitis, you know the answer: democracy does.
It’s not just because the invisible barbed wire would remain. We also should worry about the implications for justice and equality. For example, my decision to disclose personal information, even if I disclose it only to my insurance company, will inevitably have implications for other people, many of them less well off. People who say that tracking their fitness or location is merely an affirmative choice from which they can opt out have little knowledge of how institutions think. Once there are enough early adopters who self-track—and most of them are likely to gain something from it—those who refuse will no longer be seen as just quirky individuals exercising their autonomy. No, they will be considered deviants with something to hide. Their insurance will be more expensive. If we never lose sight of this fact, our decision to self-track won’t be as easy to reduce to pure economic self-interest; at some point, moral considerations might kick in. Do I really want to share my data and get a coupon I do not need if it means that someone else who is already working three jobs may ultimately have to pay more? Such moral concerns are rendered moot if we delegate decision-making to “electronic butlers.”
Few of us have had moral pangs about data-sharing schemes, but that could change. Before the environment became a global concern, few of us thought twice about taking public transport if we could drive. Before ethical consumption became a global concern, no one would have paid more for coffee that tasted the same but promised “fair trade.” Consider a cheap T-shirt you see in a store. It might be perfectly legal to buy it, but after decades of hard work by activist groups, a “Made in Bangladesh” label makes us think twice about doing so. Perhaps we fear that it was made by children or exploited adults. Or, having thought about it, maybe we actually do want to buy the T-shirt because we hope it might support the work of a child who would otherwise be forced into prostitution. What is the right thing to do here? We don’t know—so we do some research. Such scrutiny can’t apply to everything we buy, or we’d never leave the store. But exchanges of information—the oxygen of democratic life—should fall into the category of “Apply more thought, not less.” It’s not something to be delegated to an “electronic butler”—not if we don’t want to cleanse our life of its political dimension.
Sabotage the system. Provoke more questions.
We should also be troubled by the suggestion that we can reduce the privacy problem to the legal dimension. The question we’ve been asking for the last two decades—How can we make sure that we have more control over our personal information?—cannot be the only question to ask. Unless we learn and continuously relearn how automated information processing promotes and impedes democratic life, an answer to this question might prove worthless, especially if the democratic regime needed to implement whatever answer we come up with unravels in the meantime.
Intellectually, at least, it’s clear what needs to be done: we must confront the question not only in the economic and legal dimensions but also in a political one, linking the future of privacy with the future of democracy in a way that refuses to reduce privacy either to markets or to laws. What does this philosophical insight mean in practice?
First, we must politicize the debate about privacy and information sharing. Articulating the existence—and the profound political consequences—of the invisible barbed wire would be a good start. We must scrutinize data-intensive problem solving and expose its occasionally antidemocratic character. At times we should accept more risk, imperfection, improvisation, and inefficiency in the name of keeping the democratic spirit alive.
Second, we must learn how to sabotage the system—perhaps by refusing to self-track at all. If refusing to record our calorie intake or our whereabouts is the only way to get policy makers to address the structural causes of problems like obesity or climate change—and not just tinker with their symptoms through nudging—information boycotts might be justifiable. Refusing to make money off your own data might be as political an act as refusing to drive a car or eat meat. Privacy can then reëmerge as a political instrument for keeping the spirit of democracy alive: we want private spaces because we still believe in our ability to reflect on what ails the world and find a way to fix it, and we’d rather not surrender this capacity to algorithms and feedback loops.
Third, we need more provocative digital services. It’s not enough for a website to prompt us to decide who should see our data. Instead it should reawaken our own imaginations. Designed right, sites would not nudge citizens to either guard or share their private information but would reveal the hidden political dimensions to various acts of information sharing. We don’t want an electronic butler—we want an electronic provocateur. Instead of yet another app that could tell us how much money we can save by monitoring our exercise routine, we need an app that can tell us how many people are likely to lose health insurance if the insurance industry has as much data as the NSA, most of it contributed by consumers like us. Eventually we might discern such dimensions on our own, without any technological prompts.
Finally, we have to abandon fixed preconceptions about how our digital services work and interconnect. Otherwise, we’ll fall victim to the same logic that has constrained the imagination of so many well-meaning privacy advocates who think that defending the “right to privacy”—not fighting to preserve democracy—is what should drive public policy. While many Internet activists would surely argue otherwise, what happens to the Internet is of only secondary importance. Just as with privacy, it’s the fate of democracy itself that should be our primary goal.
After all, back in 1967 Paul Baran was lucky enough not to know what the Internet would become. That didn’t stop him from seeing the benefits of utility computing and its dangers. Abandon the idea that the Internet fell from grace over the last decade. Liberating ourselves from that misreading of history could help us address the antidemocratic threats of the digital future.
Evgeny Morozov is the author of The Net Delusion: The Dark Side of Internet Freedom and To Save Everything, Click Here: The Folly of Technological Solutionism.
Article by Neeraj Bhatia, an architect, urban designer, and assistant professor at CCA. Neeraj is the director of The Open Workshop and co-director of InfraNet Lab. He is co-editor of Bracket 2, the second edition of an annual journal, which focuses on soft architecture.
The term "soft" is expansive in its meanings. Soft material, soft sound, soft-mannered, soft sell, soft power, soft management, soft computing, soft politics, software, soft architecture. It describes material qualities, evokes character traits. It defines strategies of persuasion, models of systems thinking and problem-solving, and new approaches to design.
But the most obvious associations with soft have been material characteristics—yielding readily to touch or pressure; deficient in hardness; smooth; pliable, malleable, or plastic. And this is the definition of "soft" that came to define some of the most exciting design motives of the 1960s and '70s. These new design approaches were skeptical of modernism; soft was deemed to enable individualism, responsiveness, nomadism, and anarchy.
Archigram, Buckminster Fuller, Cedric Price, and Yona Friedman were among soft architecture's forerunners. Archigram’s investigations into pods, Price’s inflatable roof structures, and Fuller’s research into lightness were all literally soft, and often scaled to the material properties of human occupation. However, larger urban visions such as Plug-In City, Ville Spatiale, or Potteries Thinkbelt can equally be understood as soft. What connects these projects is their attempt to develop design strategies that shifted from the malleability of a material to the flexibility of a system. In so doing they developed new characteristics of "soft."
Here, we take a look at some of "soft" architecture's most radical ideas, structures, and concepts.
Cedric Price, Potteries Thinkbelt, 1964
North Staffordshire’s pottery industry suffered an economic crisis in the 1950s and 1960s, leaving an entropic landscape of underused infrastructure and industry. Price published his Potteries Thinkbelt in 1966, proposing to convert the railway and facilities into a vast educational network for 20,000 students. The network was malleable and incorporated scheduling and time into the design process.
Reyner Banham and Francois Dallegret, Environmental Bubble, 1965
The Environmental Bubble proposed a domestic utopia with all the basic amenities of modern life (food, shelter, energy ... television), but without the binds of permanent buildings and structures of earlier human settlements. The transparent plastic dome is inflated by air conditioning and rejects the archetypal home icon. Instead it is defined by the individual and his or her subjective yearnings.
Hans Hollein, Mobile Office, 1969
Before the era of mobile communication, Hans Hollein devised the mobile office. The design transformed the office into an inflatable, transportable, and weatherproof spectacle!
Coop Himmelb(l)au, Basel Event: the Restless Sphere, 1971
Mechanical motion generated from pressurized gas is a realm of technology called pneumatics, which manifested itself in the design culture of the 1960s. The Basel Event was a public demonstration of pneumatic construction, showcasing a Restless Sphere, four meters in diameter, put in motion by its occupant. Coop Himmelb(l)au sought to create an architecture as light as the sky, one whose manipulations carried political ramifications.
Philippe Rahm, Interior Weather, 2006
Philippe Rahm's meteorological architecture incorporates soft typologies and data sets otherwise invisible to the human eye. Interior Weather is an installation with two sets of spaces: "objective" rooms with temperature, light intensity, and humidity in flux; and "subjective" rooms with occupants being observed for physiological values and social behavior. Territory is defined here through the senses, not walls.
Walter Henn, Bürolandschaft, 1963
The era of paternalism and of strict, fixed, hierarchical office space gave way to a new typology of malleability and modularity. The idea of “the cubicle” was novel in its modularity and non-hierarchical form. Henn’s Bürolandschaft, literally “office landscape,” launched a movement based on an open plan freed from partitions. It has heavily influenced contemporary projects that create flexible space through the (re)organization of furniture.
Conrad Waddington, Epigenetic Landscape, 1957
Waddington’s formalized epigenetic landscape offers a metaphor for cell differentiation and proliferation, demonstrating how a marble gravitates toward the lowest local elevation. The resulting Boolean network is an example of visualizing a problematic data set that constantly reorganizes itself through feedback mechanisms.
Writer Sanford Kwinter famously appropriated Conrad Waddington’s “Epigenetic Landscape” as a topological model with which to envision a new conception of form-making—a concept explored in the “Reverse of Volume RG” installation by Japanese artist Yasuaki Onishi.
Yona Friedman, Ville Spatiale, 1970
The Spatial City articulated Friedman’s belief that architecture should provide only a framework, within which inhabitants have the freedom to shape space for their specific needs. The design is “free from authoritarianism”: a multi-story space-frame grid that supports mobile, temporary, and lightweight infrastructure.
Michael Webb (Archigram), Magic Carpet and Brunhilda's Magic Ring of Fire, 1968
Proposed during the 1970s culture of indeterminacy and the dissolution of buildings, the Magic Carpet and Brunhilda's Magic Ring of Fire is a "reverse hovercraft" facility holding a body suspended in space using jets of air.
Rod Garrett, Black Rock City
One of the principles of the Burning Man Festival is to leave no trace: “We clean up after ourselves and endeavor, whenever possible, to leave such places in a better state than when we found them.” Black Rock City originated as a tabula rasa in the Nevada desert; its population swells to 50,000 during the festival, which begins on the last Monday of August every year. It is an urbanism built on a soft framework, temporary and adjusted each year.
The Making of an Avant Garde: The Institute for Architecture and Urban Studies 1967-1984
A documentary written, produced, and directed by Diana Agrest
1.5 AIA and New York State CEUs
This film screening is organized by The Irwin S. Chanin School of Architecture of The Cooper Union and co-sponsored by The Architectural League.
A screening of The Making of an Avant Garde: The Institute for Architecture and Urban Studies 1967-1984.
The Institute for Architecture and Urban Studies, founded in 1967 with close ties to The Museum of Modern Art, made New York the global center for architectural debate and redefined architectural discourse in the United States. A place of immense energy and effervescence, its founders and participants were young and hardly known at the time but would ultimately shape architectural practice and theory for decades. Diana Agrest’s film documents and explores the Institute’s fertile beginnings and enduring significance as a locus for the avant-garde. The film features Mark Wigley, Peter Eisenman, Diana Agrest, Charles Gwathmey, Mario Gandelsonas, Richard Meier, Kenneth Frampton, Barbara Jakobson, Frank Gehry, Anthony Vidler, Deborah Berke, Rem Koolhaas, Stan Allen, Suzanne Stephens, Bernard Tschumi, Joan Ockman, among others.
Time & Place
Wednesday, November 13, 2013
7:00 p.m.
The Great Hall
The Cooper Union
7 East 7th Street
Tickets
This event is free and open to all. Reservations neither needed nor accepted.
Personal comment:
Undoubtedly a documentary I'll try to get a copy of!
This essay is adapted from Marina Alberti’s Cities as Hybrid Ecosystems (forthcoming) and from Marina Alberti’s “Anthropocene City,” forthcoming in The Anthropocene Project, a Deutsches Museum special exhibit, 2014-2015.
Cities face an important challenge: they must rethink themselves in the context of planetary change. What role do cities play in the evolution of Earth? From a planetary perspective, the emergence and rapid expansion of cities across the globe may represent another turning point in the life of our planet. Earth’s atmosphere, on which we all depend, emerged from the metabolic processes of vast numbers of single-celled algae and bacteria living in the seas 2.3 billion years ago. These organisms transformed the environment into a place where human life could develop. Adam Frank, an astrophysicist at the University of Rochester, reminds us that the evolution of life has profoundly changed major characteristics of the planet (NPR 13.7: Cosmos & Culture, 2012). Can humans now change the course of Earth’s evolution? Can the way we build cities determine the probability of crossing thresholds that will trigger non-linear, abrupt change on a planetary scale (Rockström et al 2009)?
For most of its history, Earth has been relatively stable, dominated primarily by negative feedbacks that have kept it from getting into extreme states (Lenton and Williams 2013). Rarely has the Earth experienced planetary-scale tipping points or system shifts. But the recent increase in positive feedbacks (e.g., climate change) and the emergence of evolutionary innovations (e.g., novel metabolisms) could trigger transformations on the scale of the Great Oxidation (Lenton and Williams 2013). Will we drive Earth’s ecosystems to unintentional collapse? Or will we consciously steer the Earth towards a resilient new era?
In my forthcoming book, Cities as Hybrid Ecosystems, I propose a co-evolutionary paradigm for building a science of cities that “think like planets” (see the Note at the bottom)— a view that focuses both on unpredictable dynamics and experimental learning and innovation in urban ecosystems. In the book I elaborate on some concepts and principles of design and planning that can emerge from such a perspective: self-organization, heterogeneity, modularity, feedback, and transformation.
How can thinking on a planetary scale help us understand the place of humans in the evolution of Earth and guide us in building a human habitat of the “long now”?
Planetary Scales
Humans make decisions simultaneously at multiple time and spatial scales, depending on the perceived scale of a given problem and the scale of influence of their decision. Yet it is unlikely that this scale extends beyond one generation or includes the entire globe. The human experience of space and time has profound implications for our understanding of world phenomena and for making long- and short-term decisions. In his book What Time Is This Place?, Kevin Lynch (1972) eloquently told us that time is embedded in the physical world that we inhabit and build. Cities reflect our experience of time, and the way we experience time affects the way we view and change the environment. Thus our experience of time plays a crucial role in whether we succeed in managing environmental change. If we are to think like a planet, the challenge will be to deal with scales and events far removed from everyday human experience. Earth is 4.6 billion years old. That’s a big number to conceptualize and account for in our individual and collective decisions.
Thinking like a planet implies expanding the time and spatial scales of city design and planning, but not simply from local to global and from a few decades to a few centuries. Instead, we will have to include the scales of the geological and biological processes on which our planet operates. Thinking on a planetary scale implies expanding the idea of change. Lynch (1972) reminds us that “the arguments of planning all come down to the management of change.” But what is change?
Human experience of change is often confined to fluctuations within a relatively stable domain. However, planet Earth has displayed rare but abrupt changes and regime shifts in the past. Human experience of abrupt change is limited to marked changes in regional system dynamics, such as altered fire regimes and species extinctions. Yet, since the Industrial Revolution, humans have been pushing the planet outside its stability domain. Will human activities trigger such a global event? We can’t answer that, as we don’t understand enough about how regime shifts propagate across scales, but emerging evidence does suggest that if we continue to disrupt ecosystems and the climate, we face an increasing risk of crossing the thresholds that keep the Earth in a relatively stable domain. Until recently, our individual behaviors and collective institutions have been shaped primarily by change that we can envision relatively easily on a human time scale. Our behaviors are not tuned to the slow, imperceptible, but systematic changes that can drive dramatic shifts in Earth’s systems.
Planetary shifts can be rapid: the glaciation of the Younger Dryas (an abrupt climatic change resulting in severe cold and drought) occurred roughly 11,500 years ago, apparently over only a few decades. Or they can unfold slowly: the Himalayas took over a million years to form. Shifts can emerge as the result of extreme events, like volcanic eruptions, or of relatively slow processes, like the movement of tectonic plates. Though we still don’t completely understand the subtle relationship between local and global stability in complex systems, several scientists hypothesize that the increasing complexity and interdependence of socio-economic networks can produce ‘tipping cascades’ and ‘domino dynamics’ in the Earth’s system, leading to unexpected regime shifts (Helbing 2013, Hughes et al 2013).
Planetary Challenges and Opportunities
A planetary perspective for envisioning and building cities that we would like to live in—cities that are livable, resilient, and exciting—provides many challenges and opportunities. To begin, it requires that we expand the spectrum of imaginary archetypes. Current archetypes reflect skewed and often extreme simplifications of how the universe works, ranging from biological determinism to techno-scientific optimism. At best they represent accurate but incomplete accounts of how the world works. How can we reconcile the messages contained in the catastrophic versus optimistic views of the future of Earth? And, how can we hold divergent explanations and arguments as plausibly true? Can we imagine a place where humans have co-evolved with natural systems? What does that world look like? How can we create that place in the face of limited knowledge and uncertainty, holding all these possible futures as plausible options?
The concept of “planetary boundaries” offers a framework for humanity to operate safely on a planetary scale. Rockström et al (2009) developed the concept to inform us about the levels of anthropogenic change that can be sustained so that we avoid potential planetary regime shifts that would dramatically affect human wellbeing. The concept neither implies nor rules out planetary-scale tipping points associated with human drivers. Hughes et al (2013) address some of the misconceptions surrounding planetary-scale tipping points, which confuse a system’s rate of change with the presence or absence of a tipping point. To avoid the potential consequences of unpredictable planetary-scale regime shifts, we will have to shift our attention towards drivers and feedbacks rather than focus exclusively on detectable system responses. Rockström et al (2009) identify nine areas most in need of planetary boundaries: climate change; biodiversity loss; input of nitrogen and phosphorus in soils and waters; stratospheric ozone depletion; ocean acidification; global consumption of freshwater; changes in land use for agriculture; air pollution; and chemical pollution.
A different emphasis is proposed by those scientists who have advanced the concept of planetary opportunities: solution-oriented research to provide realistic, context-specific pathways to a sustainable future (DeFries et al. 2012). The idea is to shift our attention to how human ingenuity can expand our ability to enhance human wellbeing (e.g., food security, human health) while minimizing and reversing environmental impacts. The concept is grounded in human innovation and the human capacity to develop alternative technologies, implement “green” infrastructure, and reconfigure institutional frameworks. The potential opportunities to explore solution-oriented research and policy strategies are amplified on an urbanizing planet, where such solutions can be replicated and can transform the way we build and inhabit the Earth.
Imagining a Resilient Urban Planet
While these different images of the future are both plausible and informative, they speak about the present more than the future. They all represent an extension of the current trajectory, as if the future would unfold along the path of our current way of asking questions and our current way of understanding and solving problems. Yes, these perspectives do account for uncertainty, but the uncertainty is defined by the confidence intervals around this trajectory. Both stories are grounded in the inevitable dichotomies of humans versus nature and technology versus ecology. These views are at best an incomplete account of what is possible: they reflect a limited ability to imagine the future beyond such archetypes. Why can we imagine smart technologies but not smart behaviors, smart institutions, and smart societies? Why think only of technology and not of humans and their societies co-evolving with Earth?
Understanding the co-evolution of human and natural systems is key to building a resilient society and transforming our habitat. One of the greatest questions in biology today is whether natural selection is the only process driving evolution, and what the other potential forces might be. To understand how evolution constructs the mechanisms of life, molecular biologists would argue, we also need to understand the self-organization of genes governing the evolution of cellular processes and influencing evolutionary change (Johnson and Kwan Lam 2010).
To function, life on Earth depends on the close cooperation of multiple elements. Biologists are curious about the properties of complex networks that supply resources, process waste, and regulate the system’s functioning at various scales of biological organization. West et al. (2005) propose that natural selection solved this problem by evolving hierarchical, fractal-like branching. Other characteristics of evolvable systems are flexibility (i.e., phenotypic plasticity) and novelty. This capacity for innovation is an essential precondition for any system to function. Gunderson and Holling (2002) have noted that if systems lack the capacity for innovation and novelty, they may become over-connected and dynamically locked, unable to adapt. To be resilient and evolve, they must create new structures and undergo dynamic change. Differentiation, modularity, and cross-scale interactions of organizational structures have been described as key characteristics of systems that are capable of simultaneously adapting and innovating (Allen and Holling 2010).
Understanding the co-evolution of human-natural systems will require advances in the evolutionary and social theories that explain how complex societies and cooperation have evolved. What role does human ingenuity play? In Cities as Hybrid Ecosystems I propose that coupled human-natural systems are governed not by natural selection or human ingenuity alone but by hybrid processes and mechanisms. It is their hybrid nature that makes them unstable and at the same time able to innovate. This novelty of hybrid systems is key to reorganization and renewal. Urbanization modifies the spatial and temporal variability of resources, creates new disturbances, and generates novel competitive interactions among species. This is particularly important because the distribution of ecological functions within and across scales is key to the system’s ability to regenerate and renew itself (Peterson et al. 1998).
The city that thinks like a planet: What does it look like?
In this blog article I have ventured to pose this question, but I will not venture to provide an answer. In fact, no single individual can do that. The answer resides in the collective imagination and evolving behaviors of people of diverse cultures who inhabit a diversity of places on the planet. Humanity has the capacity to think in the long term. Indeed, throughout history, people in societies faced with the prospect of deforestation or other environmental changes have successfully engaged in long-term thinking, as Jared Diamond (2005) reminds us: consider the Tokugawa shoguns, Inca emperors, New Guinea highlanders, or 16th-century German landowners. Or, more recently, the Chinese. Many countries in Europe, along with the United States, have dramatically reduced their air pollution even as their use of energy and combustion of fossil fuels has increased. Humans have the intellectual and moral capacity to do even more when tuned into challenging problems and engaged in solving them.
A city that thinks like a planet is not built on ready-made design solutions or planning strategies. Nor can we assume that the best solution will work equally well across the world, regardless of place and time. Instead, such a city will be built on principles that expand its drawing board and its collaborative action to include planetary processes and scales, positioning humanity within the evolution of Earth. Such a view acknowledges the history of the planet in every element or building block of the urban fabric, from the building to the sidewalk, from the back yard to the park, from the residential street to the highway. It is a view that is curious about understanding who we are and about taking advantage of the novel patterns, processes, and feedbacks that emerge from human and natural interactions. It is a city grounded in the here and now and simultaneously in the different time and spatial scales of the human and natural processes that govern the Earth. A city that thinks like a planet is simultaneously resilient and able to change.
How can such a perspective guide decisions in practice? Urban planners and decision makers, making strategic decisions and investments in public infrastructure, want to know whether certain generic properties or qualities of a city’s architecture and governance can predict its capacity to adapt and transform itself. Can such a shift in perspective provide a new lens, a new way to interpret the evolution of human settlements and to support humans in successfully adapting to change? Evidence emerging from the study of complex systems points to key properties that expand adaptive capacity while enabling systems to change: self-organization, heterogeneity, modularity, redundancy, and cross-scale interactions.
A co-evolutionary perspective shifts the focus of planning towards human-natural interactions, adaptive feedback mechanisms, and flexible institutional settings. Instead of predefining “solutions” that communities must implement, such a perspective focuses on understanding the “rules of the game,” in order to facilitate self-organization and to carefully balance top-down and bottom-up management strategies (Helbing 2013). Planning will then rely on principles that expand the heterogeneity of forms and functions in the urban structures and infrastructures that support the city. These principles support modularity (selective as opposed to generalized connectivity) to create interdependent, decentralized systems with some level of autonomy to evolve.
In cities across the world, people are setting great examples that will allow such hypotheses to be tested. Human perception of time and experience of change are emerging as keys to the shift toward a new perspective for building cities. We must develop reverse experiments to explore what works and what shifts the time scale of individual and collective behaviors. Several Northern European cities have adopted successful strategies to cut greenhouse gases and combined them with innovative approaches that will allow them to adapt to the inevitable consequences of climate change. One example is the Copenhagen 2025 Climate Plan. It lays out a path for the city to become the first carbon-neutral city by 2025 through efficient, zero-carbon mobility and building. The city is building a subway project that will place 85 percent of its inhabitants within 650 yards of a Metro station. Nearly three-quarters of the emissions reductions will come as people transition to less carbon-intensive ways of producing heat and electricity through a diverse supply of clean energy: biomass, wind, geothermal, and solar. Copenhagen is also one of the first cities to adopt a climate adaptation plan to reduce its vulnerability to the extreme storm events and rising seas expected in the next 100 years.
In the Netherlands, alternative strategies are being explored to allow people to live with the inevitable floods. These strategies involve building on water to develop floating communities, and engineering adaptive beach protections that take advantage of natural processes. The experimental Sand Motor project uses a combination of wind, waves, tides, and sand to replenish the eroded coast. The Dutch Rijkswaterstaat and the South Holland provincial authority placed a large amount of sand in an artificial peninsula, 1 km long and 2 km wide, extending into the sea, allowing waves and currents to redistribute it and build the dunes and beaches that protect the coast over time.
New York is setting an example for long-term planning by combining adaptation and transformation strategies in its plan to build a resilient city, and Mayor Michael Bloomberg has outlined a $19.5 billion plan to defend the city against rising seas. In many rapidly growing cities of the Global South, similar leadership is emerging. Johannesburg, for example, adopted one of the first climate change adaptation plans, as have Durban and Cape Town in South Africa and Quito, Ecuador; Ho Chi Minh City, Vietnam, has established a partnership with the City of Rotterdam, Netherlands, to develop a resilience strategy.
To think like a planet and explore what is possible we may need to reframe our questions. Instead of asking what is good for the planet, we must ask what is good for a planet inhabited by people. What is a good human habitat on Earth? And instead of seeking optimal solutions, we should identify principles that will inform the diverse communities across the world. The best choices may be temporary, since we do not fully understand the mechanisms of life, nor can we predict the consequences of human action. They may very well vary with place and depend on their own histories. But human action may constrain the choices available for life on earth.
Scenario Planning
Scenario planning offers a systematic and creative approach to thinking about the future, letting scientists and practitioners move beyond the old mindsets of ecological science and decision making. It provides a tool we can use to deal with the limited predictability of change at the planetary scale and to support decision-making under uncertainty. Scenarios help bring the future into present decisions (Schwartz 1996). They broaden perspectives, prompt new questions, and expose the possibility of surprise.
Scenarios have several great features. We expect that they can shift people’s attention toward resilience, redefine decision frameworks, expand the boundaries of predictive models, highlight the risks and opportunities of alternative future conditions, help monitor early warning signals, and identify robust strategies (Alberti et al. 2013).
A fundamental objective of scenario planning is to explore the interactions among uncertain trajectories that would otherwise be overlooked. Scenarios highlight the risks and opportunities of plausible future conditions. The hypothesis is that if planners and decision makers look at multiple divergent scenarios, they will engage in a more creative process for imagining solutions that would be invisible otherwise. Scenarios are narratives of plausible futures; they are not predictions. But they are extremely powerful when combined with predictive modeling. They help expand boundary conditions and provide a systematic approach we can use to deal with intractable uncertainties and assess alternative strategic actions. Scenarios can help us modify model assumptions and assess the sensitivities of model outcomes. Building scenarios can help us highlight gaps in our knowledge and identify the data we need to assess future trajectories.
Scenarios can also shine a spotlight on warning signals, allowing decision makers to anticipate unexpected regime shifts and to act in a timely and effective way. They can support decision making in uncertain conditions by providing a systematic way to assess the robustness of alternative strategies under a set of plausible future conditions. Although we do not know the probable impacts of uncertain futures, scenarios provide a basis for assessing critical sensitivities and identifying both potential thresholds and irreversible impacts, so that we can maximize the wellbeing of both humans and our environment.
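To make the robustness idea concrete, here is a minimal sketch of how strategies might be screened across scenarios; the strategies, scenarios, and scores below are invented for illustration and do not come from any of the plans cited above.

```python
# Illustrative sketch of scenario-based robustness screening (hypothetical
# strategies, scenarios, and scores; not from any cited plan).
# Each strategy gets a performance score under each plausible future,
# and the "robust" choice is the one with the best worst-case outcome.

scores = {
    # strategy:          {scenario: score}
    "sea_wall":          {"mild_warming": 8, "rapid_warming": 3, "storm_surge": 7},
    "managed_retreat":   {"mild_warming": 5, "rapid_warming": 6, "storm_surge": 6},
    "floating_housing":  {"mild_warming": 6, "rapid_warming": 7, "storm_surge": 4},
}

def most_robust(scores):
    # Maximin rule: rank strategies by their worst score across all scenarios.
    return max(scores, key=lambda s: min(scores[s].values()))

print(most_robust(scores))  # -> "managed_retreat" (best worst case in this toy example)
```

The maximin rule is only one possible criterion; the point is that scenarios let us compare strategies against the full set of plausible futures rather than against a single forecast.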
A new ethic for a hybrid planet
More than half a century ago, Aldo Leopold (1949) introduced the concept of “thinking like a mountain”: he wanted to expand the spatial and temporal scale of land conservation by incorporating the dynamics of the mountain. Defining a Land Ethic was a first step in acknowledging that we are all part of a larger community that includes soils, waters, plants, and animals, and all the components and processes that govern the land, including prey and predators. Now, along the same lines, Paul Hirsch and Bryan Norton, in Ethical Adaptation to Climate Change: Human Virtues of the Future (MIT Press, 2012), articulate a new environmental ethics by suggesting that we “think like a planet.” Building on Hirsch and Norton’s idea, we need to expand the dimensional space of our mental models of urban design and planning to the planetary scale.
My regular readers will have understood that I develop quasi-pathological obsessions with certain ideas or concepts that come back regularly in my articles, so much so that one could say each article tends toward an attempt to articulate the same idea. Among these obsessions is the idea of the archipelago, and you will soon see that I have not yet finished articulating my thoughts around it, since an ambitious project of the same name will soon complement my writing on this blog.
In the following text, I would like to approach the archipelago the same way I first did, through a philosopher who has been highly influential for me over the last decade, Édouard Glissant. For him, the archipelago is the figure of a utopia towards which the world should tend in order to construct a politics of “the relation” rather than a politics of the universal. Of course, an archipelago is a very evocative example of territories that construct simultaneously the difference of each island and a collective identity as a group; that is what makes it a strong figure for a new paradigm of sovereignty (see past article). However, according to Glissant, there is an additional complexity that makes this territory exemplary of an ideology. In order to look at it more closely, we first need to observe its opposite, the continental sea — etymologically, the archipelago is also a sea before being a group of islands. The paradigmatic example of the continental sea, because of both its history and its contemporaneity, is the Mediterranean Sea. The following excerpt is what Glissant writes about it in one of his few books translated into English; reading it, we might note the difficulty of translating his language — Glissant himself spoke of translation as an emerging art in itself — which Betsy Wing brilliantly managed to render from French into English:
Compared to the Mediterranean, which is an inner sea surrounded by lands, a sea that concentrates (in Greek, Hebrew, and Latin antiquity and later in the emergence of Islam, imposing the thought of the One), the Caribbean is, in contrast, a sea that explodes the scattered lands into an arc. A sea that diffracts. Without necessarily inferring any advantage whatsoever to their situation, the reality of archipelagos in the Caribbean or the Pacific provides a natural illustration of the thought of Relation. (Édouard Glissant, Poetics of Relation, trans. Betsy Wing, Ann Arbor: University of Michigan Press, 1997)
When one looks at maps of the history of the Mediterranean Sea, whether dominated by the Greeks, the Romans, the Christians, the Muslims, the Ottomans, or the European colonizers (French, British, and Italian), one easily understands the persistence of the efforts deployed for centuries to force the multiple into the uniform. In this regard, the Mediterranean is itself a sort of virtually neutral territory that each nation, one by one, attempts to dominate in order to raise its own identity to the rank of universal norm. The fact that the three largest monotheistic religions — two of which still have the largest followings in the world — emerged and developed around the Mediterranean is, for Glissant, symptomatic of this obsessive will to unify. The geography of the region is, of course, not alone to blame, but one understands that in the case of the Mediterranean Sea, the territory of water constitutes an object of covetousness for the nations that surround it.
The Caribbean, as an archipelago, constitutes, on the contrary, a layout of islands for which the water is the surrounding milieu; there is therefore no desire to dominate it, since it has virtually no limits and thus does not constitute a territory per se. History has not been ‘tender’ with these islands: they were among the first territories discovered in the late 15th century by Christopher Columbus, who enslaved the indigenous population. From the beginning of the 18th century, during the colonial domination of the Spanish, the French, and the British, hundreds of thousands of African slaves were brought by boat — the vessel of colonial uniformization, for Glissant — and provided the manpower of the colonies until the first part of the 19th century, when slavery was progressively abolished. In the meantime, starting in 1791, Toussaint L’Ouverture and the slaves of Saint-Domingue won a historic war against the French colonists and created an autonomous territory on what is now Haiti. The current United States blockade of Cuba also reveals the various tensions that are still at work around this singular territory.
Nonetheless, the Caribbean is also the territory of a concept Glissant worked on his whole life: creolization. Creole itself often refers to one of the local languages spoken in the Caribbean, which emerged from the encounter between a colonial language (mostly Spanish and French) and the various languages spoken by the enslaved population. More generally, creole can refer to a similar linguistic evolutionary process between any number of languages (colonial/colonized or not), and at a broader level still, the phenomenon of creolization, as understood by Glissant, is a process in which any aspect of an individual or collective identity encounters another and creates a new one, richer because it integrates the difference, if not the opposition, from which it is born. Jazz is one of the most evocative examples of such a process, as it was invented in the American plantations by slaves after their encounter with European musical instruments. This music cannot be considered apart from the resistive struggle that its birth constituted, since any act of freedom — creativity being its paroxysm — accomplished by the slave materializes, by definition, a negation of his or her status.
The archipelago is the territory of creolization par excellence, as it embodies a geography of islands whose coasts are in continuous contact with the unexpected — another important component of Glissant’s philosophy/poetry — of otherness. The processes of exchange — peaceful or violent — that occur there are a form of recognition of difference and otherness, which then constructs, voluntarily or not, new aspects of individual and collective identities that are richer than a simple synthesis of the two original ones. In this regard, Glissant’s philosophy of the Relation can be understood as the social interpretation of Baruch Spinoza’s philosophy of affects, which I have evoked many times in the past. Just like Spinoza, Glissant starts by analyzing the world in a ‘neutral’ way, unfolding the nature of exchanges between humans and nations, and only then establishes an ethics based on this philosophical scheme. For Glissant, the archipelago constitutes the territory of his ethics.
For more in English about the philosophy and history of the Caribbean, consult the excellent Public Archive edited and written by Professor Peter Hudson.
Lev Manovich is a leading theorist of cultural objects produced with digital technology, perhaps best known for The Language of New Media (MIT Press, 2001). I interviewed him about his most recent book, Software Takes Command (Bloomsbury Academic, July 2013).
Photograph published in Alan Kay and Adele Goldberg, "Personal Dynamic Media" with the caption, "Kids learning to use the interim Dynabook."
MICHAEL CONNOR: I want to start with the question of methodology. How does one study software? In other words, what is the object of study—do you focus more on the interface, or the underlying code, or some combination of the two?
LEV MANOVICH: The goal of my book is to understand media software—its genealogy (where does it come from), its anatomy (the key features shared by all media viewing and editing software), and its effects in the world (pragmatics). Specifically, I am concerned with two kinds of effects:
1) How media design software shapes the media being created, making some design choices seem natural and easy to execute, while hiding other design possibilities;
2) How media viewing / managing / remixing software shapes our experience of media and the actions we perform on it.
I devote significant space to the analysis of After Effects, Photoshop and Google Earth—these are my primary case studies.
Photoshop Toolbox from version 0.63 (1988) to 7.0 (2002).
I also want to understand what media is today conceptually, after its "softwarization." Do the concepts of media developed to account for industrial-era technologies, from photography to video, still apply to media that is designed and experienced with software? Do they need to be updated, or completely replaced by new more appropriate concepts? For example: do we still have different media or did they merge into a single new meta-medium? Are there some structural features which motion graphics, graphic designs, web sites, product designs, buildings, and video games all share, since they are all designed with software?
In short: does "media" still exist?
For me, "software studies" is about asking such broad questions, as opposed to only focusing on code or interface. Our world, media, economy, and social relations all run on software. So any investigation of code, software architectures, or interfaces is only valuable if it helps us to understand how these technologies are reshaping societies and individuals, and our imaginations.
MC: In order to ask these questions, your book begins by delving into some early ideas from the 1960s and 1970s that had a profound influence on later developers. In looking at these historical precedents, to what extent were you able to engage with the original software or documentation thereof? And to what extent were you relying on written texts by these early figures?
Photograph published in Kay and Goldberg with the caption, "The interim Dynabook system consists of processor, disk drive, display, keyboard, and pointing devices."
LM: In my book I only discuss the ideas of a few of the most important people, and for this, I could find enough sources. I focused on the theoretical ideas from the 1960s and 1970s which led to the development of modern media authoring environments, and on the common features of their interfaces. My primary documents were published articles by J. C. R. Licklider, Ivan Sutherland, Ted Nelson, Douglas Engelbart, Alan Kay, and their collaborators, and also a few surviving film clips—Sutherland demonstrating Sketchpad (the first interactive drawing system seen by the public), a tour of the Xerox Alto, etc. I also consulted manuals for a few early systems which are available online.
While I was doing this research, I was shocked to realize how little visual documentation exists of the key systems and software (Sketchpad, Xerox PARC's Alto, the first paint programs from the late 1960s and 1970s). We have the original articles published about these systems, with small black-and-white illustrations, and just a few low-resolution film clips. And nothing else. None of the historically important systems exist in emulation, so you can't get a feeling of what it was like to use them.
The situation is quite different with other media technologies. You can go to a film museum and experience a real panorama from the early 1840s, a camera obscura, or another pre-cinematic technology. Painters today use the same "new media" as the Impressionists in the 1870s—paints in tubes. With computer systems, most of the ideas behind contemporary media software come directly from the 1960s and 1970s—but the original systems are not accessible. Given the number of artists and programmers working today in "software art" and "creative coding," it should be possible to create emulations of at least a few of the most fundamental early systems. It's good to take care of your parents!
MC: One of the key early examples in your book is Alan Kay's concept of the "Dynabook," which posited the computer as "personal dynamic media" which could be used by all. These ideas were spelled out in his writing, and brought to some fruition in the Xerox Alto computer. I'd like to ask you about the documentation of these systems that does survive. What importance can we attach to these images of users, interfaces and the cultural objects produced with these systems?
Top and center: Images published in Kay and Goldberg with the captions, "An electronic circuit layout system programmed by a 15-year-old student" and "Data for this score was captured on a musical keyboard. A program then converts the data to standard musical notation." Bottom: The Alto screen showing windows with graphics drawn using commands in the Smalltalk programming language.
LM: The most informative set of images of Alan Kay's "Dynabook" (Xerox Alto) appears in the article he wrote with his collaborator Adele Goldberg in 1977. In my book I analyze this article in detail, interpreting it as "media theory" (as opposed to just documentation of the system). Kay said that reading McLuhan convinced him that the computer could be a medium for personal expression. The article presents the theoretical development of this idea and reports on its practical implementation (Xerox Alto).
Alan Turing theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society. But it was only Kay and his generation that extended the idea of simulation to media—thus turning the Universal Turing Machine into a Universal Media Machine, so to speak. Accordingly, Kay and Goldberg write in the article: "In a very real sense, simulation is the central notion of the Dynabook." However, as I suggest in the book, simulating existing media becomes a chance to extend them and add new functions. Kay and Goldberg themselves are clear about this—here is, for example, what they say about an electronic book: "It need not be treated as a simulated paper book since this is a new medium with new properties. A dynamic search may be made for a particular context. The non-sequential nature of the file medium and the use of dynamic manipulation allow a story to have many accessible points of view."
The many images of media software developed both by the Xerox team and by other Alto users which appear in the article illustrate these ideas. Kay and Goldberg strategically give us examples of how their "interim 'Dynabook'" can allow users to paint, draw, animate, compose music, and compose text. This made the Alto the first Universal Media Machine—the first computer offering the ability to compose and create cultural experiences and artifacts for all the senses.
MC: I'm a bit surprised to hear you say the words "just documentation!" In the case of Kay, his theoretical argument was perhaps more important than any single prototype. But, in general, one of the things I find compelling about your approach is your analysis of specific elements of interfaces and computer operations. So when you use the example of Ivan Sutherland's Sketchpad, wasn't it the documentation (the demo for a television show produced by MIT in 1964) that allowed you to make the argument that even this early software wasn't merely a simulation of drawing, but a partial reinvention of it?
Frames from Sketchpad demo video illustrating the program’s use of constraints. Left column: a user selects parts of a drawing. Right column: Sketchpad automatically adjusts the drawing. (The captured frames were edited in Photoshop to show the Sketchpad screen more clearly.)
LM: The reason I said "just documentation" is that normally people don't think about Sutherland, Engelbart or Kay as "media theorists," and I think it's more common to read their work as technical reports.
On to Sutherland. Sutherland describes the new features of his system in his Ph.D. thesis and in the published article, so in principle you can just read them and get these ideas. But at the same time, the short film clip which demonstrates Sketchpad is invaluable—it helps you to better understand how these new features (such as "constraint satisfaction") actually worked, and also to "experience" them emotionally. Since I had seen the film clip years before I looked at Sutherland's Ph.D. thesis (now available online), I can't really say which was more important. Maybe it was not even the original film clip, but its use in one of Alan Kay's lectures. In the lecture Kay shows the clip and explains how important these new features were.
MC: The Sketchpad demo does have a visceral impact. You began this interview by asking, "does media still exist?" Along these lines, the Sutherland clip raises the question of whether drawing, for one, still exists. The implications of this seem pretty enormous. Now that you have established the principle that all media are contingent on the software that produces them, do we need to begin analyzing all media (film, drawing, or photography) from the point of view of software studies? Where might that lead?
LM: The answer I arrive at, after 200 pages, to the question "does media still exist?" is relevant to all media designed or accessed with software tools. What we identify by conceptual inertia as "properties" of different mediums are actually the properties of media software—their interfaces, the tools, and the techniques they make possible for navigating, creating, editing, and sharing media documents. For example, the ability to automatically switch between different views of a document in Acrobat Reader or Microsoft Word is not a property of “text documents,” but a result of software techniques whose heritage can be traced to Engelbart’s “view control.” Similarly, "zoom" or "pan" is not exclusive to digital images or texts or 3D scenes—it is a property of all modern media software.
Along with these and a number of other "media-independent" techniques (such as "search") which are built into all media software, there are also "media-specific" techniques which can only be used with particular data types. For example, we can extrude a 2D shape to make a 3D model, but we can't extrude a text. Or, we can change the contrast and saturation of a photo, but these operations make no sense in relation to 3D models, texts, or sound.
So when we think of photography, film or any other medium, we can think of it as a combination of "media-independent" techniques which it shares with all other mediums, and also techniques which are specific to it.
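One way to make this distinction tangible is as a small class hierarchy, with shared operations on a base class and medium-specific ones on subclasses. This is a toy sketch, not anything from Manovich's book; all class and method names are invented for illustration.

```python
# Toy sketch (not from Manovich's book): expressing the distinction between
# "media-independent" and "media-specific" techniques as a small class hierarchy.

class MediaObject:
    """Base class: techniques defined here are shared by all media types."""
    def search(self, query):            # media-independent
        print(f"searching for {query!r}")
    def zoom(self, factor):             # media-independent
        print(f"zooming by {factor}x")

class Image(MediaObject):
    def adjust_saturation(self, amount):   # media-specific: only meaningful for images
        print(f"saturation {amount:+d}")

class Shape2D(MediaObject):
    def extrude(self, depth):              # media-specific: 2D shape -> 3D model
        print(f"extruding to depth {depth}")

photo, outline = Image(), Shape2D()
photo.zoom(2)                  # every medium supports the shared techniques
photo.adjust_saturation(+10)   # ...while extrude() is not defined on Image,
outline.extrude(5)             # just as saturation makes no sense for a 2D outline
```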
MC: I'd proposed the title, "Don't Study Media, Study Software" for this article. But it sounds like you are taking a more balanced view?
LM: Your title makes me nervous, because some people are likely to misinterpret it. I prefer to study software such as Twitter, Facebook, Instagram, Photoshop, After Effects, game engines, etc., and use this understanding in interpreting the content created with this software—tweets, messages, social media photos, professional designs, video games, etc. For example, just this morning I was looking at a presentation by one of Twitter's engineers about the service, and learned that sometimes the responses to tweets can arrive before the tweet itself. This is important to know if we are to analyze the content of Twitter communication between people, for example.
Today, all cultural forms which require a user to click even once on their device to access and/or participate run on software. We can't ignore technology any longer. In short: "software takes command."
As life has evolved, its complexity has increased exponentially, just like Moore’s law. Now geneticists have extrapolated this trend backwards and found that by this measure, life is older than the Earth itself.
Here’s an interesting idea. Moore’s Law states that the number of transistors on an integrated circuit doubles every two years or so. That has produced an exponential increase in the number of transistors on microchips and continues to do so.
But if an observer today were to measure this rate of increase, it would be straightforward to extrapolate backwards and work out when the number of transistors on a chip had dwindled to just one. In other words, the date when microchips were first developed, in the 1960s.
A similar process works with scientific publications. Between 1960 and 1990, they doubled in number every 15 years or so. Extrapolating this backwards gives the origin of scientific publication as 1710, about the time of Isaac Newton.
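Under a simple doubling law, this backward extrapolation is straightforward arithmetic: count how many doublings separate one unit from today's count and step back that many doubling times. Here is a minimal sketch, with illustrative numbers rather than measured data.

```python
import math

def origin_year(count_now, year_now, doubling_time_years):
    """Extrapolate a doubling law backwards to the point where the count was 1."""
    doublings = math.log2(count_now)   # doublings separating a count of 1 from count_now
    return year_now - doublings * doubling_time_years

# Illustrative Moore's-Law-style numbers (assumed, not measured):
# roughly 10 billion transistors per chip today, doubling about every two years.
print(round(origin_year(1e10, 2023, 2)))   # lands around the late 1950s / early 1960s
```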
Today, Alexei Sharov at the National Institute on Aging in Baltimore and his colleague Richard Gordon at the Gulf Specimen Marine Laboratory in Florida have taken a similar approach to complexity and life.
These guys argue that it’s possible to measure the complexity of life and the rate at which it has increased from prokaryotes to eukaryotes to more complex creatures such as worms, fish and finally mammals. That produces a clear exponential increase identical to that behind Moore’s Law, although in this case the doubling time is 376 million years rather than two years.
That raises an interesting question. What happens if you extrapolate backwards to the point of no complexity–the origin of life?
Sharov and Gordon say that the evidence by this measure is clear. “Linear regression of genetic complexity (on a log scale) extrapolated back to just one base pair suggests the time of the origin of life = 9.7 ± 2.5 billion years ago,” they say.
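As a concrete illustration of that method, the snippet below fits log-scale genome complexity against time for a few rough milestones and extrapolates back to a single base pair. The milestone values are placeholders chosen for illustration, not Sharov and Gordon's dataset, so the resulting date only roughly approximates their figure.

```python
import numpy as np

# Rough, assumed milestones: (age in billions of years ago, functional genome size in base pairs).
# Placeholders for illustration only; not the data used by Sharov and Gordon.
ages_bya = np.array([3.5, 2.0, 0.5, 0.0])
complexity_bp = np.array([5e5, 3e6, 1e8, 3e9])

# Fit log10(complexity) as a linear function of time (time runs forward, so use -age).
slope, intercept = np.polyfit(-ages_bya, np.log10(complexity_bp), 1)

# Extrapolate back to log10(complexity) = 0, i.e. a single base pair.
origin_bya = intercept / slope
print(f"Extrapolated origin of life: about {origin_bya:.1f} billion years ago")
```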
And since the Earth is only 4.5 billion years old, that raises a whole series of other questions. Not least of these is how and where life began.
Of course, there are many points to debate in this analysis. The nature of evolution is filled with subtleties that most biologists would agree we do not yet fully understand.
For example, is it reasonable to think that the complexity of life has increased at the same rate throughout Earth’s history? Perhaps the early steps in the origin of life created complexity much more quickly than evolution does now, which would allow the timescale to be squeezed into the lifespan of the Earth.
Sharov and Gordon reject this argument, saying that it is suspiciously similar to arguments that squeeze the origin of life into the timespan outlined in the biblical Book of Genesis.
Let’s suppose for a minute that these guys are correct and ask about the implications of the idea. They say there is good evidence that bacterial spores can be rejuvenated after many millions of years, perhaps stored in ice.
They also point out that astronomers believe that the Sun formed from the remnants of an earlier star, so it would be no surprise that life from this period might be preserved in the gas, dust and ice clouds that remained. By this way of thinking, life on Earth is a continuation of a process that began many billions of years earlier around our star’s forerunner.
Sharov and Gordon say their interpretation also explains the Fermi paradox, which asks why, if the universe is filled with intelligent life, we see no evidence of it.
However, if life takes 10 billion years to evolve to the level of complexity associated with humans, then we may be among the first, if not the first, intelligent civilisation in our galaxy. And this is the reason why when we gaze into space, we do not yet see signs of other intelligent species.
There’s no question that this is a controversial idea that will ruffle more than a few feathers amongst evolutionary theorists.
But it is also provocative, interesting and exciting. All the more reason to debate it in detail.
The Lewis Residence by Frank Gehry (1985–1995), Peter Eisenman’s unrealized Biocentrum (1987), Chuck Hoberman’s Expanding Sphere (1992) and Shoei Yoh’s roof structures for Odawara (1991) and Galaxy Toyama (1992) Gymnasiums: four seminal projects that established bold new directions for architectural research by experimenting with novel digital tools. Curated by architect Greg Lynn, Archaeology of the Digital is conceived as an investigation into the foundations of digital architecture at the end of the 1980s and the beginning of the 1990s.
Watch an introduction to Archaeology of the Digital by curator Greg Lynn here.
Watch a conversation between Peter Eisenman, architect of the Biozentrum and Greg Lynn here.
The vernissage for Archaeology of the Digital is 7 May 2013.
On 8 May, from 2 pm to 6 pm, Greg Lynn discusses the foundations of digital architecture with Peter Eisenman, Chuck Hoberman and Shoei Yoh.
Personal comment:
Though we are not really in this line of thinking regarding what digital technologies mean/will mean for architecture, this is an interesting "archaeological" exhibition next May at the CCA about the rise of computation and algorithmic tools in architecture in the late 1980s and early 1990s.
And an interesting discussion as well between Peter Eisenman and his former "apprentice", Greg Lynn.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.