Lev Manovich is a leading theorist of cultural objects produced with digital technology, perhaps best known for The Language of New Media (MIT Press, 2001). I interviewed him about his most recent book, Software Takes Command (Bloomsbury Academic, July 2013).
Photograph published in Alan Kay and Adele Goldberg, "Personal Dynamic Media" with the caption, "Kids learning to use the interim Dynabook."
MICHAEL CONNOR: I want to start with the question of methodology. How does one study software? In other words, what is the object of study—do you focus more on the interface, or the underlying code, or some combination of the two?
LEV MANOVICH: The goal of my book is to understand media software—its genealogy (where does it come from), its anatomy (the key features shared by all media viewing and editing software), and its effects in the world (pragmatics). Specifically, I am concerned with two kinds of effects:
1) How media design software shapes the media being created, making some design choices seem natural and easy to execute, while hiding other design possibilities;
2) How media viewing / managing / remixing software shapes our experience of media and the actions we perform on it.
I devote significant space to the analysis of After Effects, Photoshop and Google Earth—these are my primary case studies.
Photoshop Toolbox from version 0.63 (1988) to 7.0 (2002).
I also want to understand what media is today, conceptually, after its "softwarization." Do the concepts of media developed to account for industrial-era technologies, from photography to video, still apply to media that is designed and experienced with software? Do they need to be updated, or completely replaced by new, more appropriate concepts? For example: do we still have different media, or did they merge into a single new meta-medium? Are there structural features which motion graphics, graphic designs, web sites, product designs, buildings, and video games all share, since they are all designed with software?
In short: does "media" still exist?
For me, "software studies" is about asking such broad questions, as opposed to only focusing on code or interface. Our world, media, economy, and social relations all run on software. So any investigation of code, software architectures, or interfaces is only valuable if it helps us to understand how these technologies are reshaping societies and individuals, and our imaginations.
MC: In order to ask these questions, your book begins by delving into some early ideas from the 1960s and 1970s that had a profound influence on later developers. In looking at these historical precedents, to what extent were you able to engage with the original software or documentation thereof? And to what extent were you relying on written texts by these early figures?
Photograph published in Kay and Goldberg with the caption, "The interim Dynabook system consists of processor, disk drive, display, keyboard, and pointing devices."
LM: In my book I only discuss the ideas of a few of the most important people, and for this, I could find enough sources. I focused on the theoretical ideas from the 1960s and 1970s which led to the development of modern media authoring environments, and the common features of their interfaces. My primary documents were published articles by J. C. R. Licklider, Ivan Sutherland, Ted Nelson, Douglas Engelbart, Alan Kay, and their collaborators, and also a few surviving film clips—Sutherland demonstrating Sketchpad (the first interactive drawing system seen by the public), a tour of the Xerox Alto, etc. I also consulted manuals for a few early systems which are available online.
While I was doing this research, I was shocked to realize how little visual documentation of the key systems and software (Sketchpad, Xerox PARC's Alto, the first paint programs from the late 1960s and 1970s) exists. We have the original articles published about these systems, with small black-and-white illustrations, and just a few low-resolution film clips. And nothing else. None of the historically important systems exist in emulation, so you can't get a feeling for what it was like to use them.
This situation is quite different with other media technologies. You can go to a film museum and experience a real Panorama from the early 1840s, a camera obscura, or another pre-cinematic technology. Painters today use the same "new media" as the Impressionists in the 1870s—paints in tubes. With computer systems, most of the ideas behind contemporary media software come directly from the 1960s and 1970s—but the original systems are not accessible. Given the number of artists and programmers working today in "software art" and "creative coding," it should be possible to create emulations of at least a few of the most fundamental early systems. It's good to take care of your parents!
MC: One of the key early examples in your book is Alan Kay's concept of the "Dynabook," which posited the computer as "personal dynamic media" which could be used by all. These ideas were spelled out in his writing, and brought to some fruition in the Xerox Alto computer. I'd like to ask you about the documentation of these systems that does survive. What importance can we attach to these images of users, interfaces and the cultural objects produced with these systems?
Top and center: Images published in Kay and Goldberg with the captions, "An electronic circuit layout system programmed by a 15-year- old student" and "Data for this score was captured on a musical keyboard. A program then converts the data to standard musical notation." Bottom: The Alto Screen showing windows with graphics drawn using commands in Smalltalk programming language.
LM: The most informative sets of images of Alan Kay's "Dynabook" (Xerox Alto) appear in the article he wrote with his collaborator Adele Goldberg in 1977. In my book I analyze this article in detail, interpreting it as "media theory" (as opposed to just documentation of the system). Kay said that reading McLuhan convinced him that the computer could be a medium for personal expression. The article presents the theoretical development of this idea and reports on its practical implementation (the Xerox Alto).
Alan Turing theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society. But it was only Kay and his generation that extended the idea of simulation to media—thus turning the Universal Turing Machine into a Universal Media Machine, so to speak. Accordingly, Kay and Goldberg write in the article: "In a very real sense, simulation is the central notion of the Dynabook." However, as I suggest in the book, simulating existing media became a chance to extend them and add new functions. Kay and Goldberg themselves are clear about this—here is, for example, what they say about an electronic book: "It need not be treated as a simulated paper book since this is a new medium with new properties. A dynamic search may be made for a particular context. The non-sequential nature of the file medium and the use of dynamic manipulation allow a story to have many accessible points of view."
The many images of media software, developed both by the Xerox team and by other Alto users, which appear in the article illustrate these ideas. Kay and Goldberg strategically give us examples of how their "interim 'Dynabook'" can allow users to paint, draw, animate, compose music, and compose text. This made the Alto the first Universal Media Machine—the first computer offering the ability to compose and create cultural experiences and artifacts for all the senses.
MC: I'm a bit surprised to hear you say the words "just documentation!" In the case of Kay, his theoretical argument was perhaps more important than any single prototype. But, in general, one of the things I find compelling about your approach is your analysis of specific elements of interfaces and computer operations. So when you use the example of Ivan Sutherland's Sketchpad, wasn't it the documentation (the demo for a television show produced by MIT in 1964) that allowed you to make the argument that even this early software wasn't merely a simulation of drawing, but a partial reinvention of it?
Frames from Sketchpad demo video illustrating the program’s use of constraints. Left column: a user selects parts of a drawing. Right column: Sketchpad automatically adjusts the drawing. (The captured frames were edited in Photoshop to show the Sketchpad screen more clearly.)
LM: The reason I said "just documentation" is that normally people don't think about Sutherland, Engelbart, or Kay as "media theorists," and I think it's more common to read their work as technical reports.
On to Sutherland. Sutherland describes the new features of his system in his PhD thesis and the published article, so in principle you can just read them and get these ideas. But at the same time, the short film clip which demonstrates Sketchpad is invaluable—it helps you to better understand how these new features (such as "constraint satisfaction") actually worked, and also to "experience" them emotionally. Since I saw the film clip years before I looked at Sutherland's PhD thesis (now available online), I can't really say which was more important. Maybe it was not even the original film clip, but its use in one of Alan Kay's lectures. In the lecture, Kay shows the clip and explains how important these new features were.
MC: The Sketchpad demo does have a visceral impact. You began this interview by asking, "does media still exist?" Along these lines, the Sutherland clip raises the question of whether drawing, for one, still exists. The implications of this seem pretty enormous. Now that you have established the principle that all media are contingent on the software that produces them, do we need to begin analyzing all media (film, drawing, or photography) from the point of view of software studies? Where might that lead?
LM: The answer I arrive at, after 200 pages, to the question "does media still exist?" is relevant to all media designed or accessed with software tools. What we identify by conceptual inertia as "properties" of different mediums are actually the properties of media software—their interfaces, their tools, and the techniques they make possible for navigating, creating, editing, and sharing media documents. For example, the ability to automatically switch between different views of a document in Acrobat Reader or Microsoft Word is not a property of "text documents," but a result of software techniques whose heritage can be traced to Engelbart's "view control." Similarly, "zoom" or "pan" is not exclusive to digital images or texts or 3D scenes—it's a property of all modern media software.
Along with these and a number of other "media-independent" techniques (such as "search") which are built into all media software, there are also "media-specific" techniques which can only be used with particular data types. For example, we can extrude a 2D shape to make a 3D model, but we can't extrude a text. Or, we can change the contrast and saturation of a photo, but these operations do not make sense in relation to 3D models, texts, or sound.
So when we think of photography, film or any other medium, we can think of it as a combination of "media-independent" techniques which it shares with all other mediums, and also techniques which are specific to it.
MC: I'd proposed the title, "Don't Study Media, Study Software" for this article. But it sounds like you are taking a more balanced view?
LM: Your title makes me nervous, because some people are likely to misinterpret it. I prefer to study software such as Twitter, Facebook, Instagram, Photoshop, After Effects, game engines, etc., and use this understanding in interpreting the content created with this software—tweets, messages, social media photos, professional designs, video games, etc. For example, just this morning I was looking at a presentation by one of Twitter's engineers about the service, and learned that sometimes the responses to tweets can arrive before the tweet itself. This is important to know if we are to analyze the content of Twitter communication between people, for example.
Today, all cultural forms which require a user to click even once on their device to access and/or participate run on software. We can't ignore technology any longer. In short: "software takes command."
I’m really excited to share my new essay, “The Relevance of Algorithms,” with those of you who are interested in such things. It’s been a treat to get to think through the issues surrounding algorithms and their place in public culture and knowledge, with some of the participants in Culture Digitally (here’s the full litany: Braun, Gillespie, Striphas, Thomas, the third CD podcast, and Anderson‘s post just last week), as well as with panelists and attendees at the recent 4S and AoIR conferences, with colleagues at Microsoft Research, and with all of you who are gravitating towards these issues in their scholarship right now.
The motivation of the essay was two-fold: first, in my research on online platforms and their efforts to manage what they deem to be “bad content,” I’m finding an emerging array of algorithmic techniques being deployed: for either locating and removing sex, violence, and other offenses, or (more troublingly) for quietly choreographing some users away from questionable materials while keeping them available for others. Second, I’ve been helping to shepherd along this anthology, and wanted my contribution to be in the spirit of its aims: to take one step back from my research to articulate an emerging issue of concern or theoretical insight that (I hope) will be of value to my colleagues in communication, sociology, science & technology studies, and information science.
The anthology will ideally be out in Fall 2013. And we’re still finalizing the subtitle. So here’s the best citation I have.
Gillespie, Tarleton. “The Relevance of Algorithms.” Forthcoming in Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.
Below is the introduction, to give you a taste.
Algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. Search engines help us navigate massive databases of information, or the entire web. Recommendation algorithms map our preferences against others, suggesting new or forgotten bits of culture for us to encounter. Algorithms manage our interactions on social networking sites, highlighting the news of one friend while excluding another’s. Algorithms designed to calculate what is “hot” or “trending” or “most discussed” skim the cream from the seemingly boundless chatter that’s on offer. Together, these algorithms not only help us find information, they provide a means to know what there is to know and how to know it, to participate in social and political discourse, and to familiarize ourselves with the publics in which we participate. They are now a key logic governing the flows of information on which we depend, with the “power to enable and assign meaningfulness, managing how information is perceived by users, the ‘distribution of the sensible.’” (Langlois 2012)
Algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations. The procedures name both a problem and the steps by which it should be solved. Instructions for navigation may be considered an algorithm, or the mathematical formulas required to predict the movement of a celestial body across the sky. “Algorithms do things, and their syntax embodies a command structure to enable this to happen” (Goffey 2008, 17). We might think of computers, then, fundamentally as algorithm machines — designed to store and read data, apply mathematical procedures to it in a controlled fashion, and offer new information as the output.
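To make this broad definition concrete, here is a minimal sketch of an "algorithm" in exactly Gillespie's sense: an encoded procedure that transforms input data (a query and a set of documents) into a desired output (a ranked list), based on specified calculations. This toy example is purely illustrative and is not drawn from any actual search engine; every name in it is invented for the sketch.

```python
def relevance_score(query, document):
    """A specified calculation: count how many query terms appear in the document."""
    query_terms = set(query.lower().split())
    doc_terms = set(document.lower().split())
    return len(query_terms & doc_terms)

def rank(query, documents, top_k=3):
    """The encoded procedure: transform input data into a ranked output."""
    scored = sorted(documents,
                    key=lambda d: relevance_score(query, d),
                    reverse=True)
    return scored[:top_k]

docs = [
    "algorithms select relevant information",
    "a recipe for apple pie",
    "how search algorithms rank information",
]
print(rank("relevant algorithms", docs, top_k=2))
```

Even in a sketch this small, the essay's point is visible: the choice of calculation (here, bare term overlap) silently encodes a judgment about what counts as "relevant."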
But as we have embraced computational tools as our primary media of expression, and have made not just mathematics but all information digital, we are subjecting human discourse and knowledge to these procedural logics that undergird all computation. And there are specific implications when we use algorithms to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions.
These algorithms, which I’ll call public relevance algorithms, are — by the very same mathematical procedures — producing and certifying knowledge. The algorithmic assessment of information, then, represents a particular knowledge logic, one built on specific presumptions about what knowledge is and how one should identify its most relevant components. That we are now turning to algorithms to identify what we need to know is as momentous as having relied on credentialed experts, the scientific method, common sense, or the word of God.
What we need is an interrogation of algorithms as a key feature of our information ecosystem (Anderson 2011), and of the cultural forms emerging in their shadows (Striphas 2010), with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. I will highlight six dimensions of public relevance algorithms that have political valence:
1. Patterns of inclusion: the choices behind what makes it into an index in the first place, what is excluded, and how data is made algorithm ready
2. Cycles of anticipation: the implications of algorithm providers’ attempts to thoroughly know and predict their users, and how the conclusions they draw can matter
3. The evaluation of relevance: the criteria by which algorithms determine what is relevant, how those criteria are obscured from us, and how they enact political choices about appropriate and legitimate knowledge
4. The promise of algorithmic objectivity: the way the technical character of the algorithm is positioned as an assurance of impartiality, and how that claim is maintained in the face of controversy
5. Entanglement with practice: how users reshape their practices to suit the algorithms they depend on, and how they can turn algorithms into terrains for political contest, sometimes even to interrogate the politics of the algorithm itself
6. The production of calculated publics: how the algorithmic presentation of publics back to themselves shapes a public’s sense of itself, and who is best positioned to benefit from that knowledge.
Considering how fast these technologies and the uses to which they are put are changing, this list must be taken as provisional, not exhaustive. But as I see it, these are the most important lines of inquiry into understanding algorithms as emerging tools of public knowledge and discourse.
It would also be seductively easy to get this wrong. In attempting to say something of substance about the way algorithms are shifting our public discourse, we must firmly resist putting the technology in the explanatory driver’s seat. While recent sociological study of the Internet has labored to undo the simplistic technological determinism that plagued earlier work, that determinism remains an alluring analytical stance. A sociological analysis must not conceive of algorithms as abstract, technical achievements, but must unpack the warm human and institutional choices that lie behind these cold mechanisms. I suspect that a more fruitful approach will turn as much to the sociology of knowledge as to the sociology of technology — to see how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known. This might help reveal that the seemingly solid algorithm is in fact a fragile accomplishment.
~ ~ ~
Here is the full article [PDF]. Please feel free to share it, or point people to this post.
For all the profound changes the Internet has brought about, the network itself–the data centers, exchange points, and wires that make it possible–remains for most users as disembodied and non-material as the data that flows within and across it. In his recent book, Tubes: A Journey to the Center of the Internet, journalist Andrew Blum goes in search of the Internet’s infrastructure to find the physical pieces that make up our digital world. Tubes is a book about real places on the map: their sounds and smells, their storied pasts, their physical details, and the people who work there. In the following conversation with Gregory Wessner, the League’s Special Projects Director, Blum talks about what he saw on his trip to the Internet, what architects need to know about it, and what we can expect to see in the future.
GREGORY WESSNER: So tell me about Tubes.
ANDREW BLUM: Tubes–A Journey to the Center of the Internet is my attempt to visit the physical infrastructure of the Internet. It comes out of about ten years of writing, mostly about architecture and buildings. What I realized around 2008/2009 was that I was supposedly writing about buildings, but I was spending all my time sitting in front of a screen. And at the end of the day, I would get up and look at the smaller screen that I started carrying in my pocket in 2007. There seemed to be this huge disconnect between the physical world that I was supposed to be writing about and the day-to-day life that I was living. Even stranger still, that virtual world seemed to have no physical embodiment. There was no way to bridge the gap between the world I experienced and the world on the other side of the screen. Until one day my Internet broke and a repairman came to fix it, and he followed the wire from behind the couch, down to the basement, and outside to the back of the building.
I started to wonder what would happen if you yanked the wire from the wall to see where it would lead.
GW: And you followed him?
AB: And I followed him. And then he saw a squirrel running on the wire and said, “I think a squirrel is chewing on your Internet.” And I thought if a squirrel can chew on this piece of the Internet then there must be other physical pieces of the Internet. So I started to wonder what would happen if you yanked the wire from the wall to see where it would lead.
GW: And so what did you find? What are the physical components of the Internet?
AB: I would say there are three different categories. First, if the Internet is a network of networks, the places where networks meet are the most important, and these are called Internet exchange points. The surprising thing is that there are about a dozen buildings in the world that are more important, by an order of magnitude, than the next tier. So while the Internet is theoretically everywhere, it is predominantly concentrated in these dozen buildings, to the point that for international communications, just as you would fly through New York or Washington on your way to Europe, your Internet traffic is always going to pass through the same key hubs, which include New York, London, Amsterdam, Frankfurt, Los Angeles, San Francisco, and others. So that’s one category.
The second category is data centers. The generic building type of an Internet exchange point is the data center. It’s a place for equipment and things like that. But I use data centers to mean the places where data is stored, where it is warehoused, and those buildings concentrate around two poles. They stay close to where people are and/or to the exchange points. Or, at the most interesting scale, they’re in places that are optimized for efficiency, which at the moment means predominantly Oregon, Washington, North Carolina, Sweden, and places like that. These include the big super data centers of companies like Facebook and Google and Microsoft. And then the third category is basically the lines in between: the fiber optic cables that connect buildings, the fiber optic cables that connect cities, and the undersea cables that connect continents. Those are the three categories of physicality of the Internet.
GW: And to what degree did you see any design thinking applied to any of this?
AB: A surprising amount, or at least more than you might think. For example, in the first category of exchange points, one of the most prominent companies in the U.S., and in some ways dominant internationally, is Equinix. From the outside, the typical data center–not all of them, but many of them, and certainly the biggest one in Ashburn, Virginia, which has the single greatest concentration of networks in the U.S.–looks like the ultimate bland concrete box.
(Left) An Equinix Data Center in Ashburn, VA
(Right) Data Center Server Room, photo via Shutterstock.
It looks like the back of a Walmart. There’s no sign by the door. They tell you to look for the door with the ashtray next to it to know which one is the entrance. But inside, the buildings are deliberately designed to look the way you would hope the Internet would look. They’re basically hotels, and the customers are the network engineers who care for the equipment. So the interiors are meant to appeal–very explicitly meant to appeal–to the kind of sci-fi sensibility of network engineers. That means a kind of big, red silo when you walk in. It means blue spotlights and very dramatic lighting, with a sort of black ceiling like a theater. When you ask why it is dark, why there are blue lights, they pretend it’s for security, but in fact the founder is very explicit about the fact that he knew this had to appeal to network engineers. They call the aesthetic “cyberific.”
GW: So a lot of these spaces are actually designed to appeal to human sensibilities and are not purely driven by technical considerations?
AB: Well, there’s certainly a technical element to the way they’re designed, and Equinix prides itself on having the best technical design. They have a patent on cable management, because one element of the building’s performance is the management of the cables. It’s all about cages, it’s all about the router in a cage or a bank of routers in a cage, and then the cables that go up into the ceiling are strung in layers of racks. Four layers of racks, each with a different type of cable: power, fiber, copper, and inner duct, which is like super fiber. The whole building is designed to accommodate these connections between cables, and that’s an explicitly designed piece. But in terms of what the building feels like, it’s meant to appeal to network engineers.
GW: Along this whole journey to the Internet, you met a lot of network engineers and technologists and related computer people. Did you meet any designers who had a hand in shaping any of this? Did you search them out?
AB: The one designer I spoke with was the guy who designed Facebook’s data center in Oregon, a guy named Neil Sheehan of Sheehan Partners in Chicago. The building is a beautiful building. It has the feel of a Donald Judd sculpture to it. It comes right up out of the landscape and it’s got these beautifully clean concrete walls and a sort of light well on top with a sophisticated entry court. What’s interesting about it is that Facebook has been very explicit that this building is a showpiece. So if you believe that architecture expresses ideals, and for the most part the Internet has not had an idealistic architectural expression, here Facebook is saying with this building that they want to show the way in which they care about their practices and their infrastructure and their customers.
Facebook’s Prineville Data Center, Oregon | Sheehan Partners, Ltd. photos: Jonnu Singleton
GW: Notwithstanding this showpiece of Facebook’s, what architectural ideals has the Internet been conveying up until now?
AB: At their best, the buildings of the Internet exhibit this sort of incredibly robust functionalism that comes out of telecom, where the buildings are very thoughtfully laid out and very strong. The way I described them at one point is that they’re like very fancy plumbing supply warehouses. There’s clearly thought put into them. They’re not just any building. Yet they’re very deliberately anonymous and discreet.
GW: Deliberately anonymous?
AB: Because the general trend is that these are unmarked buildings. When you walk by them–and I’m thinking of one in Amsterdam that is essentially the office for one of the biggest global backbones–it’s a building that looks like it could be a mechanic’s shop. It’s a relatively small, relatively industrial-looking building. But if you look closely, it clearly is an expensive building. It’s clearly robustly built, and built for its specific purpose. It has a very defined and incredibly functional expression. But then there is a whole class of buildings that don’t have that. They feel sloppy and fast, like basic suburban commercial/industrial boxes. It’s the same thing with the cable landing stations, the places where the trans-oceanic cables make landfall; the language they speak is of robustness and of stability. They’re clearly there to last. But they’re also so quiet, they try not to look like anything. They are assertively anonymous buildings.
GW: With this new Facebook data center in Oregon, do you think that the buildings of the Internet will become more representational and less anonymous?
AB: Yes, I do. A good example is a data center in London called Telehouse West. It’s a campus with three buildings, one each from 1991, 2001, and 2011. The first one is next to a relatively significant Grimshaw building, the Financial Times printworks. The first Telehouse West building is very British High-Tech, but it was built not for the Internet but as a back office for banks. The middle one from 2001 is nothing; it is a quiet and horrible building. But the 2011 one has this very sophisticated, pixelated facade. It’s a big windowless box, but the pixelated façade is very much saying, I am a building of technology, I am a giant machine.
Telehouse West | photo: James Brindle
If the whole notion of the cloud is to replace the hard drive in the computer on your desk with a hard drive in a computer far away, as we get more sophisticated about thinking about the consequences of that replacement, you start to think more about what that far-away hard drive computer is. As soon as you start to think more about what it is, then you need pictures. And as soon as it becomes a sort of corporate emblem in some way or another then that architectural expression follows. That’s absolutely what Facebook has done.
GW: What does an architect need to think about when designing these new representational buildings of the Internet?
AB: I hadn’t thought of that. I know the trend in the past has been to express the ethereality of the Internet. It’s been about fluid curves and strange shapes and somehow expressing that it’s virtual. And the future trend, I think, would be the opposite. It would be about expressing stability. It’s hard to say how deliberate this was, but the Facebook building reads like a Greek temple. I mean, it’s this long, low building that sits in the landscape, on the top of this butte. It reads as stable: the concrete walls, the way it’s landscaped. There’s no reference to the notion that this is somehow ethereal. It feels just the opposite. But that makes sense. These buildings are the next evolution. Sort of like a bank used to be: a bank was meant to express stability. Up until now, the trust has been abstract, but I think that the trust will soon be literal, and we’ll soon want to see where this stuff is, and as a result, we’ll express that sense of stability.
GW: How would you say the Internet is different as an infrastructure than electricity or telephones?
AB: Well, the Internet as a whole has no designer. I mean, physically it’s the ultimate emergent system. That isn’t to say there haven’t been forces or people that helped define certain places. Exchange points, for example, are usually where they are for two reasons. There’s some fact of geography, like 60 Hudson Street in Manhattan being at the elbow of Lower Manhattan, which has always been a communications hub. Then there’s almost always a charismatic salesman who convinced the first two networks to come, and then everybody else to come on top. That’s recent, that’s in the last 15 years. But there’s always some kind of geographic fact that makes a spot important, and then there’s always somebody who made it happen.
GW: So if the Internet is located where it is in part because it’s exploiting some existing infrastructure or economic system or communications history, would it look differently if you were designing it tabula rasa? Is there a theoretical ideal of the physical form of the Internet?
AB: Yeah, there is and it was the phone system. The phone system is a master planned system. But the Internet is the opposite. There is no defined structure to the Internet. It’s entirely the millions of decisions of each autonomous network. And each network is truly autonomous. That’s the fundamental idea. It’s almost a philosophical idea; it cannot be anything but emergent because it’s a network of networks. It’s always about this agreement between two networks that are in part competitors and in part cooperators. So the transition has been from a top-down design system like the telephone, particularly the nationalized systems, to the emergent system of the Internet. And they overlap certainly, but not exactly.
There is no defined structure to the Internet. It’s entirely the millions of decisions of each autonomous network.
GW: How could cities be designed differently to better accommodate the Internet?
AB: There’s always the difficulty of getting fiber to buildings, but that’s a relatively small issue. The bigger issue for smaller cities is having good local hubs. Big cities don’t have that problem. In some ways their local hubs are too big. They’re too expensive. In New York the main hub buildings cost a fortune…they’re too expensive for people to get into and it pushes other people out. It decreases opportunities in some ways because of that.
But in a second or third tier city, cities do better when somebody has managed to create a building that operates as carrier neutral. It’s not owned by a Verizon or a Sprint or a Time Warner and it offers the right environment for networks to connect to each other. So you need that building for that connection to happen. Most cities have developed that as a matter of course, but not all. One example I looked at a little bit is South Bend, Indiana, which is lucky because it’s a railroad city. There happened to be a guy there who basically turned the railroad station into an interconnection facility. He runs it independently. And so South Bend is doing this big municipal fiber network because of the connection to the rest of the Internet that this building offers. The flip side of that: one of New York’s major interconnection facilities, 111 Eighth Avenue, which used to be privately owned and had this sort of ecosystem of different networks, is now wholly owned by Google. It is an incredible reversal of the argument that these buildings should be neutral.
GW: To backtrack to an earlier point you made about the Internet concentrating in certain cities, whether it’s at these twelve major exchange points or around the larger data centers, did you then look at how those buildings are in turn affecting the neighborhoods and cities in which they are concentrated?
AB: The best example of that at the moment is Oregon. Oregon’s traditional industry collapsed for the most part and the data centers are in many ways replacing that traditional industry. The data centers are encouraged because they fill the financial vacuum left by the collapse of the timber and aluminum industries. They then also benefit from the power supply and the hydroelectric supply that timber and aluminum benefitted from, that infrastructure. And as enough of them settled there, then the air conditioner repair guys, the security system guys, the electricians, all the service people who grow the human capital to serve those centers set up shop there. And then more people come because those services are available. The even more dramatic example is Amsterdam. As a matter of policy in the mid-90s, Amsterdam said it should be a port for the Internet as it’s always been a port for everything else. And so it then said that anyone who digs a trench to lay fiber has to announce their intention. Anyone else who wants to put their fiber in that trench is allowed, and they split the cost. As a result there was this incredible abundance of fiber, which allowed Amsterdam as a city to be a key interconnection point for the Internet on a global scale. And that also made Internet access in the Netherlands cheaper and faster than anywhere else, because there’s both so much fiber in the ground and so many international links that wholesale bandwidth is cheaper. So it’s just as with their airport: you can go anywhere from Amsterdam, even though Amsterdam is a city of about 1.2 million people.
GW: Did you look at how the Internet is physically manifesting itself in new cities? Like New Songdo in South Korea or Masdar in Abu Dhabi? Is there a difference in the way the Internet evolves in a new city versus being retroactively installed in an existing city?
AB: I didn’t. But in terms of the fiber in the ground on the city scale, of course there will be a difference. It’s going to be incredibly more efficient when that piece of it is master planned. But then there’s the connection from that city to the rest of the Internet–it either requires somebody making the expense of building a stronger direct connection, which I’m sure is what Songdo did, or it will suffer from being an extra hop away and always having an extra charge of getting back to one of those hubs.
GW: What do you think is the most relevant thing for architects that you learned throughout writing this book?
AB: There’s something very relevant, which is that the hard divide between the physical and virtual worlds that we’ve been operating under and assuming for the last fifteen years can’t possibly exist. To assume that divide is to ignore the physical manifestation of a key part of our experience. And if the connection between that physical manifestation and our experience, the thing we have on our screen, is vague for the moment, then as we depend more and more on the stuff far away, the connection will become more visible and more legible. An architecture magazine tweeted something like, "architecture writer Andrew Blum looks at the cloud; seems a bit far-fetched for architecture." So I tweeted back that architecture is always striving for relevance. You can’t ignore the question of what is the fate of place when all of us sit in front of our screens.
Andrew Blum is a journalist writing about architecture, design, technology, urbanism, art, and travel. He is a contributing editor at Metropolis and Urban Omnibus, and his articles and essays have appeared in Wired, The New York Times, Popular Science and Architectural Record, among many other publications.
Kevin Slavin argues that we're living in a world designed for -- and increasingly controlled by -- algorithms. In this riveting talk from TEDGlobal, he shows how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture. And he warns that we are writing code we can't understand, with implications we can't control.
Kevin Slavin navigates the algoworld, the expanding space in our lives that’s determined and run by algorithms.
“It takes you 500,000 microseconds just to click a mouse. But if you’re a Wall Street algorithm and you’re five microseconds behind, you’re a loser.” (Kevin Slavin)
The digital revolution has spawned a new generation of small, agile and hyperactive publishers who, over the last decade, have profoundly transformed how architecture and design are broadcast, both in print and online. An architecture report by Shumi Bose
This article was originally published in Domus 961 / September 2012
In Victor Hugo's The Hunchback of Notre-Dame, Claude Frollo looks from a printed book to the cathedral building and utters his famous phrase, "Ceci tuera cela" ("This will kill that"). Where once predictions ranged from the utopian to the apocalyptic, we now see an online world that sits alongside the physical world, and similarly fateful proclamations concerning the effect of online architectural publishing on print media have long since passed. In lieu of predictions of one supplanting the other, we see a reality in which distinctions between the two are increasingly blurred. The cacophony of viewpoints, ideas and juxtapositions may still exist, but from this are emerging increasingly hybrid voices and groups, and new forms of publishing conceived as media for conveying architectural and political ideas, rather than as endpoints in themselves.
Driving this new, complex definition of publishing is the widespread access to new technologies— something that has opened the door to a new understanding of what the word "readership" actually means. Contemporary audiences are well versed in receiving information via a variety of media, but reading is not enough: as well as navigating and selecting content, they expect to be able to contribute their own thoughts, ideas, variations and objections. But the impact of new technologies goes well beyond the ubiquitous reach and accessibility of blogs: it extends to short print-run books, global distribution networks, e-publishing, and so on. The thrill of the new isn't enough to hold our interest; increasingly we expect online platforms to be just one of the many-tentacled operations of creative practitioners, with overspills between the physical and virtual worlds. Even Dezeen, once considered the epitome of rapid response online design publishing, has evolved into something much more complex, developing into a lifestyle brand that extends into the physical world with pop-up stores and exhibitions.
Top: Having worked for a long time as designers and product managers for companies and architectural practices, Birgit Lohmann and Massimo Mini moved to Bali, Indonesia. In 1999 they created Designboom with its head office in Milan, while during the summer months they relocate Designboom to a temporary office in an undisclosed seaside location in Sardinia. Designboom has a permanent work team, made up of architects and designers from around the world who select and publish (exclusively in English) information and articles on art, architecture and design. Above: Designboom publishes the latest press releases and topical readers’ proposals, as well as journalistic reports conducted by its own staff. Every year Designboom co-organises four to six design competitions with large international corporations. Alongside the Italian edition, over the years four Asian versions of Designboom have also been established in Chinese, Japanese, Korean and Vietnamese
The ease of exchanging information on the Internet allows individuals to engage in collective intellectual works of unprecedented scale. At the same time, seemingly collaborative modes of publishing are also a means for individual, previously hidden voices to gain exposure, as in the case of Ethel Baraona Pohl and her partner César Reyes Nájera, who describe their endeavours at dpr-barcelona as the happy by-product of frustration. Perennially interested in the layers that technology adds to discursive and physical space, they found that their occasional contributions to journals were the only outlet for critical thinking outside the studio; larger bodies of research encountered rejection and huge delays within slow-moving institutionalised channels.
Future Plural is an independent curatorial unit, research lab and umbrella for creative collaboration founded in 2009 by husband-and-wife team Geoff Manaugh and Nicola Twilley, together with Alexander Trevi. Its activities include the production of seminars, studios, events, publications, installations and exhibitions that investigate spatial questions
Their experimental (and at the time innovative) digital publications pioneered new models of self- and collaborative publishing, experimenting with platforms that allow texts to be collectively manipulated by an online community. But the most rapidly-expanding branch of their company, dpr-barcelona, is short-print-run architecture books: no mean feat if one considers the crisis in book publishing (just five months ago, the famed Swiss design publisher Birkhäuser was placed in administration).
"Many small-scale emerging ventures deal with the communal, hyperlocal or niche terrains, and are often run by couples"
Geoff Manaugh is the sole author of the widely-read blog BLDGBLOG, a platform that gives voice to his reflections, interviews and essays on architecture, landscape and all things concerning the built environment: from airports and shopping centres to action movies, videogames like Bioshock, prison camps and shelters for giant sequoias. Edible Geography, authored by Nicola Twilley, adopts a geographer’s approach to the science of food. Venue, a travelling project launched by Future Plural in early June, will explore remote locations and visit people of interest scattered across North America, and report on them on and offline
Such a free exchange of ideas is not without its problems, however. At present, the virtual environment is a Wild West of sorts, in which the value of labour and production remains an arbitrarily defined quantity, and universal paradigms regulating attribution, control and agency are as yet absent. Many of the most successful emerging ventures in architectural discourse borrow from traditional modes of production: they are often concerned with hyper-specific or niche terrains. They are small, agile and, in keeping with the cottage-industry low-overhead model, a disproportionate number are run by couples.
Specialising in architecture and design, dprbarcelona was set up in Barcelona by architects Ethel Baraona Pohl and César Reyes Nájera. Under the motto “Beyond books. Between art, science and architecture”, their catalogue includes monographs, documentation of buildings, historical studies, collections of essays and degree theses. All dprbarcelona’s books spring from a creative exchange between publisher, author or architect, and feature contributions by academic experts who complete the overview of each project
Writing in a recent issue of MAS Context—a scholarly Chicago-based journal produced by "the [invited] crowd", available both in print and for free low-resolution download — Javier Arbona aims to conceptualise knowledge-sharing and the rebroadcasting of content. He does this not in the context of privacy, authority and intellectual rights, but rather more interestingly in post-Fordist notions of labour. "Through a series of virtual devices common to most blogs (like 'apps' for quick reposting, emailing, retweeting, bookmarking on other sites, or, say, 'sharing' on Facebook, etc.), the work chores of circulating content are hidden by what seem like benign, abstract socio-communal acts."
Dpr-barcelona’s wide-ranging activity covers a multitude of formats and platforms, convinced of print media’s great value but aware of their readers’ new necessities and uses. They therefore accommodate e-books, tablet apps and hybridisations of various media with interactions through Augmented Reality. Reyes Nájera deals mainly with publications related to bioclimatic architectural projects, while Baraona Pohl collaborates with architectural magazines and sites as well as curating exhibitions and events (she is associate curator of the Istanbul Design Biennial due to open in October)
Arguably, the simultaneous growth of DIY publishing and ground-up activism have resulted in the conflation of civic and political rights with the spatial, civic and architectural locale. In the online output of architects and architectural writers, such as This Is Not A Gateway, or in new event formats such as Venue (the "live" and peripatetic collaboration between Geoff Manaugh and his partner Nicola Twilley, from BLDGBLOG and Edible Geography respectively), one can see a direct opposition to existing capitalised forms of production, and a more grass-roots activist stance in terms of engaging with urban and landscape problems through publications, events, artistic platforms and more. Perhaps in this light, some argue that even the most prolific "news" sites have critical, even political impact by virtue of their mere existence and reach.
Based in Berlin, Ruby Press is a small independent publisher specialised in architecture and urban planning. It was founded in 2008 by Ilka and Andreas Ruby (an architect and an art and architecture historian, respectively) with the aim of pursuing their own editorial line after eight years working as authors and publishers with their company textbild (www.textbild.com)
By disseminating architectural news to a wider audience than ever before, they shift access to knowledge from the hands of geographically marginal elites into the realm of the "real world". David Basulto — a qualified architect, teacher and co-founder of ArchDaily (the self-proclaimed "most popular architecture website today") based in Santiago, Chile — maintains, for example, that reaching a "housewife" demographic is intrinsic to his cause.
The book that marked the start of their business, and which exemplifies their editorial approach, is Urban Transformation, a kaleidoscopic study of emerging urban conditions worldwide. It counts over 50 international contributors including architects, urban planners, politicians and artists. Aiming to relaunch the architectural monograph, the Rubys concentrate on quality books with critical explorations and technical apparatus. The graphic design of all their publications is done in close collaboration with Leonard Streich (a graphic designer and architect), Elena Schütz and Julian Schubert (both architects)
As every forward-looking action has its retrograde reaction, the rapid growth and proliferation of blogs, networks and websites has been paralleled by a more intense fascination with the physicality of print media. While much design discussion has moved online, the recent, globally roving Archizines exhibition, curated by Elias Redstone, showcased contemporary architectural fanzines and journals.
Marcus Fairs has been the soul of Dezeen since 2006, when he created what was then only a design blog. Previously he had worked as a journalist for Blueprint, The Guardian, The Independent on Sunday and Condé Nast Traveller, and had also been editor of Icon since 2003. In 2007 Rupinder Bhogal joined him as co-editor
This followed Beatriz Colomina's archival Clip Stamp Fold, and preceded an installation dedicated to the 20th century's great magazines at this year's Venice Biennale of Architecture. The volumes on display in all three exhibitions are vibrant matter; they have the capacity to give rise to public spheres and imagined communities. Through the act of being printed, made permanent, books and journals provide punctuation points in the apparently endless production, discussion and evolution of ideas, thus reinforcing the truism that "printed matter matters".
Fairs and Bhogal created Dezeen Limited and expanded the site in terms of numbers and contributions, while gradually adding initiatives like the Dezeenjobs search site in 2008, and a site in 2012 specialised in design watches: Dezeenwatchstore. Their latest enterprise is Dezeenscreen, a site devoted to videos of architecture, design and art, launched at this year’s Milan Furniture Fair. Dezeen aims at the quick-fire select publication of the best architecture, design and interior design projects worldwide, thanks to a dense network of international collaborators and voluntary contributions received from professional and other sources
But the nature of architectural books is also changing. Organisational structures and layouts have become more flexible, more determined by the visual, more accommodating of non-architectural content, and increasingly employing some of the tools of online paradigms. Julien De Smedt's 2010 monograph Agenda features images of Kanye West's blog, facsimile emails and diagrams tracking office workflow, echoing the tools of Web analytics familiar to any online publisher. The publication of books from blogs — such as The BLDGBLOG Book — has also reflected the growing recognition of online discourse within traditional print media.
In 2007, Trenton Oldfield teamed up with Deepa Naik to establish this non-profit organisation in London with the aim of providing a link between the street and academic circles. Together they seek to create platforms for critical research on the urban fabric. This Is Not A Gateway brings to bear the experiences and interests of its two founders: Oldfield worked for more than ten years in NGOs specialising in urban planning and cultural and environmental programmes; Naik collaborated with museum organisations in the field of art while concerning herself with social, educational and structural themes
Andreas Ruby, co-founder of Berlin-based offices textbild and Ruby Press, is cynical about the simple transposition between screen and page, confessing, "It's like with early cars: they all looked like horse carriages, until they found their own way." Instead, he speaks with passion and conviction about books as an enduring art form, with their own intrinsic possibilities, physically encoded in subtly corporeal nuances. Ruby Press books could be described as a reaction to the logic of the large-print-run media machine that has in recent years grown dramatically in influence within the realms of design and architecture publishing. They are characterised by attention to detail, carefully considered page size, paper weight and porosity, and exquisite graphic design—but also short print runs. Low overheads allow for agility and small scale, and small-scale publishing, in turn, legitimates not only a more finely tuned specificity and quicker production, but also a more artisanal approach and the ability to operate on lower margins.
In 2009 Oldfield and Naik set up Myrdle Court Press as a means to support and give visibility to their work with the realisation of graphically advanced books designed for maximum legibility, utilising high-quality materials entrusted to local printers and an independent distributor. Oldfield and Naik’s main ambition is to provide a remedy for the chronic lack of critical discussion, and compensate for the reduced credibility in the democratic system of today’s rapidly and chaotically expanding urban fabrics
What one finds today, therefore, is not that online formats seek to replace or supersede printed formats. Instead, the poly-vocal, movable and interactive capacity that is most amplified in online production is actually part of a wider change affecting both print publishing and architectural production itself. Pop-cultural, even ahistorical post-modern juxtapositions, achieved in print by Banham, the Venturis, Archigram and many others before and after, have not only continued online, but also extrapolated into an ever-expanding kaleidoscope of perspectives and media. Rather than being drowned in sound, as readers we are increasingly savvy in terms of what to see and how, at what speed, in what context and on which device.
Shumi Bose (@tontita00), curator and writer of architectural history and theory.
I'm excited to be launching a new project called Venue, a 16-month collaboration with the Nevada Museum of Art's Center for Art + Environment, Columbia University GSAPP's Studio-X NYC, and Future Plural, the small publishing and curatorial group I'm a part of with Nicola Twilley.
We kick things off this Friday, June 8, with a launch event at the Nevada Museum of Art in downtown Reno, from 6-8pm; if you're near Reno, consider stopping by!
[Image: The tools and props of surveying; courtesy of the USGS].
In brief, Venue is equal parts surveying expedition and forward-operating landscape research base, a DIY interview booth and media rig that will pop up at sites across North America through September 2013.
Nicola Twilley and I will be traveling on and off, in a series of discontinuous trips, over the next 16 months, visiting a variety of sites including infrastructural landmarks, science labs, factories, film sets, archaeological excavations, art installations, university departments, design firms, National Parks, urban farms, corporate offices, studios, town halls, and other locations across North America, where we'll both record and broadcast original interviews, tours, and site visits. From architects to scientists and novelists to mayors, from police officers to civil engineers and athletes to artists, Venue’s interview archive will form a cumulative, participatory, and media-rich core sample of the greater North American landscape.
[Image: Understanding landscapes by way of strange devices; courtesy of the USGS].
An exhibition dusts for evidence of Fuller in the world as we see it today, and points to credible signs that his fingerprints are all over the dynamic concepts and multi-functional aesthetics that drive modern architecture and design.
The Utopian Impulse: Buckminster Fuller and the Bay Area, which opened 31 March at the San Francisco Museum of Modern Art, is the first exhibition of its kind to consider the local influence that Bucky Fuller (1895 – 1983) — that legendary and inimitable 20th century mind — had on the Bay Area's realized structures, and, just as importantly, on its widely, even "collectively" envisioned ones.
Utopian Impulse examines Fuller's (sometimes literal) presence in many seminal movements and experiments from the 1970s, including those of the avant-garde architecture collective, Ant Farm. Ant Farm's proposal for a domed city, called "Convention City 1976," is one particularly striking example on display, among others: in the form of models, videos, and photographs we see a media-centric public arena in a city built for 20,000 inhabitants (most of them "actors" clued into the roles demanded of their turned-on and on-view center of activity). A clever and even eerie foreshadowing of the way we live now, the domed city is as relevant to, say, Times Square, as to the more general "small town", in which every inhabitant watches the same nightly newscast and simultaneously casts a vote for his favorite contestant on Dancing With The Stars or Eurovision. Given the show's context, Convention City is an excellent demonstration of a work that cooks in the same juices as Fuller's, but comes out of the skillet free of any derivation, an independent and original product of its time and dialog.
Also on prominent view in Utopian Impulse is the Oval Intention Tent (1976) by The North Face, the outdoor-gear company established in the Bay Area in 1966. The tent is, in unmistakable terms, a realized geodesic dome, a forward march away from the "A-Frame" tent of decades past. The OI is, in practice, "tensegrity" — a Fullerism that combines tension with integrity; it is a physical structure made of metaphysical "big ideas." And on view alongside the tent is a 9-minute video clip of Fuller's visit to The North Face in 1981.
"Influence" is a cold and abstracted term, but to see Fuller like this, in motion amid the lines of thought that he inspired, puts "influence" into an unusually intimate and even immediate contact with the viewer. If it weren't for the "do not touch" code of museum conduct, one could reach out and grab Fuller's ideas, quilted as they are into remarkably diverse works.
But rather than reclining comfortably into the years that Fuller was active, Utopian Impulse links seamlessly with present-day relevance as well, displaying evocative projects and proposals that Fuller did not live to see. Among them is Jellyfish House by San Francisco-based firm IwamotoScott: the proposal shows a comprehensive and "animated" single-family home, which filters and cleans bay water and provides UV protection to those inside. Forward-thinking in its sustainable philosophy, the home can trace its family tree back to Fuller, though it was conceived more than two decades after his death. Likewise, there can be no video clip of Fuller walking through Thom Mayne's San Francisco Federal Building (completed 2007), but the exhibition draws attention to the building's movable screen, which slides over the roof and down the facade to "facilitate the entrance of shaded natural light into the building." In doing so, Utopian Impulse dusts for evidence of Fuller in the world as we see it today, and points to credible signs that his fingerprints are all over the dynamic concepts and multi-functional aesthetics that drive modern architecture and design.
"That Fuller's own projects remain for us to see in an "unrealized" state, really, gratefully, means that they remain for us in an uncompromised state: unedited by commercial, social, even practical realities and demands"
The show stems from thirteen patented designs by Fuller, on loan from the R. Buckminster Fuller Archive at Stanford University. As a portfolio of drawings and images, which Fuller created toward the end of his life in collaboration with graphic designer Chuck Byrne, the designs are called Inventions: Twelve Around One. Notable among them are the famous 4D House (1928) and the Dymaxion Car (1933), although, really, they're all notable. Displayed as captivating prints on the walls, these images cocoon the show in both the aesthetics and limitless potential Fuller envisioned for the world around him. They also serve as reference points: visual guides that allow the viewer to more easily trace the indirect germination of those seeds Fuller planted.
As his legacy, Fuller leaves many loose threads to pull. Utopian Impulse demonstrates how grand his ideas were, how "custom-made" (to put it lightly) his ideals and vocabulary. And his projects? They were largely "unfinished" (let the bad connotation flow). Unrealized, not wholly understood, perhaps, "paper." Typically, these are fighting words in the world of architecture and design. But not so within the walls dedicated to The Utopian Impulse. That Fuller's own projects remain for us to see in an "unrealized" state, really, gratefully, means that they remain for us in an uncompromised state: unedited by commercial, social, even practical realities and demands. Bluntly put, these projects never became diet-Fuller or Fuller-lite. Perhaps that is why they remain inspiring. Fuller considered himself a "comprehensivist" — someone whose interests are informed by whole systems rather than by a single specialty. One can imagine, then, how even one compromise of his ideas could lead to their collapse.
Above all, what The Utopian Impulse does successfully is put on display the presence of a person, the weight of his convictions, and the comprehensivist earnestness of his proposals. The most difficult step to take in Utopian Impulse is not a mental one — the show is quite persuasive in illustrating the Fuller effect, past and present — but a physical one: moving past the show's entrance, where a video of Fuller speaking (from his forty-two-hour lecture, Everything I Know) and a mini-glossary of Fuller's invented terms easily demonstrate his magnetism.
The terms ‘Creole’ and ‘creolization’ are used in many different contexts and generally in an inconsistent way. It is instructive to start with the origins of the root word. It was probably derived from the Latin creare (‘created originally’)… The French transformed the word to ‘créole’… ‘Creole’ referred to something or someone that had foreign (normally metropolitan) origins and that had now become somewhat localised… To be a Creole is no longer a mimetic, derivative stance. Rather it describes a position interposed between two or more cultures, selectively appropriating some elements, rejecting others, and creating new possibilities that transgress and supersede parent cultures, which themselves are increasingly recognised as fluid.
— Robin Cohen, Creolization and Cultural Globalization: The Soft Sounds of Fugitive Power, Globalizations Vol. 4 (2) 2007
Why do I blog this? Some people wonder about the fact that we live in a perpetual present without the jetpacks, moonbases and virtual realities we were promised. This was actually the topic of the Lift 09 conference I co-organized. I’m more and more interested in uncovering the “alternative futures” to this, places where créolisation will play an important role. This is a new pet project for 2012 and I will file all the weak signals I collect about this under the category “creolization”.
Personal comment:
As we are very interested in this topic of creolization (see our home page), and as we had a discussion about it with Nicolas Nova last week, I take the opportunity to mention that he will be filing projects under this subject on his blog.
Excerpt from Le Processus by Marc-Antoine Mathieu (Delcourt 1993)
Following the last three articles, in which I prepared reference texts in addition to those I had already written in the past, this article is an attempt to reconstitute the short presentation I was kindly invited to give by Carla Leitão for her seminar on libraries and archives at Pratt Institute. The talk attempted to elaborate a small theory of the book as a subversive artifact, based on six literary authors who have in common a dramatization of their own medium, the book, within their books. The premise of this essay is that books are indeed subversive -and therefore suppressed by authoritarian power- as they reveal the existence of other worlds.
In his series Julius Corentin Acquefacques, prisonnier des rêves, Marc-Antoine Mathieu continuously explores and questions the graphic novel as the medium through which his narratives exist and acquire a certain autonomy as soon as they are created. By reusing the constructive elements of drawing within the narrative (preparatory sketches, vanishing points, framing bars, anamorphoses, etc.), he creates several layers of universes that include our own, and thus makes us wonder whether our reality might not be the fiction of a higher degree of reality.
It is no accident that he uses the terminology of the dream to frame his stories, as dreams constitute the daily experience we have of another world within the world. The nightmare here is based on the impossibility, for the main character Julius Corentin Acquefacques, of distinguishing what is dream, what is his reality, what is the reality of those other worlds he glimpses for short instants, and eventually what is the reality of his creator, the author himself.
In The Trial, written by Franz Kafka and published posthumously in 1925, the book as an artifact is not literally present. However, the existence of other worlds within the narrative can be found in the fact that the version we know is the one assembled by Kafka’s best friend, Max Brod, who re-assembled the chapters of the unfinished book according to his own interpretation, and against his friend’s wish that it be burnt. Brod, seeking rationality, starts the narrative with the scene in which K., the protagonist, learns that he will be judged for something unknown to him, continues with K.’s experience of the administrative labyrinth, and eventually ends with K.’s execution. In Kafka: Toward a Minor Literature, Gilles Deleuze and Félix Guattari criticize this order; they cannot accept that such a chapter about K.’s death was written by Kafka, and eventually consider this event as nothing more than an additional part of the character’s delirium or dream within the story. As I have written before in an essay entitled The Kafkian Immanent Labyrinth as a Post-Mortem Dream, my own interpretation consists of starting with this ‘last’ chapter in which K. is executed, thus attributing the delirium that follows to the visions K. experiences before dying. In other words, K. never really dies for himself, even though he dies from the point of view of others, of course (to read more about this topic, see also my review of Gaspar Noé’s Enter the Void). His perception of time decelerates exponentially, tending ever closer to the exact moment of his death without ever reaching it: this is the Kafkian nightmare.
The fact that one can count three (and probably many more) ways of assembling the ten chapters written by Kafka makes the book itself a labyrinth, allowing the existence of several parallel worlds which all share the same composing elements but present different essences of meaning.
Jorge Luis Borges, whose filiation with Kafka needs no demonstration, is also well known for his quasi-Leibnizian (see previous article) invention of an infinity of parallel worlds through books. The Library of Babel (see previous post) is the most famous example, as it introduces an infinite library containing every unique book that can be written in 410 pages with 25 symbols. At the end of this short story, Borges notes that this library could in fact be contained in a single book, one that would be introduced later in The Book of Sand (see the recent post about it): a book with an infinity of pages.
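The scale of Borges' library can be made concrete with a little arithmetic. Here is a minimal sketch in Python; the figures of 40 lines per page and 80 characters per line are taken from Borges' story itself (the text above mentions only the 410 pages and 25 symbols), so treat them as assumptions of this sketch:

```python
import math

# Format of a book in the Library of Babel.
# 410 pages and 25 symbols are cited above; the 40 lines per page
# and 80 characters per line also come from Borges' story, and are
# restated here as assumptions.
PAGES = 410
LINES_PER_PAGE = 40
CHARS_PER_LINE = 80
SYMBOLS = 25

chars_per_book = PAGES * LINES_PER_PAGE * CHARS_PER_LINE  # 1,312,000

# Each character slot can hold any of the 25 symbols independently,
# so the number of distinct books is 25 ** chars_per_book. That number
# is far too large to print, so we report its count of decimal digits.
digits = math.floor(chars_per_book * math.log10(SYMBOLS)) + 1

print(chars_per_book)  # 1312000 characters per book
print(digits)          # the count of distinct books has 1834098 digits
```

The library is therefore combinatorially finite yet unimaginably vast, which makes Borges' later condensation of it into a single book with infinitely many pages read as a natural limit case.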
What is to be found in infinity seems to be indicated in the story The Secret Miracle (1943) in the following excerpt that could easily be used to essentialize Borges’ work and life:
Toward dawn he dreamed that he had concealed himself in one of the naves of the Clementine Library. A librarian wearing dark glasses asked him: “What are you looking for?” Hladik answered: “I am looking for God.” The librarian said to him: “God is in one of the letters on one of the pages of one of the four hundred thousand volumes of the Clementine. My fathers and the fathers of my fathers have searched for this letter; I have grown blind seeking it.”
in Labyrinths. New York: New Directions Book, 1962.
Many of Borges’ readers will indeed know that he himself lost his sight a few decades after he wrote this story. What was this God he was looking for in the many books of Buenos Aires’ National Library? What kind of Kabbalah did he create to find an esoteric meaning in the mathematics of the profane scriptures? Perhaps he caught a glimpse of this infinity he had been chanting for so many years, and became blind as a price to pay for it.
It is, in fact, one thing to comprehend the infinity of contingencies that Borges presents, but it is another to fathom it fully. Such transcendental understanding could indeed correspond to an encounter with what deserves to be called God. Borges gives us the chance, one more time, to experience such an encounter through his story The Garden of Forking Paths (1941), which dramatizes a book in which the infinite combinations of worlds constituted by a given sum of events since the dawn of time exist in parallel with each other:
“Here is Ts’ui Pên’s labyrinth,” he said, indicating a tall lacquered desk.
“An ivory labyrinth!” I exclaimed. “A minimum labyrinth.”
“A labyrinth of symbols,” he corrected. “An invisible labyrinth of time. To me, a barbarous Englishman, has been entrusted the revelation of this diaphanous mystery. After more than a hundred years, the details are irretrievable; but it is not hard to conjecture what happened. Ts’ui Pên must have said once: I am withdrawing to write a book. And another time: I am withdrawing to construct a labyrinth. Every one imagined two works; to no one did it occur that the book and the maze were one and the same thing. The Pavilion of the Limpid Solitude stood in the center of a garden that was perhaps intricate; that circumstance could have suggested to the heirs a physical labyrinth. Ts’ui Pên died; no one in the vast territories that were his came upon the labyrinth; the confusion of the novel suggested to me that it was the maze. Two circumstances gave me the correct solution of the problem. One: the curious legend that Ts’ui Pên had planned to create a labyrinth which would be strictly infinite. The other: a fragment of a letter I discovered.”
Albert rose. He turned his back on me for a moment; he opened a drawer of the black and gold desk. He faced me and in his hands he held a sheet of paper that had once been crimson, but was now pink and tenuous and cross-sectioned. The fame of Ts’ui Pên as a calligrapher had been justly won. I read, uncomprehendingly and with fervor, these words written with a minute brush by a man of my blood: I leave to the various futures (not to all) my garden of forking paths. Wordlessly, I returned the sheet.
in Labyrinths. New York: New Directions Book, 1962.
In 1962, Philip K. Dick published a novel entitled The Man in the High Castle (see previous article), which dramatizes an uchronia in which Roosevelt dies before the end of his first term as President of the USA, replaced by an isolationist president who refuses to engage his country in the Second World War. As a result of this choice, the Nazis conquer Europe while the Japanese army colonizes East Asia (including Siberia), and eventually the two combine their forces to invade the USA. Dick’s plot thus takes place in a United States under Nippo-Nazi domination, in which a book is said to exist, The Grasshopper Lies Heavy, written by a certain Hawthorne Abendsen, who describes in it a world in which the Allies won the war against the Axis. The book is, of course, forbidden, as it depicts another reality than the one imposed by the colonial empires:
At the bookcase she knelt. ‘Did you read this?’ she asked, taking a book out. Nearsightedly he peered. Lurid cover. Novel. ‘No,’ he said. ‘My wife got that. She reads a lot.’
‘You should read it.’
Still feeling disappointed, he grabbed the book, glanced at it. The Grasshopper Lies Heavy. ‘Isn’t this one of those banned-in-Boston books?’ he said.
‘Banned through the United States. And in Europe, of course.’ She had gone to the hall door and stood there now, waiting.
‘I’ve heard of this Hawthorne Abendsen.’ But actually he had not. All he could recall about the book was — what? That it was very popular right now. Another fad. Another mass craze. He bent down and stuck it back in the shelf. ‘I don’t have time to read popular fiction. I’m too busy with work.’ Secretaries, he thought acidly, read that junk, at home alone in bed at night. It stimulates them. Instead of the real thing. Which they’re afraid of. But of course really crave.
The Man in the High Castle. New York: G. P. Putnam’s Sons, 1962.
The banning of books depicted in Dick’s uchronia brings us to worlds in which books have been definitively suppressed from society. In the well-known 1984, written in 1949 by George Orwell, the only remaining book is the dictionary of Newspeak, which, edition after edition, becomes thinner and thinner as the language is subjected to a strict, progressive purge. Language, indeed, allows the formulation of other worlds, which can be punished as thoughtcrime. The book is therefore not destroyed literally, but its raw material, language, is deliberately made scarce.
‘The Eleventh Edition is the definitive edition,’ he said. ‘We’re getting the language into its final shape — the shape it’s going to have when nobody speaks anything else. When we’ve finished with it, people like you will have to learn it all over again. You think, I dare say, that our chief job is inventing new words. But not a bit of it! We’re destroying words — scores of them, hundreds of them, every day. We’re cutting the language down to the bone. The Eleventh Edition won’t contain a single word that will become obsolete before the year 2050.’
He bit hungrily into his bread and swallowed a couple of mouthfuls, then continued speaking, with a sort of pedant’s passion. His thin dark face had become animated, his eyes had lost their mocking expression and grown almost dreamy.
‘It’s a beautiful thing, the destruction of words. Of course the great wastage is in the verbs and adjectives, but there are hundreds of nouns that can be got rid of as well. It isn’t only the synonyms; there are also the antonyms. After all, what justification is there for a word which is simply the opposite of some other word? A word contains its opposite in itself. Take “good”, for instance. If you have a word like “good”, what need is there for a word like “bad”? “Ungood” will do just as well — better, because it’s an exact opposite, which the other is not. Or again, if you want a stronger version of “good”, what sense is there in having a whole string of vague useless words like “excellent” and “splendid” and all the rest of them? “Plusgood” covers the meaning, or “doubleplusgood” if you want something stronger still. Of course we use those forms already, but in the final version of Newspeak there’ll be nothing else. In the end the whole notion of goodness and badness will be covered by only six words — in reality, only one word. Don’t you see the beauty of that, Winston? It was B.B.’s idea originally, of course,’ he added as an afterthought.
1984. New York: Signet Classics, 1949.
The quintessential narrative dramatizing the destruction of books is of course Fahrenheit 451 (see the recent article about it), written by Ray Bradbury in 1953. In this story, firemen are not people in charge of fighting fires but, on the contrary, those in charge of setting fire to books that have been banned as the principal element of discord and inequality within society. Fahrenheit 451 (about 233 degrees Celsius) is indeed the temperature at which paper burns. Books are thus the objects that allow human writings to remain archived for a virtual eternity, yet they carry with them their own fragility, as their main material, paper, is vulnerable to the elements, and to fire in particular. François Truffaut, who released an excellent film adaptation of Bradbury’s novel in 1966, did not fail to point out, by showing a copy of Mein Kampf in his movie, that a resistance movement undertaking to save books from the fire could not possibly judge which books deserved to be kept and which could be left to the institutional purge.
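The Celsius figure cited above follows from the standard conversion formula, C = (F - 32) × 5/9; a quick check in Python:

```python
def fahrenheit_to_celsius(f):
    """Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

# The temperature of Bradbury's title.
celsius = fahrenheit_to_celsius(451)
print(round(celsius))  # 233, matching the figure cited above
```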
In his play Almansor, written around 1820, Heinrich Heine makes the following tragic prophecy: where we burn books, we will end up burning men. On May 10th, 1933, the Nazis, who had recently taken hold of executive and legislative power in Germany, burned thousands of books, including Heine’s, that did not fit the spirit of the new antisemitic and anti-communist politics they intended to pursue. About a decade later, they would industrially murder eleven million people (including six million Jews) in what remains the darkest moment of mankind’s history: the Holocaust.
Among the books burned in 1933 were those written by Marx, Freud, Brecht, Benjamin, Einstein and Kafka, but also those of one of the fathers of science fiction, H. G. Wells. This last example illustrates well the will of the Third Reich to annihilate any vision of the future that did not comply with the one elaborated by the Nazis.
Book-burning ceremonies are called autodafés, from the Portuguese Acto da Fé (literally, ‘act of faith’). Autodafés were common during the Spanish and Portuguese Inquisitions, when books listed in the Catholic Index (the list of books forbidden by the Church) and heretics alike were burned in vast rituals of authoritarian religion. In 1933, this act of faith was orchestrated by Joseph Goebbels, minister of propaganda of the Reich, and accomplished with great enthusiasm by hordes of students who collected and confiscated the books that had been listed as subversive. A notable element of the principal autodafé of May 10th, 1933 in Berlin was that rain prevented the flames from burning the books, so that firemen had to pour gasoline on the pile to set it ablaze. This significant ‘detail’ probably had a great influence on Bradbury in the elaboration of his narrative.
Books are therefore agents of infection from the point of view of an authoritarian ideological power. Their authors place in them the germs of subversion, which then spread to whoever reads them. Knowledge is power, as Foucault insisted; imagination is power to the same extent. The virtual access to other worlds via books is the possibility of resistance within a given reality. For that, books have to be salvaged at any price. They constitute the archives of a civilization as much as they are the active agents of vitalization of a society that accepts the multiplicity of their narratives.
ENIAC programmers, late 1940s. (U.S. military photo, Redstone Arsenal Archives, Huntsville, Alabama), from Programmed Visions by Wendy Hui Kyong Chun.
After “getting fit” and whatever else people typically declare to be their new year’s resolutions, this year’s most popular goal is surprisingly nerdy: learning to code. Within the first week of 2012, over 250,000 people, including New York’s mayor Michael Bloomberg, had signed up for weekly interactive programming lessons on a site called Code Year. The website promises to put its users “on the path to building great websites, games, and apps.” But as New Yorker web editor Blake Eskin writes, “The Code Year campaign also taps into deeper feelings of inadequacy... If you can code, the implicit promise is that you will not be wiped out by the enormous waves of digital change sweeping through our economy and society.”
If the entrepreneurs behind Code Year (and the masses of users they’ve signed up for lessons) are all hoping to ride the wave of digital change, Wendy Hui Kyong Chun, a professor of Modern Culture and Media at Brown University, is the academic trying to pause for a moment to take stock of the present situation and see where software is actually headed. All the frenzy about apps and “the cloud,” Chun argues, is just another turn in the “cycles of obsolescence and renewal” that define new media. The real change, which Chun lays out in her book Programmed Visions: Software and Memory, is that “programmability,” the logic of computers, has come to reach beyond screens into both the systems of government and economics and the metaphors we use to make sense of the world.
“Without [computers, human and mechanical],” writes Chun, “there would be no government, no corporations, no schools, no global marketplace, or, at the very least, they would be difficult to operate...Computers, understood as networked software and hardware machines, are—or perhaps more precisely set the grounds for—neoliberal governmental technologies...not simply through the problems (population genetics, bioinformatics, nuclear weapons, state welfare, and climate) they make it possible to both pose and solve, but also through their very logos, their embodiment of logic.”
To illustrate this logic, Chun draws extensively on history, theory, and detailed technical explanations, enriching cursory understandings of software. “Understanding software as a thing,” she writes, “means engaging its odd materializations and visualizations closely and refusing to reduce software to codes and algorithms—readily readable objects—by grappling with its simultaneous ambiguity and specificity.” Indeed, Chun spends a lot of time specifying computer terms. What's the difference between hardware, software, firmware, and wetware? Source code, compiled code, and written instructions? What is a thing and how did software become one? Even for a fairly nerdy computer user there’s a lot to pick up on. The book really shines, however, when Chun waxes poetic on the more ambiguous aspects of software.
The term “vaporware” refers to software that’s announced and advertised but never actually released for use, such as Ted Nelson’s infamous Xanadu project. Vaporware is problematic when it comes to theory because grand ideas and slick renderings rarely (if ever) align with the way technology looks and works in real life. Geert Lovink, Alexander Galloway, and others have called for banishing “vapor theory,” theory built on hypothetical ideas about software rather than instantiations of it, which Lovink criticizes as “gaseous flapping of the gums...generated with little exposure, much less involvement with those self-same technologies and artworks.” Chun concedes that while this embargo on vapor has been essential to grounding new media studies, “a rigorous engagement with software makes new media studies more, rather than less, vapory.” Vapor is not incidental to software, she argues, but actually essential to its understanding. This is what makes Chun’s theories exciting to follow: she engages renderings, dreams, and misunderstandings about technology rather than casting them aside. The key source of these misunderstandings is the use of the computer as metaphor.
People in previous generations conceptualized the world around them using technologies like clocks and steam engines. While these analog, mechanical devices are intricate, if one were to take apart a clock and put it back together its inner workings could be understood. Digital computers are more complex because they are made of both tangible chips and immaterial codes, neither of which are intuitive to deconstruct. Further, all software interfaces, like the “paintbrush” tool in Photoshop, are metaphors themselves. “Who completely understands what one’s computer is actually doing at any given moment?” asks Chun, knowing that the answer is nobody. Yet this murky recursion of “unknowability” and vapors is exactly why Chun finds software to be such an apt metaphor for the world we live in. Recalling Stewart Brand’s call for a picture of the whole earth in 1968, Chun poses the question: what would a picture of the whole Internet look like? Except, in this case, to find out may not be the point. In the way that the stock market is based on speculation—virally spreading fear about the future of a company (as opposed to concrete evidence or actual bad management decisions) can cause a stock to tank—a technologized world is increasingly based on conjecture. In its unseeable, untouchable, and effectively unknowable nature, the computer represents the lens we need in order to think about the enormous and incomprehensible forces of social, economic, and political power that govern our lives. “[Software’s] ghostly interfaces embody—conceptually, metaphorically, virtually—a way to navigate our increasingly complex world,” writes Chun.
The book looks at a broad range of examples from artists, scholars, and technologists to situate “programmability” in relation to everything from global systems like capitalist economics, neoliberal politics, and knowledge production to those of the mind and body: gender, race, and the structure of thought. The footnotes are full of interesting paths waiting to be followed: Frederick P. Brooks on why programming is fun and hacking is addictive, Ben Shneiderman on direct manipulation interfaces, Brenda Laurel on computers as theatre and how that relates to skeuomorphism, and Thomas Y. Levin on the temporality of surveillance, to name just a few. While it’s tempting to look to this web of ideas and the history of computing as an answer for why things are the way they are today, Chun's point in invoking all these voices is that it’s not that clear cut.
Some of the book’s propositions about our relationship to computers seem overblown: a priestly source of power, a form of magic, code as a fetish. If nothing else, these phrases are provocative and point to how potent Chun finds software to be in the world today. As more and more people find themselves able to create things out of code, it feels critical to understand software on both a practical and fundamental level.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.