Think of the name Buckminster Fuller, and you may think of a few oddities of mid-twentieth-century design for living: the Dymaxion House, the Dymaxion Car, the geodesic dome. But these artifacts represent only a small fragment of Fuller’s life and work as a self-styled “comprehensive anticipatory design scientist.” In his decades-long project of developing and furthering his worldview — an elaborate humanitarian framework involving resource conservation, applied geometry, and neologisms like “tensegrity,” “ephemeralization,” and “omni-interaccommodative” — the man wrote over 30 books, registered 28 United States patents, and kept a diary documenting his life in fifteen-minute increments. These achievements and others have made Fuller the subject of at least four documentaries and numerous books, articles, and papers, but now you can hear all about his thoughts, acts, experiences, and times straight from the source in the 42-hour lecture series Everything I Know, available to download at the Internet Archive. As you’d perhaps expect of someone whose journals stretch to 270 feet of solid paper, he could really talk.
In January 1975, Fuller sat down to deliver the twelve lectures that make up Everything I Know, all captured on video and enhanced with the most exciting bluescreen technology of the day. Props and background graphics illustrate the many concepts he visits and revisits, which include, according to the Buckminster Fuller Institute, “all of Fuller’s major inventions and discoveries,” “his own personal history in the context of the history of science and industrialization,” and no narrower a range of subjects than “architecture, design, philosophy, education, mathematics, geometry, cartography, economics, history, structure, industry, housing and engineering.” In his time as a passenger on what he called Spaceship Earth, Fuller realized that human progress need not separate the “natural” from the “unnatural”: “When people say something is natural,” he explains in the first lecture (embedded above as a YouTube video), “‘natural’ is the way they found it when they checked into the picture.” In these 42 hours, you’ll learn all about how he arrived at this observation — and all the interesting work that resulted from it.
(The Buckminster Fuller archive has also made transcripts of Everything I Know — “minimally edited and maximally Fuller” — freely available.)
Parts 1-12 on the Internet Archive: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
Venice Architecture Biennale 2014: curator Rem Koolhaas has used the biennale to announce the end of his "hegemony" over the profession, according to architect Peter Eisenman (+ interview).
"He's stating his end," said Eisenman, adding: "Rem Koolhaas presents the Biennale as la fine [the end]: 'The end of my career, the end of my hegemony, the end of my mythology, the end of everything, the end of architecture'."
The 81-year-old American architect, who helped the Dutch architect at the start of his career, said that Koolhaas, 70, was "the totemic figure" of the last 50 years and compared him to Le Corbusier, who dominated the first half of the twentieth century.
"I think it's very important to have lived in the time of Rem, like to have lived in the time of Corbusier," said Eisenman, recalling the day he turned up outside Le Corbusier's Paris atelier in 1962 but felt too intimidated to ring the doorbell: "I think that students today feel the same way about the mythology of Koolhaas."
Called Fundamentals, the biennale opened to the public over the weekend and includes a central exhibition called Elements, which focuses on parts of buildings, such as stairs, escalators and toilets, rather than buildings as a whole.
The Elements exhibition focuses on individual aspects of buildings.
Eisenman said the Elements show was like language without grammar: "Any language is grammar," he said. "So, if architecture is to be considered a language, 'elements' don't matter. So for me what's missing [from the show], purposely missing, is the grammatic."
Eisenman, head of Eisenman Architects, has known Koolhaas since the 70s, when the Dutch architect studied at Eisenman's Institute for Architecture and Urban Studies (IAUS) in New York.
"I helped publish his first book," said Eisenman. "I got the money to publish Delirious New York, I was on the jury that gave him the first prize he ever won for his architecture. I gave him an office where to write Delirious New York, so I know Rem from the beginning."
Eisenman made the comments in Venice on Friday, where he was attending the opening of an exhibition about the Yenikapi Project, a vast new development in Istanbul he designed in collaboration with Aytaç Architects.
A section of the Elements exhibition dedicated to the toilet.
Portrait of Peter Eisenman is courtesy of Vanderbilt University.
Here's a transcript of the interview:
-
Valentina Ciuffi: Let's talk about Elements [the exhibition occupying the Central Pavilion at the Venice Architecture Biennale]. You've known Rem from the very beginning – what do you think of the core show at his biennale?
Peter Eisenman: First of all, any language is grammar. The thing that changes from Italian to English is not the words being different, but grammar. So, if architecture is to be considered a language, 'elements' don't matter. I mean, whatever the words are, they're all the same. So for me what's missing [from the show], purposely missing, is the grammatic.
Look, 50 years ago, we knew that Modernism was dead. Le Corbusier, Mies van der Rohe, Frank Lloyd Wright: all dead. We didn't know what the future was but we knew all this was dead.
In '68 we found out what the future was going to be: the revolution in '68 in the schools, in culture, in art etc: all was changed. We are now 50 years from '64 and the totemic figure of these 50 years, the symbolic figure? Rem Koolhaas, right?
Rem Koolhaas presents the Biennale as la fine [the end]: "The end of my career, the end of my hegemony, the end of my mythology, the end of everything, the end of architecture." Because we don't have architects [in the biennale]. We have performance, we have film, we have video; we have everything but architecture.
So Rem is saying: "You know, I want to say: I don't do this, I don't do this, I don't do this, but I also want to tell you that I don't want you to tell me my end. I'm telling you the end." He makes the point, bonk, like that.
Valentina Ciuffi: He's stating his end?
Peter Eisenman: He's stating his end. And he's finished. And we don't know what's coming in four five years. 2018, like 1968, could be a revolution. Who knows?
Valentina Ciuffi: So this end is the start of something new?
Peter Eisenman: Always. History always goes like this.
Valentina Ciuffi: But when he says no to archistars, yes to architecture…
Peter Eisenman: He is the archistar! He is the origin of the archistar. He was there at the beginning.
Valentina Ciuffi: You taught all the archistars. They all came from your academy [the Institute for Architecture and Urban Studies in New York].
Peter Eisenman: He is the archistar and now he is the curator star. He's killed all the archistars, and now he is going [to be the] single curator star.
Valentina Ciuffi: You are one of the few people able to be so straight with him because…
Peter Eisenman: I know him very well. We started together way back. I helped publish his first book. I got the money to publish Delirious New York, I was on the jury that gave him the first prize he ever won for his architecture. I gave him an office where to write Delirious New York, so I know Rem from the beginning.
Valentina Ciuffi: So you think this idea of taking elements and not thinking about the grammar is totally…
Peter Eisenman: Well it's Rem. It's Rem because he doesn't believe in grammar. That's Rem, and that's good. Look, when he was at the Architectural Association school in 1972, in the spring of '72 when he quit – because he never finished school, you have to understand – he went to the new director and he said, quote: "I want to learn fundamentals. Where can I learn fundamentals?"
And the director looked at him and said: "We don't teach fundamentals here. We teach language." And then he quit. So there is a relationship between quitting the school in 1972 and Fundamentals today. Okay?
Valentina Ciuffi: You are perhaps one of the few people who can be so direct about Rem.
Peter Eisenman: I love Rem. I think it's very important to have lived in the time of Rem, like to have lived in the time of Corbusier. In '62 I went to Paris and I stood on the doorstep of Le Corbusier's atelier at 35 rue de Sèvres with my mentor Colin Rowe. He said, "Ring the doorbell!" And I said: "What I'm going to say to this guy? What am I doing here?"
And I think that students today feel the same way about the mythology of Koolhaas: "What am I going to say to him?" So very few people would challenge him or ask him questions. Yesterday at the press conference people were asking him questions and he said: "I don't answer questions like this. You should stop asking questions."
So he's a very, very clear and a good person to put this biennale on. And sarà la fine dell'architettura [it will be the end of architecture].
An interesting paper (in French) by Guy Lelong about reductionism (understood as contextual or referential autonomy) and how it may have led to its opposite. With words/works by Greenberg, Boulez, Reinhardt, Feldman, Buren, Grisey, Rahm, Hervé.
"Coming out of the two World Wars, important protagonists in most artistic fields reduced their medium to its ultimate constituents, or even to its essential elements. I will not ask here whether there is a cause-and-effect relationship or a mere concomitance between this reordering of art and those events of History. I would simply like to show, restricting myself to painting and music, how the reductionism theorized and elaborated in the 1950s and 1960s sometimes led to its opposite. Indeed, by seeking to reduce ever further the constituent elements of their medium, certain painters found a temporality that belonged rather to music, thereby producing a painting of time, while certain composers, carrying out an analogous reduction on the sonic fact, in a sense deployed it in space, discovering a music of extension. The disparities observed within this reductionist framework will allow me, broadening the argument, to show that the perception of works of art differs according to the determinants of reception they put in place. The critique of reductionism that certain currents subsequently developed, contesting in particular contextual and referential autonomy, will lead me to determine the interactions of reference that works of art are liable to produce once they advocate, on the contrary, expansion. (...)"
A project created by Léopold Lambert as the podcast platform of The Funambulist.
Featured guests for the current law section of Archipelago.
The idea of Archipelago emerged from a desire to propose an alternative to the current state of academia (whether architectural or not). The generalized absence of bridges between disciplines, the petty internal politics, the clear categorization of teachers and learners, as well as the ‘punctualization’ of learning formed the basis of this desire to propose something different.
Disciplines should be blurred, young thinkers should have access to platforms of expression and learning should be a continuous activity throughout life. Archipelago does not have the illusory ambition to replace the university, but more simply to constitute a free place for learning and questioning the politics of the designed environment that surrounds us all. Its medium allows anyone to listen to it in all kinds of situations: while commuting, cooking, resting, working, or any other situation you might think appropriate.
Archipelago’s editorial line follows the one constructed year after year on The Funambulist. This line is based upon the predicate that design (clothing, objects, architecture and urbanism) organizes (politically) bodies in space. Such a predicate creates the need to wonder simultaneously what a body is and how design is produced.
These questions define the list of guests for the conversations it releases. A significant number of these guests are already part of the network composed by The Funambulist. Some of them took part (or are about to) in the series of curated texts collected in the book The Funambulist Papers: Volume 1, published by Punctum Books in 2013 (Volume 2 will be published later in 2014). However, the project also finds its essence in researching the work of other thinkers and creators to diversify and enrich the discourse proposed on both The Funambulist and Archipelago.
An important component in the selection of these guests is their diversity, as well as their relation to the norm from which Archipelago operates. What that means is that the platform maintains a high awareness of whom it invites, in order to avoid the traditional pattern of domination by one type of academic actor (White Western Heterosexual Male, to name only a few of their characteristics). Such a practice is the minimum required to reduce the violence of normative processes and the ostracisation they create.
Another City is Possible: Alternatives to the Smart City
Adam Greenfield
1.5 AIA and New York State CEUs
This lecture was presented by The Architectural League’s Urban Omnibus and The New Museum’s IDEAS CITY last November.
The idea of the “smart city” enjoys considerable intellectual currency at the moment, in the popular media as well as conversations in architecture, urban planning, and local government. In this talk, Adam Greenfield argued that these discourses offer a potentially authoritarian vision of cities under centralized, computational surveillance and control: overplanned, overdetermined, driven by the needs of enterprise. What might some more fruitful alternatives look like? How can we design urban technology that responds to our needs, demands, and desires? Above all, how might we inscribe a robust conception of the right to the city in the technological systems that will do so much to define the urban experience in the twenty-first century?
Over the last few years the South Korean New Town of Songdo has emerged as the epitome of the ‘smart city’ of the future – a city that uses software and sensor-driven feedback loops to optimize all kinds of infrastructural city functions. Songdo, planned to be completed by 2015, was heralded as a city with ‘smart DNA’, a showcase of what could be done in urban development if new media technologies were tightly integrated in the urban planning.
However, according to Fabrica CEO Dan Hill, something is missing in this picture. In these scenarios, new technologies are used to solve old-world problems such as traffic congestion. And while of course it's nice to have an adequately managed urban infrastructure, the real issue is that the world itself is changing, partly due to the uptake of new technologies such as social media. What we really need is a new vision of how our traditional city-making institutions themselves should adapt to this newly emerging network society.
At an expert meeting organized by the Dutch Planbureau voor de Leefomgeving (The Netherlands Environmental Assessment Agency), Dan Hill explained that there are several reasons why he thinks the vision of Songdo will never be a real model for smart city development. First, we cannot trust cities that are exclusively based on algorithms. Would we really want to deliver ourselves to a system of Automatic Urban Processing that resembles the computer systems involved in High Frequency Trading on the stock market? We are all experiencing the lasting effects of the stock market collapse, and we definitely don't want that happening to our cities.
Secondly, one cannot install smart technologies the way one would install plumbing and other building infrastructure. The fundamental difference is that in the case of holistic smart city systems, one company takes control over all the urban processes. To optimize the city's performance, every urban process must feed information into the others, and that works best if one company can manage the whole system. In the case of Songdo, Cisco would be responsible for the waste collection, the energy production, the water management and the traffic control. Undoubtedly, no city government wants to put all its eggs in one basket by trusting just a single company with its entire infrastructure.
The third reason why a smart city like Songdo would not work is that we simply don't make cities in order to build infrastructure. Buildings and infrastructure are just enablers for us to come together and exchange, creating cultures, communities and conviviality. The things that we actually look for in our cities are often about inefficiency. There is a clear tension between these two poles, and we have to decide where we want our cities to be efficient and where not.
Using technology to solve urban problems is not a new idea; in his 1966 book New Movement in Cities, Brian Richards was already imagining contemporary technologies addressing all aspects of urban life. Even the conventional infrastructure built in the '50s and '60s was all about efficiency in urban living, and it too is facing a lot of problems. As Cedric Price put it in the 1960s: «Technology is the answer, but what is the question?» We currently have all the technologies we need to build a resilient 21st-century city, so why does it seem impossible to do so?
One of the answers lies in the nature of our institutions. Not only are they old, but they are also responsible for creating the problem. This creates a clear tension between society and institutions, which is expressed, for example, in the widespread riots that have become a common condition in many countries over the last few years. In this framework, the design challenge is not one of technological development but of redefining the culture of public decision-making. Referring to the recent example of a design academy graduate who developed a 3D-printed gun, Dan Hill questioned how institutions can expect to regulate gun use with policies when guns can be printed at home. It is simply impossible to address this problem with the same tools we have been using so far.
This issue extends into the use of public space, which has been increasingly regulated in the past decades. This has created a vicious circle: the publics that have access to public space, and the activities that can be performed there, are narrowed down, which leads to the deterioration of public spaces, which is in turn addressed with more policies controlling activities, and so on. But we need to understand what public space can be and what one can do in it. The reason, according to Hill, that Beppe Grillo's party Movimento 5 Stelle did so well in the last Italian elections was that it completely rejected all institutional media in promoting its program. Instead, it focused on two things: social media and appearances in public spaces. Beppe Grillo, a devoted blogger, spoke in a different square every night throughout his electoral campaign, bringing the public space of the city back into the heart of politics.
Similarly, there is a widespread rise of active citizens. This new type of «hipster urbanism», as many call it, creates competition for local governments in running cities. In many cases people take care of public green space because the municipality can no longer afford it, so these initiatives are undoubtedly good, even though they are not strictly legal and not really efficient either. However, this is also problematic. These processes are not democratic, and these citizens cannot be held accountable for their actions. In addition, they are fundamentally based on social media, which provide a very individualistic view of the world and promote a «like-minded» mentality, losing the sense of the civic. So self-organizing systems are quick and direct, but they are also temporary and have no real impact on legal structures. Simply stated: pop-ups tend to pop down. Crowdfunding, another very popular concept, also has its downsides: it only works for people who can pay anyway, which makes it unfit for use across whole cities or as a replacement for the state.
So, to get back to the issue of smart cities, Dan Hill concluded that it is impossible to keep up with the speed of social developments using an infrastructure-led mindset. But it could make a real difference to address the nature of the institutions, as policy changes can have a bigger impact. Undoubtedly, we need strong institutions; they just need to be redesigned from scratch. So for him the real question is whether institutions can appropriate the dynamics of social media without inheriting their ideology, so as to become more agile and project-based while maintaining a central role in city management.
Simple information feedback doesn't change behaviors. Open Data is a starting point, but data alone is not enough: it is the people who make the algorithms who have the connection to the public. On the other hand, this connecting position cannot be left to private companies. Therein lies a potentially new role for governments, according to Hill: governments should regulate the technologies market and build the interfaces that bring many providers together into coherent platforms.
Dan Hill is the CEO of Fabrica, a communications research centre and transdisciplinary studio based in Treviso, Italy, which is part of the Benetton Group. In the past, Hill has been part of the Strategic Design Unit of Sitra (the Finnish Innovation Fund) and an Urban Informatics leader for Arup. He is also an Adjunct Professor in the Architecture department at the University of Technology, Sydney (UTS) and a member of the Integrated Design Commission Advisory Board in South Australia. In 2012 Hill was a keynote speaker at Social Cities of Tomorrow, a conference organized by The Mobile City with Virtueel Platform and Arcam.
The Making of an Avant Garde: The Institute for Architecture and Urban Studies 1967-1984
A documentary written, produced, and directed by Diana Agrest
1.5 AIA and New York State CEUs
This film screening is organized by The Irwin S. Chanin School of Architecture of The Cooper Union and co-sponsored by The Architectural League.
A screening of The Making of an Avant Garde: The Institute for Architecture and Urban Studies 1967-1984.
The Institute for Architecture and Urban Studies, founded in 1967 with close ties to The Museum of Modern Art, made New York the global center for architectural debate and redefined architectural discourse in the United States. A place of immense energy and effervescence, its founders and participants were young and hardly known at the time but would ultimately shape architectural practice and theory for decades. Diana Agrest’s film documents and explores the Institute’s fertile beginnings and enduring significance as a locus for the avant-garde. The film features Mark Wigley, Peter Eisenman, Diana Agrest, Charles Gwathmey, Mario Gandelsonas, Richard Meier, Kenneth Frampton, Barbara Jakobson, Frank Gehry, Anthony Vidler, Deborah Berke, Rem Koolhaas, Stan Allen, Suzanne Stephens, Bernard Tschumi, Joan Ockman, among others.
Time & Place
Wednesday, November 13, 2013
7:00 p.m.
The Great Hall
The Cooper Union
7 East 7th Street
Tickets
This event is free and open to all. Reservations neither needed nor accepted.
Personal comment:
Undoubtedly a documentary I'll be looking to get a copy of!
In a pioneering book, Théorie du drone, French philosopher Grégoire Chamayou analyzes the growing role of the drone in modern warfare and what it will change in terms of geopolitics and global surveillance.
Grégoire Chamayou, Editions La Fabrique, 363 pages.
The drone is an "unidentified violent object" that is undermining the concept of war as we have known it from Sun Tzu to Clausewitz. In a pioneering work, French philosopher Grégoire Chamayou decodes this object, which raises a host of questions about strategy, armed violence, the ethics of war and peace, sovereignty and law. The drone and its robotic clones open up, within violent conflicts, a vast terra incognita left entirely unthought by international law and the immemorial laws of war.
In a masterful work, the philosopher undertakes the very first reflection on this new form of violence, born of the generalization of a military gadget: the drone, a land, naval or aerial vehicle with no human on board (unmanned).
Predator and Reaper drones fly at altitudes above 6,000 meters and are remotely operated, often by civilians (should they be considered combatants?), from a computerized control room in Nevada. With a mouse click, a teleoperator pulls a trigger and launches a missile from thousands of kilometers away, which immediately strikes a village in Pakistan, Yemen or Somalia. The drone is "the eye of God": it hears and intercepts all kinds of data, which it merges (data fusion) and archives on the fly; in one year it generated the equivalent of 24 years of video recordings.
This Théorie du drone has the merit of documenting the major mutation of violent conflict begun under the Bush presidencies and adopted by the Obama administration. The drone and the series of killer machines looming on the horizon (the United States has 6,000 drones and is working on pilotless fighter jets for 2030) turn an adjacent tactic into a global strategy, making anti-terrorism and security policy the combat doctrine of the century. Initiated by the Israelis, the first adherents of the euphemistic motto "nobody dies except the enemy," then taken up by the American neocons, drones are a boon to Obama's team, for whom "killing is better than capturing": liquidating terrorist suspects in advance is preferable to locking them up at Guantanamo.
The author goes on to demonstrate the drone's imprecision and counter-productivity: because of the altitude at which it operates, its lethal radius is 20 meters, whereas that of a grenade is 3 meters. Only conventional munitions can truly be considered "surgical weapons" in terms of lethal precision. Given the thousands of civilian deaths they have caused, drones also have the disadvantage of driving local populations ever further into the arms of terrorist groups.
The author shows how the steady decline in military deaths and the continual extension of "collateral damage" (the term that, since the end of the Cold War, has concealed the informal liquidation of non-combatant civilians) proceed from the following assumption: as soon as an actor of the "axis of evil" is identified, his social network becomes de facto part of the camp (or field) of evil, which can then be vitrified from a computer interface. Some even advance the derealizing ideal that lethal robotics would constitute the "humanitarian weapon" par excellence, and the author notes how the euphemization of military stakes is legitimized by the rhetoric of care. In this newspeak of the humanitarian military, Chamayou sees the beginnings of a "humilitary" politics.
Geopolitics is giving way to an aeropolitics. War is no longer a confrontation or a duel between combatant parties on a delimited territory, but a "manhunt" in which a predator pursues a human prey everywhere and at all times. The notions of temporality, territoriality, borders, warrior ethics and humanitarian law are rendered obsolete by these low-cost, high-tech weapons.
The author predicts a future of miniaturized insect-robots (with the help of nanotechnology) contributing to a complete panoptic system that threatens to encircle states and citizens alike.
This already indispensable work calls for a political awakening in the face of the dehumanization at work behind this new art of surveilling, intercepting and annihilating.
Following my recent post about drones (as scanning devices): there are obviously different types of drones and, like any other technology, it seems this one too has two sides... We are now probably in need of some renewed "Contrat Social" that would take additional "parameters" into account (between humans and machines/technologies, and between humans and our planet --Contrat naturel--).
Computing pioneer Doug Engelbart’s inventions transformed computing, but he intended them to transform humans.
Peripheral vision: Engelbart rehearses for the “mother of all demos.”
Doug Engelbart knew that his obituaries would laud him as “Inventor of the Mouse.” I can see him smiling wistfully, ironically, at the thought. The mouse was such a small part of what Engelbart invented.
We now live in a world where people edit text on screens, command computers by pointing and clicking, communicate via audio-video and screen-sharing, and use hyperlinks to navigate through knowledge—all ideas that Engelbart’s Augmentation Research Center at Stanford Research Institute (SRI) invented in the 1960s. But Engelbart never got support for the larger part of what he wanted to build, even decades later when he finally got recognition for his achievements. When Stanford honored Engelbart with a two-day symposium in 2008, they called it “The Unfinished Revolution.”
To Engelbart, computers, interfaces, and networks were means to a more important end—amplifying human intelligence to help us survive in the world we’ve created. He listed the end results of boosting what he called “collective IQ” in a 1962 paper, Augmenting Human Intellect. They included “more-rapid comprehension … better solutions, and the possibility of finding solutions to problems that before seemed insoluble.” If you want to understand where today’s information technologies came from, and where they might go, the paper still makes good reading.
Engelbart’s vision for more capable humans, enabled by electronic computers, came to him in 1945, after reading inventor and wartime research director Vannevar Bush’s Atlantic Monthly article “As We May Think.” Bush wrote: “The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”
That inspired Engelbart, a young electrical engineer, to come up with the idea of people using screens and computers to collaboratively solve problems. He worked on his ideas for the rest of his life, despite being warned over and over by people in academia and the computer industry that his ideas of using computers for anything other than scientific computations or business data processing were “crazy” and “science fiction.”
Engelbart knew right from the start that screens, input devices, hardware, and software could allow the necessary collaborative problem-solving only as part of a system that included cognitive, social, and institutional changes. But he found introducing new ways for people to work together more effectively, the lynchpin of his overall vision, more difficult than transforming the way humans and computers interact.
Engelbart labored for most of his life and career to get anyone to think seriously about his ideas, of which the mouse was an essential but low-level component. Only for one golden decade did he get significant backing. In 1963, the U.S. Defense Department provided the wherewithal for Engelbart to assemble a team, create the future, and blow the mind of every computer designer in the world by way of what has come to be known as “the mother of all demos.”
I first met Engelbart in 1983 in his Cupertino office in a small building that was completely surrounded by the Apple campus. A company that no longer exists, Tymshare, had purchased what was left of Engelbart’s lab and hired him after the Stanford Research Institute stopped supporting the Augmentation Research Center due to the Department of Defense withdrawing funding.
Engelbart noted with dismay that although the personal computer was evolving quickly, the other elements of his plan weren’t. Personal computers of that era weren’t networked to one another—as terminals of large computers could be—and they lacked a mouse or point-and-click interface.
Engelbart told me in our first conversation, as I’m sure he must have told many others, that the computer and mouse were just the “artifacts” in a system that centered on “humans, using language, artifacts, methodology, and training.”
In the late 1980s, Engelbart set up his self-funded “Bootstrap Institute” to try to win for his ideas about working more effectively the acceptance his artifacts had achieved. He developed ways of analyzing how people acted inside an organization, along with specific techniques that he claimed would boost “collective IQ.” A set of detailed presentations on those methodologies started with what he called CODIAK: “Collective IQ is a measure of how effectively a collection of people can concurrently develop, integrate, and apply its knowledge toward its mission” (emphasis Engelbart’s).
Mouse manufacturer Logitech provided office space, but the Bootstrap Institute, staffed by Engelbart and his daughter Christina, never sold bootstrapping, collective IQ, or CODIAK to any funder, major company, or government department.
Engelbart’s failure to spread the less tangible parts of his vision stems from several circumstances. He was an engineer at heart, and engineers’ utopian solutions don’t always account for the complexities of human social institutions. He added a social scientist to his lab only shortly before it was shut down.
What’s more, Engelbart’s pitches of linked leaps in technology and organizational behavior probably sounded as crazy to 1980s corporate managers as augmenting human intellect with machines did in the early 1960s. In the end, the way Silicon Valley companies work has changed radically in recent decades not through established companies undergoing the kind of internal transformations Engelbart imagined, but through their displacement by radical new startups.
When I talked with him again in the mid-2000s, Engelbart marveled that people carry around in their pockets millions of times more computer power than his entire lab had in the 1960s, but the less tangible parts of his system had still not evolved so spectacularly.
Like Tim Berners-Lee, Engelbart never sought to own what he contributed to the world’s ability to know. But he was frustrated to the end by the way so many people had adopted, developed, and profited from the digital media he had helped create, while failing to pursue the important tasks he had created them to do.
Howard Rheingold, a visiting lecturer at Stanford University, has written since the early 1980s about how innovations in computers and networking change people’s thinking. He profiled Doug Engelbart’s work in his 1985 book, Tools for Thought, and is most recently the author of Net Smart: How to Thrive Online.
Lev Manovich is a leading theorist of cultural objects produced with digital technology, perhaps best known for The Language of New Media (MIT Press, 2001). I interviewed him about his most recent book, Software Takes Command (Bloomsbury Academic, July 2013).
Photograph published in Alan Kay and Adele Goldberg, "Personal Dynamic Media" with the caption, "Kids learning to use the interim Dynabook."
MICHAEL CONNOR: I want to start with the question of methodology. How does one study software? In other words, what is the object of study—do you focus more on the interface, or the underlying code, or some combination of the two?
LEV MANOVICH: The goal of my book is to understand media software—its genealogy (where it comes from), its anatomy (the key features shared by all media viewing and editing software), and its effects in the world (pragmatics). Specifically, I am concerned with two kinds of effects:
1) How media design software shapes the media being created, making some design choices seem natural and easy to execute, while hiding other design possibilities;
2) How media viewing / managing / remixing software shapes our experience of media and the actions we perform on it.
I devote significant space to the analysis of After Effects, Photoshop and Google Earth—these are my primary case studies.
Photoshop Toolbox from version 0.63 (1988) to 7.0 (2002).
I also want to understand what media is today conceptually, after its "softwarization." Do the concepts of media developed to account for industrial-era technologies, from photography to video, still apply to media that is designed and experienced with software? Do they need to be updated, or completely replaced by new, more appropriate concepts? For example: do we still have different media, or did they merge into a single new meta-medium? Are there some structural features which motion graphics, graphic designs, web sites, product designs, buildings, and video games all share, since they are all designed with software?
In short: does "media" still exist?
For me, "software studies" is about asking such broad questions, as opposed to only focusing on code or interface. Our world, media, economy, and social relations all run on software. So any investigation of code, software architectures, or interfaces is only valuable if it helps us to understand how these technologies are reshaping societies and individuals, and our imaginations.
MC: In order to ask these questions, your book begins by delving into some early ideas from the 1960s and 1970s that had a profound influence on later developers. In looking at these historical precedents, to what extent were you able to engage with the original software or documentation thereof? And to what extent were you relying on written texts by these early figures?
Photograph published in Kay and Goldberg with the caption, "The interim Dynabook system consists of processor, disk drive, display, keyboard, and pointing devices."
LM: In my book I only discuss the ideas of a few of the most important people, and for this, I could find enough sources. I focused on the theoretical ideas from the 1960s and 1970s which led to the development of modern media authoring environments, and the common features of their interfaces. My primary documents were published articles by J. C. R. Licklider, Ivan Sutherland, Ted Nelson, Douglas Engelbart, Alan Kay, and their collaborators, and also a few surviving film clips—Sutherland demonstrating Sketchpad (the first interactive drawing system seen by the public), a tour of the Xerox Alto, etc. I also consulted manuals for a few early systems which are available online.
While I was doing this research, I was shocked to realize how little visual documentation of the key systems and software (Sketchpad, Xerox PARC's Alto, the first paint programs from the late 1960s and 1970s) exists. We have the original articles published about these systems, with small black-and-white illustrations, and just a few low-resolution film clips. And nothing else. None of the historically important systems exist in emulation, so you can't get a feeling for what it was like to use them.
This situation is quite different with other media technologies. You can go to a film museum and experience a real Panorama from the early 1840s, a camera obscura, or another pre-cinematic technology. Painters today use the same "new media" as the Impressionists in the 1870s—paints in tubes. With computer systems, most of the ideas behind contemporary media software come directly from the 1960s and 1970s—but the original systems are not accessible. Given the number of artists and programmers working today in "software art" and "creative coding," it should be possible to create emulations of at least a few of the most fundamental early systems. It's good to take care of your parents!
MC: One of the key early examples in your book is Alan Kay's concept of the "Dynabook," which posited the computer as "personal dynamic media" which could be used by all. These ideas were spelled out in his writing, and brought to some fruition in the Xerox Alto computer. I'd like to ask you about the documentation of these systems that does survive. What importance can we attach to these images of users, interfaces and the cultural objects produced with these systems?
Top and center: Images published in Kay and Goldberg with the captions, "An electronic circuit layout system programmed by a 15-year- old student" and "Data for this score was captured on a musical keyboard. A program then converts the data to standard musical notation." Bottom: The Alto Screen showing windows with graphics drawn using commands in Smalltalk programming language.
LM: The most informative set of images of Alan Kay's "Dynabook" (the Xerox Alto) appears in the article he wrote with his collaborator Adele Goldberg in 1977. In my book I analyze this article in detail, interpreting it as "media theory" (as opposed to just documentation of the system). Kay said that reading McLuhan convinced him that the computer could be a medium for personal expression. The article presents the theoretical development of this idea and reports on its practical implementation (the Xerox Alto).
Alan Turing theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society. But it was only Kay and his generation that extended the idea of simulation to media—thus turning the Universal Turing Machine into a Universal Media Machine, so to speak. Accordingly, Kay and Goldberg write in the article: "In a very real sense, simulation is the central notion of the Dynabook." However, as I suggest in the book, simulating existing media became a chance to extend them and add new functions. Kay and Goldberg themselves are clear about this—here is, for example, what they say about an electronic book: "It need not be treated as a simulated paper book since this is a new medium with new properties. A dynamic search may be made for a particular context. The non-sequential nature of the file medium and the use of dynamic manipulation allow a story to have many accessible points of view."
The many images of media software, developed both by the Xerox team and by other Alto users, which appear in the article illustrate these ideas. Kay and Goldberg strategically give us examples of how their "interim 'Dynabook'" allows users to paint, draw, animate, compose music, and compose text. This made the Alto the first Universal Media Machine—the first computer offering the ability to compose and create cultural experiences and artifacts for all the senses.
MC: I'm a bit surprised to hear you say the words "just documentation!" In the case of Kay, his theoretical argument was perhaps more important than any single prototype. But, in general, one of the things I find compelling about your approach is your analysis of specific elements of interfaces and computer operations. So when you use the example of Ivan Sutherland's Sketchpad, wasn't it the documentation (the demo for a television show produced by MIT in 1964) that allowed you to make the argument that even this early software wasn't merely a simulation of drawing, but a partial reinvention of it?
Frames from Sketchpad demo video illustrating the program’s use of constraints. Left column: a user selects parts of a drawing. Right column: Sketchpad automatically adjusts the drawing. (The captured frames were edited in Photoshop to show the Sketchpad screen more clearly.)
LM: The reason I said "just documentation" is that normally people don't think of Sutherland, Engelbart, or Kay as "media theorists"; I think it's more common to read their work as technical reports.
On to Sutherland. Sutherland describes the new features of his system in his Ph.D. thesis and the published article, so in principle you can just read them and get these ideas. But at the same time, the short film clip which demonstrates Sketchpad is invaluable—it helps you to better understand how these new features (such as "constraint satisfaction") actually worked, and also to "experience" them emotionally. Since I saw the film clip years before I looked at Sutherland's Ph.D. thesis (now available online), I can't really say which was more important. Maybe it was not even the original film clip, but its use in one of Alan Kay's lectures, in which Kay shows the clip and explains how important these new features were.
MC: The Sketchpad demo does have a visceral impact. You began this interview by asking, "does media still exist?" Along these lines, the Sutherland clip raises the question of whether drawing, for one, still exists. The implications of this seem pretty enormous. Now that you have established the principle that all media are contingent on the software that produces them, do we need to begin analyzing all media (film, drawing, or photography) from the point of view of software studies? Where might that lead?
LM: The answer I arrive at, after 200 pages, to the question "does media still exist?" is relevant to all media which are designed or accessed with software tools. What we identify by conceptual inertia as "properties" of different mediums are actually the properties of media software—their interfaces, their tools, and the techniques they make possible for navigating, creating, editing, and sharing media documents. For example, the ability to automatically switch between different views of a document in Acrobat Reader or Microsoft Word is not a property of “text documents” but a result of software techniques whose heritage can be traced to Engelbart’s “view control.” Similarly, "zoom" or "pan" is not exclusive to digital images or texts or 3D scenes—it's a property of all modern media software.
Along with these and a number of other "media-independent" techniques (such as "search") which are built into all media software, there are also "media-specific" techniques which can only be used with particular data types. For example, we can extrude a 2D shape to make a 3D model, but we can't extrude a text. Or, we can change the contrast and saturation of a photo, but these operations do not make sense in relation to 3D models, texts, or sound.
So when we think of photography, film or any other medium, we can think of it as a combination of "media-independent" techniques which it shares with all other mediums, and also techniques which are specific to it.
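Manovich's split between media-independent and media-specific techniques maps naturally onto a type system. The sketch below is purely illustrative—the classes and operations are hypothetical, not taken from his book: `zoom` accepts any media object, while `adjust_saturation` and `extrude` are defined only for particular data types.

```python
# Illustrative sketch (hypothetical types, not from Software Takes Command):
# media-independent operations accept any media object; media-specific
# operations are defined only for a particular data type.

from dataclasses import dataclass


@dataclass
class Photo:
    pixels: list
    saturation: float = 1.0


@dataclass
class Model3D:
    vertices: list


def zoom(media, factor):
    """Media-independent: every modern viewer can zoom, whatever the type."""
    return f"zoomed {type(media).__name__} by {factor}x"


def adjust_saturation(photo: Photo, amount: float) -> Photo:
    """Media-specific: saturation only makes sense for images."""
    return Photo(photo.pixels, photo.saturation * amount)


def extrude(shape_2d: list) -> Model3D:
    """Media-specific: extrusion turns a 2D outline into a 3D model."""
    return Model3D(vertices=[(x, y, z) for (x, y) in shape_2d for z in (0.0, 1.0)])
```

In this framing, trying to call `extrude` on a `Photo` is not just an error but a category mistake—exactly the distinction Manovich draws between techniques shared by all media software and those bound to one data type.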
MC: I'd proposed the title, "Don't Study Media, Study Software" for this article. But it sounds like you are taking a more balanced view?
LM: Your title makes me nervous, because some people are likely to misinterpret it. I prefer to study software such as Twitter, Facebook, Instagram, Photoshop, After Effects, game engines, etc., and use this understanding in interpreting the content created with this software—tweets, messages, social media photos, professional designs, video games, etc. For example, just this morning I was looking at a presentation by one of Twitter's engineers about the service, and learned that sometimes the responses to tweets can arrive before the tweet itself. This is important to know if we are to analyze the content of Twitter communication between people, for example.
Today, all cultural forms which require a user to click even once on their device to access and/or participate run on software. We can't ignore technology any longer. In short: "software takes command."
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.