A new creation, Satellite Daylight 47°33'N, was exhibited on this occasion (img. below); it was also acquired by HeK and enters its collection at the same time.
The exhibition displays works by Tega Brain, Bengt Sjölén & Julian Oliver, Bureau d'études, James Bridle, Trevor Paglen, Quadrature and others, and was curated by Boris Magrini and Christine Schranz.
The exhibition «Shaping the Invisible World – Digital Cartography as an Instrument of Knowledge» examines, through cartography, the representational forms of the map as a tool between knowledge and technology. The works of the artists on view negotiate the meaning of the map as a gauge of our digital, technological and global society.
(Photo.: fabric | ch)
(Photo.: T. Marti)
fabric | ch, Satellite Daylight 47°33'N (2021) at the Haus der elektronischen Künste (photo.: fabric | ch).
Cartography – the science of surveying and representing the world – developed in antiquity and provided the springboard for communication and economic exchange between people and cultures around the globe. At the same time, maps are undeniably never neutral, since their creation inherently involves interpretation and imagination. Today, it is IT companies that drive progress in the field and drastically influence our views of the world and how we communicate, navigate and consume globally. While map production has become more democratic, digital maps are nevertheless increasingly used for political and economic manipulation. Questions of privacy, authorship, economic interests and big data management are more poignant than ever before and closely intertwined with contemporary cartographic practices.
Today’s maps not only depict, but also document, negotiate and visualize subjective views of the world. But are these maps more democratic? Who benefits from self-determined productions and what consequences do they lead to?
The strategies in digital mapping and cartography employed by the artists presented in Shaping the Invisible World – Digital Cartography as an Instrument of Knowledge are subversive. Their spectacular panoramas and virtual scenarios reveal how digital technologies culturally affect our understanding of the world.
Navigating between subversive cartography and digital mapping, the exhibition puts the spotlight on the fascination of maps in relation to the democratization of knowledge and appropriation. By uncovering hidden realities, scarcely visible developments and possible new social relationships within a territory, the artists delineate the evolution of invisible worlds.
-
Artists: Studio Above&Below, Tega Brain & Julian Oliver & Bengt Sjölén, James Bridle, Persijn Broersen & Margit Lukács, Bureau d'études / Collectif Planète Laboratoire, fabric | ch, Fei Jun, Total Refusal (Robin Klengel & Leonhard Müllner), Trevor Paglen, Esther Polak & Ivar Van Bekkum, Quadrature, Jakob Kudsk Steensen
Note: an online talk with Patrick Keller, lead archivist and curator Sang Ae Park from the Nam June Paik Art Center (NJPAC) in Seoul, and Christian Babski from fabric | ch.
The topic will relate to an ongoing design research into automated curating, jointly led by NJPAC, ECAL and fabric | ch.
How would Augmented Reality change exhibition curating and design in the future? Join our June Science Club and learn how ECAL and the Nam June Paik Art Center are collaborating to develop a novel range of museums. This talk program is hosted by Swissnex and the Embassy of Switzerland in the Republic of Korea. All talks will be in English.
---
Date
June 24, 2021. 17:00 – 18:00
Venue
Zoom
Panelists
Patrick Keller (Associate Professor, ECAL / University of Art and Design Lausanne (HES-SO))
Sang Ae Park (Archivist, Nam June Paik Art Center)
Christian Babski (Co-founder fabric | ch)
Note: during the long shutdown of the museums in Switzerland last spring, fabric | ch nevertheless had the chance to see Public Platform of Future Past (pdf), one of its latest architectural investigations, integrated into the permanent collection of the Haus der elektronischen Künste (HeK) in Basel.
We are pleased that our work is recognized by innovative and risk-taking curators (Sabine Himmelsbach, Boris Magrini) and becomes part of the museum's collection, alongside several other works (by Jodi, !Mediengruppe Bitnik, Olia Lialina, Christina Kubisch, Zimoun, etc.).
It is also the first of our works whose certificate of authenticity has been issued on a blockchain! (datadroppers)
A second work, currently in production, will enter the collection in the spring of 2021 and will be documented at that time.
Note: the discussion about "Data Materialization" between Natalie Kane (V&A Museum, London) and Patrick Keller (fabric | ch, ECAL / University of Art and Design Lausanne (HES-SO)), on the occasion of the ECAL Research Day, has been published on the dedicated website, along with other interesting talks.
Donald Knuth at his home in Stanford, Calif. He is a notorious perfectionist and has offered to pay a reward to anyone who finds a mistake in any of his books. Photo: Brian Flaherty
For half a century, the Stanford computer scientist Donald Knuth, who bears a slight resemblance to Yoda — albeit standing 6-foot-4 and wearing glasses — has reigned as the spirit-guide of the algorithmic realm.
He is the author of “The Art of Computer Programming,” a continuing four-volume opus that is his life’s work. The first volume debuted in 1968, and the collected volumes (sold as a boxed set for about $250) were included by American Scientist in 2013 on its list of books that shaped the last century of science — alongside a special edition of “The Autobiography of Charles Darwin,” Tom Wolfe’s “The Right Stuff,” Rachel Carson’s “Silent Spring” and monographs by Albert Einstein, John von Neumann and Richard Feynman.
With more than one million copies in print, “The Art of Computer Programming” is the Bible of its field. “Like an actual bible, it is long and comprehensive; no other book is as comprehensive,” said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: “You should definitely send me a résumé if you can read the whole thing.”
The volume opens with an excerpt from “McCall’s Cookbook”:
Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.
Inside are algorithms, the recipes that feed the digital age — although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field’s most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text — for instance, when you hit Command+F to search for a keyword in a document.
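As a concrete illustration (ours, not from the article), here is a minimal sketch of the Knuth-Morris-Pratt search in Python; the precomputed failure table is what lets the scan run through the text without ever backing up:

```python
def kmp_search(text: str, pattern: str) -> list:
    """Return the start index of every occurrence of pattern in text."""
    if not pattern:
        return []
    # Failure table: fail[i] = length of the longest proper prefix of
    # pattern[:i + 1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text once; on a mismatch, fall back in the pattern only.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("abracadabra", "abra"))  # [0, 7]
```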
Now 80, Dr. Knuth usually dresses like the youthful geek he was when he embarked on this odyssey: long-sleeved T-shirt under a short-sleeved T-shirt, with jeans, at least at this time of year. In those early days, he worked close to the machine, writing “in the raw,” tinkering with the zeros and ones.
“Knuth made it clear that the system could actually be understood all the way down to the machine code level,” said Dr. Norvig. Nowadays, of course, with algorithms masterminding (and undermining) our very existence, the average programmer no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries. But an elite class of engineers occasionally still does the deep dive.
“Here at Google, sometimes we just throw stuff together,” Dr. Norvig said, during a meeting of the Google Trips team, in Mountain View, Calif. “But other times, if you’re serving billions of users, it’s important to do that efficiently. A 10 percent improvement in efficiency can work out to billions of dollars, and in order to get that last level of efficiency, you have to understand what’s going on all the way down.”
Dr. Knuth at the California Institute of Technology, where he received his Ph.D. in 1963. Photo: Jill Knuth
Or, as Andrei Broder, a distinguished scientist at Google and one of Dr. Knuth’s former graduate students, explained during the meeting: “We want to have some theoretical basis for what we’re doing. We don’t want a frivolous or sloppy or second-rate algorithm. We don’t want some other algorithmist to say, ‘You guys are morons.’”
The Google Trips app, created in 2016, uses an “orienteering algorithm” that maps out a day’s worth of recommended touristy activities. The team was working on “maximizing the quality of the worst day” — for instance, avoiding sending the user back to the same neighborhood to see different sites. They drew inspiration from a 300-year-old algorithm by the Swiss mathematician Leonhard Euler, who wanted to map a route through the Prussian city of Königsberg that would cross each of its seven bridges only once. Dr. Knuth addresses Euler’s classic problem in the first volume of his treatise. (He once applied Euler’s method in coding a computer-controlled sewing machine.)
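As an aside (ours, not the article's), Euler's answer to the Königsberg question reduces to counting how many bridges touch each land mass; a minimal sketch:

```python
from collections import Counter

# The seven bridges of Königsberg, as edges between the four land
# masses A, B, C, D (two pairs are connected by two bridges each).
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion: a walk crossing every bridge exactly once exists
# only if zero or two land masses touch an odd number of bridges.
odd = [node for node, d in degree.items() if d % 2 == 1]
print(odd)                 # ['A', 'B', 'C', 'D'] -- all four are odd
print(len(odd) in (0, 2))  # False: the route Euler sought cannot exist
```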
Following Dr. Knuth’s doctrine helps to ward off moronry. He is known for introducing the notion of “literate programming,” emphasizing the importance of writing code that is readable by humans as well as computers — a notion that nowadays seems almost twee. Dr. Knuth has gone so far as to argue that some computer programs are, like Elizabeth Bishop’s poems and Philip Roth’s “American Pastoral,” works of literature worthy of a Pulitzer.
He is also a notorious perfectionist. Randall Munroe, the xkcd cartoonist and author of “Thing Explainer,” first learned about Dr. Knuth from computer-science people who mentioned the reward money Dr. Knuth pays to anyone who finds a mistake in any of his books. As Mr. Munroe recalled, “People talked about getting one of those checks as if it was computer science’s Nobel Prize.”
Dr. Knuth’s exacting standards, literary and otherwise, may explain why his life’s work is nowhere near done. He has a wager with Sergey Brin, the co-founder of Google and a former student (to use the term loosely), over whether Mr. Brin will finish his Ph.D. before Dr. Knuth concludes his opus.
The dawn of the algorithm
At age 19, Dr. Knuth published his first technical paper, “The Potrzebie System of Weights and Measures,” in Mad magazine. He became a computer scientist before the discipline existed, studying mathematics at what is now Case Western Reserve University in Cleveland. He looked at sample programs for the school’s IBM 650 mainframe, a decimal computer, and, noticing some inadequacies, rewrote the software as well as the textbook used in class. As a side project, he ran stats for the basketball team, writing a computer program that helped them win their league — and earned a segment by Walter Cronkite called “The Electronic Coach.”
During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, “optimization” is truly an art, and this is articulated in another Knuthian proverb: “Premature optimization is the root of all evil.”
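To make the translator analogy concrete, here is a toy sketch (ours, nothing like Knuth's actual compilers) that turns an algebra-like expression into instructions for a simple stack machine, "improving" it along the way by folding constant sub-expressions:

```python
import ast
import operator

FOLD = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}
OPNAME = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}

def compile_expr(src: str) -> list:
    """Translate an algebraic expression into stack-machine instructions,
    evaluating constant sub-expressions at compile time."""
    def emit(node):
        if isinstance(node, ast.Constant):
            return [("PUSH", node.value)]
        if isinstance(node, ast.Name):
            return [("LOAD", node.id)]
        if isinstance(node, ast.BinOp) and type(node.op) in OPNAME:
            left, right = emit(node.left), emit(node.right)
            # The "improving" step: fold two constants into one.
            if [op for op, *_ in left + right] == ["PUSH", "PUSH"]:
                return [("PUSH", FOLD[type(node.op)](left[0][1], right[0][1]))]
            return left + right + [(OPNAME[type(node.op)],)]
        raise NotImplementedError(ast.dump(node))
    return emit(ast.parse(src, mode="eval").body)

print(compile_expr("x * (2 + 3)"))  # [('LOAD', 'x'), ('PUSH', 5), ('MUL',)]
```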
Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the “analysis of algorithms.” A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers — a book about algorithms.
Left: Dr. Knuth in 1981, looking at the 1957 Mad magazine issue that contained his first technical article. He was 19 when it was published. Photo: Jill Knuth. Right: “The Art of Computer Programming,” volumes 1–4. “Send me a résumé if you can read the whole thing,” Bill Gates wrote in a blurb. Photo: Brian Flaherty
“By the time of the Renaissance, the origin of this word was in doubt,” it began. “And early linguists attempted to guess at its derivation by making combinations like algiros [painful] + arithmos [number].” In fact, Dr. Knuth continued, the namesake is the 9th-century Persian textbook author Abū ‘Abd Allāh Muhammad ibn Mūsā al-Khwārizmī, Latinized as Algorithmi. Never one for half measures, Dr. Knuth went on a pilgrimage in 1979 to al-Khwārizmī’s ancestral homeland in Uzbekistan.
When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installment, “Volume 4, Fascicle 5,” covering, among other things, “backtracking” and “dancing links,” was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present.
In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor’s defining characteristic even in the early 1980s.
Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth’s greatest contribution to the world, and the greatest contribution to typography since Gutenberg.
This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, “When I told my girlfriend that we can’t do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, ‘This is something that is so stupid it must be true.’”
When Dr. Knuth chooses to be physically present, however, he is 100 percent there in the moment. “It just makes you happy to be around him,” said Jennifer Chayes, a managing director of Microsoft Research. “He’s a maximum in the community. If you had an optimization function that was in some way a combination of warmth and depth, Don would be it.”
Dr. Knuth discussing typefaces with Hermann Zapf, the type designer. Many consider Dr. Knuth’s work on the TeX computer typesetting system to be the greatest contribution to typography since Gutenberg. Photo: Bettmann/Getty Images
Sunday with the algorithmist
Dr. Knuth lives in Stanford, and allowed for a Sunday visitor. That he spared an entire day was exceptional — usually his availability is “modulo nap time,” a sacred daily ritual from 1 p.m. to 4 p.m. He started early, at Palo Alto’s First Lutheran Church, where he delivered a Sunday school lesson to a standing-room-only crowd. Driving home, he got philosophical about mathematics.
“I’ll never know everything,” he said. “My life would be a lot worse if there was nothing I knew the answers about, and if there was nothing I didn’t know the answers about.” Then he offered a tour of his “California modern” house, which he and his wife, Jill, built in 1970. His office is littered with piles of U.S.B. sticks and adorned with Valentine’s Day heart art from Jill, a graphic designer. Most impressive is the music room, built around his custom-made, 812-pipe pipe organ. The day ended over beer at a puzzle party.
Puzzles and games — and penning a novella about surreal numbers, and composing a 90-minute multimedia musical pipe-dream, “Fantasia Apocalyptica” — are the sorts of things that really tickle him. One section of his book is titled, “Puzzles Versus the Real World.” He emailed an excerpt to the father-son team of Martin Demaine, an artist, and Erik Demaine, a computer scientist, both at the Massachusetts Institute of Technology, because Dr. Knuth had included their “algorithmic puzzle fonts.”
“I was thrilled,” said Erik Demaine. “It’s an honor to be in the book.” He mentioned another Knuth quotation, which serves as the inspirational motto for the biannual “FUN with Algorithms” conference: “Pleasure has probably been the main goal all along.”
But then, Dr. Demaine said, the field went and got practical. Engineers and scientists and artists are teaming up to solve real-world problems — protein folding, robotics, airbags — using the Demaines’ mathematical origami designs for how to fold paper and linkages into different shapes.
Of course, all the algorithmic rigmarole is also causing real-world problems. Algorithms written by humans — tackling harder and harder problems, but producing code embedded with bugs and biases — are troubling enough. More worrisome, perhaps, are the algorithms that are not written by humans, algorithms written by the machine, as it learns.
Programmers still train the machine, and, crucially, feed it data. (Data is the new domain of biases and bugs, and here the bugs and biases are harder to find and fix). However, as Kevin Slavin, a research affiliate at M.I.T.’s Media Lab, said, “We are now writing algorithms we cannot read. That makes this a unique moment in history, in that we are subject to ideas and actions and efforts by a set of physics that have human origins without human comprehension.” As Slavin has often noted, “It’s a bright future, if you’re an algorithm.”
Dr. Knuth at his desk at home in 1999. Photo: Jill Knuth
A few notes. Photo: Brian Flaherty
All the more so if you’re an algorithm versed in Knuth. “Today, programmers use stuff that Knuth, and others, have done as components of their algorithms, and then they combine that together with all the other stuff they need,” said Google’s Dr. Norvig.
“With A.I., we have the same thing. It’s just that the combining-together part will be done automatically, based on the data, rather than based on a programmer’s work. You want A.I. to be able to combine components to get a good answer based on the data. But you have to decide what those components are. It could happen that each component is a page or chapter out of Knuth, because that’s the best possible way to do some task.”
Lucky, then, that Dr. Knuth keeps at it. He figures it will take another 25 years to finish “The Art of Computer Programming,” although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? “Definitely not,” said Dr. Knuth.
“I am worried that algorithms are getting too prominent in the world,” he added. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”
Note: we went a bit historical recently on | rblg, digging into the history of computing in relation to design/architecture/art. And the following one is certainly far from being the least known... a history of computing, or rather of personal computing. Yet the article by Margaret O'Mara brings new insights about Engelbart's "mother of all demos", asking in particular for a new "demo".
It is also interesting to consider how some topics that we'd believe to be very contemporary were in fact already popping up quite early in the history of the medium.
Questions about the status of the machine in relation to us humans, or about the collection of data, etc. If you go back to the early texts by Wiener and Turing (popularization texts... in my case), you might see that the questions we still have were already present.
On a crisp California afternoon in early December 1968, a square-jawed, mild-mannered Stanford researcher named Douglas Engelbart took the stage at San Francisco’s Civic Auditorium and proceeded to blow everyone’s mind about what computers could do. Sitting down at a keyboard, this computer-age Clark Kent calmly showed a rapt audience of computer engineers how the devices they built could be utterly different kinds of machines – ones that were “alive for you all day,” as he put it, immediately responsive to your input, and which didn’t require users to know programming languages in order to operate.
The prototype computer mouse Doug Engelbart used in his demo. Photo: Michael Hicks, CC BY
Engelbart typed simple commands. He edited a grocery list. As he worked, he skipped the computer cursor across the screen using a strange wooden box that fit snugly under his palm, with small wheels underneath and a cord dangling from its rear. Engelbart dubbed it a “mouse.”
The 90-minute presentation went down in Silicon Valley history as the “mother of all demos,” for it previewed a world of personal and online computing utterly different from 1968’s status quo. It wasn’t just the technology that was revelatory; it was the notion that a computer could be something a non-specialist individual user could control from their own desk.
The first part of the "mother of all demos."
Shrinking the massive machines
In the America of 1968, computers weren’t at all personal. They were refrigerator-sized behemoths that hummed and blinked, calculating everything from consumer habits to missile trajectories, cloistered deep within corporate offices, government agencies and university labs. Their secrets were accessible only via punch card and teletype terminals.
Earlier in 1968, Stanley Kubrick’s trippy “2001: A Space Odyssey” mined moviegoers’ anxieties about computers run amok with the tale of a malevolent mainframe that seized control of a spaceship from its human astronauts.
Voices rang out on Capitol Hill about the uses and abuses of electronic data-gathering, too. Missouri Senator Ed Long regularly delivered floor speeches he called “Big Brother updates.” North Carolina Senator Sam Ervin declared that mainframe power posed a threat to the freedoms guaranteed by the Constitution. “The computer,” Ervin warned darkly, “never forgets.” As the Johnson administration unveiled plans to centralize government data in a single national database, New Jersey Congressman Cornelius Gallagher declared that it was just another grim step toward scientific thinking taking over modern life, “leaving as an end result a stack of computer cards where once were human beings.”
The zeitgeist of 1968 helps explain why Engelbart’s demo so quickly became a touchstone and inspiration for a new, enduring definition of technological empowerment. Here was a computer that didn’t override human intelligence or stomp out individuality, but instead could, as Engelbart put it, “augment human intellect.”
While Engelbart’s vision of how these tools might be used was rather conventionally corporate – a computer on every office desk and a mouse in every worker’s palm – his overarching notion of an individualized computer environment hit exactly the right note for the anti-Establishment technologists coming of age in 1968, who wanted to make technology personal and information free.
The second part of the "mother of all demos."
Over the next decade, technologists from this new generation would turn what Engelbart called his “wild dream” into a mass-market reality – and profoundly transform Americans’ relationship to computer technology.
Government involvement
In the decade after the demo, the crisis of Watergate and revelations of CIA and FBI snooping further seeded distrust in America’s political leadership and in the ability of large government bureaucracies to be responsible stewards of personal information. Economic uncertainty and an antiwar mood slashed public spending on high-tech research and development – the same money that once had paid for so many of those mainframe computers and for training engineers to program them.
Enabled by the miniaturizing technology of the microprocessor, the size and price of computers plummeted, turning them into affordable and soon indispensable tools for work and play. By the 1980s and 1990s, instead of being seen as machines made and controlled by government, computers had become ultimate expressions of free-market capitalism, hailed by business and political leaders alike as examples of what was possible when government got out of the way and let innovation bloom.
There lies the great irony in this pivotal turn in American high-tech history. For even though “the mother of all demos” provided inspiration for a personal, entrepreneurial, government-is-dangerous-and-small-is-beautiful computing era, Doug Engelbart’s audacious vision would never have made it to keyboard and mouse without government research funding in the first place.
Engelbart was keenly aware of this, flashing credits up on the screen at the presentation’s start listing those who funded his research team: the Defense Department’s Advanced Research Projects Agency, later known as DARPA; the National Aeronautics and Space Administration; the U.S. Air Force. Only the public sector had the deep pockets, the patience and the tolerance for blue-sky ideas without any immediate commercial application.
Although government funding played a less visible role in the high-tech story after 1968, it continued to function as critical seed capital for next-generation ideas. Marc Andreessen and his fellow graduate students developed their groundbreaking web browser in a government-funded university laboratory. DARPA and NASA money helped fund the graduate research project that Sergey Brin and Larry Page would later commercialize as Google. Driverless car technology got a jump-start after a government-sponsored competition; so have nanotechnology, green tech and more. Government hasn’t gotten out of Silicon Valley’s way; it has been there all along, quietly funding the next generation of boundary-pushing technology.
But perhaps the current moment of high-tech angst can once again gain inspiration from the mother of all demos. Later in life, Engelbart described his life’s work as a quest to “help humanity cope better with complexity and urgency.” His solution was a computer that was remarkably different from the others of that era, one that was humane and personal, that augmented human capability rather than boxing it in. And he was able to bring this vision to life because government agencies funded his work.
Now it’s time for another mind-blowing demo of the possible future, one that moves beyond the current adversarial moment between big government and Big Tech. It could inspire people to enlist public and private resources and minds in crafting the next audacious vision for our digital future.
-
Our new podcast “Heat and Light” features Prof. O'Mara discussing this story in depth.
Note: just after archiving the MoMA exhibition on | rblg, here comes a small post by Eliza Pertigkiozoglou about the Architecture Machine Group at MIT, from roughly the same period. This groundbreaking architecture teaching unit and research experience later led to the MIT Media Lab (Beatriz Colomina spoke about it in her research on design teaching and "Radical Pedagogies" - we already spoke about it on | rblg in the context of a book about Black Mountain College).
The post details Urban 5, one of the first projects the group developed, which was supposed to help (anybody) develop an architectural project in an interactive way. This story is also very well explained and detailed by Orit Halpern in the recent book by the CCA: When is the Digital in Architecture?
URBAN 5’s overlay and the IBM 2250 Model 1 cathode-ray tube used for URBAN 5 (source: openarchitectures.com)
Nicholas Negroponte (b. 1943) founded the Architecture Machine Group (Arch Mac) at MIT in 1967, together with Leon Groisser; in 1985 it transformed into the MIT Media Lab. Negroponte’s vision was an architecture machine that would turn the design process into a dialogue, altering the traditional human-machine dynamics. His approach was significantly influenced by the then-recent discussions on artificial intelligence, cybernetics, conversation theory, technologies for learning, sketch recognition and representation. The Arch Mac laboratory combined architecture, engineering and computing to develop architectural applications and artificially intelligent interfaces that questioned the design process and the role of its actors.
The Architecture Machine’s computer and interface installation (source: radical-pedagogies.com)
Urban 5, developed in the late 1960s as an improved version of Urban 2, was the first research project of the lab. Interestingly, in his book “Architecture Machine” Negroponte explains, evaluates and criticizes Urban 5, contemplating the successes and insufficiencies of a program that aimed to serve as a “toy” for experimentation rather than a tool to handle real design problems. It was “a system that could monitor design procedures” and not a design tool by itself. As explained in the book, Urban 5’s original goal was to “study the desirability and feasibility of conversing with a machine about environmental design project… using the computer as an objective mirror of the user’s own design criteria and form decisions; reflecting formed from a larger information base than the user’s personal experience”.
Urban 5 communicated with the architect-user first by giving him instructions, then by learning from him and eventually by dialoguing with him. Two languages were employed for that communication: a graphic language and English. The graphic language used the abstract representation of cubes (nouns). The English language was text appearing on the screen (verbs). The cubes could be added incrementally and had qualities, such as sunlight or visual and acoustical privacy, which could be assigned explicitly by the user or implicitly by the machine. When the user was first introduced to the software, the software provided instructions. Then the user could explicitly assign criteria or generate forms graphically in different contexts. What Negroponte called context was defined by mode, which referred to different display modes that allowed the designer different kinds of operations. For example, in the TOPO mode the architect could manipulate topography in plan, while in the DRAW mode he/she could manipulate the viewing mode and the physical elements. In the final stage of this human-machine relationship there was a dialogue between the designer and the computer: when there was an inconsistency between the assigned criteria and the generated form, the computer informed the architect and he/she could choose the next step: ignore, postpone, or alter the criterion or the form.
Source: The Architecture Machine, Negroponte
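To illustrate that criteria-versus-form dialogue, here is a speculative sketch in today's Python (ours; URBAN 5 itself ran on an IBM 2250 with entirely different code, so every name here is invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Cube:
    """A unit of URBAN 5's graphic language: a cube with qualities."""
    x: int
    y: int
    z: int
    qualities: dict = field(default_factory=dict)  # e.g. {"sunlight": True}

def find_conflicts(cubes, criteria):
    """Report cubes whose generated form contradicts the assigned criteria."""
    return [(cube, name) for cube in cubes
            for name, wanted in criteria.items()
            if cube.qualities.get(name, wanted) != wanted]

# Hypothetical session: the machine flags an inconsistency and the
# designer chooses the next step (ignore, postpone, or alter).
criteria = {"sunlight": True}
cubes = [Cube(0, 0, 0, {"sunlight": True}),
         Cube(0, 0, 1, {"sunlight": False})]

for cube, name in find_conflicts(cubes, criteria):
    decision = "alter"  # stand-in for the designer's typed reply
    if decision == "alter":
        cube.qualities[name] = criteria[name]  # alter the form to match
```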
Negroponte’s criticism gives an insight into Arch Mac’s explorations, goals and self-reflection on the research project. To Negroponte, Urban 5’s insufficiency was summarized in four main points. First, it was based on assumptions about the design process that can be disputed: architecture is additive (an accumulation of cubes), labels are symbols, and design is non-deterministic. Second, it offered specific and predetermined design services; although different combinations could produce numerous results, they were still finite. Third, the designer always has to decide what the next step should be in the cross-reference between the contexts/modes, without any suggestion or feedback from the computer. The last point of his criticism was that Urban 5 interacts with only one designer, and the interaction is strictly mediated through “a meager selection of communication artifacts”, meaning the keyboard and the screen: the medium and the language itself.
Although Urban 5 was a simple program with limited options, the points it addresses are basically the constraints of current CAD programs. This is, to an extent, expected, given that the medium and the language frame the interaction between man and machine: “‘The world view of culture is limited by the structure of the language which that culture uses’ (Whorf, 1956). The world view of a machine is similarly marked by linguistic structure” (1). Nevertheless, it seems that Negroponte’s and Arch Mac’s explorations were ahead of their time, offering an insight into human-machine design interactions and suggesting “true dialogue”. “Urban 5 suggests an evolutionary system, an intelligent system — but, in itself, is none of them” (2).
References:
(1), (2): Quotes from Negroponte’s “The Architecture Machine” (see below)
- Negroponte, Nicholas, The Architecture Machine: Toward a More Human Environment, MIT Press, 1970
- Wright Steenson, Molly, Architectures of Information: Christopher Alexander, Cedric Price and Nicholas Negroponte & MIT’s Architecture Machine Group, PhD thesis, Princeton, April 2014
Note: following the exhibition Thinking Machines: Art and Design in the Computer Age, 1959–1989, on view until last April at MoMA, images of the show appeared on the museum's website, with many references to projects. After Archaeology of the Digital at the CCA in Montreal between 2013 and 2017, this is another good contribution to the history of the field and to the intricate relations between art, design, architecture and computing.
How cultural fields contributed to the shaping of this "mass stacked media" that is now built upon combinations of computing machines, networks, interfaces, services, data, data centers, people, crowds, etc. is certainly largely underestimated.
Literature starts to emerge, but it will take time to uncover what remained "under the radar" for a very long period. These works in fact acted as a sort of "avant-garde", not well assessed or identified, even by specialized institutions, and at a time when the name "avant-garde" had almost become an "s-word"... or was considered "dead".
Unfortunately, no publication seems to have accompanied the exhibition, contrary to the one at the CCA, which came with two well-documented books.
Thinking Machines: Art and Design in the Computer Age, 1959–1989
November 13, 2017–April 8, 2018 | The Museum of Modern Art
Drawn primarily from MoMA's collection, Thinking Machines: Art and Design in the Computer Age, 1959–1989 brings artworks produced using computers and computational thinking together with notable examples of computer and component design. The exhibition reveals how artists, architects, and designers operating at the vanguard of art and technology deployed computing as a means to reconsider artistic production. The artists featured in Thinking Machines exploited the potential of emerging technologies by inventing systems wholesale or by partnering with institutions and corporations that provided access to cutting-edge machines. They channeled the promise of computing into kinetic sculpture, plotter drawing, computer animation, and video installation. Photographers and architects likewise recognized these technologies' capacity to reconfigure human communities and the built environment.
Thinking Machines includes works by John Cage and Lejaren Hiller, Waldemar Cordeiro, Charles Csuri, Richard Hamilton, Alison Knowles, Beryl Korot, Vera Molnár, Cedric Price, and Stan VanDerBeek, alongside computers designed by Tamiko Thiel and others at Thinking Machines Corporation, IBM, Olivetti, and Apple Computer. The exhibition combines artworks, design objects, and architectural proposals to trace how computers transformed aesthetics and hierarchies, revealing how these thinking machines reshaped art making, working life, and social connections.
Organized by Sean Anderson, Associate Curator, Department of Architecture and Design, and Giampaolo Bianconi, Curatorial Assistant, Department of Media and Performance Art.
Note: a proto-smart-architecture project by Cedric Price dating back to the 1970s, which sounds much more interesting than almost all contemporary smart architecture/city proposals.
These latter are in most cases glued to highly functional approaches driven by "paths of least resistance/friction", supported if not financed by data-hungry corporations. That's not a desirable future, from my point of view.
"(...). If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation (...)"
Cedric Price’s proposal for the Gilman Corporation was a series of relocatable structures on a permanent grid of foundation pads on a site in Florida.
Cedric Price asked John and Julia Frazer to work as computer consultants for this project. They produced a computer program to organize the layout of the site in response to changing requirements, and in addition suggested that a single-chip microprocessor should be embedded in every component of the building, to make it the controlling processor.
This would result in an “intelligent” building which controlled its own organisation in response to use. If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation, learning how to improve its own organisation on the basis of this experience.
The Brief
Generator (1976-79) sought to create conditions for shifting, changing personal interactions in a reconfigurable and responsive architectural project.
It followed this open-ended brief:
"A building which will not contradict, but enhance, the feeling of being in the middle of nowhere; has to be accessible to the public as well as to private guests; has to create a feeling of seclusion conducive to creative impulses, yet…accommodate audiences; has to respect the wildness of the environment while accommodating a grand piano; has to respect the continuity of the history of the place while being innovative."
The proposal consisted of an orthogonal grid of foundation bases, tracks and linear drains, in which a mobile crane could place a kit of parts comprising cubical module enclosures and infill components (i.e. timber frames to be filled with modular components ranging from movable cladding wall panels to furniture, services and fittings), screening posts, decks and circulation components (i.e. walkways on the ground level and suspended at roof level) in multiple arrangements.
When Cedric Price approached John and Julia Frazer he wrote:
"The whole intention of the project is to create an architecture sufficiently responsive to the making of a change of mind constructively pleasurable."
Generator Project
They proposed four programs that would use input from sensors attached to Generator’s components. The first three comprised a “perpetual architect” drawing program that held the data and rules for Generator’s design; an inventory program that offered feedback on utilisation; and an interface for “interactive interrogation” that let users model and prototype Generator’s layout before committing to the design.
The powerful and curious boredom program served to provoke Generator’s users. “In the event of the site not being re-organized or changed for some time the computer starts generating unsolicited plans and improvements,” the Frazers wrote. These plans would then be handed off to Factor, the mobile crane operator, who would move the cubes and other elements of Generator. “In a sense the building can be described as being literally ‘intelligent’,” wrote John Frazer—Generator “should have a mind of its own.” It would not only challenge its users, facilitators, architect and programmer—it would challenge itself.
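The "boredom" concept is simple enough to caricature in a few lines. A speculative sketch (ours, not the Frazers' program; the threshold is an invented parameter):

```python
import random
import time

BOREDOM_THRESHOLD = 14 * 24 * 3600  # invented: two weeks without change

def boredom_check(last_change, layout, now=None):
    """If the site has stayed static too long, return an unsolicited
    rearrangement proposal for the crane operator to evaluate."""
    now = time.time() if now is None else now
    if now - last_change < BOREDOM_THRESHOLD:
        return None  # the site is still being rearranged: not "bored" yet
    proposal = layout[:]      # start from the current arrangement
    random.shuffle(proposal)  # stand-in for a learned improvement heuristic
    return proposal

layout = ["cube-A", "cube-B", "walkway-1", "screen-2"]
print(boredom_check(last_change=0.0, layout=layout, now=BOREDOM_THRESHOLD + 1))
```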
The Frazers’ research and techniques
The first proposal, associated with a level of ‘interactive’ relationship between ‘architect/machine’, would assist in drawing and with the production of additional information, somewhat implicit in the other parallel developments/proposals.
The second proposal, related to the level of ‘interactive/semiautomatic’ relationship of ‘client–user/machine’, was ‘a perpetual architect for carrying out instructions from the Polorizer’ and for providing, for instance, operative drawings to the crane operator/driver; and the third proposal consisted of a ‘[. . .] scheduling and inventory package for the Factor [. . .] it could act as a perpetual functional critic or commentator.’
The fourth proposal, relating to the third level of relationship, enabled the permanent actions of the users, while the fifth proposal consisted of a ‘morphogenetic program which takes suggested activities and arranges the elements on the site to meet the requirements in accordance with a set of rules.’
Finally, the last proposal was ‘[. . .] an extension [. . .] to generate unsolicited plans, improvements and modifications in response to users’ comments, records of activities, or even by building in a boredom concept so that the site starts to make proposals about rearrangements of itself if no changes are made. The program could be heuristic and improve its own strategies for site organisation on the basis of experience and feedback of user response.’
Self Builder Kit and the Cal Build Kit, Working Models
In a certain way, the idea of a computational aid in the Generator project also acknowledged and intended to promote some degree of unpredictability. Generator, even if unbuilt, acquired a notable position as the first intelligent building project. Cedric Price and the Frazers’ collaboration constituted an outstanding exchange between architecture and computational systems. The Generator experience explored the impact of the new techno-cultural order of the Information Society in terms of participatory design and responsive building. At an early date, it took responsiveness further; and postulates like those behind Generator, where the influence of new computational technologies reaches the level of experience and an aesthetics of interactivity, seem interesting and productive.
Resources
John Frazer, An Evolutionary Architecture, Architectural Association Publications, London 1995. http://www.aaschool.ac.uk/publications/ea/exhibition.html
Frazer to C. Price (letter mentioning ‘Second thoughts but using the same classification system as before’), 11 January 1979. Generator document folio DR1995:0280:65 5/5, Cedric Price Archives (Montreal: Canadian Centre for Architecture).
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a set of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.