Note: a proto-smart-architecture project by Cedric Price dating back to the 1970s, which sounds far more interesting than almost all contemporary smart architecture/cities proposals.
The latter are in most cases locked into highly functional approaches driven by "paths of least resistance/friction", supported if not financed by data-hungry corporations. That's not a desirable future, in my view.
"(...). If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation (...)"
Cedric Price’s proposal for the Gilman Corporation was a series of relocatable structures on a permanent grid of foundation pads on a site in Florida.
Cedric Price asked John and Julia Frazer to work as computer consultants for this project. They produced a computer program to organize the layout of the site in response to changing requirements, and in addition suggested that a single-chip microprocessor should be embedded in every component of the building, to make it the controlling processor.
This would result in an “intelligent” building which controlled its own organisation in response to use. If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation, learning how to improve its own organisation on the basis of this experience.
The Brief
Generator (1976-79) sought to create conditions for shifting, changing personal interactions in a reconfigurable and responsive architectural project.
It followed this open-ended brief:
"A building which will not contradict, but enhance, the feeling of being in the middle of nowhere; has to be accessible to the public as well as to private guests; has to create a feeling of seclusion conducive to creative impulses, yet…accommodate audiences; has to respect the wildness of the environment while accommodating a grand piano; has to respect the continuity of the history of the place while being innovative."
The proposal consisted of an orthogonal grid of foundation bases, tracks and linear drains, in which a mobile crane could place a kit of parts comprising cubical module enclosures and infill components (i.e. timber frames to be filled with modular components ranging from movable cladding wall panels to furniture, services and fittings), screening posts, decks and circulation components (i.e. walkways on the ground level and suspended at roof level) in multiple arrangements.
When Cedric Price approached John and Julia Frazer he wrote:
"The whole intention of the project is to create an architecture sufficiently responsive to the making of a change of mind constructively pleasurable."
Generator Project
They proposed four programs that would use input from sensors attached to Generator’s components: the first three provided a “perpetual architect” drawing program that held the data and rules for Generator’s design; an inventory program that offered feedback on utilisation; an interface for “interactive interrogation” that let users model and prototype Generator’s layout before committing the design.
The powerful and curious boredom program served to provoke Generator’s users. “In the event of the site not being re-organized or changed for some time the computer starts generating unsolicited plans and improvements,” the Frazers wrote. These plans would then be handed off to Factor, the mobile crane operator, who would move the cubes and other elements of Generator. “In a sense the building can be described as being literally ‘intelligent’,” wrote John Frazer—Generator “should have a mind of its own.” It would not only challenge its users, facilitators, architect and programmer—it would challenge itself.
The Frazers’ research and techniques
The first proposal, associated with a level of ‘interactive’ relationship between ‘architect/machine’, would assist in drawing and in producing additional information, somewhat implicit in the other parallel developments/proposals.
The second proposal, related to the level of ‘interactive/semiautomatic’ relationship of ‘client–user/machine’, was ‘a perpetual architect for carrying out instructions from the Polorizer’ and for providing, for instance, operative drawings to the crane operator/driver; and the third proposal consisted of a ‘[. . .] scheduling and inventory package for the Factor [. . .] it could act as a perpetual functional critic or commentator.’
The fourth proposal, relating to the third level of relationship, enabled the permanent actions of the users, while the fifth proposal consisted of a ‘morphogenetic program which takes suggested activities and arranges the elements on the site to meet the requirements in accordance with a set of rules.’
Finally, the last proposal was [. . .] an extension [. . .] to generate unsolicited plans, improvements and modifications in response to users’ comments, records of activities, or even by building in a boredom concept so that the site starts to make proposals about rearrangements of itself if no changes are made. The program could be heuristic and improve its own strategies for site organisation on the basis of experience and feedback of user response.
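The "boredom" mechanism quoted above can be caricatured as a simple loop: if no user-driven change occurs within some interval, the system proposes an unsolicited rearrangement, and user reactions feed back into its strategy. A minimal, purely illustrative Python sketch follows; all names, rules and the "patience" heuristic are my own assumptions, not the Frazers' actual programs.

```python
import random

class BoredomProgram:
    """Toy model of Generator's 'boredom' idea: if the site layout is not
    changed for a while, propose unsolicited rearrangements, and crudely
    learn from user feedback when to make such proposals."""

    def __init__(self, layout, patience=3, seed=0):
        self.layout = list(layout)   # positions of movable components on the grid
        self.patience = patience     # idle ticks tolerated before "boredom"
        self.idle_ticks = 0
        self.accept_history = []     # memory of user responses
        self.rng = random.Random(seed)

    def user_changed_layout(self, new_layout):
        """A user-driven change resets the boredom clock."""
        self.layout = list(new_layout)
        self.idle_ticks = 0

    def tick(self):
        """One time step; returns an unsolicited proposal once 'bored'."""
        self.idle_ticks += 1
        if self.idle_ticks < self.patience:
            return None
        # Bored: propose swapping the positions of two components.
        proposal = list(self.layout)
        i, j = self.rng.sample(range(len(proposal)), 2)
        proposal[i], proposal[j] = proposal[j], proposal[i]
        return proposal

    def feedback(self, accepted):
        """Heuristic 'learning': get bolder if proposals are accepted,
        more patient if they are refused."""
        self.accept_history.append(accepted)
        if accepted:
            self.patience = max(1, self.patience - 1)
            self.idle_ticks = 0
        else:
            self.patience += 1

bp = BoredomProgram(layout=["cube_a", "cube_b", "cube_c", "deck"])
quiet = [bp.tick(), bp.tick()]   # not yet bored: no proposals
proposal = bp.tick()             # third idle tick: an unsolicited plan
```

In the actual project the proposal would then have been handed to Factor, the crane operator, for physical execution; here it is just a reshuffled list.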
Self Builder Kit and the Cal Build Kit, Working Models
In a certain way, the idea of a computational aid in the Generator project also acknowledged and intended to promote some degree of unpredictability. Generator, even if unbuilt, acquired a notable position as the first intelligent building project. Cedric Price and the Frazers’ collaboration constituted an outstanding exchange between architecture and computational systems. The Generator experience explored the impact of the new techno-cultural order of the Information Society in terms of participatory design and responsive building. At an early date, it took responsiveness further; and postulates like those behind Generator, where the influence of new computational technologies reaches the level of experience and an aesthetics of interactivity, seem interesting and productive.
Resources
John Frazer, An Evolutionary Architecture, Architectural Association Publications, London 1995. http://www.aaschool.ac.uk/publications/ea/exhibition.html
Frazer to C. Price, (Letter mentioning ‘Second thoughts but using the same classification system as before’), 11 January 1979. Generator document folio DR1995:0280:65 5/5, Cedric Price Archives (Montreal: Canadian Centre for Architecture).
Note: 2017 was very busy (the reason why I wasn't able to post much on | rblg...), and the start of 2018 is turning out the same. Fortunately and unfortunately!
I hope things will calm down a bit next spring, but in the meantime we're setting up an exhibition with fabric | ch: a selection of works retracing 20 years of activities, whose purpose will also be to serve as the setting for a photo shoot for a forthcoming book.
The event will take place in a disused factory (nonetheless a listed historical monument of the second industrial era), near Lausanne.
If you are around, do not hesitate to knock at the door!
During a few days, in the context of the preparation of a book, a selection of works retracing 20 years of activities of fabric | ch will be on display in a disused factory close to Lausanne.
·
Information: http://www.fabric.ch/xx/
·
Opening on February 9, 5.00-11.00pm
·
Visiting hours:
Saturday - Sunday 10-11.02, 4.00-8.00pm
Wednesday 14.02, 5.00-8.00pm
Friday-Saturday 16-17.02, 5.00-8.00pm.
·
Or by appointment: 021.3511021
Guided tours at 6.00pm
Note: I had the great pleasure to be in discussion with Prof. Fabio Gramazio (ETHZ) during the Research in Art & Design Day that took place at ECAL last October. The session was moderated by Vera Sacchetti.
I have known Fabio since we were both assistants, he in Zürich (ETHZ) and I in Lausanne (EPFL). We collaborated on projects at that time for CAAD-ETHZ (then directed by Prof. Gerhard Schmitt), and I also know the art work Fabio did in the context of the famous Swiss collective etoy. Unfortunately we didn't have time to talk about it, even though it was planned...
The recording of our discussion about academic research in architecture and design, its specificities in the case of Fabio, and their relation to practice in architecture and design, is now accessible on the Vimeo account of the School.
Research Through Art and Design: Materials and Forms
Fabio Gramazio – co-founder, Gramazio + Kohler Architects, Zurich
in conversation with Patrick Keller – professor, ECAL
10+10 Research in Art & Design at ECAL
On the occasion of the 10 years since ECAL/University of Art and Design Lausanne moved to its current premises in Renens, and marking the 10th anniversary of the foundation of the EPFL+ECAL Lab, ECAL hosted a symposium on Research in Art and Design, featuring artists, designers and scholars in these fields from all over the world, in conversation with ECAL faculty members.
Note: Summer is coming again, and like each year now, it's time to dig into unread books and articles! "Luckily", due to other activities, we haven't published much since last summer, so it won't be too much of a hassle to catch up. Nonetheless, there are almost 2000 entries now on | rblg...
So, I hope you'll enjoy your Summer readings (on the beach... or on the rocks)! On my side, I'll certainly try to do the same and will be back posting in September.
As we lack a decent search engine on this blog and don't use a "tag cloud" either, and because summer is certainly one of the best periods of the year to spend time reading and digging into past content and topics:
HERE ARE ALL THE CURRENT UPDATED CATEGORIES TO NAVIGATE ON | RBLG BLOG:
(to be seen below if you're navigating on the blog's html pages or here for rss readers)
Note: this "car action" by James Bridle was widely reposted recently. Here comes an additional one...
Yet, in the context of this blog, it interests us because it underlines the possibilities of physical (or analog) hacks linked to digital devices that can see, touch, listen or produce sound, etc.
And there are several existing examples of "physical bugs" that come to mind: "Echo" recently tried to order cookies after listening to and misunderstanding an American TV ad (it wasn't on Fox News, though). A 3D print can be reproduced by recording and analysing the sound of the printer, and we can now think of self-driving cars that could be tricked as well, mainly by twisting the elements upon which they base their understanding of the environment.
James Bridle entraps a self-driving car in a "magic" salt circle. Image: Still from Vimeo, "Autonomous Trap 001."
As if the challenges of politics, engineering, and weather weren't enough, now self-driving cars face another obstacle: purposeful visual sabotage, in the form of specially painted traffic lines that entice the car in before trapping it in an endless loop. As profiled in Vice, the artist behind "Autonomous Trap 001," James Bridle, is demonstrating an unforeseen hazard of automation: those forces which, for whatever reason, want to mess it all up. Which raises the question: how does one effectively design for an impish sense of humor, or a deadly series of misleading markings?
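The trap works because a lane-keeping rule like "never cross a solid boundary line" becomes a prison when that line is drawn as a closed loop. The toy Python sketch below is a deliberate caricature of this logic (my own construction; real autonomous-vehicle planners are vastly more complex): a flood fill shows that a never-cross-the-line controller can reach no cell outside the painted circle.

```python
# Toy grid world: '#' marks a solid "do not cross" line, '.' is free ground,
# 'C' is the car. A naive controller that refuses to cross solid markings
# can never leave a closed circle of them, whichever direction it tries.

WORLD = [
    ".......",
    "..###..",
    ".#...#.",
    ".#.C.#.",   # the car, parked inside the painted circle
    ".#...#.",
    "..###..",
    ".......",
]

def find_car(world):
    for y, row in enumerate(world):
        x = row.find("C")
        if x != -1:
            return x, y
    raise ValueError("no car in world")

def reachable(world):
    """Flood fill from the car, treating '#' as impassable: the set of
    cells a never-cross-the-line controller could ever occupy."""
    start = find_car(world)
    seen, frontier = {start}, [start]
    while frontier:
        x, y = frontier.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= ny < len(world) and 0 <= nx < len(world[ny])
                    and world[ny][nx] != "#" and (nx, ny) not in seen):
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return seen

def is_trapped(world):
    """Trapped if no border cell of the grid is reachable."""
    cells = reachable(world)
    h, w = len(world), len(world[0])
    return not any(x in (0, w - 1) or y in (0, h - 1) for x, y in cells)
```

Breaking the circle anywhere (or a controller willing to cross a line once) dissolves the trap, which is why Bridle frames it as a "magic" circle: the prison exists only in the rule set.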
Note: in direct link with the previous post about VR, an interesting evening discussion next April at the Bartlett School of Architecture about the relation between architecture and video games (by extension, the architecture of video games? and/or architecture in video games?).
Or, if we go for older references in our own work, this reminds me of projects in which we explored this relation between architecture and the artificial environments of games or interactive 3D spaces, like for example the MIX-m project (2005) or even La_Fabrique (1999 (!))... Hum.
REALMS is an evening discussion on the relationship between video games and architecture held at the Bartlett School of Architecture as part of the London Games Festival 2017. As games become ever more complex and immersive, and architects increasingly adopt game technologies for visualizing and exploring their design ideas, Realms asks what the shared future of the two mediums may be. Might architects turn towards realizing ideas in virtual realms in the face of financial pressures, and what can we learn from the weird and wonderful spatial experiences that games can offer us?
REALMS is an evening of informal talks from architects, writers and game developers followed by a panel discussion and audience Q&A. It will provide a platform for the free discussion of how architecture and video games may develop together both technologically and culturally. As part of Realms we will also showcase architecture student work from the Bartlett that deals with the relationship between architecture and video game space.
The panel of speakers for REALMS is:
Darran Anderson - author of Imaginary Cities, and writer for Killscreen/Versions. @oniropolis
James Delaney - founder of Blockworks, one of the world's leading Minecraft builders. @BlockWorksYT
Catrina Stewart - architect and founder of Office S&M and architectural designer on BAFTA award winning Lumino City. @CatrinaLStewart
Maciek Strychalski - game developer and founder of SMAC Games releasing the upcoming Tokyo 42. @Tokyo42Game
Philippa Warr - writer and author, currently working at Rock Paper Shotgun. @philippawarr
Entry is free on a first come first seated basis.
Address: Room G.12, Bartlett School of Architecture, 22 Gordon Street, London, WC1H 0QB.
Refreshments will be provided.
Realms is supported by the Architecture Projects Fund of the Bartlett, UCL.
Note: obviously, it was just a matter of time before something like this (virtual virtual reality) happened! "Virtual reality" is part of "reality", isn't it? So why not represent it as well, as part of VR... Etc.
Which brings us to the 20-year-old question: when will we start to trigger new experiences with VR that are not necessarily linked to some kind of representation, even if this representation is a "hallucination" or some sort of surrealistic visual narrative, as stated here?
But this question addresses the paradoxical limitations, or presuppositions, of the medium itself, so to speak. It seems to open doors to alternate realities, but at the same time it is entirely based on perspective, human vision and sound perception. These are in fact quite limiting and hard to overcome, but they are nonetheless dimensions of human perception that have long been challenged by artistic practices of different sorts.
"A game about VR, AI and our collective sci-fi hallucinations."
"In the near future, most jobs have been automated. What is the purpose of humanity? Activitude, the Virtual Labor System, is here to help. Your artisanal human companionship is still highly sought by our A.I. clients. Strap on your headset. Find your calling.
Pssst. . . Sure, you could function like a therapy dog to an A.I. in Bismarck and watch your work ratings climb, but don’t you yearn for something more: adventure, conflict, purpose? Escape backstage into Activitude’s system by putting on an endless series of VR headsets in VR. Outrun Chaz, your manager, as he attempts to boot you out PERMANENTLY. Along the way, uncover the story of Activitude’s evolution from VR start-up to the “human purpose aggregator” it is today."
Note: following the two previous posts about algorithms and bots ("how do they ... ?"), here comes a third one.
Slightly different and not really dedicated to bots per se, but related to "machinic intelligence" nonetheless. This time it concerns techniques and algorithms developed to understand the brain (the BRAIN initiative, or in Europe the competing Blue Brain Project).
In a funny reversal, scientists applied techniques and algorithms developed to track patterns of human intelligence in data sets to the computer itself. How does a simple chip "compute information"? And the results are surprising: the tools can't explain how the computer "thinks" (or rather works, in this case)!
All this to confirm that the brain is certainly not a computer (made out of flesh)...
When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?
Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.
But those kinds of data sets don’t yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.
They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Game System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.
The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.
For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”
Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
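The lesion analogy in the paragraph above is easy to reproduce in miniature. The sketch below is my own toy example, not the authors' code: it builds a one-bit full adder out of named logic gates, "lesions" each gate in turn by forcing its output to zero, and records only whether any output changes — exactly the kind of observation that shows a component matters without explaining what it does.

```python
# Toy "lesion study": a full adder built from five named logic gates.
# Forcing one gate's output to 0 and re-running the circuit tells us
# WHICH gates matter for the outputs, but nothing about HOW the circuit
# computes -- mirroring the conclusion of the microchip experiment.

def full_adder(a, b, cin, lesioned=None):
    """Return (sum, carry); `lesioned` names one gate forced to output 0."""
    def gate(name, value):
        return 0 if name == lesioned else value
    x1 = gate("xor1", a ^ b)        # partial sum
    s = gate("xor2", x1 ^ cin)      # final sum bit
    a1 = gate("and1", a & b)        # carry from inputs
    a2 = gate("and2", x1 & cin)     # carry from partial sum
    carry = gate("or1", a1 | a2)    # combined carry-out
    return s, carry

def lesion_study():
    """For each gate: does ANY input pattern change the adder's output
    when that gate is lesioned? A purely behavioral observation."""
    inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    affected = {}
    for g in ("xor1", "xor2", "and1", "and2", "or1"):
        affected[g] = any(
            full_adder(a, b, c) != full_adder(a, b, c, lesioned=g)
            for a, b, c in inputs
        )
    return affected
```

Every gate turns out to be "necessary" by this criterion, yet the study output (a table of booleans) says nothing about binary addition itself: the same gap between localisation and understanding that Jonas and Kording report at the scale of the 6502.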
In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”
It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.
First, simply amassing a handful of high-quality data sets of the brain may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don’t fully describe the brain’s function.
As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.
Note: I just read this piece of news the other day about Echo (Amazon's "robot assistant"), which accidentally attempted to buy a large amount of toys by (always) listening and misunderstanding a phrase spoken on TV by a presenter (and therefore captured by Echo in the living room, and so on)... It is so "stupid" (I mean, we can see how the act of buying linked to these so-called "A.I."s is automated by default configuration), but revealing of the kind of feedback loops that can happen when automated decisions are delegated to bots and machines.
An interesting word appearing in this context is, by the way, "accidentally".
Amazon's Echo attempted a TV-fueled shopping spree
It's nothing new for voice-activated devices to behave badly when they misinterpret dialogue -- just ask anyone watching a Microsoft gaming event with a Kinect-equipped Xbox One nearby. However, Amazon's Echo devices are causing more of that chaos than usual. It started when a 6-year-old Dallas girl inadvertently ordered cookies and a dollhouse from Amazon by saying what she wanted. It was a costly goof ($170), but nothing too special by itself. However, the response to that story sent things over the top. When San Diego's CW6 discussed the snafu on a morning TV show, one of the hosts made the mistake of saying that he liked when the girl said "Alexa ordered me a dollhouse." You can probably guess what happened next.
Sure enough, the channel received multiple reports from viewers whose Echo devices tried to order dollhouses when they heard the TV broadcast. It's not clear that any of the purchases went through, but it no doubt caused some panic among people who weren't planning to buy toys that day.
It's easy to avoid this if you're worried: you can require a PIN code to make purchases through the Echo or turn off ordering altogether. You can also change the wake word so that TV personalities won't set off your speaker in the first place. However, this comedy of errors also suggests that there's a lot of work to be done on smart speakers before they're truly trustworthy. They may need to disable purchases by default, for example, and learn to recognize individual voices so that they won't respond to everyone who says the magic words. Until then, you may see repeats in the future.
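The mitigations listed above — ordering off by default, a PIN, recognizing individual voices — amount to a small gate in front of the purchase call. The Python sketch below is a hypothetical illustration of that layered check, not Amazon's actual Alexa implementation; all class and method names are my own.

```python
class PurchaseGuard:
    """Toy gate in front of a voice assistant's ordering feature:
    ordering is OFF by default, and even once enabled a spoken PIN
    and a recognized voice are both required before anything is bought."""

    def __init__(self, known_voices=()):
        self.ordering_enabled = False      # safe default: no purchases
        self.pin = None
        self.known_voices = set(known_voices)

    def enable_ordering(self, pin):
        """Opt in explicitly, registering a PIN at the same time."""
        self.ordering_enabled = True
        self.pin = pin

    def authorize(self, spoken_pin, voice_id):
        """All three checks must pass for an order to go through."""
        if not self.ordering_enabled:
            return False, "ordering disabled (default)"
        if spoken_pin != self.pin:
            return False, "wrong PIN"
        if voice_id not in self.known_voices:
            return False, "unrecognized voice (TV presenter?)"
        return True, "order authorized"

guard = PurchaseGuard(known_voices={"parent"})
# A TV broadcast saying the magic words fails at the very first check:
tv_attempt = guard.authorize(spoken_pin=None, voice_id="tv_presenter")
guard.enable_ordering(pin="4321")
owner_attempt = guard.authorize(spoken_pin="4321", voice_id="parent")
```

The point of the ordering: the cheapest check (is ordering even enabled?) runs first, so a stray TV phrase never reaches the purchase path at all.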
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.