Note: 2017 was very busy (the reason why I wasn't able to post much on | rblg...), and the start of 2018 looks set to be the same. Fortunately and unfortunately!
I hope things will calm down a bit next Spring, but in the meantime we're setting up an exhibition with fabric | ch: a selection of works retracing 20 years of activities, which will also serve as the basis for a photo shoot for a forthcoming book.
The event will take place in a disused factory (yet a historical monument from the second industrial era) near Lausanne.
If you are around, do not hesitate to knock at the door!
For a few days, in the context of the preparation of a book, a selection of works retracing 20 years of activities of fabric | ch will be on display in a disused factory close to Lausanne.
·
Information: http://www.fabric.ch/xx/
·
Opening on February 9, 5.00-11.00pm
·
Visiting hours:
Saturday - Sunday 10-11.02, 4.00-8.00pm
Wednesday 14.02, 5.00-8.00pm
Friday-Saturday 16-17.02, 5.00-8.00pm.
·
Or by appointment: 021.3511021
Guided tours at 6.00pm
Note: I had the great pleasure of being in discussion with Prof. Fabio Gramazio (ETHZ) during the Research in Art & Design Day that took place at ECAL last October. The session was moderated by Vera Sacchetti.
I have known Fabio since we were both assistants, he in Zürich (ETHZ) and I in Lausanne (EPFL). We collaborated on projects at that time for CAAD-ETHZ (then directed by Prof. Gerhard Schmitt), and I also know the art work Fabio did in the context of the famous Swiss collective etoy. Unfortunately we didn't have time to talk about it, even though it was planned...
The recording of our discussion about academic research in architecture and design, its specificities in Fabio's case, and its relation to practice is now accessible on the school's Vimeo account.
Research Through Art and Design: Materials and Forms
Fabio Gramazio – co-founder, Gramazio + Kohler Architects, Zurich
in conversation with Patrick Keller – professor, ECAL
10+10 Research in Art & Design at ECAL
Marking both 10 years since ECAL/University of Art and Design Lausanne moved to its current premises in Renens and the 10th anniversary of the foundation of the EPFL+ECAL Lab, ECAL hosted a symposium on Research in Art and Design, featuring artists, designers and scholars in these fields from all over the world, in conversation with ECAL faculty members.
Note: Summer is coming again and, like each year now, it's time to dig into unread books and articles! "Luckily", and due to other activities, we haven't published much since last Summer, so it won't be too much of a hassle to catch up. Nonetheless, there are now almost 2000 entries on | rblg...
So, I hope you'll enjoy your Summer readings (on the beach... or on the rocks)! On my side, I'll certainly try to do the same and will be back posting in September.
As we lack a decent search engine on this blog and don't use a "tag cloud" either, and because Summer is certainly one of the best periods of the year to spend time reading and digging into past content and topics:
HERE ARE ALL THE CURRENT UPDATED CATEGORIES TO NAVIGATE ON | RBLG BLOG:
(to be seen below if you're navigating on the blog's html pages or here for rss readers)
Note: this "car action" by James Bridle was widely reposted recently. Here comes an additional one...
Yet, in the context of this blog, it interests us because it underlines the possibilities of physical (or analog) hacks linked to digital devices that can see, touch, listen, produce sound, etc.
And there are several existing examples of such "physical bugs" that come to mind: Echo recently tried to order dollhouses after listening to and misunderstanding an American TV report (it wasn't on Fox News though); a 3D print can be reproduced by recording and decoding the sound of its printer; and we can now think of self-driving cars that could be tricked as well, mainly by twisting the elements upon which they base their understanding of the environment.
James Bridle entraps a self-driving car in a "magic" salt circle. Image: Still from Vimeo, "Autonomous Trap 001."
As if the challenges of politics, engineering, and weather weren't enough, now self-driving cars face another obstacle: purposeful visual sabotage, in the form of specially painted traffic lines that entice the car in before trapping it in an endless loop. As profiled in Vice, the artist behind "Autonomous Trap 001," James Bridle, is demonstrating an unforeseen hazard of automation: those forces which, for whatever reason, want to mess it all up. Which raises the question: how does one effectively design for an impish sense of humor, or a deadly series of misleading markings?
Note: in direct link with the previous post about VR, this interesting evening discussion next April at the Bartlett School of Architecture is about the relation between architecture and video games (by extension, the architecture of video games? and/or architecture in video games?).
If we go for older references in our own work, this reminds me of projects in which we explored this relation between architecture and the artificial environments of games or interactive 3D spaces, like for example the MIX-m project (2005) or even La_Fabrique (1999 (!))... Hum.
REALMS is an evening discussion on the relationship between video games and architecture held at the Bartlett School of Architecture as part of the London Games Festival 2017. As games become ever more complex and immersive, and architects increasingly adopt game technologies for visualizing and exploring their design ideas, Realms asks what the shared future of the two mediums may be. Might architects turn towards realizing ideas in virtual realms in the face of financial pressures, and what can we learn from the weird and wonderful spatial experiences that games can offer us?
REALMS is an evening of informal talks from architects, writers and game developers followed by a panel discussion and audience Q&A. It will provide a platform for the free discussion of how architecture and video games may develop together both technologically and culturally. As part of Realms we will also showcase architecture student work from the Bartlett that deals with the relationship between architecture and video game space.
The panel of speakers for REALMS is:
Darran Anderson - author of Imaginary Cities, and writer for Killscreen/Versions. @oniropolis
James Delaney - founder of Blockworks, one of the world's leading Minecraft builders. @BlockWorksYT
Catrina Stewart - architect and founder of Office S&M and architectural designer on BAFTA award winning Lumino City. @CatrinaLStewart
Maciek Strychalski - game developer and founder of SMAC Games releasing the upcoming Tokyo 42. @Tokyo42Game
Philippa Warr - writer and author, currently working at Rock Paper Shotgun. @philippawarr
Entry is free on a first come first seated basis.
Address: Room G.12, Bartlett School of Architecture, 22 Gordon Street, London, WC1H 0QB.
Refreshments will be provided.
Realms is supported by the Architecture Projects Fund of the Bartlett, UCL.
Note: Obviously, it was just a matter of time before something like this (virtual virtual reality) happened! "Virtual reality" is part of "reality", isn't it? So why not represent it as well, as part of VR... Etc.
Which brings us back to the 20-year-old question: when will we start to trigger new experiences with VR that are not necessarily linked to some kind of representation, even if this representation is a "hallucination" or some sort of surrealistic visual narrative, as stated here?
But this question addresses the paradoxical limitations, or presuppositions, of the medium itself, so to say. It seems to open doors to alternate realities, but at the same time it is entirely based on perspective, human vision and sound perception. This makes it in fact quite limiting and hard to overcome, yet these are dimensions of human perception that artistic practices of various sorts have long challenged.
"A game about VR, AI and our collective sci-fi hallucinations."
"In the near future, most jobs have been automated. What is the purpose of humanity? Activitude, the Virtual Labor System, is here to help. Your artisanal human companionship is still highly sought by our A.I. clients. Strap on your headset. Find your calling.
Pssst. . . Sure, you could function like a therapy dog to an A.I. in Bismarck and watch your work ratings climb, but don’t you yearn for something more: adventure, conflict, purpose? Escape backstage into Activitude’s system by putting on an endless series of VR headsets in VR. Outrun Chaz, your manager, as he attempts to boot you out PERMANENTLY. Along the way, uncover the story of Activitude’s evolution from VR start-up to the “human purpose aggregator” it is today."
Now that machines (or should we rather say companies?) are starting to listen continuously, now that they are installed in your home or your everyday vicinity, we can start to see some glitches... can't we? It sounds quite absurd here again.
We can then envision situations where machines would be hacked (rather than merely trolled) by sounds or phrases. Literally "spells"!
Or, to reverse the process, situations where machines could pirate other machines, or even reproduce what they are doing just by listening to the noise they make: a colleague recently pointed me to a special case in which a 3D print could possibly be copied and remade just by recording the sound of the printing process. Reverse-engineer it (the printing process = movements = specific sounds) and you might end up being able to reprint the object!
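The core of that acoustic side channel is simple: each stepper motor emits sound whose dominant frequency tracks its step rate, so recorded audio can be mapped back to axis speeds and, frame by frame, to tool paths. Here is a minimal sketch of just that frequency-to-speed step, assuming an invented calibration constant (80 steps/mm) and a single motor running at a time:

```python
import numpy as np

STEPS_PER_MM = 80.0  # assumed printer calibration: stepper steps per millimetre

def dominant_frequency(audio, sample_rate):
    """Return the strongest frequency component of an audio frame (Hz)."""
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def estimate_feed_rate(audio, sample_rate):
    """Map the stepper's step frequency back to axis speed (mm/s)."""
    return dominant_frequency(audio, sample_rate) / STEPS_PER_MM

# Synthetic check: a motor "heard" stepping at 1600 Hz reads as 20 mm/s.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1600 * t)
print(round(estimate_feed_rate(tone, sr), 1))  # → 20.0
```

A real attack would additionally have to separate the sounds of several motors, recover direction, and stitch successive frames into tool paths; this only illustrates why the step frequency leaks the speed.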
Early during tonight’s game, Google’s ad for the Google Home aired on millions of TVs. We’ve actually seen the ad before: loving families at home meeting, hugging, and being welcomed by the Google Assistant. Someone says “OK Google,” and those familiar, colorful lights pop up.
But then my Google Home perked up, confused. “Sorry,” it said. “Something went wrong.” I laughed, because that wasn’t supposed to happen. I wasn’t the only one.
Poor Dave... at some point, some enterprising TV writer or ad jerk is gonna plant an "OK Google" into something on TV with intent and force everyone to listen to Nickelback. Mark my words. This is a massive troll waiting to happen.
Note: following the two previous posts about algorithms and bots ("how do they ...?"), here comes a third one.
It is slightly different and not really dedicated to bots per se, but can nonetheless be considered as related to "machinic intelligence". This time it concerns techniques and algorithms developed to understand the brain (the BRAIN initiative or, in Europe, the competing Blue Brain Project).
In a funny reversal, scientists applied techniques and algorithms developed to track patterns of intelligence in large neural data sets to the computer itself. How does a simple chip "compute information"? The results are surprising: the tools can't explain how the computer "thinks" (or rather works, in this case)!
This confirms that the brain is certainly not a computer (made out of flesh)...
When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?
Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.
But those kinds of data sets don't yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.
They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Game System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.
The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.
For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”
Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
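The lesion experiment is easy to picture on an even smaller circuit. The toy sketch below (my own illustration, not the paper's code) builds a half adder from five NAND gates, "lesions" each gate in turn by forcing its output low, and records which inputs now produce wrong outputs; the resulting table tells you that each gate matters, but not what role it plays, which is exactly the gap the researchers describe.

```python
from itertools import product

def half_adder(a, b, lesioned=None):
    """Half adder from five NAND gates; a lesioned gate's output is stuck at 0,
    mimicking the removal of a transistor from the chip."""
    def nand(x, y, name):
        return 0 if name == lesioned else 1 - (x & y)
    n1 = nand(a, b, "n1")
    n2 = nand(a, n1, "n2")
    n3 = nand(b, n1, "n3")
    s = nand(n2, n3, "n4")   # sum bit = a XOR b
    c = nand(n1, n1, "n5")   # carry bit = a AND b (NAND wired as inverter)
    return s, c

# Lesion each gate in turn and list the inputs whose outputs now go wrong.
for gate in ["n1", "n2", "n3", "n4", "n5"]:
    broken = [(a, b) for a, b in product([0, 1], repeat=2)
              if half_adder(a, b, gate) != half_adder(a, b)]
    print(gate, broken)
```

Scale this pattern of "knock out one element, compare behavior" up to 3,510 transistors running Donkey Kong and you have, roughly, the experiment in the paper.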
In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”
It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.
First, simply amassing a handful of high-quality data sets of the brains may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don’t fully describe the brain’s function.
As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.
Note: I just read this piece of news the other day about Echo (Amazon's "robot assistant"), which accidentally attempted to buy a large number of toys by (always) listening and misunderstanding a phrase spoken on TV by a presenter (and therefore captured by Echo in the living room, and so on)... It is so "stupid" (I mean, we can see how the act of buying linked to these so-called "A.I."s is automated by default configuration), but revealing of the kind of feedback loops that can happen when automated decisions are delegated to bots and machines.
An interesting word appearing in this context is, btw, "accidentally".
Amazon's Echo attempted a TV-fueled shopping spree
It's nothing new for voice-activated devices to behave badly when they misinterpret dialogue -- just ask anyone watching a Microsoft gaming event with a Kinect-equipped Xbox One nearby. However, Amazon's Echo devices are causing more of that chaos than usual. It started when a 6-year-old Dallas girl inadvertently ordered cookies and a dollhouse from Amazon by saying what she wanted. It was a costly goof ($170), but nothing too special by itself. However, the response to that story sent things over the top. When San Diego's CW6 discussed the snafu on a morning TV show, one of the hosts made the mistake of saying that he liked when the girl said "Alexa ordered me a dollhouse." You can probably guess what happened next.
Sure enough, the channel received multiple reports from viewers whose Echo devices tried to order dollhouses when they heard the TV broadcast. It's not clear that any of the purchases went through, but it no doubt caused some panic among people who weren't planning to buy toys that day.
It's easy to avoid this if you're worried: you can require a PIN code to make purchases through the Echo or turn off ordering altogether. You can also change the wake word so that TV personalities won't set off your speaker in the first place. However, this comedy of errors also suggests that there's a lot of work to be done on smart speakers before they're truly trustworthy. They may need to disable purchases by default, for example, and learn to recognize individual voices so that they won't respond to everyone who says the magic words. Until then, you may see repeats in the future.
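The safeguards listed above compose naturally into a default-deny rule: a purchase should go through only when every check passes. A small sketch of such a guard, with all names invented for illustration (this is not Amazon's actual API):

```python
# Illustrative default-deny purchase guard for a smart speaker, combining the
# mitigations the article mentions: purchases off by default, a non-default
# wake word, voice identification, and a PIN. All identifiers are hypothetical.
WAKE_WORD = "computer"          # custom wake word: a TV ad saying "Alexa" won't match
AUTHORIZED_VOICES = {"alice"}   # voices enrolled for purchasing
PURCHASE_PIN = "4921"

def allow_purchase(wake_word, speaker_id, pin, purchasing_enabled=False):
    """Return True only if every safeguard passes; deny by default."""
    if not purchasing_enabled:          # purchases disabled out of the box
        return False
    if wake_word != WAKE_WORD:          # wrong wake word: speaker never engages
        return False
    if speaker_id not in AUTHORIZED_VOICES:  # unknown voice (e.g. a TV host)
        return False
    return pin == PURCHASE_PIN          # final factor: spoken PIN must match

# A TV presenter saying "Alexa, order me a dollhouse" fails every check.
print(allow_purchase("alexa", "tv_presenter", ""))                           # → False
print(allow_purchase("computer", "alice", "4921", purchasing_enabled=True))  # → True
```

The order of the checks matters less than the default: a device that ships with `purchasing_enabled=False` cannot be trolled by a broadcast at all.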
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
fabric | ch uses this website as an archive and a pool of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.